xfreeze makes it possible to ship arbitrary Python programs, as
executables, to people who don't have Python.  xfreeze Version 1.4 is
an upgraded version of xfreeze designed to work with Python 1.4.

Based on Siebren van der Zee's (siebren@xs4all.nl) work on xfreeze and 
Guido van Rossum's (guido@CNRI.Reston.VA.US) work on freeze.

See README.freeze for info on how freeze is supposed to work.
xfreeze 1.4 has three additional options beyond the normal freeze.
They are:	-O		This says go ahead and optimize the byte code.
		-n passes 	Tells the optimizer how many passes to make
				  over the code (defaults to 2).
		-d 		Tells the optimizer not to strip doc
				  strings.
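The -n option exists because one rewrite can expose another (removing a
jump may leave dead code behind, for example), so the optimizer runs its
rules repeatedly.  A hypothetical sketch of such a multi-pass driver, with
invented names and a toy rule, not xfreeze's actual code:

```python
# Hypothetical multi-pass peephole driver: apply rewrite rules until a
# pass changes nothing or the pass budget (the -n value) runs out.
def optimize(code, passes=2, rules=()):
    for _ in range(passes):
        changed = False
        for rule in rules:
            code, did_change = rule(code)
            changed = changed or did_change
        if not changed:
            break  # fixed point reached early
    return code

# Toy rule: collapse adjacent "JUMP JUMP" pairs in a fake instruction list.
def collapse_double_jump(code):
    out, changed, i = [], False, 0
    while i < len(code):
        if code[i] == "JUMP" and i + 1 < len(code) and code[i + 1] == "JUMP":
            out.append("JUMP")  # the pair becomes a single jump
            changed = True
            i += 2
        else:
            out.append(code[i])
            i += 1
    return out, changed

# Two passes are needed to shrink three jumps down to one.
print(optimize(["JUMP", "JUMP", "JUMP", "NOP"], passes=2,
               rules=(collapse_double_jump,)))  # ['JUMP', 'NOP']
```

With passes=1 the same input only shrinks to two instructions, which is
why the default is 2.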
Changes to xfreeze 1.4:
	Modified optimize.py to require only one pass over binary
	  expressions to completely optimize constant expressions.
	Added -d option to keep doc strings (the LOAD_CONST/POP_TOP
	  byte code sequence) instead of stripping them.
Changes to xfreeze 1.3:
	Added BINARY_POWER and BUILD_SLICE to opcode.py
	Writes out .pyo file to speed up future freezes.
	Modified optimize.py to recognize complexType as a numeric type
	Modified optimize.py to evaluate BINARY_POWER opcodes when their
	  arguments are constants.
	Merged in Guido's 1.4 version of freeze
	Added -i option to force initialization of builtins for modules
	  freeze didn't find via its syntax checking.
	Added -n option to configure the number of optimization passes to
	  perform over the code.
	Added -s option to strip line numbers from resulting file.


Files that are not in the Python 1.4 distribution include:

    Programs:
	da.py		-- a disassembler
	om.py		-- a standalone optimizer
    Call either of these without any arguments to learn about their
    options.  This behaviour is consistent with freeze's.

    Modules:
	decode.py	-- Workhorse for da.py
	ltup.py		-- instruction en/decoding
	optimize.py	-- def optimCode(code): return better(code)
	opcode.py	-- transcription of 1.4 Include/opcode.h,
		defines 'opcode' as a dictionary mapping instruction
		names represented as strings to instruction numbers,
		defines 'cmp_op' as a tuple mapping the argument of a
		COMPARE_OP instruction to the Python operator as a string,
		defines HAS_ARG(number) as a function returning whether
		the instruction has a two-byte integer argument.
	magic.py	-- Contains the version string for python byte
		code this version of the optimizer expects.
	revmap.py	-- Contains a class used to change the co_code
		and co_names members of a code object.
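For readers on a modern Python, the standard library ships the same kind
of tables opcode.py describes; the spellings below (opcode.opmap,
opcode.cmp_op, opcode.HAVE_ARGUMENT) are today's stdlib names, not the
1.4-era ones, and details such as argument width have since changed:

```python
# Today's stdlib analogue of the tables opcode.py defines.
import opcode

# Instruction name (string) -> instruction number, like 'opcode' above.
load_const = opcode.opmap["LOAD_CONST"]

# COMPARE_OP argument -> Python operator string, like 'cmp_op' above.
print(opcode.cmp_op[0])  # '<'

# HAS_ARG analogue: instructions numbered >= HAVE_ARGUMENT take an argument.
def has_arg(op):
    return op >= opcode.HAVE_ARGUMENT

print(has_arg(load_const))  # True
```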

    Stuff:
    	Makefile	-- Ehrm... Feel free to ignore this file, straight
			     from xfreeze 1.3.
    	README		-- This file
    	optimtest.py	-- Something that optimizes very well.


The following optimizations are implemented in optimize.py:
 *	Redundant SET_LINENO instructions are deleted.
 	Those are the ones repeating the previous line number without any
 	jump landing in between them, and the ones immediately followed
 	by another SET_LINENO.  Optionally, all SET_LINENO instructions
 	are killed.
 *	The co_consts member is optimized.
 	The code objects of functions found here are recursively optimized.
 	Constants no longer used because of other optimizations are discarded.
 *	Variable names which are no longer used because of other optimizations
 	are stripped from the co_names member.
 *	Several operations on constants are no longer evaluated at runtime.
 	This includes some unary and binary operators (but not COMPARE_OP)
 	and the loading of tuples and lists whose members are all constant
 	(it is not possible to do this for dictionaries, unfortunately).
 	This can increase the total size of a frozen binary, as it replaces
 	a three-byte instruction by a more complex constant to marshal upon
 	its first import.  But it usually makes co_consts shorter.
 *	An instruction sequence LOAD_CONST, POP_TOP is removed.
 	As a side-effect, this strips doc strings (I personally like this).
 *	Unreachable code is removed (this is why om is 2-pass).
 *	Jumps to a jump are optimized.
 *	Jumps to the next instruction are eliminated.
 *	STORE instructions immediately followed by a load of the same
 	name are replaced by a DUP_TOP and a STORE instruction.
 	(Real dataflow analysis is as of this writing left as an exercise
 	to the reader).
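Several of the rewrites above, constant folding in particular, later grew
into CPython's own compile-time optimizer; on a modern Python the effect
is easy to see with the stdlib dis module (here the folding is done by
the compiler itself rather than by a separate tool):

```python
# Modern CPython folds constant expressions at compile time, the same
# rewrite optimize.py performs on BINARY_POWER with constant arguments.
import dis

code = compile("x = 3 ** 4", "<demo>", "exec")

# 3 ** 4 was evaluated during compilation: the folded result 81 sits in
# co_consts and is loaded directly, with no power operation at runtime.
print(81 in code.co_consts)  # True
dis.dis(code)
```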


	How good are these optimizations?

Here's what Siebren van der Zee (siebren@xs4all.nl) said on the
subject:

This partly depends on the code at hand.  'hello.py' cannot be
optimized, for example.  My timings show a speedup of 3% to 5%
for frozen versions of 'om'.  The pystone benchmark however is
only sped up by 1.5%.  On the other hand, I have seen speedups
of 6-7% for short running code.  I suspect this is because the
loading phase of such programs is relatively longer than that
for long-lived processes.  I assume the big difference is simply
marshaling the raw bytes into objects.  And since the output of
om is about 15-20% smaller than its input...

Now these timings may not impress everybody, but then you can
gain another 1% if you are willing to sacrifice the line number
administration, or implement real dataflow analysis.

