This directory contains my functor module.  With it you can easily and
efficiently wrap any callable object together with a partial (or
complete) parameter list to form another callable object.  The new
function object can then be called with the missing parameters.

The functor module exports two primary interfaces: functor and
xapply.

   xapply(func, *args, **kws)
        I like to think of this one as a lazy apply.  It works
        pretty much like apply does, but rather than return the result
        of the function call, it returns an object that you can call
        later.

   functor(func, args, kws)
        Wraps 'func' into a new callable object using 'args' and 'kws'
        to partially construct the argument list.  The returned object
        can be called with the remaining arguments.


Conceptually, at least, here's what it gives you.

   def xapply(func, *args, **kws):
        return functor(func, args, kws)

   def functor(func, args, kws):
        return _Functor(func, args, kws).__call__

   class _Functor:
        def __init__(self, func, args, kws):
            self.func = func
            self.args = args
            self.kws = tuple(kws.items())

        def __call__(self, *args, **kws):
            args = self.args + args
            for k, v in self.kws:
                kws[k] = v
            return apply(self.func, args, kws)
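The same conceptual code can be written as a runnable modern-Python sketch (this is an illustration, not the module's implementation): *args/**kws replaces apply(), and the stored keywords win over the call-time ones, mirroring the loop above.  The volume function is an invented example.

```python
class _Functor:
    def __init__(self, func, args, kws):
        self.func = func
        self.args = args
        self.kws = tuple(kws.items())

    def __call__(self, *args, **kws):
        merged = dict(kws)
        merged.update(self.kws)      # stored keywords win, as in the loop above
        return self.func(*(self.args + args), **merged)

def functor(func, args, kws):
    return _Functor(func, args, kws).__call__

def xapply(func, *args, **kws):
    return functor(func, args, kws)

# Invented example: freeze the first two arguments of a three-argument function.
def volume(length, width, height):
    return length * width * height

half_done = xapply(volume, 2, 3)
print(half_done(4))                  # volume(2, 3, 4) -> 24
```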


Some Analysis:

This works reasonably well, but it is quite inefficient both in terms
of memory usage and especially in terms of runtime.  A single instance
of a callable generated using this mechanism looks like this in
memory:

        MethodObject:   5       (returned by __call__)
        InstanceObject: 4       (holds func, args, and kws)
        Dictionary:     6       (used by the instance)
           entries:     3*s(3)  (s(3) = 7, it takes 7 entries to hold 3 objects)
        TupleObject:    2+n     (to hold the n args)
        TupleObject:    2+k     (to hold the k keyword pairs)
        TupleObject:    (2+2)*k (the pairs)
        malloc:         6       (assuming k=0)

All sizes are in words and each line represents one malloc'ed chunk of
memory.  With malloc overhead at just one word (yeah right ;), and even
when there are no arguments and no keyword arguments, we are looking
at 5+4+6+21+2+4+0+6 = 48 words, or 192 bytes, just so that we can call a
function without passing it any arguments!

I'll leave the detailed runtime analysis to someone else, but my
experiments have shown an increase in runtime that is in the area of
four to five times that of a simple function call.

My implementation, however, is not nearly as straightforward.  (I would
love to see the look on Guido's face when he first sees my code ;)  But
it results in a callable object equivalent in functionality to the one
described above that runs in only twice the time of a normal function
call and uses:

        FunctionObject: 7       (a newly constructed function object)
        CodeObject:     11      (a newly constructed code object)
        TupleObject:    2+4     (to hold the constants in the code object)
        TupleObject:    2+n     (to hold the n args)
        TupleObject:    2+k     (to hold the k keyword pairs)
        TupleObject:    (2+2)*k (the pairs)
        malloc:         5       (assuming k=0)

or (with the same example given above) 7+11+6+2+2+0+5=33 words.

The end result, in most cases, is a callable object that uses about 30%
less memory (in one fewer malloc chunk) than the one shown above, and
greatly decreases the function call overhead from an intolerable four
to five times that of a simple function call to merely two times that
of a simple function call (YMMV).  Of course, the performance could be
even better if more were done in C, but this is nearly standard Python.
I really doubt that the memory savings are going to convince anyone of
anything, but the function call speed-ups could help out your Tkinter
programs (or any program that uses a lot of callbacks) quite a bit.


The Implementation:

This functionality does come at a price, however.  It requires the
"new" module, which is turned off in the default configuration.
Unfortunately, it also requires a small patch (included) to the
new module to enable new.function(), which was broken a couple of
releases ago.

The functor module is just a straightforward application of a very
simple yet powerful play on Python's implementation of function
objects.  I have encapsulated this concept in what I am now calling
the template module.  The template module exports a function
called

        patch_consts(func, name=None, globals=None, locals=None)

which takes in a function object and returns one that is very
similar ;)

Here is a really simple example of what you can do with it:

    def _map_template(arg):
        return '__map__'('__func__', arg)

    def make_mapper(func):
        name = 'map_' + func.__name__
        return patch_consts(_map_template, name, globals(), locals())

The function _map_template is obviously useless by itself.  Running it
will certainly result in an error.  Luckily, patch_consts() knows just
what to do.  Given a "function template" and an environment (local and
global dictionaries), patch_consts() will generate a new function from
the template by substituting each "__funny__" looking string constant
with the result of eval()ing the trimmed-down string in the given
environment!
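On modern CPython (3.8+, where code objects grew a .replace() method), the substitution can be sketched roughly like this.  This is a hypothetical re-implementation for illustration, not the template module's actual code, and str.upper stands in for string.upper:

```python
import types

def patch_consts(func, name=None, globals=None, locals=None):
    # Sketch: copy the template's code object, replacing every "__xxx__"
    # string constant with the result of eval()ing "xxx" in the given
    # environment.
    env = dict(globals or {})
    env.update(locals or {})
    new_consts = []
    for const in func.__code__.co_consts:
        if (isinstance(const, str) and const.startswith('__')
                and const.endswith('__')):
            const = eval(const[2:-2], env)   # '__map__' -> eval('map', env)
        new_consts.append(const)
    code = func.__code__.replace(co_consts=tuple(new_consts),
                                 co_name=name or func.__name__)
    return types.FunctionType(code, func.__globals__, name or func.__name__)

def _map_template(arg):
    return '__map__'('__func__', arg)

def make_mapper(func):
    name = 'map_' + func.__name__
    return patch_consts(_map_template, name, globals(), locals())

seq_upper = make_mapper(str.upper)       # modern spelling of string.upper
print(list(seq_upper('abc')))            # map() returns an iterator in Python 3
```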

So,

    seq_upper1 = make_mapper(string.upper)

is nearly equivalent to

    def seq_upper2(arg):
        return map(string.upper, arg)

but there are some really important differences.  In particular, the
make_mapper version generates fewer instructions and runs much faster.
A quick look at the disassembled code will show why:

Here is the hand written version.  It takes 25 instruction slots and
does two (expensive) LOAD_GLOBALs.

>>> dis.disco(seq_upper2.func_code)
          0 SET_LINENO          1

          3 SET_LINENO          2
          6 LOAD_GLOBAL         0 (map)
          9 LOAD_GLOBAL         1 (string)
         12 LOAD_ATTR           2 (upper)
         15 LOAD_FAST           0 (arg)
         18 CALL_FUNCTION       2
         21 RETURN_VALUE   
         22 LOAD_CONST          0 (None)
         25 RETURN_VALUE
>>>

The template generated version however looks like this:

>>> dis.disco(seq_upper1.func_code)
          0 SET_LINENO          1

          3 SET_LINENO          2
          6 LOAD_CONST          1 (<built-in function map>)
          9 LOAD_CONST          2 (<built-in function upper>)
         12 LOAD_FAST           0 (args)
         15 CALL_FUNCTION       2
         18 RETURN_VALUE   
         19 LOAD_CONST          0 (None)
         22 RETURN_VALUE   

...only 22 instruction slots and *no* LOAD_GLOBALs.

What patch_consts() did was to replace the string constants from the
template function with much more useful "constants".  It did this by
first making a copy of the function template and its associated code
object.  It then looked through the new list of constants for any
strings of the form "__xxx__".  After stripping off the leading and
trailing double underscores, it fed the string to eval().  The result
was then inserted in place of the original constant.

So, strictly speaking, the two functions are not the same.  If the
bindings of either map or string.upper change, the hand-written
version will track those changes; the template-generated one will
not.  Eliminating the LOAD_GLOBALs accounts for the decrease in
runtime.
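The binding difference can be demonstrated without the template machinery at all, using the classic default-argument trick to freeze a value early (the names here are invented for the demo):

```python
def hand_written(arg):
    # Looks up g_func in the globals at every call (late binding),
    # just like the LOAD_GLOBAL version above.
    return list(map(g_func, arg))

def make_bound(func):
    def bound(arg, _func=func):
        # The default argument freezes func's current value now
        # (early binding), like a patched-in constant.
        return list(map(_func, arg))
    return bound

g_func = str.upper
frozen = make_bound(g_func)
g_func = str.lower            # rebinding affects only the late-bound version
print(hand_written("Ab"))     # ['a', 'b']
print(frozen("Ab"))           # ['A', 'B']
```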

For more examples of how to abuse patch_consts(), take a look at the
code in functor.py.  Give me a while and I'm sure I'll come up with
more interesting things to do with it too.

--
Donald Beaudry                                         Silicon Graphics
Compilers/MTI                                          1 Cabot Road
donb@sgi.com                                           Hudson, MA 01749
                  ...So much code, so little time...

