Multi-Line Lambdas in Python Using the With Statement

Python does not have multi-line lambdas because Guido dislikes them aesthetically. However, with just a bit of introspection, code like this is possible:

>>> with each([12, 14, 16]):
...     def _(x):
...         print x
...         print x + 1
...
12
13
14
15
16
17

I'll say a bit about my motivation for creating code like this, show how easy it is to write, and then I'll argue that code like this is both pythonic and aesthetically appealing in some circumstances.

A Mystery Solved (By Holmes Himself!)

When I first saw code using the with statement, I hoped it could be used somewhat like Haskell's where clause or Ruby's blocks. When I dug into the spec, I was disappointed to discover that if it was possible, it wasn't easy, and I pushed the thought aside.

That was a couple of years ago, and I didn't give it a moment's thought until I saw a blog post by Richard Jones that uses a with statement in exactly the way I had thought impossible. I spent a few hours trying to figure it out, but I was stumped, so I put up a question on Stack Overflow to see if somebody could show me how he did it.

Within a few hours, Alex Martelli himself chimed in with a wonderful solution. The gist of the answer is that you can use the inspect module to access the context manager's calling scope, and figure out what variables have been defined between its __enter__ and __exit__ functions. I'm glad I asked aloud, because even if I had stumbled close to the solution, I surely wouldn't have come up with one as complete as his.
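To get a feel for the kind of frame introspection the solution relies on, here's a tiny sketch (names are hypothetical, and it depends on CPython exposing caller frames via the inspect module) of reading a caller's local variables:

```python
import inspect

def peek_at_caller(name):
    # Walk one frame up the call stack and read a local
    # variable defined in the calling function.
    caller = inspect.currentframe().f_back
    return caller.f_locals.get(name)

def outer():
    secret = "hello"
    return peek_at_caller("secret")

# outer() returns "hello"
```

A context manager's __enter__ and __exit__ can do exactly this kind of peeking at the frame that contains the with statement.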

The How

Once I had Alex's proof of concept code in hand, I went to work making it do what I'd had in my head so long ago. In about an hour, I was able to write code that looks like this:

@accepts_block
def each(iterable, block):
    for i in iterable:
        block(i)

with each(["twelve", "fourteen", "sixteen"]):
    def _(x):
        print x

@accepts_block
def bmap(arr, block):
    return map(block, arr)

with bmap([1, 2, 3]) as foo:
    def _(x):
        return (float(x) + 1) / 2

print foo # [1.0, 1.5, 2.0]

What you see above are two functions that use a decorator to give them access to the function defined within the with block. The decorator passes the block to the decorated function as its last argument, much as Ruby does.

To understand how this happens, you need to know how context managers work. Context managers consist of a class with __enter__ and __exit__ methods which are called upon entering the with block and upon exiting, just as you'd expect.
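As a quick refresher, here's a minimal (hypothetical) context manager showing when those two methods fire:

```python
events = []

class Announce(object):
    def __enter__(self):
        events.append("enter")   # runs at the top of the with block
        return self              # bound to the `as` target, if any

    def __exit__(self, exc_type, exc_value, traceback):
        events.append("exit")    # runs on the way out, even after an exception
        return False             # don't suppress exceptions

with Announce():
    events.append("body")

# events is now ["enter", "body", "exit"]
```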

Alex's solution involves scanning the scope of the calling function from the __enter__ and __exit__ methods, and pulling out the differences between them. These differences will be all the variables that were defined in the with block. A sketch:

class FindInteresting(object):
    def __enter__(self):
        f = inspect.currentframe(1)
        self.already_defined = dict(f.f_locals)

    def __exit__(self, *exc_info):
        f = inspect.currentframe(1)
        # pick out the differences between f.f_locals and self.already_defined

When we pick out the differences between the two, we need to be careful to check for names that have been redefined so that we don't miss out on new functions that reuse old names.

def __exit__(self, *exc_info):
    f = inspect.currentframe(1)
    interesting = {}
    for n in f.f_locals:
        newf = f.f_locals[n]
        if n not in self.already_defined:
            interesting[n] = newf
            continue
        anf = self.already_defined[n]
        if id(newf) != id(anf):
            interesting[n] = newf

After this function has run, interesting is a dictionary which (probably) contains all the names and values of the variables that have been redefined in the with block.

Because we have to rely on an id check to determine whether a name has been redefined, and Python sometimes caches objects in memory, our function can be fooled. In the following case, interesting will not detect x even though it's being redefined: CPython caches the low integers, so id(x) is the same for both assignments of x.

x = 1
with FindInteresting():
    x = 1

In general, the CPython runtime is not aggressive about caching, but you should know that this possibility exists. If you use this technique, I recommend being strict about checking only newly defined functions, since there's no way to be sure you haven't missed a redefined name.
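You can see the caching directly in CPython (a quick sanity check, not part of the article's code):

```python
x = 1
first_id = id(x)
x = 1                # "redefined", but CPython hands back the same cached int
assert id(x) == first_id

a = [1]
b = [1]
assert a is not b    # lists are not cached: two objects, two different ids
```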

To make the teaser code at the top of the article work, I just wrapped Alex's code into a decorator that returned a context manager, then called the function being decorated with the definitions that we found in the interesting dictionary. The context manager's __call__ function gets overridden to allow you to pass in arguments for the function being decorated.

def accepts_block(f):
    class BlockContextManager(object):
        def __call__(self, *args, **kwargs):
            self.thefunction = functools.partial(f, *args, **kwargs)
            return self

        def __enter__(self):
            # do Alex's magic, just as above

        def __exit__(self, *exc_info):
            # make the interesting dictionary, just as above
            if len(interesting) == 1:
                block = list(interesting.itervalues())[0]
                assert isinstance(block, type(lambda: None))
                self.thefunction(block)

    return BlockContextManager()

It looks complicated and nested, but all it's doing is saving the function and all its arguments, grabbing the definitions from the with block, making sure there's only one definition and it's a function, then tacking it onto the end of the arguments list for the function and calling it. Phew.
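Putting those pieces together, here's a condensed, runnable sketch of the same idea in modern Python (my own translation, not the original code: it uses sys._getframe, which is CPython-specific, and handles only the case without an as clause):

```python
import functools
import sys

def accepts_block(f):
    # Decorator: the decorated name becomes a context manager factory.
    # The function defined inside the `with` block is captured and passed
    # to f as its last argument.
    class BlockContextManager(object):
        def __call__(self, *args, **kwargs):
            self.thefunction = functools.partial(f, *args, **kwargs)
            return self

        def __enter__(self):
            # Snapshot the caller's namespace before the block runs.
            self.already_defined = dict(sys._getframe(1).f_locals)

        def __exit__(self, *exc_info):
            caller = sys._getframe(1)
            interesting = {}
            for name, value in caller.f_locals.items():
                # New name, or a name rebound to a different object.
                if (name not in self.already_defined
                        or value is not self.already_defined[name]):
                    interesting[name] = value
            if len(interesting) == 1:
                block, = interesting.values()
                assert callable(block)
                self.thefunction(block)

    return BlockContextManager()

@accepts_block
def each(iterable, block):
    for i in iterable:
        block(i)

collected = []
with each([1, 2, 3]):
    def _(x):
        collected.append(x * 10)

# collected is now [10, 20, 30]
```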

The code above handles the case where you don't need to store the result of the function being decorated:

@accepts_block
def each(iterable, block):
    for i in iterable:
        block(i)

But what if we want to store the result? Turns out, we can further abuse the with block by hijacking its as clause. Because a variable defined in the as clause gets detected by Alex's code, we can use the inspect module to change that variable so that after the with block it reflects the result of our computation.
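The reason writing to the frame can work at all is that, at module scope, a CPython frame's f_locals is the module's real globals dictionary (inside a function, f_locals is generally a snapshot, and writes to it are not reliably seen). A quick demonstration of the assumption this trick leans on:

```python
import sys

# At module scope, the current frame's f_locals IS the globals dict,
# so writing into it creates a real module-level name.
frame = sys._getframe()
assert frame.f_locals is globals()

frame.f_locals["smuggled"] = 42
assert smuggled == 42
```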

First we check whether we probably have both a block and a variable from the as clause; if so, we reach into the calling frame and store our result there:

def __exit__(self, *exc_info):
    # exactly as before; frame = inspect.currentframe(1)
    if len(interesting) == 1:
        # exactly the same as before
    elif len(interesting) == 2:
        block = None
        savename = None
        for n, v in interesting.iteritems():
            if isinstance(v, type(lambda: None)):
                block = v
            else:
                savename = n
        assert savename and isinstance(block, type(lambda: None))
        frame.f_locals[savename] = self.thefunction(block)

Which lets us do this:

@accepts_block
def bmap(iterable, block):
    return map(block, iterable)

with bmap([1, 2, 3]) as result:
    def _(x):
        return x ** 2

print result # [1, 4, 9]

This time, we're really taking a leap by assuming that if we find a callable and one other variable, the variable is where we want to store our result. This can lead to somewhat unexpected results:

with bmap([1, 2, 3]):
    not_a_result = 12
    def _(x):
        return x ** 2

print not_a_result # [1, 4, 9] instead of 12

That's my extremely long-winded description of how to abuse the with statement. If you want to see the full function and the super-lame test code I wrote, you can head on over to github and check it out.

Aesthetics

It should be clear from all of the disclaimers I've had to put into this article that this technique is of limited use in Python as it stands today. I'd like to make an argument that it suggests some nice syntactic sugar for python to support someday, while remaining totally ignorant of the actual difficulties of putting it into the language.

To do so, I'll start by posting the motivating example for decorators from the relevant PEP. It argues that this code:

def foo(cls):
    pass
foo = synchronized(lock)(foo)
foo = classmethod(foo)

is not nearly as readable as this code:

@classmethod
@synchronized(lock)
def foo(cls):
    pass

The main problem with the readability of the first snippet is not that it requires 2 redefinitions and 4 repetitions of foo . Rather, the main problem is that it places the cart before the horse by putting the function body ahead of the declarations that are required to understand it.

Similarly, when we define callback functions before we use them, we're required to define the function body before the place where it will be actually used. Often, we see:

def handle_click(self):
    foo = self.flim()
    if foo:
        self.flam(foo)

onClick(handle_click)

When it would be clearer to write:

with onClick():
    def _(self):
        foo = self.flim()
        if foo:
            self.flam(foo)

Which I find much more appealing.

Conclusion

I expect that there's no way syntax like this could be officially supported by Python, both because of syntactic constraints and the BDFL's aesthetic concerns. I do think it is a neat exercise in pushing the Python interpreter and syntax past where they want to go, and I hope that it gives some food for thought on an interesting Python syntax.

I'm excited to see where Richard Jones goes with his project, and the design choices that he makes in it, since he's pushing the boundaries of Python design. Many thanks to him and Alex Martelli for sending me down quite an enjoyable path.

Finally, in case you missed it above, go ahead and take a look at the code on github.

If you want to leave a comment, I suggest leaving it on reddit.

Update:

Someone has taken this technique a bit farther, using some bytecode hackery.