Python Programming, news on the Voidspace Python Projects and all things techie.

mock 0.7.2 released

There's a new minor release of mock, version 0.7.2 with two bugfixes in it.

mock is a Python library for simple mocking and patching (replacing objects with mocks during test runs). mock is designed for use with unittest, based on the "action -> assertion" pattern rather than "record -> replay".

The full changelog for this release is:

BUGFIX: instances of list subclasses can now be used as mock specs

BUGFIX: MagicMock equality / inequality protocol methods changed to use the default equality / inequality. This is done through a side_effect on the mocks used for __eq__ / __ne__

The most important change is the second one, which fixes an oddity with the way equality comparisons with MagicMock work(ed).

With the MagicMock class a lot of the useful Python protocol methods (magic methods) are hooked up and preconfigured either to return a useful value or to be MagicMocks themselves. __eq__ and __ne__ are allowed to return arbitrary objects, so they were set up as mocks whose behaviour you could configure yourself (through side_effect) or whose return value you could set (through return_value).

Here's how it works in mock 0.7.1:

>>> from mock import MagicMock
>>> m = MagicMock()
>>> m == 3
<mock.Mock object at 0x58c770>
>>> m.__eq__.call_count
1
>>> m.__eq__.return_value = False
>>> m == 3
False
>>> m.__eq__.call_count
2

The issue with this, as you can see above, is that MagicMock() == anything returns a mock object, which by default has a boolean value of True. This has the following effect:

>>> m = MagicMock()
>>> if m == 3:
...     print 'Uhm...'
...
Uhm...

Unfortunately this is how unittest.TestCase.assertEqual (and all sorts of other code) is implemented. This means that by default MagicMock would pass an assertEqual test against any object. This made it hard to write useful asserts with MagicMock .

The change is that MagicMock now has __eq__ and __ne__ setup with side_effect functions that implement the default equality / inequality behaviour, based on identity. You can still customise the behaviour in the same way as before if you want.
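As a sketch of what such a side_effect function might look like, here is a hypothetical identity-based implementation attached to a mock's __eq__ by hand (the helper name is mine, not mock's actual source; shown with unittest.mock, where the standalone library spells the import from mock import MagicMock):

```python
from unittest.mock import MagicMock  # standalone library: from mock import MagicMock

def make_default_eq(mock):
    # Identity-based equality, mirroring the default behaviour the
    # release notes describe. A sketch, not mock's actual source.
    def default_eq(other):
        return mock is other
    return default_eq

m = MagicMock()
m.__eq__.side_effect = make_default_eq(m)
print(m == m)  # True
print(m == 3)  # False
```

Because the custom behaviour lives in side_effect, setting return_value on m.__eq__ still lets you override it in the usual way.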

With mock 0.7.2:

>>> from mock import MagicMock
>>> m = MagicMock()
>>> m == 3
False
>>> m.__eq__.call_count
1
>>> m.__eq__.return_value = True
>>> m == 3
True
>>> m.__eq__.call_count
2

I've also been working on the next major release of mock, which will be 0.8. There'll be an alpha shortly, which will be by no means feature complete but will give you a chance to try out (and find bugs with / complain about) some of the major new features.

Just to whet your appetite, here is the changelog (so far). The features will need a blog entry of their own to explain, and the documentation is not yet updated, but some of these are pretty cool:

- patch and patch.object now create a MagicMock instead of a Mock by default
- Implemented auto-speccing (recursive, lazy speccing of mocks with mocked signatures for functions/methods). Use the autospec argument to patch
- Added the create_autospec function for manually creating 'auto-specced' mocks
- The patchers (patch, patch.object and patch.dict), plus Mock and MagicMock, take arbitrary keyword arguments for configuration
- New mock method configure_mock for setting attributes and return values / side effects on the mock and its attributes
- Protocol methods on MagicMock are magic mocks, and are created lazily on first lookup. This means the result of calling a protocol method is a MagicMock instead of a Mock as it was previously
- Added ANY for ignoring arguments in assert_called_with calls
- Addition of the call helper object
- In Python 2.6 or more recent, dir on a mock will report all the dynamically created attributes (or the full list of attributes if there is a spec) as well as all the mock methods and attributes
- Module level FILTER_DIR added to control whether dir(mock) filters private attributes. True by default. Note that vars(Mock()) can still be used to get all instance attributes and dir(type(Mock())) will still return all the other attributes (irrespective of FILTER_DIR)
- Added the Mock API (assert_called_with etc) to functions created by mocksignature
- Private attributes _name, _methods, _children, _wraps and _parent (etc) renamed to reduce the likelihood of clashes with user attributes
- Removal of the deprecated patch_object
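To give a flavour of the new configuration keywords, here's a sketch of the configure_mock style of configuration (illustrated with unittest.mock, where the standalone library spells the import from mock import Mock):

```python
from unittest.mock import Mock  # standalone library: from mock import Mock

m = Mock()
# Plain names set attributes on the mock; dotted names configure
# child mocks (their return values and side effects).
m.configure_mock(**{
    'attribute': 3,
    'method.return_value': 'result',
    'other.side_effect': KeyError,
})
print(m.attribute)  # 3
print(m.method())   # result
```

The same keyword arguments can also be passed straight to the Mock / MagicMock constructors and to the patchers.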

namedtuple and generating function signatures

Kristjan Valur, the chief Python developer at CCP games (creators of Eve Online), has posted an interesting blog entry about the use of exec in namedtuple.

namedtuple is a relatively recent, and extraordinarily useful, part of the Python standard library. It provides tuple subclasses with access through named fields instead of just by index.

Kristjan's blog entry is cool because of its opening words alone: In our port of Python 2.7 to the PS3 console... This is almost certainly related to the recently announced EVE Online FPS Console Game DUST 514.

As the blog entry goes on to point out, namedtuple is implemented by generating and exec'ing code for the classes it creates. I have a natural developer's distrust of exec, but as Raymond pointed out in a recent talk: exec'ing code is not a security risk, exec'ing untrusted code is. Whether or not you like this particular use of exec, it is a core language feature and I'm surprised that namedtuple was the only thing that broke when they removed it. (As namedtuple and a couple of additional uses discussed below demonstrate, exec is also a perfectly valid metaprogramming technique.)

All that aside, it is interesting how much of the core functionality of namedtuple (with lots of the bells and whistles missing) you can get in just 11 lines of Python:

from operator import itemgetter

def namedtuple2(name, names):
    def __new__(cls, *values):
        assert len(values) == len(names)
        return tuple.__new__(cls, values)
    def __repr__(self):
        return '%s%s' % (name, tuple.__repr__(self))
    attrs = {'__new__': __new__, '__repr__': __repr__}
    for index, _name in enumerate(names):
        attrs[_name] = property(itemgetter(index))
    return type(name, (tuple,), attrs)

>>> Name = namedtuple2('Name', 'one two three'.split())
>>> n = Name(1, 2, 3)
>>> n
Name(1, 2, 3)
>>> n.one
1
>>> n.two
2
>>> n.three
3
>>> n = Name(1, 2, 3, 4)
Traceback (most recent call last):
  ...
AssertionError

The most obviously missing functionality here is keyword argument support in both the object constructor and the repr. For a full implementation of namedtuple without using exec see the patch here: http://bugs.python.org/issue3974

I would marginally prefer a version that didn't use exec, but implementation maintainability is a much more important consideration and Raymond (who is the creator of namedtuple ) feels that the current implementation is better from that point of view.

Clearly namedtuple can be implemented without the use of exec (or eval), however some of the functionality in the decorator module by Michele Simionato can't. The mocksignature functionality in the mock module suffers from the same problem and gets round it using the same technique as the decorator module.

What they're both doing is building functions with the same signature as another function; those generated functions then delegate to the original. Both the decorator module and mocksignature do this in order to provide a new function whose call signature matches the function it stands in for.

If you don't care about the call signature then there is an easy pattern:

def function(*args, **kwargs):
    return delegated_function(*args, **kwargs)

When function is called it calls delegated_function with exactly the same arguments that function was called with. The issue is that, from an introspection point of view, you have now lost the call signature. Generating the code (or an AST, which amounts to the same thing) and then executing it seems to be the only way round this problem in Python. This does point to a weakness in the language, but I can't even imagine what the "missing language feature" should look like, so I don't have any solutions to offer.

The issue is that named arguments become local variables in the scope of the function. To write code that uses those arguments you need to know their names - which you only know at runtime. You could look them up with locals(), but that is very bad for other implementations (for both PyPy and IronPython, accessing locals() switches off JIT optimisations). Even if you could build a function with a runtime-specified signature, you wouldn't be able to provide generic code that uses those arguments (passing them on to the delegated function); that code has to be generated too.
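Here's a minimal sketch of the generate-and-exec technique (function and variable names are my own, not the decorator module's or mocksignature's actual implementation):

```python
import inspect

def make_delegate(name, argnames, target):
    # Generate source for a function whose signature lists the argument
    # names explicitly, then exec it. Introspection now reports the real
    # signature, while the body just forwards everything to the target.
    args = ', '.join(argnames)
    src = 'def %s(%s):\n    return _target(%s)\n' % (name, args, args)
    namespace = {'_target': target}
    exec(src, namespace)
    return namespace[name]

add = make_delegate('add', ['a', 'b'], lambda a, b: a + b)
print(inspect.signature(add))  # (a, b)
print(add(2, 3))               # 5
```

Unlike the *args, **kwargs pattern, inspect.signature (or getargspec) on the generated function reports the named arguments.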

Nothing is Private: Python Closures (and ctypes)

As I'm sure you know Python doesn't have a concept of private members. One trick that is sometimes used is to hide an object inside a Python closure, and provide a proxy object that only permits limited access to the original object.

Here's a simple example of a hide function that takes an object and returns a proxy. The proxy allows you to access any attribute of the original, but not to set or change any attributes.

def hide(obj):
    class Proxy(object):
        __slots__ = ()
        def __getattr__(self, name):
            return getattr(obj, name)
    return Proxy()

Here it is in action:

>>> class Foo(object):
...     def __init__(self, a, b):
...         self.a = a
...         self.b = b
...
>>> f = Foo(1, 2)
>>> p = hide(f)
>>> p.a, p.b
(1, 2)
>>> p.a = 3
Traceback (most recent call last):
  ...
AttributeError: 'Proxy' object has no attribute 'a'

After the hide function has returned the proxy object the __getattr__ method is able to access the original object through the closure. This is stored on the __getattr__ method as the func_closure attribute (Python 2) or the __closure__ attribute (Python 3). This is a "cell object" and you can access the contents of the cell using the cell_contents attribute:

>>> cell_obj = p.__getattr__.func_closure[0]
>>> cell_obj.cell_contents
<__main__.Foo object at 0x...>

This makes hide useless for actually preventing access to the original object. Anyone who wants access to it can just fish it out of the cell_contents .
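The same fishing works on Python 3, where the attribute is spelled __closure__. Here's a self-contained sketch reusing the hide function from above:

```python
def hide(obj):
    class Proxy(object):
        __slots__ = ()
        def __getattr__(self, name):
            return getattr(obj, name)
    return Proxy()

class Foo(object):
    def __init__(self, a, b):
        self.a, self.b = a, b

f = Foo(1, 2)
p = hide(f)

# Python 3: __closure__ instead of func_closure
cell = p.__getattr__.__closure__[0]
print(cell.cell_contents is f)  # True
```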

What we can't do from pure Python is *set* the contents of the cell. But then nothing is really private in Python - or at least not in CPython.

There are two Python C API functions, PyCell_Get and PyCell_Set, that provide access to the contents of closures. From ctypes we can call these functions and both introspect and modify values inside the cell object:

>>> import ctypes
>>> ctypes.pythonapi.PyCell_Get.restype = ctypes.py_object
>>> py_obj = ctypes.py_object(cell_obj)
>>> f2 = ctypes.pythonapi.PyCell_Get(py_obj)
>>> f2 is f
True
>>> new_py_obj = ctypes.py_object(Foo(5, 6))
>>> ctypes.pythonapi.PyCell_Set(py_obj, new_py_obj)
0
>>> p.a, p.b
(5, 6)

As you can see, after the call to PyCell_Set the proxy object is using the new object we put in the closure instead of the original. Using ctypes may seem like cheating, but it would only take a trivial amount of C code to do the same.

Two notes about this code:

- It isn't (of course) portable across different Python implementations.
- Don't ever do this; it's for illustration purposes only!

Still, it's an interesting poke around the CPython internals with ctypes. I have heard of one potential use case for code like this: it is alleged that at some point Armin Ronacher was using a similar technique in Jinja2 to improve tracebacks. (Tracebacks from templating languages can be very tricky because the compiled Python code usually bears a quite distant relationship to the original text-based template.) Just because Armin does it doesn't mean you can though...
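One more footnote on cells: newer CPython releases (3.7 onwards, to the best of my knowledge) make cell_contents writable from pure Python, so there the ctypes dance isn't even needed:

```python
def make_counter():
    count = 0
    def bump():
        nonlocal count
        count += 1
        return count
    return bump

bump = make_counter()
bump()
bump()
cell = bump.__closure__[0]
print(cell.cell_contents)  # 2
cell.cell_contents = 100   # plain assignment works in newer CPython
print(bump())              # 101
```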

Using patch.dict to mock imports

I had an email from a mock user asking if I could add a patch_import to mock that would patch __import__ in a namespace to replace the result of an import with a Mock.

It's an interesting question, with a couple of caveats:

- Don't patch __import__. If you must monkey around with imports, use a PEP 302 loader.
- Wanting to mock an import invariably means you're doing dynamic imports, most probably local imports inside a function. This is sometimes done to prevent circular dependencies, for which there is usually a much better solution (refactor the code), or to delay an "up front cost". The latter can also be solved in better ways than an unconditional local import: store the module as a class or module attribute and only do the import the first time it's needed.
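The "store the module as an attribute" alternative looks something like this (a sketch with illustrative names, using json as a stand-in for some expensive import):

```python
class Client(object):
    _json = None  # class-level cache for the lazily imported module

    def dumps(self, obj):
        # Import on first use only; every later call reuses the cache.
        if Client._json is None:
            import json
            Client._json = json
        return Client._json.dumps(obj)

print(Client().dumps([1, 2]))  # [1, 2]
```

Because the module ends up as an ordinary attribute, tests can also replace it directly without touching the import machinery at all.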

That aside there is a way to use mock to affect the results of an import, and it has nothing to do with patching out __import__ . Importing fetches an object from the sys.modules dictionary. Note that it fetches an object, which need not be a module. Importing a module for the first time results in a module object being put in sys.modules , so usually when you import something you get a module back. This need not be the case however.

This means you can use patch.dict to temporarily put a mock in place in sys.modules . Any imports whilst this patch is active will fetch the mock. When the patch is complete (the decorated function exits, the with statement body is complete or patcher.stop() is called) then whatever was there previously will be restored safely.

Here's an example that mocks out the 'fooble' module.

>>> from mock import patch, Mock
>>> import sys
>>> mock = Mock()
>>> with patch.dict('sys.modules', {'fooble': mock}):
...     import fooble
...     fooble.blob()
...
<mock.Mock object at 0x519b50>
>>> assert 'fooble' not in sys.modules
>>> mock.blob.assert_called_once_with()

As you can see the import fooble succeeds, but on exit there is no 'fooble' left in sys.modules .

This also works for the from module import name form:

>>> mock = Mock()
>>> with patch.dict('sys.modules', {'fooble': mock}):
...     from fooble import blob
...     blob.blip()
...
<mock.Mock object at 0x...>
>>> mock.blob.blip.assert_called_once_with()

With slightly more work you can also mock package imports:

>>> mock = Mock()
>>> modules = {'package': mock, 'package.module': mock.module}
>>> with patch.dict('sys.modules', modules):
...     from package.module import fooble
...     fooble()
...
<mock.Mock object at 0x...>
>>> mock.module.fooble.assert_called_once_with()

Unfortunately it seems that using patch.dict as a test decorator on sys.modules interferes with the way nosetests collects tests. nosetests does some manipulation of sys.modules (along with sys.path manipulation) and using patch.dict with sys.modules can cause it to not find tests. Using patch.dict as a context manager, or using the patcher start and stop methods, works around this by taking a reference to sys.modules inside the test rather than at import time. (Using patch.dict as a decorator takes a reference to sys.modules at import time, it doesn't do the patching until the test is executed though.) This is an intriguing bug in nosetests , so I may see if I can reproduce and diagnose it.
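For reference, the start / stop form mentioned above looks like this (shown with unittest.mock; the standalone library import is from mock import patch, Mock):

```python
import sys
from unittest.mock import Mock, patch  # standalone: from mock import Mock, patch

mock = Mock()
patcher = patch.dict('sys.modules', {'fooble': mock})
patcher.start()  # sys.modules is only looked up here, at test run time
try:
    import fooble
    fooble.blob()
finally:
    patcher.stop()  # restores whatever (if anything) was there before

print('fooble' in sys.modules)  # False
```

Because the dictionary lookup happens inside start() rather than at decoration time, this form sidesteps the nosetests interaction described above.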
