Often we work on large static datasets with code that takes a long time to load. Development then consists of adding small bits of code on top of that baseline, but the time needed to reload the data for every test run can be daunting.

In such cases we can load all the data we want and then fork the process; the child process inherits all the variables from the parent, so you can load new snippets of code on top of that baseline to do your development.

For this I created the forkr lib, which you use like this:

    import forkr
    forkr.main(globaldata)

The global data will be made available to the child process.
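To make the idea concrete, here is a minimal sketch of the fork-based workflow using plain os.fork (this is an illustration of the technique, not forkr's actual implementation; load_expensive_data is a stand-in for your slow data load, and it is POSIX-only):

    import os

    def load_expensive_data():
        # stand-in for a slow load, e.g. parsing a huge file
        return {"rows": list(range(1000))}

    data = load_expensive_data()        # the slow part, done once
    pid = os.fork()
    if pid == 0:
        # child: the loaded data is already in memory, no reload needed
        total = sum(data["rows"])
        os._exit(0 if total == 499500 else 1)
    _, status = os.waitpid(pid, 0)
    child_ok = (os.WEXITSTATUS(status) == 0)
    print("child saw preloaded data:", child_ok)

Each new child starts from the same preloaded baseline, so you pay the loading cost once per session instead of once per test run.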

I find running nose test modules in Python very useful. The only problem is getting the global data in there, so I created a simple plugin, forkr.DataPlugin. It overrides prepareTestCase to install a __global_test_data__ variable into each nose test module that you load, so your tests can access the preloaded data.
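The effect of that hook can be illustrated without nose itself. The sketch below (hypothetical names GLOBAL_DATA and install_global_data; only __global_test_data__ comes from the plugin description above) injects the shared data into a module object before its tests run, which is essentially what a prepareTestCase-style hook does for every test module:

    import types
    import unittest

    GLOBAL_DATA = {"answer": 42}

    def install_global_data(module):
        # the plugin would do this for each test module nose loads
        module.__global_test_data__ = GLOBAL_DATA

    # throwaway module standing in for a nose test module
    mod = types.ModuleType("sample_tests")
    install_global_data(mod)

    class SampleTest(unittest.TestCase):
        def test_data_visible(self):
            # the test reads the injected baseline data
            self.assertEqual(mod.__global_test_data__["answer"], 42)

    suite = unittest.TestLoader().loadTestsFromTestCase(SampleTest)
    result = unittest.TextTestRunner(verbosity=0).run(suite)
    print("tests passed:", result.wasSuccessful())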

Code is here: https://github.com/h4ck3rm1k3/py-loadr-forkr-debugr/blob/master/forkr.py — tested on Python 2.7; it compiles on Python 3 but has not been tested there yet.

Python supports reloading code, but I have not been able to get that working to my satisfaction. It is best to load the code you are actively changing fresh in each child process.
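One reason reload() tends to disappoint: it re-executes the module and updates the module object, but any references you captured before the reload still point at the old objects. A small demonstration (the temp-file module "snippet" is invented for this example; the mtime bump just keeps the bytecode cache from masking the edit):

    import importlib
    import os
    import sys
    import tempfile

    tmpdir = tempfile.mkdtemp()
    path = os.path.join(tmpdir, "snippet.py")
    with open(path, "w") as f:
        f.write("VALUE = 1\n")
    sys.path.insert(0, tmpdir)

    import snippet
    old_value = snippet.VALUE      # reference captured before the reload

    # change the module on disk; bump mtime so the cached .pyc goes stale
    with open(path, "w") as f:
        f.write("VALUE = 2\n")
    st = os.stat(path)
    os.utime(path, (st.st_atime + 10, st.st_mtime + 10))

    importlib.reload(snippet)
    print(old_value, snippet.VALUE)   # old reference unchanged: 1 vs 2

Forking a fresh child sidesteps all of this, since the child simply imports the current version of your code on top of the inherited data.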