blog | oilshell.org

Addendum: Two Recurse Center Projects That Explain CPython

In my last post on Recurse Center, I forgot to mention two projects that pulled me in:

byterun: A Python interpreter loop in Python, which I wrote about last year. I ran Oil unit tests with it, and fixed a bug related to import statements.

tailbiter: A Python bytecode compiler in Python. It does the same thing as compiler2, but it's written in a more compact and pedagogical style.

They both have an accompanying article.

I'm really grateful for this work, especially because I know how much effort it takes to write both code and prose.

These projects are so close to work I was already doing that it felt like Recurse Center was a natural fit.

Going Further

They're also a good foundation for understanding this talk:

Optimizations which made Python 3.6 faster than Python 3.5 (YouTube video), by Victor Stinner at PyCon 2017.

Excerpts from my notes:

(1) In Python 3.6, bytecode was changed to word code. Instead of encoding each instruction in one or three bytes, depending on whether it has an argument, they're all encoded in two bytes.

This saves a branch in the inner interpreter loop, because you avoid having to check HAVE_ARGUMENT.
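To make this concrete, here's a sketch you can run on CPython 3.6 or later (the function f is a made-up example): every instruction is a fixed (opcode, oparg) byte pair, so a decoder advances by a constant stride instead of branching on whether the opcode takes an argument.

```python
import dis

def f(x):
    return x + 1

code = f.__code__.co_code  # raw wordcode as a bytes object

# Every instruction is exactly 2 bytes: (opcode, oparg), so the
# decoder advances by a fixed stride -- no HAVE_ARGUMENT check is
# needed to decide whether an argument byte follows.
instructions = [(dis.opname[code[i]], code[i + 1])
                for i in range(0, len(code), 2)]
print(instructions)
```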

In my refactoring of OPy, I noticed an EXTENDED_ARG opcode, which I assume can be used when you need three or more bytes.

This problem reminds me of the posts on superoptimization I wrote last winter. The idea was to express functions over an enum using only primitive math operations — that is, without branches or lookup tables. In that case, we wanted a function over IDs in a syntax tree; in this case we want one over opcode IDs in instructions. The solution probably isn't directly applicable, but the problem is similar. (I also doubt I'll use it in Oil — it was more of a fun diversion.)
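As I understand it, EXTENDED_ARG is a prefix instruction: each occurrence shifts the accumulated argument left by 8 bits, and the next real instruction's argument byte is OR'd into it. A hedged sketch of a decoder that folds the prefix in (the decode helper below is my own illustration, not code from OPy):

```python
import dis

def decode(code):
    """Yield (opname, arg) pairs from CPython 3.6+ wordcode,
    folding EXTENDED_ARG prefixes into the next instruction's arg."""
    ext = 0
    for i in range(0, len(code), 2):
        op, arg = code[i], code[i + 1]
        if dis.opname[op] == 'EXTENDED_ARG':
            # Accumulate high-order bytes for the next real opcode.
            ext = (ext | arg) << 8
        else:
            yield dis.opname[op], ext | arg
            ext = 0

def f(x):
    return x + 1

print(list(decode(f.__code__.co_code)))
```

Since one argument byte only covers 0-255, a single EXTENDED_ARG prefix extends the range to 16 bits, and chained prefixes extend it further.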

(2) Function calls in Python are expensive, and were optimized in two ways:

Avoiding the instantiation of temporary tuples for function arguments.

Combining multiple bytecodes into new LOAD_METHOD and CALL_METHOD ops.

This snippet shows that it takes four bytecodes to call a method in Python 2.7 (and presumably 3.5):

>>> def f(obj):
...     return obj.method(42)
...
>>> import dis
>>> dis.dis(f)
  2           0 LOAD_FAST                0 (obj)
              3 LOAD_ATTR                0 (method)
              6 LOAD_CONST               1 (42)
              9 CALL_FUNCTION            1
             12 RETURN_VALUE
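For comparison, here's a sketch you can run on a newer interpreter. On Python 3.7 through 3.10, the same function compiles to LOAD_METHOD / CALL_METHOD, which avoids building a temporary bound-method object for the call (later versions changed the calling opcodes yet again, so this prints the names rather than asserting them):

```python
import dis

def f(obj):
    return obj.method(42)

# On 3.7-3.10 this shows LOAD_METHOD / CALL_METHOD where the 2.7
# disassembly above had LOAD_ATTR / CALL_FUNCTION; the behavior of
# the call is unchanged, only the instruction sequence differs.
print([ins.opname for ins in dis.get_instructions(f)])
```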

Conclusion

The work on optimization is still in progress, and will be until after my trip, so that's all for now. I wrote a long comment about OPy this morning if you want to follow along more closely.

The next post will flush the blog backlog, as promised in the last post.