This proposal introduces the stdlib interpreters module. The module will be provisional. It exposes the basic functionality of subinterpreters already provided by the C-API, along with new (basic) functionality for sharing data between interpreters.

CPython has supported multiple interpreters in the same process (AKA "subinterpreters") since version 1.5 (1997). The feature has been available via the C-API. [c-api] Subinterpreters operate in relative isolation from one another, which facilitates novel alternative approaches to concurrency.

To avoid any confusion up front: This PEP is unrelated to any efforts to stop sharing the GIL between subinterpreters. At most this proposal will allow users to take advantage of any results of work on the GIL. The position here is that exposing subinterpreters to Python code is worth doing, even if they still share the GIL.

To mitigate that impact and accelerate compatibility, we will do the following:

Many extension modules do not support use in subinterpreters yet. The maintainers and users of such extension modules will both benefit when they are updated to support subinterpreters. In the meantime users may become confused by failures when using subinterpreters, which could negatively impact extension maintainers. See Concerns below.

Here is a summary of the API for the interpreters module. For a more in-depth explanation of the proposed classes and functions, see the "interpreters" Module API section below.

At first only the following types will be supported for sharing:

Note that objects are not shared between interpreters since they are tied to the interpreter in which they were created. Instead, the objects' data is passed between interpreters. See the Shared data section for more details about sharing between interpreters.

Along with exposing the existing (in CPython) subinterpreter support, the module will also provide a mechanism for sharing data between interpreters. This mechanism centers around "channels", which are similar to queues and pipes.

The interpreters module will provide a high-level interface to subinterpreters and wrap a new low-level _interpreters module (in the same way that the threading module wraps _thread). See the Examples section for concrete usage and use cases.
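To make the proposed shape concrete, here is a toy stand-in for the high-level module, for illustration only. The real module would wrap the low-level _interpreters module and provide genuine interpreter isolation; this mock merely runs source strings in a separate namespace via exec() and offers none of that isolation. The names create() and Interpreter.run() follow the proposed API; the _main attribute is an internal detail of this sketch.

```python
# Toy stand-in for the proposed interpreters module, for illustration only.
# The real module would wrap _interpreters; this mock just runs code in a
# separate namespace via exec() and provides no actual isolation.

class Interpreter:
    def __init__(self):
        # Each "interpreter" gets its own __main__-like namespace.
        self._main = {"__name__": "__main__"}

    def run(self, src):
        # The proposed Interpreter.run() takes a string of source code.
        exec(src, self._main)

def create():
    return Interpreter()

interp = create()
interp.run("answer = 6 * 7")
print(interp._main["answer"])  # prints 42
interp.run("answer += 1")
print(interp._main["answer"])  # prints 43; state persists between run() calls
```

Note how state persists in the mock's namespace between run() calls, mirroring the proposed behavior of the interpreter's __main__ module.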

The interpreters module will be added to the stdlib. To help authors of extension modules, a new page will be added to the Extending Python docs. More information on both is found in the immediately following sections.

This shouldn't be a problem for now as we have no immediate plans to actually share data between interpreters, instead focusing on copying.

A common misconception is that this PEP also includes a promise that subinterpreters will no longer share the GIL. When that is clarified, the next question is "what is the point?". This is already answered at length in this PEP. Just to be clear, the value lies in:

However, this PEP does not propose any new concurrency API. At most it exposes minimal tools (e.g. subinterpreters, channels) which may be used to write code that follows patterns associated with (relatively) new-to-Python concurrency models. Those tools could also be used as the basis for APIs for such concurrency models. Again, this PEP does not propose any such API.

Introducing an API for a new concurrency model, as happened with asyncio, is an extremely large project that requires a lot of careful consideration. It is not something that can be done as simply as this PEP proposes and likely deserves significant time on PyPI to mature. (See Nathaniel's post on python-dev.)

Ultimately this comes down to a question of how often it will be a problem in practice: how many projects would be affected, how often their users will be affected, what the additional maintenance burden will be for projects, and what the overall benefit of subinterpreters is to offset those costs. The position of this PEP is that the actual extra maintenance burden will be small and well below the threshold at which subinterpreters are worth it.

Consequently, projects that publish extension modules may face an increased maintenance burden as their users start using subinterpreters, where their modules may break. This situation is limited to modules that use C globals (or use libraries that use C globals) to store internal state. For numpy, the reported-bug rate is one every 6 months. [bug-rate]

In the Interpreter Isolation section below we identify ways in which isolation in CPython's subinterpreters is incomplete. Most notable is extension modules that use C globals to store internal state. PEP 3121 and PEP 489 provide a solution for most of the problem, but one still remains. [petr-c-ext] Until that is resolved (see PEP 573), C extension authors will face extra difficulty in supporting subinterpreters.

Notably, subinterpreters are not intended as a replacement for any of the above. Certainly they overlap in some areas, but the benefits of subinterpreters include isolation and (potentially) performance. In particular, subinterpreters provide a direct route to an alternate concurrency model (e.g. CSP) which has found success elsewhere and will appeal to some Python users. That is the core value that the interpreters module will provide.

Alternatives to subinterpreters include threading, async, and multiprocessing. Threading is limited by the GIL and async isn't the right solution for every problem (nor for every person). Multiprocessing is likewise valuable in some but not all situations. Direct IPC (rather than via the multiprocessing module) provides similar benefits but with the same caveat.

Some have argued that subinterpreters do not add sufficient benefit to justify making them an official part of Python. Adding features to the language (or stdlib) has a cost in increasing the size of the language. So an addition must pay for itself. In this case, subinterpreters provide a novel concurrency model focused on isolated threads of execution. Furthermore, they provide an opportunity for changes in CPython that will allow simultaneous use of multiple CPU cores (currently prevented by the GIL).

This proposal is focused on enabling the fundamental capability of multiple isolated interpreters in the same Python process. This is a new area for Python so there is relative uncertainty about the best tools to provide as companions to subinterpreters. Thus we minimize the functionality added in this proposal as much as possible.

CPython has supported subinterpreters, with increasing levels of support, since version 1.5. While the feature has the potential to be a powerful tool, subinterpreters have suffered from neglect because they are not available directly from Python. Exposing the existing functionality in the stdlib will help reverse the situation.

Running code in multiple interpreters provides a useful level of isolation within the same process. This can be leveraged in a number of ways. Furthermore, subinterpreters provide a well-defined framework in which such isolation may be extended.

Subinterpreters are not a widely used feature. In fact, the only documented cases of widespread usage are mod_wsgi, OpenStack Ceph, and JEP. On the one hand, these cases provide confidence that existing subinterpreter support is relatively stable. On the other hand, there isn't much of a sample size from which to judge the utility of the feature.

Finally, some potential isolation is missing due to the current design of CPython. Improvements are currently under way to address gaps in this area:

Second, some isolation is faulty due to bugs or implementations that did not take subinterpreters into account. This includes things like extension modules that rely on C globals. [cryptography] In these cases bugs should be opened (some are already):

However, there are ways in which interpreters share some state. First of all, some process-global state remains shared:

CPython's interpreters are intended to be strictly isolated from each other. Each interpreter has its own copy of all modules, classes, functions, and variables. The same applies to state in C, including in extension modules. The CPython C-API docs explain more. [caveats]

Limiting the initial shareable types is a practical matter, reducing the potential complexity of the initial implementation. There are a number of strategies we may pursue in the future to expand supported objects and object sharing strategies.

This approach, including keeping the API minimal, helps us avoid further exposing any underlying complexity to Python users. Along those same lines, we will initially restrict the types that may be passed through channels to the following:

To make this work, the mutable shared state will be managed by the Python runtime, not by any of the interpreters. Initially we will support only one type of object for shared state: the channels provided by create_channel() . Channels, in turn, will carefully manage passing objects between interpreters.

As described in the API summary above, channels have two operations: send and receive. A key characteristic of those operations is that channels transmit data derived from Python objects rather than the objects themselves. When objects are sent, their data is extracted. When the "object" is received in the other interpreter, the data is converted back into an object owned by that interpreter.
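One way to picture this extract-and-reconstruct behavior is with a serialization round-trip: the receiving end gets an equal but distinct object. The sketch below uses pickle purely as an illustration; the actual channel implementation would use its own per-type data extraction rather than pickle.

```python
import pickle

# Imitate channel semantics: the sender's object is reduced to raw data,
# and the receiver reconstructs a brand-new object from that data.
sent = b"payload"
wire_data = pickle.dumps(sent)        # "send": extract the object's data
received = pickle.loads(wire_data)    # "recv": rebuild an object from it

print(received == sent)       # True: equal value...
print(received is sent)       # False: ...but a distinct object, as it
                              # would be in a different interpreter
```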

Regarding the proposed solution, "channels", it is a basic, opt-in data sharing mechanism that draws inspiration from pipes, queues, and CSP's channels. [fifo]

Consequently, the mechanism for sharing needs to be carefully considered. There are a number of valid solutions, several of which may be appropriate to support in Python. This proposal provides a single basic solution: "channels". Ultimately, any other solution will look similar to the proposed one, which will set the precedent. Note that the implementation of Interpreter.run() will be done in a way that allows for multiple solutions to coexist, but doing so is not technically a part of the proposal here.

The key challenge here is that sharing objects between interpreters faces complexity due to various constraints on object ownership, visibility, and mutability. At a conceptual level it's easier to reason about concurrency when objects only exist in one interpreter at a time. At a technical level, CPython's current memory model limits how Python objects may be shared safely between interpreters; effectively objects are bound to the interpreter in which they were created. Furthermore the complexity of object sharing increases as subinterpreters become more isolated, e.g. after GIL removal.

Subinterpreters are inherently isolated (with caveats explained below), in contrast to threads. So the same communicate-via-shared-memory approach doesn't work. Without an alternative, effective use of concurrency via subinterpreters is significantly limited.

One class of concurrency models focuses on isolated threads of execution that interoperate through some message passing scheme. A notable example is Communicating Sequential Processes (CSP) (upon which Go's concurrency is roughly based). The isolation inherent to subinterpreters makes them well-suited to this approach.

Concurrency is a challenging area of software development. Decades of research and practice have led to a wide variety of concurrency models, each with different goals. Most center on correctness and usability.

While the module is provisional, any changes to the API (or to behavior) do not need to be reflected here, nor get approval by the BDFL-delegate. However, such changes will still need to go through the normal processes (BPO for smaller changes and python-dev/PEP for substantial ones).

The new interpreters module will be added with "provisional" status (see PEP 411 ). This allows Python users to experiment with the feature and provide feedback while still allowing us to adjust to that feedback. The module will be provisional in Python 3.9 and we will make a decision before the 3.10 release whether to keep it provisional, graduate it, or remove it. This PEP will be updated accordingly.

I've solicited feedback from various Python implementors about support for subinterpreters. Each has indicated that they would be able to support subinterpreters (if they choose to) without a lot of trouble. Here are the projects I contacted:

A channel is automatically closed and destroyed once there are no more Python objects (e.g. RecvChannel and SendChannel ) referring to it. So it is effectively triggered via garbage-collection of those objects.

Python objects are not shared between interpreters. However, in some cases the data those objects wrap is actually shared and not just copied. One example might be PEP 3118 buffers. In those cases the object in the original interpreter is kept alive until the shared data in the other interpreter is no longer used. Then object destruction can happen like normal in the original interpreter, along with the previously shared data.

Second, the main mechanism for sharing objects (i.e. their data) between interpreters is through channels. A channel is a simplex FIFO similar to a pipe. The main difference is that channels can be associated with zero or more interpreters on either end. Like queues, which are also many-to-many, channels are buffered (though they also offer methods with unbuffered semantics).
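The buffered, simplex FIFO behavior described above can be sketched with a queue.Queue standing in for the channel's buffer. This is a single-interpreter simulation only, with real channels the two ends would live in different interpreters and the queue's buffering would be managed by the runtime; send() and recv() here are illustrative stand-ins for the proposed methods.

```python
import queue
import threading

# A queue.Queue stands in for a channel's buffer: send() enqueues on one
# end and recv() dequeues (blocking) on the other, in FIFO order.
buffer = queue.Queue()

def send(obj):
    buffer.put(obj)          # buffered: returns immediately

def recv():
    return buffer.get()      # blocks until an object is available

results = []

def receiver():
    # Stand-in for code running in another interpreter.
    results.append(recv())
    results.append(recv())

t = threading.Thread(target=receiver)
t.start()
send(b"spam")
send(b"eggs")
t.join()
print(results)  # [b'spam', b'eggs']
```

Because the buffer is a FIFO and there is a single receiver, objects arrive in the order they were sent.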

First, channels may be passed to run() via the channels keyword argument, where they are effectively injected into the target interpreter's __main__ module. While passing arbitrary shareable objects this way is possible, doing so is mainly intended for sharing meta-objects (e.g. channels) between interpreters. It is less useful to pass other objects (like bytes ) to run directly.

The interpreters module provides a function that users may call to determine whether an object is shareable or not:
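A minimal sketch of what such a predicate might look like, assuming the initially supported types (None, bytes, str, int, and channel ends). This hard-coded tuple is an assumption for illustration; the real function would consult the runtime's registry of shareable types.

```python
# Hypothetical sketch of is_shareable(). The real function would query the
# runtime's registry of shareable types rather than hard-code a tuple.
SHAREABLE_TYPES = (type(None), bytes, str, int)

def is_shareable(obj):
    # Check exact types: whether subclasses (e.g. bool, an int subclass)
    # would count as shareable is left open here.
    return type(obj) in SHAREABLE_TYPES

print(is_shareable(b"spam"))   # True
print(is_shareable(None))      # True
print(is_shareable([1, 2]))    # False
```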

Subinterpreters are less useful without a mechanism for sharing data between them. Sharing actual Python objects between interpreters, however, has enough potential problems that we are avoiding support for that here. Instead, only mimimum set of types will be supported. Initially this will include None , bytes , str , int , and channels. Further types may be supported later.

Raising (a proxy of) the exception directly is problematic since it's harder to distinguish between an error in the run() call and an uncaught exception from the subinterpreter.

Regarding uncaught exceptions in Interpreter.run() , we noted that they are "effectively" propagated into the code where run() was called. To prevent leaking exceptions (and tracebacks) between interpreters, we create a surrogate of the exception and its traceback (see traceback.TracebackException ), set it to __cause__ on a new RunFailedError , and raise that.
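The surrogate approach can be sketched as follows. RunFailedError here is a local stand-in for the proposed exception type, and run_and_wrap() imitates Interpreter.run() with a plain exec(); the surrogate carries only text produced by traceback.TracebackException, so no objects from the failing interpreter leak across.

```python
import traceback

# Stand-in for the proposed RunFailedError exception type.
class RunFailedError(RuntimeError):
    pass

def run_and_wrap(src, ns):
    # Imitates Interpreter.run(): execute the code and, on failure, raise
    # RunFailedError with a detached text-only surrogate as __cause__.
    try:
        exec(src, ns)
    except Exception as exc:
        tbe = traceback.TracebackException.from_exception(exc)
        surrogate = RuntimeError("".join(tbe.format()))
        raise RunFailedError("uncaught exception in interpreter") from surrogate

try:
    run_and_wrap("1/0", {})
except RunFailedError as err:
    print(type(err.__cause__).__name__)               # RuntimeError
    print("ZeroDivisionError" in str(err.__cause__))  # True
```

The calling code gets the full formatted traceback for debugging while the original exception object (and its frames) stays in the interpreter where it was raised.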

We may choose to later loosen some of the above restrictions or provide a way to enable/disable granular restrictions individually. Regardless, requiring PEP 489 support from extension modules will always be a default restriction.

One advantage of this approach is that it allows extension maintainers to check subinterpreter compatibility before they implement the PEP 489 API. Also note that isolated=False represents the historical behavior when using the existing subinterpreters C-API, thus providing backward compatibility. For the existing C-API itself, the default remains isolated=False . The same is true for the "main" module, so existing use of Python will not change.

This represents the full "isolated" mode of subinterpreters. It is applied when interpreters.create() is called with the "isolated" keyword-only argument set to True (the default). If interpreters.create(isolated=False) is called then none of those restrictions is applied.

By default, every new interpreter created by interpreters.create() has specific restrictions on any code it runs. This includes the following:

Also, the ImportError for incompatible extension modules will have a message that clearly says it is due to missing subinterpreter compatibility and that extensions are not required to provide it. This will help set user expectations properly.

Note that the documentation will play a large part in mitigating any negative impact that the new interpreters module might have on extension module maintainers.

A separate page will be added to the docs for resources to help extension maintainers ensure their modules can be used safely in subinterpreters, under Extending Python . The page will include the following information:

The new stdlib docs page for the interpreters module will include the following:

In the interest of keeping this proposal minimal, the following functionality has been left out for future consideration. Note that this is not a judgement against any of said capability, but rather a deferment. That said, each is arguably valid.

Interpreter.call() It would be convenient to run existing functions in subinterpreters directly. Interpreter.run() could be adjusted to support this or a call() method could be added: Interpreter.call(f, *args, **kwargs) This suffers from the same problem as sharing objects between interpreters via queues. The minimal solution (running a source string) is sufficient for us to get the feature out where it can be explored.

timeout arg to recv() and send() Typically functions that have a block argument also have a timeout argument. It sometimes makes sense to do likewise for functions that otherwise block, like the channel recv() and send() methods. We can add it later if needed.

Interpreter.run_in_thread() This method would make a run() call for you in a thread. Doing this using only threading.Thread and run() is relatively trivial so we've left it out.

Synchronization Primitives The threading module provides a number of synchronization primitives for coordinating concurrent operations. This is especially necessary due to the shared-state nature of threading. In contrast, subinterpreters do not share state. Data sharing is restricted to channels, which do away with the need for explicit synchronization. If any sort of opt-in shared state support is added to subinterpreters in the future, that same effort can introduce synchronization primitives to meet that need.

CSP Library A csp module would not be a large step away from the functionality provided by this PEP. However, adding such a module is outside the minimalist goals of this proposal.

Syntactic Support The Go language provides a concurrency model based on CSP, so it's similar to the concurrency model that subinterpreters support. However, Go also provides syntactic support, as well several builtin concurrency primitives, to make concurrency a first-class feature. Conceivably, similar syntactic (and builtin) support could be added to Python using subinterpreters. However, that is way outside the scope of this PEP!

Multiprocessing The multiprocessing module could support subinterpreters in the same way it supports threads and processes. In fact, the module's maintainer, Davin Potts, has indicated this is a reasonable feature request. However, it is outside the narrow scope of this PEP.

C-extension opt-in/opt-out By using the PyModuleDef_Slot introduced by PEP 489, we could easily add a mechanism by which C-extension modules could opt out of support for subinterpreters. Then the import machinery, when operating in a subinterpreter, would need to check the module for support. It would raise an ImportError if unsupported. Alternately we could support opting in to subinterpreter support. However, that would probably exclude many more modules (unnecessarily) than the opt-out approach. Also, note that PEP 489 defined that an extension's use of the PEP's machinery implies support for subinterpreters. The scope of adding the ModuleDef slot and fixing up the import machinery is non-trivial, but could be worth it. It all depends on how many extension modules break under subinterpreters. Given that there are relatively few cases we know of through mod_wsgi, we can leave this for later.

Poisoning channels CSP has the concept of poisoning a channel. Once a channel has been poisoned, any send() or recv() call on it would raise a special exception, effectively ending execution in the interpreter that tried to use the poisoned channel. This could be accomplished by adding a poison() method to both ends of the channel. The close() method can be used in this way (mostly), but these semantics are relatively specialized and can wait.

Resetting __main__ As proposed, every call to Interpreter.run() will execute in the namespace of the interpreter's existing __main__ module. This means that data persists there between run() calls. Sometimes this isn't desirable and you want to execute in a fresh __main__ . Also, you don't necessarily want to leak objects there that you aren't using any more. Note that the following won't work right because it will clear too much (e.g. __name__ and the other "__dunder__" attributes): interp.run('globals().clear()') Possible solutions include:

* a create() arg to indicate resetting __main__ after each run call
* an Interpreter.reset_main flag to support opting in or out after the fact
* an Interpreter.reset_main() method to opt in when desired
* importlib.util.reset_globals() [reset_globals]

Also note that resetting __main__ does nothing about state stored in other modules. So any solution would have to be clear about the scope of what is being reset. Conceivably we could invent a mechanism by which any (or every) module could be reset, unlike reload() which does not clear the module before loading into it. Regardless, since __main__ is the execution namespace of the interpreter, resetting it has a much more direct correlation to interpreters and their dynamic state than does resetting other modules. So a more generic module reset mechanism may prove unnecessary. This isn't a critical feature initially. It can wait until later if desirable.
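The caveat that globals().clear() clears too much can be illustrated with a selective reset that preserves the "__dunder__" attributes. This is a toy sketch only: reset_main() is a hypothetical helper, not part of the proposal, and a plain dict stands in for the interpreter's __main__ namespace.

```python
# Sketch of a selective __main__ reset: clear user state while keeping the
# "__dunder__" attributes that globals().clear() would wrongly remove.
# A plain dict stands in for the target interpreter's __main__ namespace.

def reset_main(ns):
    for name in list(ns):
        if not (name.startswith("__") and name.endswith("__")):
            del ns[name]

main_ns = {"__name__": "__main__", "__doc__": None, "spam": 42, "eggs": []}
reset_main(main_ns)
print(sorted(main_ns))  # ['__doc__', '__name__']
```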

Resetting an interpreter's state It may be nice to re-use an existing subinterpreter instead of spinning up a new one. Since an interpreter has substantially more state than just the __main__ module, it isn't so easy to put an interpreter back into a pristine/fresh state. In fact, there may be parts of the state that cannot be reset from Python code. A possible solution is to add an Interpreter.reset() method. This would put the interpreter back into the state it was in when newly created. If called on a running interpreter it would fail (hence the main interpreter could never be reset). This would likely be more efficient than creating a new subinterpreter, though that depends on what optimizations will be made later to subinterpreter creation. While this would potentially provide functionality that is not otherwise available from Python code, it isn't a fundamental functionality. So in the spirit of minimalism here, this can wait. Regardless, I doubt it would be controversial to add it post-PEP.

File descriptors and sockets in channels Given that file descriptors and sockets are process-global resources, support for passing them through channels is a reasonable idea. They would be a good candidate for the first effort at expanding the types that channels support. They aren't strictly necessary for the initial API.

Integration with async Per Antoine Pitrou [async]: Has any thought been given to how FIFOs could integrate with async code driven by an event loop (e.g. asyncio)? I think the model of executing several asyncio (or Tornado) applications each in their own subinterpreter may prove quite interesting to reconcile multi-core concurrency with ease of programming. That would require the FIFOs to be able to synchronize on something an event loop can wait on (probably a file descriptor?). A possible solution is to provide async implementations of the blocking channel methods ( recv() and send() ). However, the basic functionality of subinterpreters does not depend on async and can be added later. Alternately, "readiness callbacks" could be used to simplify use in async scenarios. This would mean adding an optional callback (kw-only) parameter to the recv_nowait() and send_nowait() channel methods. The callback would be called once the object was sent or received (respectively). (Note that making channels buffered makes readiness callbacks less important.)

Support for iteration Supporting iteration on RecvChannel (via __iter__() or __next__() ) may be useful. A trivial implementation would use the recv() method, similar to how files do iteration. Since this isn't a fundamental capability and has a simple analog, adding iteration support can wait until later.

Channel context managers Context manager support on RecvChannel and SendChannel may be helpful. The implementation would be simple, wrapping a call to close() (or maybe release() ) like files do. As with iteration, this can wait.

Pipes and Queues With the proposed object passing mechanism of "channels", other similar basic types aren't required to achieve the minimal useful functionality of subinterpreters. Such types include pipes (like unbuffered channels, but one-to-one) and queues (like channels, but more generic). See below in Rejected Ideas for more information. Even though these types aren't part of this proposal, they may still be useful in the context of concurrency. Adding them later is entirely reasonable. They could be trivially implemented as wrappers around channels. Alternatively they could be implemented for efficiency at the same low level as channels.

Return a lock from send() When sending an object through a channel, you don't have a way of knowing when the object gets received on the other end. One way to work around this is to return a locked threading.Lock from SendChannel.send() that unlocks once the object is received. Alternately, the proposed SendChannel.send() (blocking) and SendChannel.send_nowait() provide an explicit distinction that is less likely to confuse users. Note that returning a lock would matter for buffered channels (i.e. queues). For unbuffered channels it is a non-issue.

Support prioritization in channels A simple example is queue.PriorityQueue in the stdlib.

Support inheriting settings (and more?) Folks might find it useful, when creating a new subinterpreter, to be able to indicate that they would like some things "inherited" by the new interpreter. The mechanism could be a strict copy or it could be copy-on-write. The motivating example is with the warnings module (e.g. copy the filters). The feature isn't critical, nor would it be widely useful, so it can wait until there's interest. Notably, both suggested solutions will require significant work, especially when it comes to complex objects and most especially for mutable containers of mutable complex objects.

Make exceptions shareable Exceptions are propagated out of run() calls, so it isn't a big leap to make them shareable in channels. However, as noted elsewhere, it isn't essential (or particularly common) so we can wait on doing that.

Make RunFailedError.__cause__ lazy An uncaught exception in a subinterpreter (from run() ) is copied to the calling interpreter and set as __cause__ on a RunFailedError which is then raised. That copying part involves some sort of deserialization in the calling interpreter, which can be expensive (e.g. due to imports) yet is not always necessary. So it may be useful to use an ExceptionProxy type to wrap the serialized exception and only deserialize it when needed. That could be via ExceptionProxy.__getattribute__() or perhaps through RunFailedError.resolve() (which would raise the deserialized exception and set RunFailedError.__cause__ to the exception). It may also make sense to have RunFailedError.__cause__ be a descriptor that does the lazy deserialization (and sets __cause__ ) on the RunFailedError instance.

Serialize everything through channels We could use pickle (or marshal) to serialize everything sent through channels. Doing this is potentially inefficient, but it may be a matter of convenience in the end. We can add it later, but trying to remove it later would be significantly more painful.

Return a value from run() Currently run() always returns None. One idea is to return the return value from whatever the subinterpreter ran. However, for now it doesn't make sense. The only thing folks can run is a string of code (i.e. a script). This is equivalent to PyRun_StringFlags() , exec() , or a module body. None of those "return" anything. We can revisit this once run() supports functions, etc.
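That reasoning can be checked directly: exec(), the Python-level analog of PyRun_StringFlags(), always returns None, the result of running a script is its effect on the namespace, not a value.

```python
# exec(), like a module body, has no return value: running a script only
# mutates the namespace it executes in.
ns = {}
result = exec("x = 1 + 1", ns)
print(result)     # None
print(ns["x"])    # 2
```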

Add a "tp_share" type slot This would replace the current global registry for shareable types.

Expose which interpreters have actually used a channel end. Currently we associate interpreters upon access to a channel. We would keep a separate association list for "upon use" and expose that.

Add a shareable synchronization primitive This would be _threading.Lock (or something like it) where interpreters would actually share the underlying mutex. This would provide much better efficiency than blocking channel ops. The main concern is that locks and channels don't mix well (as learned in Go). Note that the same functionality as a lock can be achieved by passing some sort of "token" object through a channel. "send()" would be equivalent to releasing the lock and "recv()" to acquiring the lock. We can add this later if it proves desirable without much trouble.

Propagate SystemExit and KeyboardInterrupt Differently The exception types that inherit from BaseException (aside from Exception ) are usually treated specially. These types are: KeyboardInterrupt , SystemExit , and GeneratorExit . It may make sense to treat them specially when it comes to propagation from run() . Here are some options:

* propagate like normal via RunFailedError
* do not propagate (handle them somehow in the subinterpreter)
* propagate them directly (avoid RunFailedError)
* propagate them directly (set RunFailedError as __cause__)

We aren't going to worry about handling them differently. Threads already ignore SystemExit , so for now we will follow that pattern.
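The precedent cited here, that threads already ignore SystemExit, is easy to confirm: threading's default excepthook silently ignores SystemExit, so only the raising thread exits.

```python
import threading

# SystemExit raised inside a thread is silently ignored by the threading
# machinery (threading.excepthook special-cases it); only the thread exits.
def target():
    raise SystemExit

t = threading.Thread(target=target)
t.start()
t.join()
print("main interpreter still running")  # reached despite the SystemExit
```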

Add an explicit release() and close() to channel end classes It can be convenient to have an explicit way to close a channel against further global use. Likewise it could be useful to have an explicit way to release one of the channel ends relative to the current interpreter. Among other reasons, such a mechanism is useful for communicating overall state between interpreters without the extra boilerplate that passing objects through a channel directly would require. The challenge is getting automatic release/close right without making it hard to understand. This is especially true when dealing with a non-empty channel. We should be able to get by without release/close for now.

Add SendChannel.send_buffer() This method would allow no-copy sending of an object through a channel if it supports the PEP 3118 buffer protocol (e.g. memoryview). Support for this is not fundamental to channels and can be added on later without much disruption.