One environment to rule them all

Environment variables are a set of values that can alter the way a process works. They are part of the environment in which a process runs and, as such, can be accessed globally throughout its execution. Cargo and rustc are no exception: both make heavy use of them to drive the compilation process or to pass additional configuration to the runtime (e.g. via RUST_BACKTRACE). Since the RLS uses both to perform its analysis build, it must pass appropriate environment variables to them. However, since these programs run inside the RLS process rather than in their own, this can cause environment-related race conditions.
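To make this concrete, here is a minimal sketch (the `backtrace_enabled` helper is hypothetical, not RLS code) of how a process reads an inherited variable like RUST_BACKTRACE, and why setting one is visible process-wide:

```rust
use std::env;

// rustc checks RUST_BACKTRACE in a similar fashion: the variable is
// inherited from the parent process and read at runtime.
fn backtrace_enabled() -> bool {
    matches!(env::var("RUST_BACKTRACE").as_deref(), Ok("1") | Ok("full"))
}

fn main() {
    // Variables set here are visible process-wide, including to any
    // library code (such as a linked compiler) running in this process.
    env::set_var("RUST_BACKTRACE", "full");
    println!("backtraces enabled: {}", backtrace_enabled());
}
```

Because the environment belongs to the whole process, there is no per-thread or per-scope isolation: whoever calls `set_var` last wins.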

One example is compiling two packages in parallel. Both compilations would start and an appropriate environment would be set twice. However, since the compilations share the same process, the second one would overwrite the environment and introduce invalid data for the first. Because of that, we must guard the environment and provide mutually exclusive access to it for the programs that normally rely on it and which the RLS runs in-process.
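A guard along these lines could look as follows. This is a sketch under assumed names (`ENV_LOCK`, `Environment::push` are illustrative, not the actual RLS types): the RAII guard holds the lock, sets the requested variables, and restores the previous values on drop.

```rust
use std::collections::HashMap;
use std::env;
use std::sync::{Mutex, MutexGuard};

// A single process-wide lock guarding the environment. Any build that
// mutates environment variables must hold it first.
static ENV_LOCK: Mutex<()> = Mutex::new(());

// RAII guard: sets the requested variables on construction and restores
// the previous values on drop, holding the lock for its whole lifetime.
struct Environment<'a> {
    _guard: MutexGuard<'a, ()>,
    old: HashMap<String, Option<String>>,
}

impl<'a> Environment<'a> {
    fn push(lock: &'a Mutex<()>, vars: &[(&str, &str)]) -> Environment<'a> {
        let guard = lock.lock().unwrap();
        let mut old = HashMap::new();
        for &(k, v) in vars {
            old.insert(k.to_string(), env::var(k).ok());
            env::set_var(k, v);
        }
        Environment { _guard: guard, old }
    }
}

impl<'a> Drop for Environment<'a> {
    fn drop(&mut self) {
        // Restore the environment exactly as we found it.
        for (k, v) in &self.old {
            match v {
                Some(v) => env::set_var(k, v),
                None => env::remove_var(k),
            }
        }
    }
}

fn main() {
    let _env = Environment::push(&ENV_LOCK, &[("RUST_BACKTRACE", "1")]);
    assert_eq!(env::var("RUST_BACKTRACE").as_deref(), Ok("1"));
}
```

With such a guard, a second parallel compilation blocks on the lock instead of clobbering the first one's environment.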

While working on supporting Cargo workspaces in the RLS, I was recently trying to change the way the compiler is invoked. Around that time, an implementation of mutually exclusive access to the environment landed, using a simple Mutex<()> and an Environment RAII guard holding the Mutex lock guard. Afterwards, I encountered a strange regression in the workspace support test: the environment was effectively locked (somewhat) recursively, leading to a deadlock.

The initial implementation of the scope locking the environment with a Mutex used only a single lock. Originally, during the Cargo build routine, rustc compiler instances were run as separate processes, each with its own environment appropriately set by Cargo. However, executing the linked compiler led to an attempt to lock the environment for the duration of the compilation, while it was still locked at the outer scope for the entire duration of the Cargo execution. Oops.
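The deadlock can be reproduced with plain std types. std's Mutex is not reentrant, so a second lock() on the same thread blocks forever; try_lock below makes the problem observable instead of hanging:

```rust
use std::sync::Mutex;

static ENV_LOCK: Mutex<()> = Mutex::new(());

fn main() {
    // Outer scope: the Cargo build routine takes the environment lock...
    let _outer = ENV_LOCK.lock().unwrap();

    // ...and the linked rustc, running in the same process and on the
    // same thread, tries to take the very same lock again. A plain
    // `lock()` here would deadlock; `try_lock` reports the contention.
    assert!(ENV_LOCK.try_lock().is_err());
}
```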

What was needed was to provide a consistent and scoped environment at the outer scope, the Cargo execution, but also at the inner one, the compiler scope. Locking two separate, unrelated Mutexes for each scope would not work. Because the rustc build routine can be executed both separately and as part of the Cargo build routine, we need to provide a single, shared outer lock for both situations. If we don't, Cargo could acquire the outer lock and access the environment while another rustc build routine acquires only the inner lock, still leaving the environment shared between the two.

The solution seemed like an easy one: all builds should first acquire a single, shared lock and, while holding it, possibly acquire an inner one for no longer than the first one is held. It would be great to encode that information using RAII guards with specified lifetimes of the lock, all while guaranteeing the order of locking; however, there was one problem with this approach.
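The lifetime-based scheme could be sketched like this (names are illustrative, not the actual RLS types): the inner lock is only reachable through the outer guard, so the borrow checker enforces both the locking order and that the inner guard lives no longer than the outer one.

```rust
use std::sync::{Mutex, MutexGuard};

static OUTER: Mutex<()> = Mutex::new(());
static INNER: Mutex<()> = Mutex::new(());

// Outer guard: the only way to reach the inner lock is through it.
struct OuterGuard<'a>(MutexGuard<'a, ()>);

fn lock_outer() -> OuterGuard<'static> {
    OuterGuard(OUTER.lock().unwrap())
}

impl<'a> OuterGuard<'a> {
    // The returned guard borrows `self`, so it cannot outlive the outer
    // guard it was obtained from.
    fn lock_inner(&self) -> MutexGuard<'_, ()> {
        INNER.lock().unwrap()
    }
}

fn main() {
    let outer = lock_outer();
    {
        let _inner = outer.lock_inner();
        // Both locks are held here; the inner one is released first.
    }
    drop(outer);
}
```

Trying to return the inner guard past the outer one is then a compile-time error, which is exactly the guarantee we would like.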

To intercept planned rustc calls, Cargo exposes an Executor trait, which effectively provides a callback (exec()) with all the arguments and environment rustc would be called with, so the compiler execution can be freely customized. What's unfortunate is that the API expects an Arc<Executor>. This means there are no guarantees that the passed pointer will not be copied out of scope, so we can't really limit the lifetime of the inner lock here (since it has to live shorter than the outer lock).
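The shape of the problem can be shown with a simplified stand-in for the trait (the real exec() callback also receives the package id, target and more, and its exact signature has varied between Cargo versions):

```rust
use std::sync::atomic::{AtomicUsize, Ordering};
use std::sync::Arc;

static CALLS: AtomicUsize = AtomicUsize::new(0);

// Simplified, hypothetical stand-in for Cargo's Executor trait.
trait Executor: Send + Sync {
    fn exec(&self, cmd: &str, env: &[(String, String)]);
}

struct InProcessRustc;

impl Executor for InProcessRustc {
    fn exec(&self, _cmd: &str, _env: &[(String, String)]) {
        // Here the RLS would run the linked compiler in-process.
        CALLS.fetch_add(1, Ordering::SeqCst);
    }
}

// Cargo receives the executor as an Arc, so it may clone it and keep a
// copy alive for as long as it likes. That rules out storing anything
// with a non-'static lifetime (such as an inner lock guard) inside it.
fn build(exec: Arc<dyn Executor>) {
    let kept = Arc::clone(&exec);
    kept.exec("rustc --crate-name foo", &[]);
}

fn main() {
    build(Arc::new(InProcessRustc));
}
```

Since Arc demands 'static (or at least unbounded) ownership, the borrow-checked "inner guard outlives nothing" design from the previous paragraph cannot be threaded through this API.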

At the moment, the implementation does not strictly enforce lock order. Ideally, access to the second, inner lock should be given if and only if the outer lock is held. The outer lock() function returns a (MutexGuard<'a, ()>, InnerLock), where InnerLock is a unit struct that has access to the static inner lock and should not live longer than the returned outer lock guard. While it can technically be copied outside the scope of the initial lock guard, this seems acceptable for the time being.
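A sketch of that design might look as follows (the actual RLS code differs in detail; names here are illustrative):

```rust
use std::sync::{Mutex, MutexGuard};

static OUTER_LOCK: Mutex<()> = Mutex::new(());
static INNER_LOCK: Mutex<()> = Mutex::new(());

// Token handed out together with the outer guard; it is the only way to
// reach the inner lock.
struct InnerLock;

impl InnerLock {
    fn lock(&self) -> MutexGuard<'static, ()> {
        INNER_LOCK.lock().unwrap()
    }
}

// Acquire the outer lock and hand back a token for the inner one. Nothing
// prevents the caller from moving the InnerLock out of the guard's scope,
// which is exactly the loophole the ordering does not strictly enforce.
fn lock() -> (MutexGuard<'static, ()>, InnerLock) {
    (OUTER_LOCK.lock().unwrap(), InnerLock)
}

fn main() {
    let (outer, inner) = lock();
    // Cargo-level work happens under `outer`; each in-process rustc run
    // additionally takes the inner lock via the token.
    let compiling = inner.lock();
    drop(compiling);
    drop(outer);
}
```

The convention "only call InnerLock::lock while the outer guard is alive" is upheld by the callers rather than by the type system, which is the compromise the Arc-based Executor API forces.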