In a few days the 2018 edition is going to roll out, and that will include some new framing around Rust's tooling. We've got a core set of developer tools which are stable and ready for widespread use. We're going to have a blog post all about that, but for now I wanted to address the status of the RLS, since when I last blogged about a 1.0 pre-release there was a significant sentiment that it was not ready (and given the expectations that a lot of people have, we agree).

The RLS has been in 0.x-stage development. We think it has reached a certain level of stability and usefulness. While it is not at the level of quality you might expect from a mature IDE, it is likely to be useful for a majority of users.

The RLS is tightly coupled with the compiler, and as far as backwards compatibility is concerned, that is the important thing. So from the next release, the RLS will share a version number with the Rust distribution. We are not claiming this as a '1.0' release; work is certainly not finished, but we think it is worth taking the opportunity of the 2018 edition to highlight the RLS as a usable and useful tool.

In the rest of this blog post I'll go over how the RLS works in order to give you an idea of what works well and what does not, and where we are going (or might go) in the future.

Background

The RLS is a language server for Rust - it is meant to handle the 'language knowledge' part of an IDE (as opposed to editing, user interaction, etc.). The concept is that rather than having to develop Rust support from scratch in each editor or IDE, you can do it once in the language server and each editor can be a client. This is a recent approach to IDE development, in contrast to the approach of IntelliJ, Eclipse, and others, where the IDE is designed to make language support pluggable, but language support is closely tied to a specific IDE framework.
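To make the client/server split concrete, here is a minimal sketch of the wire format a language server client uses, assuming a hypothetical 'go to definition' request for a file in the user's project. Real clients use a library for this; the sketch just shows the Content-Length framing around a JSON-RPC body that the Language Server Protocol specifies.

```rust
// Frame a JSON-RPC payload with the HTTP-style header the protocol requires.
fn frame_lsp_request(body: &str) -> String {
    format!("Content-Length: {}\r\n\r\n{}", body.len(), body)
}

fn main() {
    // Hypothetical request: where is the item at line 4, column 10 defined?
    let body = r#"{"jsonrpc":"2.0","id":1,"method":"textDocument/definition","params":{"textDocument":{"uri":"file:///project/src/main.rs"},"position":{"line":4,"character":10}}}"#;
    let message = frame_lsp_request(body);
    // The editor writes this to the server's stdin and reads a framed
    // response (a location in the source) the same way.
    println!("{}", message);
}
```

The editor only has to speak this protocol; everything Rust-specific stays on the server side, which is why one language server can back many editors.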

The RLS integrates with the Rust compiler, Cargo, and Racer to provide data. Cargo is used as a source of data about the project and for orchestrating builds. The compiler provides data for connecting references to definitions, and about types and docs (which is used for 'go to def', 'find all references', 'show type', etc.). Racer is used for code completion (and also to supply some docs). Racer can be thought of as a mini compiler which does as little as possible to provide code completion information as fast as possible.

The traditional approach to IDEs, and how Rust support in IntelliJ works, is to build a completely new compiler frontend, optimised for speed and incremental compilation. This compiler supplies enough information to power the IDE features, but usually doesn't do any code generation. This approach is much easier in fairly simple languages like Java, compared to Rust (macros, modules, and the trait system all make this a lot more complex).

There are trade-offs to the two approaches: using a separate compiler is fast, and functionality can be limited to ensure it stays fast enough. However, there is a risk that the two compilers do not agree on how to compile a program; in particular, covering the whole of a language like Rust is difficult, so completeness can be an issue. Maintaining a separate compiler also takes a lot of work.

In the future, we hope to further optimise the Rust compiler for IDE cases so that it is fast enough that the user never has to wait, and to use the compiler for code completion. We also want to work with Cargo a bit differently so that there is less duplication of logic between Cargo and the RLS.

Current status

For each feature of the RLS, I measure its success along two axes: is it fast enough and is it complete (that is, does it work for all code). There are also non-functional issues of resource usage (how much battery and CPU the RLS is using), how often the RLS crashes, etc.

Go to definition

This is usually fast enough: if the RLS is ready, then it is pretty much instant. For large crates, it can take too long for the RLS to be ready, and thus we are not fast enough. However, usually using slightly stale data for 'go to def' is not a problem, so we're ok.

It is fairly complete. There are some issues around macros - if a definition is created by a macro, then we often have trouble. 'Go to def' is not implemented for lifetimes, and there are some places we don't have coverage (coverage inside where clauses was only recently fixed).

Show type

Showing types and documentation on hover has almost the same characteristics as 'go to definition'.

Rename

Renaming is similar to 'find all references' (and 'go to def'), but since we are modifying the user's code, there are some more things that can go wrong, and we want to be extra conservative. It is therefore a bit less complete than 'go to def', but similarly fast.

Code completion

Code completion is generally pretty fast, but often incomplete. This is because method dispatch in Rust is really complicated! Eventually, we hope that using the compiler for code completion rather than Racer will solve this problem.

Resource usage

The RLS is typically pretty heavy on the CPU. That is because we prioritise having results quickly over minimising CPU usage. In the future, making the compiler more incremental should give big improvements here.

Crashes

The RLS usually only crashes when it disagrees with Cargo about how to build a project, or when it exercises a code path in the compiler which would not be used by a normal compile, and that code path has a bug. While crashes are more common than I'd like, they're a lot rarer than they used to be, and should not affect most users.

Project structure

There is a remarkable variety in the way a Rust project can be structured. Multiple crates can be arranged in many ways (using workspaces, or not), build scripts and procedural macros cause compile-time code execution, and there are Cargo features, different platforms, tests, examples, etc. This all interacts with code which is edited but not yet saved. Every different configuration can cause bugs.
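As an illustration of this variety, here is a hypothetical workspace layout of the kind the RLS has to handle (the member names are made up):

```toml
# Root Cargo.toml of a workspace: one virtual manifest listing member crates.
# The RLS must discover all members, build them in the right order, and run
# any compile-time code (build scripts, procedural macros) along the way.
[workspace]
members = [
    "app",         # binary crate
    "core",        # library shared by the other crates
    "derive_impl", # procedural macro, executed during compilation
]
```

Each member can have its own features, targets, tests, and examples, and any of them may be open and half-edited in the editor, which multiplies the configurations the RLS must cope with.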

I think we are mostly doing well here; as far as I know there are no project structures to avoid (but this has been a big source of trouble in the past).

Overall

The RLS is clearly not done. It's not in the same league as IDE support for more mature languages. However, I think that it is at a stage where it is worth trying for many users. Stability is good enough - it's unlikely you'll have a bad experience. It does somewhat depend on how you use an IDE: if you rely heavily on code completion (in particular, if you use code completion as a learning tool), then the RLS is probably not ready. However, we think users who are new to Rust should be encouraged to try it out.

So, while I agree that the RLS is not 'done', neither is it badly unstable, likely to disappear, or lacking in basic functionality. For better or worse, 1.0 releases seem to have special significance in the Rust community. I hope the version numbering decision sends the right message: we're ready for all Rust users to use the RLS, but we haven't reached 'mission accomplished' (well, maybe in a 'George W Bush' way).

More on that version number

The RLS will follow the Rust compiler's version number, i.e., the next release will be 1.31.0. From a strict semver point of view this makes sense since the RLS is only compatible with its corresponding Rust version, so incrementing the minor version with each Rust release is the right thing to do. By starting at 1.31, we're deliberately avoiding the 1.0 label.

In terms of readiness, it's important to note that the RLS is not a user-facing piece of software. I believe the 1.x version number is appropriate in that context - if you want to build an IDE, then the RLS is stable enough to use as a library. However, it is lacking some user-facing completeness and so an IDE built using the RLS should probably not use the 1.0 number (our VSCode extension will keep using 0.x).

The future

There's been some discussion about how best to improve the IDE experience in Rust. I believe the language server approach is the correct one, but there are several options to make progress: continue making incremental improvements to the compiler and RLS, moving towards compiler-driven code completion; use an alternate compiler frontend (such as Rust Analyzer); improve Racer and continue to rely on it for code completion; or some hybrid approach using more than one of these ideas.

When assessing these options, we need to take into account the likely outcome, the risk of something bad happening, the amount of work needed, and the long-term maintenance burden. The main downside of the current path is the risk that the compiler will never get fast enough to support usable code completion. Implementation is also a lot of work; however, that work would mostly help with compile-time issues in general. With the other approaches there is a risk that we won't get the completeness needed for useful code completion. The implementation work is again significant, and depending on how things pan out, there is a risk of much costlier long-term maintenance.

I've been pondering the idea of a hybrid approach: using the compiler to provide information about definitions (and naming scopes), and either Racer or Rust Analyzer to do the 'last mile' work of turning that into code completion suggestions (and possibly resolving references too). That might mean getting the best of both worlds - the compiler can deal with a lot of complexity where speed is not as necessary, and the other tools get a helping hand with the stuff that has to be done quickly.

Orthogonally, there is also work planned to better integrate with Cargo and to support more features, as well as some 'technical debt' issues, such as better testing.