On the heels of Rust’s 1.0 release, we are pleased to be able to interview Mozilla’s Aaron Turon, a member of Rust’s core team (the leadership group that sets the project’s overall direction). This is our third interview with a PL PhD working in industry.

What is your academic background?

I received my undergraduate degree at the University of Chicago, at a time when a lot of PL was happening there (many of the folks who built or studied Standard ML were around); I did some research under John Reppy. After that, I went on to do a PhD at Northeastern University, which continues to have a thriving PL group; I was supervised by Mitchell Wand. Finally, I was a post-doc under Derek Dreyer at the Max Planck Institute for Software Systems (MPI-SWS).

What motivated Mozilla to create Rust?

We wanted a language in which you could build system software, like a web browser, that has both great performance (including good responsiveness and low power usage) and a high degree of reliability and security.

All mainstream browsers are built in C++, because that’s historically been the only language that can give you the strong low-level control you need to eke out maximal performance, while working reasonably well at the scale of something like a browser. But C++ is not so great on the reliability/security front, and it’s not easy to modify a C++ codebase to leverage multicore processors to increase responsiveness and decrease power usage.

Mozilla started two projects — the language Rust, and the browser engine Servo — to try to shift the long-standing tradeoff between safety and control.

Rust was designed from the ground up to compete with the low-level control of C++, while offering memory safety guarantees found in higher-level languages, and making concurrent programming less error prone. Servo is a brand new browser engine whose core pieces are all written in Rust, which is already capable of rendering a wide range of sites in the wild, and which shows promising performance improvements and power reduction over current browsers.

Can you say a bit about Rust’s design goals?

Rust provides three key value propositions:

- Memory safety without garbage collection
- Concurrency without data races
- Abstraction without overhead

Each of these value propositions imposes constraints on Rust’s design. For example, the lack of GC is an important constraint for applications like browsers, where a GC pause could kill UI responsiveness, and low memory footprint is highly desirable. But traditionally, managing memory directly opens the door to dangling pointers and other memory safety problems, which in turn makes exploits possible.

The key for Rust is a statically-enforced discipline of ownership and borrowing, which is at heart an affine type system. Borrowing allows you to create a reference to an object that is tied to some lexical scope, and can be passed to functions called within that scope. Getting the borrowing system to work smoothly, at scale, was probably the key turning point in Rust’s design, and took many iterations.
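To make this concrete, here is a minimal sketch of borrowing (the function name `len_of` is just an illustration, not from the interview): the callee borrows the data for the duration of the call, and the caller retains ownership afterward.

```rust
// `len_of` borrows the slice immutably; the borrow lasts only for the
// duration of the call, so the caller keeps full use of the data.
fn len_of(v: &[i32]) -> usize {
    v.len()
}

fn main() {
    let v = vec![1, 2, 3];
    let n = len_of(&v); // borrow is tied to this call's scope
    println!("{} {}", n, v[0]); // `v` is still owned and usable here
}
```

Because the borrow’s lifetime is tracked statically, no runtime bookkeeping (reference counts, GC metadata) is needed.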

Ownership and borrowing lead to a memory (or really, resource) management scheme that feels as automatic as GC, has none of GC’s overhead, and guarantees memory safety (since you can only chase a pointer you have borrowed or own). Moreover, the same scheme also prevents data races (unsynchronized concurrent access to state that involves a write) because permission to mutate a value is uniquely owned by a single thread.
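The data-race guarantee can be sketched in a few lines (the helper `sum_in_thread` is hypothetical, chosen for illustration): moving a value into a spawned thread transfers ownership, so the compiler statically rejects any further access from the parent thread.

```rust
use std::thread;

// Ownership of `data` moves into the spawned thread's closure, so no
// other thread can read or write it concurrently; the compiler enforces
// this at compile time, ruling out a data race.
fn sum_in_thread(data: Vec<i32>) -> i32 {
    let handle = thread::spawn(move || data.iter().sum::<i32>());
    // `data` is no longer accessible here: it was moved into the closure.
    handle.join().unwrap()
}

fn main() {
    let v = vec![1, 2, 3];
    println!("{}", sum_in_thread(v)); // prints 6
    // println!("{:?}", v); // would not compile: `v` was moved above
}
```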

How has the research literature influenced Rust’s design (both positively and negatively), if at all?

A lot of Rust’s basic design was settled before I joined, but my impression is that the literature has been enormously influential. Certainly, you can trace very clear lines back to Cyclone and Haskell. The ownership and borrowing scheme draws inspiration from Cyclone and modern C++ (unique_ptr and friends). The basic form of abstraction — traits — combines the best of Haskell’s type classes and C++’s templates (cf. “How to make ad hoc polymorphism less ad hoc”). In particular, you can use a single trait in the style of bounded parametric polymorphism (generics), which results in static dispatch, or OO-style polymorphism, which results in dynamic dispatch. But ideas haven’t been simply taken “off the shelf”: Rust has benefited from repeated iteration on its design, which has meant months of breaking changes. Making these ideas practical, making them work together at scale, has taken time and experience. This iteration has been supported by an enthusiastic and patient community, which has been willing to continuously update code as the designs shifted.
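The dual use of traits mentioned above can be sketched as follows (the `Greet` trait and its impl are hypothetical examples, not from the interview); the same trait backs both a generic, monomorphized function and a trait-object-based one. Note that `&dyn Greet` is the modern spelling; at the 1.0 release this was written `&Greet`.

```rust
trait Greet {
    fn greet(&self) -> String;
}

struct English;

impl Greet for English {
    fn greet(&self) -> String {
        "hello".to_string()
    }
}

// Static dispatch: compiled once per concrete `T` (monomorphization),
// in the spirit of C++ templates — no runtime indirection.
fn greet_static<T: Greet>(g: &T) -> String {
    g.greet()
}

// Dynamic dispatch: a trait object carries a vtable, and the call is
// resolved at runtime, in the spirit of OO-style polymorphism.
fn greet_dynamic(g: &dyn Greet) -> String {
    g.greet()
}

fn main() {
    let e = English;
    assert_eq!(greet_static(&e), "hello");
    assert_eq!(greet_dynamic(&e), "hello");
}
```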

When doing a full design for a language with any hope of mainstream success, you have to think very carefully about the overall complexity budget: you can only afford a certain amount of unfamiliarity in the core programming model, and it had better carry its weight. Academic work tends to focus on a particular language feature, and it’s easy to do a deep dive and wind up with something far too complicated to be plugged into an already-complicated language.

I think that tension is probably fundamental — the incentives are very different in the research world. But in general, research designs that are simple and can fit well into existing programming models are going to be much easier for us to take advantage of.

In terms of the community’s perception of academic ideas: there is an overall sense that Rust has tried to learn from both past languages and prior research, and our community sees that as very valuable. But I think, even more than that, there are certain core principles (some of which I mentioned above) that we hammer on relentlessly, and that directly motivate the use of e.g. affine types. In other words, these ideas aren’t just gimmicks or sideshows — they are the heart of the programming model — and I think people get that.

What are the current goals for the language and its ecosystem?

In the short term, we’ve been very focused on a successful 1.0 release and its aftermath. There are a number of known pain points — compiler performance being a major one — that we will need to address to smooth out the basic experience of working with Rust today.

In the longer term, I think focus is gradually going to move away from the core language (while still addressing a few known gaps, e.g., involving ownership and traits) and toward the ecosystem and tooling story. We’ve made that a priority by shipping 1.0 with “Cargo”, a package manager in the vein of Ruby’s Bundler, together with crates.io, our central repository of community libraries; it’s in the community’s hands. We also plan to grow the number of platforms with “first class support”, and make Rust code easy to embed into a wide range of contexts and other languages.

We envision two major constituencies for Rust. The first is existing system programmers, largely using C or C++ today. The second is programmers working in higher-level languages, but who need to dip down to the systems level to grab the last bit of performance. Those groups in turn inform our goals: to continue to push Rust into ever-lower-level contexts, while making the language ever friendlier to high-level abstractions.

Is Mozilla open to outside contributions to the language? How can the broader community get involved, if they want to?

The language is very community-driven, with over 1,000 contributors to the 1.0 release. The enthusiasm, know-how, and investment coming from volunteers world-wide blows me away on a regular basis. If you are interested in helping, there are a number of welcoming forums that you can find on the Rust home page and http://www.ncameron.org/rust.html.

One of my favorite aspects of the Rust community/governance model is our RFC process: all major changes to the language or standard library first go through a written design document and community-wide discussion, before finally being approved/declined by a team of people in charge of a particular area. In academia I always found that I uncovered a lot when actually writing up a paper, and the RFC process winds up working out in much the same way.

Shifting gears a bit: Can you say more about how you ended up at Mozilla?

Originally, I was pretty strongly on an academic career path, but I ended up changing course for a number of reasons.

Most fundamentally, I did not feel that I could (personally!) manage to maintain a satisfying career as a traditional academic while retaining sufficient time and energy for my family; I was never able to maintain boundaries I was happy with. Moreover, my passion for problem solving and design work is much greater than for teaching, reviewing, and so on — but these latter duties by themselves already constitute a full time job, if you want to do them well.

On the other hand, it gradually became clear that I could do perfectly satisfying work outside the realm of traditional academia.

My current position is in Mozilla Research, and my title includes the word ‘research’, so I haven’t escaped altogether. But it’s true that our group puts much more emphasis on creating viable (but research-driven) products, rather than pure research. We do hope to publish a series of papers on Rust in the next few years and I continue to give talks and collaborate with more academic groups.

Personally, I’ve found this to be a good fit, and I feel very fortunate to have ended up here. I have opportunities to do genuine, deep research, but I am also fitting that into the context of a viable language with a thriving community. It’s thrilling to author RFCs that are widely read and responded to by the Rust community, and there is much more of a sense of shared values and concrete problems we are all working to solve than I experienced in academia.

In the end, the chance to work on a project with a real shot at becoming a major part of the tech landscape, but which also values research ideas and methodology, was too good to pass up.

What is your view of the value of a PhD, particularly from the perspective of working at a company like Mozilla?

I’m not sure I can speak to Mozilla broadly, but as a member of the Rust team, the skills I gained in academia are indispensable. Being able to boil down a problem to its fundamental constraints, being able to communicate clearly in speech and writing, being able to pitch an idea, being confident that I can learn anything given enough time — these are skills I rely on every day. While it’s true that I don’t have the breadth of engineering experience that some of my amazing peers do, I feel that my academic background has given me complementary strengths that have made me an effective part of the team.

That’s not to say that everyone should go get a PhD, of course — just that what I learned in the process has yielded ongoing value, even outside of a traditional research setting.

Follow Aaron on Twitter at @aaron_turon.