By David Sosnow, Chief Product Officer

The conversation around self-driving cars has zeroed in on the question of safety, and for good reason. The tragic death of Elaine Herzberg, who was struck by an autonomous vehicle, has revealed the many ways that self-driving cars have not yet caught up with the buzz that surrounds them. Rightfully, the question of blame is the focus. Were the sensors calibrated properly? Should more have been done to monitor the driver? Had the cars undergone sufficient testing? Few, however, have asked the larger question that sits at the root of the whole enterprise: is the software safe enough to drive a car?

A self-driving car is only as good as its software. To a layman, this seems obvious, and yet that same layman might be surprised to find that today’s most sophisticated vehicles run on code that is littered with the same bugs that plague any other software stack. Until new development methods are implemented, there is no reason to believe that tomorrow’s cars won’t crash just as often as today’s desktops.

Most of the microcontrollers in today’s cars are programmed in C. It’s portable, performant, and efficient — the ideal choice for the last twenty years of embedded systems. But C is also susceptible to human error, and we have found that even code-checking tools do not protect against system-crippling errors. Combine this with the fact that self-driving cars will rely on upwards of 300 million lines of code, and the scale of the problem begins to take shape.

In our recent white paper, we demonstrated this vulnerability by running a simple piece of self-corrupting code through a series of static-analysis and linting tools. In the few cases where any problem was flagged at all, the actual flaw went undetected, and the code crashed at runtime. These are the same fatal errors that could lurk in the software that will soon be guiding families down the highway, and today there is no reliable way to find them.

Most troubling is that none of these tools are technically at fault. Many represent the best hope any software development team has of finding potentially fatal bugs. All of our examples were verified as compliant with MISRA C (the most widely used set of code-safety guidelines for C in the automotive industry) by a proprietary MISRA checker. C is simply not the right language for safety-critical software.

Thankfully, progress has been made in the forty years since C revolutionized the programming world. Languages, as they move farther from machine code, have become strongly opinionated about best practices, and many of these opinions are informed by the areas where C falls short. In our search for the safest language, we’ve found Rust to be the most promising. It allows our engineers to write ambitious, production-ready code quickly, and it refuses to compile code containing the kinds of errors that C allows. Rust is also extensible, allowing our engineers to build on its out-of-the-box guarantees around thread safety and segfault prevention with their own restrictions on what sort of code can make it to production.
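To make the contrast concrete, here is a minimal illustrative sketch (not code from our white paper): a pattern that C compiles without complaint is a hard compile error in Rust, and an out-of-bounds read that is undefined behavior in C becomes an explicit, recoverable value in Rust. The function and data names below are hypothetical.

```rust
// Returning a reference to a stack-local value is legal C, but a compile
// error in Rust -- the borrow checker rejects it outright:
//
//     fn dangling() -> &i32 {
//         let x = 42;
//         &x   // error[E0515]: cannot return reference to local variable `x`
//     }
//
// Likewise, an out-of-bounds read is undefined behavior in C; in Rust the
// bounds check is explicit and the failure is a plain value, not corruption.
fn read_sensor(buf: &[u8], idx: usize) -> Option<u8> {
    // `slice::get` returns None instead of reading past the buffer.
    buf.get(idx).copied()
}

fn main() {
    let frame = [10u8, 20, 30];
    assert_eq!(read_sensor(&frame, 1), Some(20));
    // In C, frame[9] would silently read whatever sits past the array;
    // here the out-of-bounds access is an explicit, checkable None.
    assert_eq!(read_sensor(&frame, 9), None);
}
```

The point is not that panics or `None` values are pleasant, but that the failure mode is deterministic and visible to tooling, rather than silent memory corruption that static analysis may never find.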

At PolySync, we’re developing state-of-the-art methods to ensure that tomorrow’s self-driving cars are safe. Not just better, but safe. Technology selection is a critical phase, and choosing Rust is just one of the steps we’re taking towards realizing this goal. If we are not rigorous about these choices, how can we trust the outcome?