Since 2015, the CA DMV has asked that all companies with an autonomous vehicle testing permit submit annual disengagement reports. The requirement made sense: it enabled the CA DMV to learn rapidly about a technology too nascent to regulate.

However, in recent years, as the number of companies with autonomous vehicle testing permits has skyrocketed to over 60, the annual disengagement reports have taken on a life of their own. Each year, headlines are written about which company is now leading the self-driving “race”, and who is now lagging behind.

What’s worse, these reports are often misinterpreted, leading to erroneous assumptions about the progress of certain self-driving car projects (like Apple SPG in 2018).

Given this dynamic, many in the industry now question whether these annual reports still serve the CA DMV’s original purpose.

The root of the issue is that there is no standard for what constitutes a reportable disengagement, so each permit holder adopts its own internal definition. Take these two scenarios:

Scenario 1: Company A reports a disengagement to the CA DMV every time a Vehicle Operator disengages the system from autonomous mode, regardless of whether the autonomous system was at fault (e.g. performing a U-turn to re-run a test). The resulting metric is problematic because it doesn’t represent the true performance of Company A’s driverless technology.

Scenario 2: Company B reports a disengagement to the CA DMV only when a Vehicle Operator takes the wheel to prevent a safety-critical incident, as verified by re-simulating the situation (e.g. a collision would have occurred were it not for the Vehicle Operator). The resulting metric is problematic because each company has a different definition of safety-critical, and varying degrees of re-simulation capability and accuracy.

This lack of standardization makes an apples-to-apples comparison between self-driving projects directionally correct at best, and useless at worst. To make matters worse, miles-per-disengagement doesn’t account for the fact that companies test their self-driving technology in environments of varying complexity. A disengagement in San Francisco is very different from a disengagement on an empty highway.

Introducing the Driverless Readiness Score

If we all agree that miles-per-disengagement is a sub-optimal measure of progress for the industry, let’s talk about alternatives. Inside Voyage, we measure our progress with something we call the Driverless Readiness Score (DRS).

The DRS represents our progress toward removing the need for a Vehicle Operator, and considers a set of key metrics and deliverables that are tailored to our targeted roadway conditions. When the goal metrics and deliverables are hit, we consider ourselves ready for driverless operations (on our targeted roadway).

Let’s first talk about the metrics that make up 50% of the weight of the DRS. We consider these our KPIs (although we track many, many more metrics internally):

Miles per intervention: how often a Vehicle Operator had to take the wheel to prevent a safety-critical incident

Miles per disengagement: how often a Vehicle Operator took the wheel for any reason

Miles per false negative in critical zones: how often do we fail to detect an object in close proximity to our vehicle?

Miles per false trigger of the collision mitigation system: how often does our backup autonomous system trigger when it shouldn’t?

Percentage of simulation scenarios passing

These KPIs all contribute equally to the DRS, with each having a goal dictated by our roadway requirements.
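To make the equal weighting concrete, here is a minimal sketch of how the metrics half might be scored, assuming each KPI is measured against its roadway-dictated goal and capped at 100% (so overshooting one goal can’t mask a shortfall elsewhere). The metric names and goal values below are hypothetical, not our actual targets:

```python
def metrics_progress(actuals: dict, goals: dict) -> float:
    """Average each KPI's progress toward its goal, equally weighted.

    Each KPI is capped at 1.0 (100%) — an assumption of this sketch.
    """
    scores = [min(actuals[k] / goals[k], 1.0) for k in goals]
    return sum(scores) / len(scores)

# Hypothetical goals and current values, for illustration only.
goals   = {"miles_per_intervention": 50_000,
           "miles_per_disengagement": 10_000,
           "sim_scenarios_passing": 1.0}
actuals = {"miles_per_intervention": 40_000,
           "miles_per_disengagement": 12_000,
           "sim_scenarios_passing": 0.9}

print(f"{metrics_progress(actuals, goals):.0%}")  # → 90%
```

Capping at the goal is a design choice: once a KPI meets its roadway requirement, further gains shouldn’t inflate the overall readiness picture.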

However, metrics alone cannot always tell you whether you are making forward progress, especially when some metrics take time to move (accumulating miles can take a while!). This is why we bake deliverable progress into the DRS. On the road to driverless, there are certain features that either work or don’t, and whose progress metrics won’t quite capture. For example: Do you have redundant compute and power sources? Is your self-driving technology running atop a safety-certifiable middleware? Does your perception component throw a warning to your diagnostics module when it’s taking longer than normal to process?

We break down each of these requirements into deliverables, then assign each a priority (P0, P1, P2) and a status (🔴🟡🟢). Statuses are updated week to week, and higher-priority deliverables carry more weight. The weighted sum of our progress across deliverables makes up the remaining 50% of the DRS.
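As an illustration, here is one way the priority-weighted deliverable roll-up could work. The specific priority weights and status scores are assumptions for this sketch, not our internal values:

```python
# Hypothetical weights: higher-priority deliverables count more.
PRIORITY_WEIGHT = {"P0": 3, "P1": 2, "P2": 1}
# Hypothetical scores for the red/yellow/green statuses.
STATUS_SCORE = {"red": 0.0, "yellow": 0.5, "green": 1.0}

def deliverables_progress(deliverables: list[tuple[str, str]]) -> float:
    """Priority-weighted average of deliverable status."""
    total_weight = sum(PRIORITY_WEIGHT[p] for p, _ in deliverables)
    earned = sum(PRIORITY_WEIGHT[p] * STATUS_SCORE[s] for p, s in deliverables)
    return earned / total_weight

# Example week: two P0s (one done, one in progress), a done P1, an unstarted P2.
deliverables = [("P0", "green"), ("P0", "yellow"), ("P1", "green"), ("P2", "red")]
print(f"{deliverables_progress(deliverables):.0%}")  # → 72%
```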

With both metrics and deliverables quantified, we can calculate a single score that tells us just how far or close we are to driverless operations on our targeted roadway:

DRS = (Metrics Percentage Progress / 2) + (Deliverables Percentage Progress / 2)
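The formula above translates directly into code; the example percentages below are illustrative, not real scores:

```python
def drs(metrics_pct: float, deliverables_pct: float) -> float:
    """Each half contributes 50% of the Driverless Readiness Score."""
    return metrics_pct / 2 + deliverables_pct / 2

# E.g. 90% metrics progress and 72% deliverables progress:
print(f"DRS = {drs(0.90, 0.72):.0%}")  # → DRS = 81%
```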

While I’m certain DRS isn’t perfect, I do believe it solves a number of the challenges associated with miles-per-disengagement:

An apples-to-apples comparison of driverless readiness across self-driving projects

Companies can keep internal metrics confidential, but still share the progress that consumers really care about

Internally, self-driving projects can detect if they are moving forward or backward on a week-to-week basis

We will soon open-source our DRS template (along with our own score) so that any company can adopt it for reporting, should they wish. Stay tuned.

Correctly quantifying your progress, especially when you’re a startup, is tremendously powerful. It’s critical to have a pulse that quantifiably tells you whether you are moving forward, and miles-per-disengagement alone does not. I hope our work on the Driverless Readiness Score can serve as a small boost to progress within autonomous driving, bringing these much-needed vehicles to market that little bit faster.

If other companies are interested in adopting the DRS, please get in touch, and let’s talk about how we can together create a standard of reporting.