On Wednesday, the California Department of Motor Vehicles released its latest annual reports from companies that test self-driving cars in the state. The filings offer some of the best public data for scrutinizing how far along AV tech has come. But companies have serious wiggle room in how they report crucial data. That’s a problem.


In particular, the reports detail how many times a safety operator had to take back control of the wheel from the self-driving tech during tests on public roads. In industry parlance, this is called a “disengagement.” California’s disengagement reports are one of the only metrics reporters covering the industry have for analyzing how well each AV company’s tech performs, hence why there’s so much attention paid to them.


For clarity, here’s how California’s DMV defines a “disengagement”:

For the purposes of this section, “disengagement” means a deactivation of the autonomous mode when a failure of the autonomous technology is detected or when the safe operation of the vehicle requires that the autonomous vehicle test driver disengage the autonomous mode and take immediate manual control of the vehicle.

It’s vague, but the spirit of the law is pretty clear: either the tech fails, or the operator of the vehicle takes control to ensure “the safe operation of the vehicle.”

So, take this into consideration.

Last November, someone in San Francisco mentioned on Twitter that they saw a car owned by GM’s self-driving unit, Cruise Automation, run a red light. The car, the person wrote, was called “Pickle.” (Yes, Pickle. GM’s report shows it did name one of its 94 self-driving cars Pickle.)


Perhaps surprisingly—given the way things can spread like wildfire on social media, and the intense focus on a self-driving Uber running a red light in 2016—this didn’t get much attention.


When I asked Cruise about it earlier this month, a spokesperson clarified that Pickle didn’t actually run a red light.

“The light changed from yellow to red while the vehicle was crossing the crosswalk, and as that happened the AVT [autonomous vehicle trainer] took manual control of the vehicle and proceeded through the intersection,” the spokesperson said. “Safety is always the top priority as we test our self-driving technology.”


This was confusing, to be honest. So I asked to clarify whether the operator took control of Pickle because it was about to blow a red light. Not so, Cruise said.

“No, the AVT manually took control of the vehicle to avoid blocking the crosswalk,” the spokesperson said.


So the car stopped in the middle of a crosswalk, and then proceeded through to avoid blocking the way for pedestrians. Fair enough. That’s why vehicle operators are behind the wheel for autonomous test cars.

But based on my read of California’s legal provisions, this certainly seemed to qualify as a disengagement. Again, the provision says a disengagement occurs “...when the safe operation of the vehicle requires that the autonomous vehicle test driver disengage the autonomous mode and take immediate manual control of the vehicle.”


When GM/Cruise released its 2017 disengagement report, I went looking for the reported Nov. 22 incident. Pickle, the report says, drove 347.79 miles that month.

But in a section laying out the reasons for Cruise’s four reported disengagements in November, Pickle’s incident is missing.


When I asked why it wasn’t included in the report, Cruise said it didn’t meet the standard for California regulations.

“California regulations require to report two types of disengagements: 1. Disengagements for immediate safety, and 2. Disengagements when the autonomous system failed,” the company said in an email. “In this instance, there was no need to disengage for immediately safety, and there were no system failure.”


The company declined to provide additional comment.

Cruise had a similar response a year ago, when The Information asked about its report and how it defines a “qualifying disengagement.”

For instance, Cruise said in the report that its cars drove nearly 2,300 miles in San Francisco in November 2016 and experienced only six “qualifying disengagements.” (Cruise declined to say what it means by “qualifying.”) One of the people who’s been involved with Cruise’s testing in that period said, however, it was normal for one car to experience four or five safety-related disengagements every couple of hours, in just several dozen miles’ worth of driving. The same can be said about Cruise’s car tests at the beginning of this year, this person said. And in the report, Cruise listed only disengagements from what it called “planned tests.”


To be sure, it’s not just Cruise. Other companies have broad interpretations of the statute; Tesla, for instance, reported zero autonomous test miles in California last year, but said it conducts “shadow-testing” of autonomous tech, by collecting data from its semi-autonomous cars that are already on the road during normal operations. Meanwhile, Nissan doesn’t count disengagements to start or finish a drive, reports Automotive News, while chipmaker Nvidia does. It’s confusing, and this is apparently a-OK under California statutes.

Otherwise, GM/Cruise’s report stands out: it reported a total of 105 disengagements over 131,671 miles, a rate of 0.79 disengagements per 1,000 miles—a sharp improvement over its 2016 stats. (By comparison, Google’s self-driving car unit, Waymo, reported a rate of 0.18 disengagements per 1,000 miles over 352,545 miles driven.)
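For readers who want to check the math, the rates follow directly from the raw totals in the filings. A quick sketch of the arithmetic (the Cruise figures are from the report; the helper function name is just for illustration):

```python
# Disengagement rate = disengagements per 1,000 autonomous miles driven.
def rate_per_1000_miles(disengagements, miles):
    return disengagements / miles * 1000

# Cruise's 2017 totals, as reported to the California DMV.
cruise_rate = rate_per_1000_miles(105, 131_671)
print(round(cruise_rate, 3))  # → 0.797, i.e. the ~0.79 figure in the report
```

Note that a lower rate only means operators took over less often per mile driven—given the reporting leeway described above, it doesn’t by itself prove one company’s tech is safer than another’s.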


Surely this’ll make GM’s shareholders happy. This month, GM introduced a new fully autonomous Bolt test vehicle that lacks a steering wheel or any pedals. The automaker wants to get the car on the road as early as next year, a super-fast timeline by any standard.


GM also reported it had “no disengagements” in California “made as a result of a failure of the autonomous technology.”

Fine, but the reports this week again show there are potentially serious gaps in what even qualifies as an event that required a human to take manual control of the car. That means the public is going to have to rely more and more on the companies’ word—something that became evident last year, when it emerged that federal regulators relied entirely on Tesla’s data to conclude the automaker’s Autosteer capability led to a 40 percent drop in crashes.


That seems like quite a gamble when it comes to garnering the public’s trust to eventually let go of the wheel entirely and sit back while the automated tech steers.

Correction (3:50 p.m.): A previous version of this story incorrectly said that GM wants to deploy its driverless car, which lacks a steering wheel, as early as next month. GM wants to deploy it as early as next year. We regret the error.