
This week, California's DMV released reports for the nine accidents involving self-driving cars in the state over the last year. Spoiler: they were all caused by humans.

The first accident happened in October of last year, when a human driving a Honda in Palo Alto dinged the side of a Delphi self-driving Audi. The other eight involved Google's self-driving Lexus models, not the podlike cars it has been testing more recently. The reports show self-driving cars following the rules and yielding to oncoming traffic, only to be undermined by careless humans. In every accident, the cars were rear-ended or side-swiped, mostly while in autonomous mode but sometimes in "human mode."

Given the more than a million miles self-driving cars have driven, and the way people freak out when they see a car with a weird self-driving radar on its roof, the low number of accidents speaks to how much better behaved self-driving cars are on the roads than humans. By comparison, there were 1.10 deaths per 100 million vehicle miles traveled in 2013, according to the NHTSA. None of the self-driving car accidents were fatal, and most happened at low speeds. The only injury was whiplash for people in a Google Lexus that was rear-ended by a Nissan Altima while traffic was stopped at a green light. (They were taken to the hospital, where they were told they were all fine to go back to work.)

Given self-driving cars' track record so far, my colleague Kevin Roose went so far as to argue that human driving should be banned to pave the way for self-driving cars to become ubiquitous. "Some researchers estimate that, by the middle of this century, self-driving cars could prevent a million traffic deaths a year—making them as important a public health achievement as vaccines," he wrote.

These reports bolster that thesis. None of the accidents seem to have occurred because of aberrant behavior on the part of self-driving cars—even though they do get confused sometimes. One report indicates that a self-driving car detected an accident was going to happen but was unable to prevent it. In February, a self-driving Google Lexus detected an Audi approaching a stop sign way too fast, so it "began applying the brakes in response to its detection of the Audi's speed and trajectory." The car's human driver "disengaged Autonomous Mode and took manual control of the vehicle in response to the [braking]." The human in the Audi then ran the stop sign and hit the Lexus.

Google has been self-reporting accidents monthly since May 2015. To date, its cars have been involved in 16 minor accidents, according to the company. Its first report included several other accidents dating back to 2010 that were not included in the DMV release. They all involved either a human rear-ending a Google robocar, or a human operating a Google car in manual mode and getting in an accident. "Not once was the self-driving car the cause of the accident," Google declares each month.

Daniela Hernandez is a senior writer at Fusion. She likes science, robots, pugs, and coffee.