The scene of the accident involving a Tesla vehicle running in Autopilot mode in Laguna Beach, California on May 29th, 2018. Photo: AP

A Tesla sedan running in Autopilot mode crashed into a parked police car in Laguna Beach, California on Tuesday, per the Associated Press, resulting in “minor injuries” to the driver. The officer assigned to the cruiser was not inside the vehicle at the time of the crash and was uninjured.

According to USA Today, the man driving the Tesla said he had engaged Autopilot prior to the impact—the latest in a series of incidents involving the feature:

[Police Sgt. Jim Cota] said the luxury electric car crashed in almost the same place as another Tesla about a year ago. The driver, he said, also pointed to the Autopilot system as being engaged.

The crash comes as Tesla has been facing scrutiny involving Autopilot. Since March, a driver died when his Model X SUV crossed the center divider in Mountain View, Calif., while in Autopilot mode. And another driver in Salt Lake City was injured in a crash when her car hit a parked fire truck. In both cases, Tesla says it can tell from its logs that drivers were either distracted or ignored the car’s warnings to take control.

Tesla has repeatedly emphasized that the Autopilot system is intended only to assist, not replace, an alert human driver, and it requires drivers to agree that they understand how to use the feature before it can be activated. The company also insists the accidents are primarily the result of human error, not Autopilot itself. In a statement to USA Today, the manufacturer wrote, “When using Autopilot, drivers are continuously reminded of their responsibility to keep their hands on the wheel and maintain control of the vehicle at all times.”

However, last week Tesla settled a lawsuit alleging the semi-autonomous Autopilot feature was “essentially unusable and demonstrably dangerous.” The $5 million settlement dropped any use of the word “dangerous,” according to Reuters, and was instead officially paid out to resolve disputes over delayed updates to the Model S and Model X Autopilot features—though Tesla did seem to back away from its prior claims that media reports on Autopilot crashes were “disingenuous,” “inaccurate,” and “sensationalist.”

In April, Bloomberg reported that safety advocates were growing increasingly skeptical of Tesla’s claim that Autopilot reduced crashes by 40 percent, saying the company was misrepresenting National Highway Traffic Safety Administration data that itself had not been fully released to the public. By May, the NHTSA was publicly distancing itself from Tesla’s use of the statistic. A separate study by the Insurance Institute for Highway Safety found a much smaller reduction of 13 percent, though spokesman Russ Rader told Bloomberg the institute could not attribute that figure to any specific feature of a Tesla vehicle. CEO Elon Musk recently committed to regularly releasing Autopilot safety data, per The Verge.

Though engineers say self-driving cars are likely to be safer than human-piloted ones—humans are, in the aggregate, very unsafe drivers—something about either the concept or the companies developing it continues to raise red flags with the public. An AAA study from April 2018 found that 73 percent of respondents didn’t trust self-driving cars, following a string of accidents involving Tesla’s Autopilot as well as a fatal March crash involving a self-driving Uber vehicle.

[AP/USA Today]