Car companies are likely to go through a similar progression. After being widely embarrassed by their failure to consider security at all—the CAN bus, designed in the 1980s, has no concept of authentication—they now appear to be paying attention. When hackers demonstrated that vehicles on the roads were vulnerable to several specific security threats, automakers responded by recalling and upgrading the firmware of millions of cars. Last July, GM CEO Mary Barra said that protecting cars from a cybersecurity incident “is a matter of public safety.”

But the efforts being made to date may be missing the next security trend. The computer vision and collision avoidance systems under development for autonomous vehicles rely on complex machine-learning algorithms that are not well understood, even by the companies that rely on them (see “The Dark Secret at the Heart of AI”).

Last year researchers at CMU demonstrated that state-of-the-art face recognition algorithms could be defeated by wearing a pair of clear glasses with a funky pattern printed on their frames. Something about the pattern tipped the algorithm in just the right way, and it thought it saw what wasn’t there. “We showed that attackers can evade state-of-the-art face recognition algorithms that are based on neural networks for the purpose of impersonating a target person, or simply getting identified incorrectly,” lead researcher Mahmood Sharif wrote in an e-mail.

Also last year, researchers at the University of South Carolina, China's Zhejiang University, and the Chinese security firm Qihoo 360 demonstrated that they could jam various sensors on a Tesla Model S, making objects invisible to its navigation system.

Many recent articles about autonomous driving downplay or even ignore the idea that there might be active, adaptive, and malicious adversaries trying to make the vehicles crash. In an interview with MIT Technology Review, the chair of the National Transportation Safety Board, Christopher Hart, said he was “very optimistic” that self-driving cars would cut the number of accidents on the nation’s roads. In discussing safety issues, Hart focused on the need to program vehicles to make ethical decisions—for example, when an 80,000-pound truck suddenly blocks a car’s way.

Why would anyone want to hack a self-driving car, knowing that it could result in a death? One reason is that widespread deployment of autonomous vehicles is going to result in a lot of unemployed people, and some of them are going to be angry.

In August 2016, Ford CEO Mark Fields said that he planned to have fully autonomous vehicles operating as urban taxis by 2021. Google, Nissan, and others planned to have similar autonomous cars on the roads as soon as 2020. Those automated taxis or delivery vehicles could be maliciously dazzled with a high-power laser pointer by an out-of-work Teamster, a former Uber driver who still has car payments to make, or just a pack of bored teenagers.

Asked about its plans for addressing the threat of adversarial machine learning, Sarah Abboud, a spokesperson for Uber, responded: “Our team of security experts are constantly exploring new defenses for the future of autonomous vehicles, including data integrity and abuse detection. However, as autonomous technology evolves, so does the threat model, which means some of today’s security issues will likely differ from those addressed in a truly autonomous environment.”

It may take only a few accidents to stop the deployment of driverless vehicles. This probably won't hamper advanced autopilot systems, but it's likely to be a considerable deterrent to the deployment of vehicles that are fully autonomous.

Simson Garfinkel is a science writer living in Arlington, Virginia. He is working on a new book about the history of computing.