When Venetian merchants hauled the first shipments of a popular Ottoman drink called coffee into 17th-century Europe, leaders in the Catholic Church did not exult at the prospect of increased productivity at the bottom of a warm cuppa. Instead, they asked Pope Clement VIII to declare coffee “the bitter invention of Satan.” The pontiff, not one to jump to conclusions, had coffee brought before him, sipped, and made the call. “This Satan's drink is so delicious that it would be a pity to let the infidels have exclusive use of it,” he declared, or so the (perhaps apocryphal) story goes.

Which is all to say: Sometimes people are so scared of change that they get things very wrong.

Today that metathesiophobia has found a new target in cars that occasionally drive themselves. And the fearful murmuring only got louder this week, when the National Highway Traffic Safety Administration opened an investigation after a driver in Utah crashed into a stopped firetruck at 60 mph, reportedly while Tesla's Autopilot feature was engaged. Every time a Tesla with its semiautonomous Autopilot feature crashes—one hit a stopped firetruck in Southern California in January, another struck a highway barrier in Mountain View, California, in March, killing its driver—it makes headlines. (One could imagine the same thing happening with a car using Cadillac’s Super Cruise or Nissan’s Pro Pilot, but those newer, less popular features have had no reported crashes.)

So, many are fearful. The National Transportation Safety Board and the National Highway Traffic Safety Administration have launched investigations into these crashes, while consumer advocates lob criticisms at Tesla.

Human factors engineers who study the interactions between humans and machines question the sagacity of features that allow drivers to take their hands off the wheel but require that they remain alert and ready to retake control at any moment. Humans are so bad at that sort of thing that many robocar developers, including Waymo, Ford, and Volvo, are avoiding this kind of feature altogether.

Elon Musk, a leader who inspires quasi-religious devotion in his own right, spurns this hand-wringing. “It’s really incredibly irresponsible of any journalist with integrity to write an article that would lead people to believe that autonomy is less safe,” he said on an earnings call earlier this month. “People might turn it off and die.”

Musk and Tesla spokespeople have repeatedly said the feature can reduce crashes by 40 percent. But a recent clarification from the National Highway Traffic Safety Administration and a closer look at the number reveal that the claim doesn’t hold up.

Still, it’s plausible that Autopilot and its ilk save lives. More computer control should minimize the fallout when human drivers get distracted, sleepy, or drunk. “Elon’s probably right in that the number of crashes caused by this is going to be less than the ones that are going to be avoided,” says Costa Samaras, a civil engineer who studies electric and autonomous vehicles at Carnegie Mellon University. “But that doesn’t change how we interact with, regulate, and buy this technology right now.” In other words: It’s never too early to ask questions.

So how can carmakers like Musk’s prove that their tech makes roads safer overall, enough to balance out the downsides? How can Autopilot follow in the path of the airbag, which killed some people but saved many more, and is now ubiquitous?

Experts say it would take some statistics, helped along by a heavy dose of transparency.

Data Gap

“The first thing to keep in mind is, while it seems like a straightforward problem to compare the safety of one type of vehicle to another, it’s in fact a complicated process,” says David Zuby, who heads up vehicle research at the Insurance Institute for Highway Safety.