Cars crash a lot: Nearly 37,500 Americans died on the roads last year. Autonomous cars would crash less (for one thing, they don’t drink or text or yell at their kids in the backseat). But that doesn’t mean drivers are ready to give over the wheel.

“There will be a horrific crash, not long after the vehicles are introduced, because automobiles crash a lot,” says David Groves, a senior policy researcher at the RAND Corporation, a policy think tank. “We are so numb and tolerant of the crashes that occur by the thousands all around us every year,” he says. “But the first autonomous vehicle crash is going to be extremely novel.” In other words: Expect a freak-out.

What then? Does a public backlash send potentially innovative tech spinning into disrepute or even obscurity, as happened with Three Mile Island and nuclear energy, or the Hindenburg disaster and airships? Those are the questions at the heart of new research published today by Groves and co-investigator Nidhi Kalra, a roboticist who heads up RAND’s Center for Decision Making Under Uncertainty.

The report addresses the doubts percolating around self-driving cars, but it's very clear that these things are coming. Just look at San Francisco; Tempe, Arizona; Michigan; Boston; Pittsburgh, Pennsylvania; or the secretive former Air Force base in California where Waymo conducts testing. But wide-scale deployment of autonomous vehicles hasn't actually happened yet, and regulators have a hard time knowing when totally self-driving cars will be ready to mix with human traffic.

The rational argument: Put them on the roads when they cause fewer deaths overall than human drivers. If humans cause 37,462 car deaths a year, and driverless cars cause 37,461, let ‘em roll. Counter-argument: The public will flip the first time a single person dies in a self-driving car accident, even if thousands of others have been “saved” by non-distracted, non-drunk robo-cars. (Witness the frenzy produced by the death of a driver behind the wheel of a semiautonomous Tesla.) The engineers may not mind a less-than-perfect robot. The public will likely prove less forgiving.

Presumably, though, there will be some moment where it makes sense, public safety-wise, to let autonomous vehicles own the road. But when is that? The RAND researchers used an analytic method called robust decision making to try to put some intellectual rigor into the question.

Their conclusion sounds clichéd: Don’t let the perfect be the enemy of the good. But it’s meaningful, too. They conclude that tens or even hundreds of thousands of lives could be saved by self-driving cars, even if regulators allow less-than-perfect cars on the road. As Groves puts it, “Even though we can’t predict the future, we found it’s really hard to imagine a future where waiting for perfection doesn’t lead to really big opportunity costs in terms of fatalities.”

Hard-ish Numbers

Self-driving cars are obviously not perfect yet. In fact, we have a pretty clear sense of how not perfect they are. The 43 companies testing self-driving cars in California must submit public “disengagement reports,” noting every time a human driver intervenes while behind the wheel of a self-driving car. Last year’s reports show these cars are getting better, but aren’t all the way there: Waymo's cars averaged 5,128 miles between disengagements—pretty good!—while Mercedes-Benz managed just 1.8—not so great. Today, autonomous vehicles are about as good as a standard crappy driver. "You’re probably safer in a self-driving car than with a 16-year-old, or a 90-year-old," researcher Brandon Schoettle told WIRED in August. "But you’re probably significantly safer with an alert, experienced, middle-aged driver than in a self-driving car."