Just how safe autonomous vehicles need to be before they go on the market is a crucial question for policymakers. More than 37,000 people died on U.S. roadways in 2016 in crashes involving human drivers, yet studies show that people have little tolerance for mistakes made by machines. Some think autonomous vehicles need to be nearly perfect before they can be sold.

Mark Rosekind, while still the chief regulator at the National Highway Traffic Safety Administration, noted the problem with waiting for perfect cars to replace imperfect human drivers on the road.

“We can’t stand idly by while we wait for the perfect,” Rosekind said at a symposium in 2016. “We lost 35,200 lives on our roads last year. … If we wait for perfect, we’ll be waiting for a very, very long time. How many lives might we be losing if we wait?”

To answer that question, RAND researchers Nidhi Kalra and David Groves have developed new tools that could help policymakers decide when to put autonomous vehicles on the road. The researchers found that introducing autonomous vehicles when they are just better than human drivers—as opposed to nearly perfect—could save hundreds of thousands of lives over 30 years.

“Waiting for the cars to perform flawlessly is a clear example of the perfect being the enemy of the good,” Kalra said.

Most Crashes Are Caused by Human Error

Kalra knows first-hand the consequences of driver error. At age 19, she survived a serious collision with a tractor-trailer. A driver in a merging car failed to see Kalra in his blind spot; Kalra overcorrected while avoiding him, and her car spun around and hit the big rig behind her. Both Kalra and the other driver were at fault.

In fact, more than 90 percent of crashes are caused by human error, such as speeding, miscalculating other drivers’ behaviors, or driving impaired.

“We’ve all heard the argument that autonomous vehicles are never drunk, distracted, or tired,” Groves said, “so they could reduce the huge number of crashes involving these factors.”

Levels of Autonomy

In this article, the term autonomous vehicles refers to those vehicles at levels 3–5 of SAE’s taxonomy for motor vehicle automation:

Level 0: No automated features
Level 1: One automated feature, e.g., steering or throttle
Level 2: Automated steering and throttle, but the driver has primary responsibility
Level 3: Vehicle drives itself but may request help from the driver as needed
Level 4: Vehicle entirely drives itself in some conditions
Level 5: Vehicle entirely drives itself in all conditions

In our formal research documentation, we refer to vehicles having this level of autonomy as highly automated vehicles.

Of course, autonomous vehicles aren’t perfect either, but they’re getting better. The machine learning algorithms that govern their performance rely largely on experiencing various road conditions and situations to improve. The more miles that autonomous vehicles travel—on different roads, in different environments, and under various weather conditions—the more quickly their safety improves.

However, developers today have only small fleets of autonomous vehicles traversing public roads with trained safety drivers behind the wheel, so those miles aren’t accumulating very rapidly. If autonomous vehicle use were widespread, the cars would travel more miles, learn much faster, and make safety gains more quickly.

Wrestling with Regulation

Officials at the federal and state levels are debating the question of how safe the cars need to be before they can be introduced to the market. The federal Department of Transportation recently released its guidelines for companies developing autonomous vehicles. “The guidelines are voluntary—which may reflect uncertainty around what standards to apply, and how to test them,” Kalra said.

Congress has taken an interest in autonomous vehicles as well. The House passed a bill in September that would establish a national framework for regulation of self-driving vehicles and would make it easier to create exemptions to get more vehicles on the road sooner. The Senate is currently working on its version of the bill.

Meanwhile, some states appear poised to let driverless vehicles be tested on public roads without human drivers on board—for example, as early as June 2018 in California. And many cities are competing to be testing hubs for the developers in hopes of getting a piece of a market estimated to hit $42 billion by 2025.

The Missing Piece of the Policy Puzzle

But while policymakers have been focused on the safety of autonomous vehicles now and in the future, Kalra and Groves say they also need to think about what happens in between.

“What we don’t think about is the trajectory that gets us from here to there,” Kalra said. “How important is it that autonomous vehicles are safe when they’re introduced versus how quickly they improve? Do we allow them on the roads when they’re like teenage drivers or do we wait for them to be as good as professional drivers? We’re helping to answer that question by quantifying the lives at stake.”


That number depends on several factors: how quickly consumers trade in their conventional cars; how quickly autonomous vehicles improve; the extent to which autonomous vehicles change how much we drive; and whether the safety of non-autonomous vehicles also changes, to name a few.

Taking into account all these forces, Kalra and Groves developed a model to estimate the number of lives saved (or lost) over the coming decades under different scenarios of autonomous vehicle introduction, adoption, and improvement.
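The intuition behind such a model can be sketched in a few lines of code. The toy simulation below compares cumulative road deaths under two stylized policies: deploying autonomous vehicles immediately when they are only modestly safer than humans, versus waiting years for near-perfect vehicles. Every parameter here (adoption rate, improvement rate, baseline deaths) is an illustrative assumption of ours, not a figure from the RAND model.

```python
# Toy scenario model, loosely inspired by the approach described above.
# All parameter values are hypothetical assumptions for illustration.

def lives_lost(start_year, initial_risk, years=30,
               human_risk=1.0, baseline_deaths=37_000,
               adoption_rate=0.1, improvement_rate=0.15):
    """Estimate total road deaths over `years`, given the year AVs are
    introduced and their initial crash risk relative to human drivers
    (1.0 = as risky as a human driver)."""
    total = 0.0
    av_share = 0.0          # fraction of the fleet that is autonomous
    av_risk = initial_risk  # relative crash risk of AVs
    for year in range(years):
        if year >= start_year:
            # fleet turnover: the AV share grows toward 100 percent
            av_share = min(1.0, av_share + adoption_rate)
            # learning from accumulated fleet miles: risk falls each year
            av_risk *= (1 - improvement_rate)
        fleet_risk = av_share * av_risk + (1 - av_share) * human_risk
        total += baseline_deaths * fleet_risk
    return total

# Introduce AVs immediately, only 10 percent safer than human drivers...
early = lives_lost(start_year=0, initial_risk=0.9)
# ...versus waiting 15 years for near-perfect (90 percent safer) AVs.
late = lives_lost(start_year=15, initial_risk=0.1)

print(f"deaths with early, imperfect AVs:   {early:,.0f}")
print(f"deaths waiting for near-perfect AVs: {late:,.0f}")
print(f"lives saved by early deployment:     {late - early:,.0f}")
```

Under these made-up parameters, early deployment of merely "better than human" vehicles saves lives on the order of hundreds of thousands over 30 years, which is the qualitative pattern Kalra and Groves report; the RAND model, of course, explores many such scenarios rather than a single hand-picked pair.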