AUSTIN—What’s the right level of safety for self-driving cars to achieve before the industry can design them so that a human can completely ignore what’s happening on the road? “We operate internally with the assumption that an autonomous car needs to be 100 times better than a human,” Axel Nix, a senior engineer on Harman International’s autonomous vehicle team, said Saturday during a panel at South by Southwest.

The most immediately surprising part of this statement is that Harman is the one saying it. If you’re familiar with the company, it’s probably because it made the speakers attached to your computer. But Harman has also blossomed inside cars as a maker of high-end automotive entertainment systems, a business that will only grow more important as people pay less and less attention to the road.

Nix took part in a panel on semi-autonomous cars, where stakeholders discussed the difficult transition from humans taking complete responsibility for what cars do toward lessening that responsibility by degrees. Autonomous vehicles have proved to be one of the hottest topics at this year’s interactive conference, with multiple panels on the topic and a display of the Chinese NIO concept car.

It’s a nice idea to talk about, but the Intelligent Car Coalition pulled this panel together to help the public think seriously about the messy process of moving toward automobiles in which people can drink and not drive.

“I don’t think the public at large even understands the cars they are driving today,” Nix said. For example, he speculated that there are drivers out there who simply let their cars roll to a stop when power steering fails, because they don’t understand that they can still turn the wheel if they just pull harder.

“I don’t think we can put it on the humans to truly understand the capability of the technology,” Nix argued. “I think the technology has to understand the human.”

Nat Beuse, a researcher at the National Highway Traffic Safety Administration, said that traffic deaths have started moving in the wrong direction for the first time in years. Meanwhile, computer systems have shown more promise lately. “For the first time in a very long time,” he said his agency “is really trying to think hard about how we can use computer systems to avoid crashes in the first place.”

When the NHTSA released its “Federal Automated Vehicles Policy” in September 2016, it spoke directly to the question of safety metrics. Research needs to help regulators think not just about how safe vehicles need to be, but also about how to measure that safety.


“I can tell you for sure it better not be crashes,” Beuse said, “because if we’re measuring crashes you are measuring the wrong thing.” For example, it might be better to measure near misses instead, but that requires finding a way to define a near miss, which is extraordinarily complicated in itself.

And those near misses get far more frequent when humans fail to take control of semi-autonomous cars when they should. Even engineers at Ford came to trust their self-driving cars too much, and that’s why the company plans to skip the semi-autonomous stage and jump all the way to the almost entirely autonomous era.

“Societal acceptance of involuntary risk is very low,” Nix explained. It’s one thing to drive too aggressively yourself, but it’s another thing to get into something you have no control over. That’s why people become so alarmed when something goes wrong with one of today’s vaguely autonomous vehicles.

If the technology rolls out in a limited fashion and proves to be very good, it is hard to believe that society will ultimately insist vehicles be 100 times better than humans. But humans are sure to be more comfortable with the technology’s progress in the early days if they know that the makers of these cars have aimed for that standard.