"We're really good at licensing drivers and regulating vehicles and the car sales industry, but we don't have a lot of expertise in developing those types of standards," Soublet said. "So as we start approaching things like that, we have to back off. We don't have the technical ability to do it. We have to come at this from a regulatory perspective of what we as a department are capable of."

Assuming they won't have the access, or the expertise, to examine Google's or Volkswagen's algorithms, they've come up with a compromise approach. One set of things they know they can measure is behavioral competencies. They know what a car needs to be able to do to drive on highways, on city streets, or on rural lanes. They can test for those competencies without actually peering under the metaphorical hood of the autonomous vehicle.

What Happens When Artificial Intelligence Fails?

Most efforts to regulate an emerging technology encounter opposition from the developers of the technology. This one is no different. The biggest area of contention came in the reporting of failures of the autonomous systems. Not just crashes, which I'm pretty sure everyone would agree need reporting, but what are called "disengagements." The regulations define a disengagement as "deactivation of the autonomous mode when a failure of the autonomous technology is detected or when the safe operation of the vehicle requires that the autonomous vehicle test driver disengage the autonomous mode and take immediate manual control of the vehicle."

Basically: a disengagement signals when the car's AI did not work. "That was something that was very important for us to get," Soriano said. "We had a lot of pushback on that from the manufacturers." Of course, they did not want to report when things went wrong to the government agency that would be in charge of their permits. But it seems vital for the safety of the public that someone outside the companies themselves know the true state of the technology.

They decided to include this requirement, Soublet said, when "we started hearing about this race to have autonomous cars on the road. Nissan says we're going to have something by 2020. Within a week, Mercedes-Benz said they'd have it by 2020. Then Volvo says no one is going to die in our cars."

Regulators worried that the companies, racing to be first, would push the limits of what their cars could do. In the end, the reporting requirements are not that onerous, but they are interesting, and they will certainly create the best data in the world about the state of self-driving cars. Each year, the companies will submit a monthly breakdown of their disengagements along with "the circumstances or testing conditions at the time of the disengagements." That will include:

The location: interstate, freeway, highway, rural road, street, or parking facility.

A description of the facts causing the disengagements, including: weather conditions, road surface conditions, construction, emergencies, accidents or collisions, and whether the disengagement was the result of a planned test of the autonomous technology.

The total number of miles each autonomous vehicle tested in autonomous mode on public roads each month.

The period of time elapsed between when the autonomous vehicle test driver was alerted to the technology failure and when the driver assumed manual control of the vehicle.

These are the benchmarks on which autonomous vehicles will be judged. And they highlight the key anxieties of the technology developers and regulators. What will cause disengagements? How often will they happen? Where will they happen? And how safe was the handoff to a human driver?
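To make the shape of the required report concrete, here is a minimal sketch of what one month's disengagement record might look like as a data structure. The field and class names are hypothetical illustrations of the regulation's required fields, not the DMV's actual reporting schema.

```python
from dataclasses import dataclass, field
from typing import List

# Location categories named in the regulations.
LOCATIONS = {
    "interstate", "freeway", "highway",
    "rural road", "street", "parking facility",
}

@dataclass
class Disengagement:
    """One disengagement event (hypothetical record layout)."""
    location: str            # one of LOCATIONS
    description: str         # facts causing it: weather, road surface, construction, etc.
    planned_test: bool       # was this a planned test of the autonomous technology?
    handoff_seconds: float   # time from driver alert to assuming manual control

@dataclass
class MonthlyReport:
    """A manufacturer's monthly breakdown for one test vehicle."""
    vehicle_id: str
    month: str                   # e.g. "2016-03"
    autonomous_miles: float      # miles driven in autonomous mode on public roads
    disengagements: List[Disengagement] = field(default_factory=list)

    def rate_per_1000_miles(self) -> float:
        """Disengagements per 1,000 autonomous miles, a natural benchmark."""
        if self.autonomous_miles == 0:
            return 0.0
        return 1000 * len(self.disengagements) / self.autonomous_miles
```

A derived figure like disengagements per 1,000 miles is exactly the kind of comparison this data would make possible across manufacturers for the first time.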

Speaking of that driver, another contentious issue was the requirement that a human driver be in the car and ready to assume control at all times. The regulations also require that autonomous vehicle test drivers complete a training course and have clean driving records. All sensible stuff.