The recent Federal Automated Vehicles Policy is long. (My same-day analysis is here and the whole series is being released.) At 116 pages (to be fair, less than half is policy declarations; the rest is plans for the future and associated materials), it is much larger than many of us were expecting.

The policy was introduced with a letter attributed to President Obama, where he wrote:

There are always those who argue that government should stay out of free enterprise entirely, but I think most Americans would agree we still need rules to keep our air and water clean, and our food and medicine safe. That’s the general principle here. What’s more, the quickest way to slam the brakes on innovation is for the public to lose confidence in the safety of new technologies. Both government and industry have a responsibility to make sure that doesn’t happen. And make no mistake: If a self-driving car isn’t safe, we have the authority to pull it off the road. We won’t hesitate to protect the American public’s safety.

This leads into an unprecedented effort to write regulations for a technology that barely exists and has not been deployed beyond the testing stage. The history of automotive regulation has been the opposite, so this is a major change. The key question is what justifies such a big change, and the cost that will come with it.

Make no mistake, the cost will be real. The cost of regulations is rarely known in advance, but it is rarely small. Regulations slow all players down and make them more cautious -- indeed, causing that caution is sometimes their goal. Regulations mean projects need "compliance departments," along with procedures and legal teams to assure the rules are followed. In almost all cases, regulations punish small companies and startups more than they punish big players. In some cases, big players even welcome regulation, both because it slows down competitors and innovators, and because they usually have skilled governmental affairs and lobbying teams which are able to subtly bend the regulations to match their needs.

This need not even be nefarious, though it often is. Companies that can devote a large team to dealing with regulations -- those that can always send staff to meetings, negotiations and public comment sessions -- will naturally do better than those that can't.

The US has had a history of regulating after the fact. Of being the place where "if it's not been forbidden, it's permitted." This is what has allowed many of the most advanced robocar projects to flourish in the USA.

The attitude has been that industry (and startups) should lead and innovate. Only if the companies start doing something wrong or harmful, and market forces won't stop them from being that way, is it time for the regulators to step in and make the errant companies do better. This approach has worked far better than the idea that regulators would attempt to understand a product or technology before it is deployed, imagine how it might go wrong, and make rules to keep the companies in line before any of them have shown evidence of crossing a line.

In spite of all I have written here, the robocar industry is still young. There are startups yet to be born which will develop new ideas yet to be imagined that change how everybody thinks about robocars and transportation. These innovative teams will develop new concepts of what it means to be safe and how to make things safe. Their ideas will be obvious only well after the fact.

Regulations and standards don't deal well with that. They can only encode conventional wisdom. "Best practices" are really "the best we knew before the innovators came." Innovators don't ignore the old wisdom willy-nilly; they often ignore it or supersede it quite deliberately.

What's good?

Some players -- notably the big ones -- have lauded these regulations. Big players, like car companies, Google, Uber and others, have a reason to prefer regulations over a wild west landscape. Big companies like certainty. They need to know that if they build a product, it will be legal to sell it. They can handle the cost of complex regulations, as long as they know they can build it. Small companies want to know their products will be legal, too, but they are willing to take more risk, and aim for targets that are currently illegal but probably will be legal when the time comes.

For reasons outlined below, I am not sure these regulations offer as much certainty as desired. Many of them have been left deliberately vague -- in the laudable goal of not trying to regulate too much at this time. Indeed, I believe the authors of the regulations hope they will offer this certainty and help progress.

I will contend that the certainty the vendors need could have been delivered with much simpler regulation. The fact is -- and I have been inside or close to many of the developers out there -- they are already safety obsessed. The potential liability of crashes already makes them do far more for safety than NHTSA could write into a standard. I do not believe that any developer has been acting so recklessly that they need to be reined in at this point.

Most of the regulations are fairly obvious things, known already to all major developers and already part of their plans. As such, in many cases I may not have criticism of the specific rules themselves. My criticism instead is of the idea of thinking you can write the rules at this time -- even when the rules explicitly state that they are currently vague and will evolve over time.

Mandatory data sharing

The mandatory data sharing rules are among the more radical elements. This actually could be a powerful role for government. As I have written, the most difficult challenge is building the testing tools to prove you've actually made a vehicle safe enough, and mandatory sharing of full data on any incidents and near-incidents could quickly create that test database. It also would greatly level the playing field between competitors, because the huge test experience of the big and old players is one of their major advantages in the game. More on this later. The government could also help by funding and promoting an open source simulator which is chock-full of test situations, with a way for every developer to test against them.

Voluntary regulations?

The current document describes compliance with the 15 point program as optional. Respondents making their filing -- the filing becomes mandatory four months after the Paperwork Reduction Act process is done -- can say that they decline to certify a requested item.

This might be seen as an out -- a regime where players can still experiment. But that is the case only if the statement that compliance is voluntary is really true. In the same document, NHTSA states that it wants to make the rules mandatory in the future, and that it reserves its power to recall any vehicle it doesn't think is safe.

My fear is that deliberately ignoring these regulations will be a scary path for many developers. They will know that if they do have a safety problem, and they explicitly declined to follow one of the regulations, they are likely to get in more trouble than if the opt-out had never existed.

NHTSA also has a great deal of power over experimental or revolutionary vehicles because of the existing and mandatory Federal Motor Vehicle Safety Standards (FMVSS). For example, those standards require a steering wheel, which Google wishes to omit. Google gets away with this only by working in an experimental low-speed vehicle of a class that is exempted from much of the FMVSS. To make a real vehicle, Google would need an exemption from the FMVSS to build it their way, and I doubt they would get one if they are not complying with this "voluntary" standard.

The rules are simply not written in a way that imagines compliance is truly optional.

The danger of citing existing standards

At a large number of places in these regulations, developers are called upon to follow the established standards for safety, reliability, software development practices and many other things that have been written by organizations like the ISO, Society of Automotive Engineers, NIST, the Alliance of Auto Manufacturers, ANSI, CIE, IEEE, US Dept. of Defense and NHTSA itself.

I am not offering a criticism of the various standards written by these bodies. While standards certainly have their flaws, even if we viewed all these cited bodies and standards as superb, the reality is that standards can only ever encode conventional wisdom. This makes it much more difficult to invent new and non-conventional ways of being safe which may violate the conventional rules but are actually safer. I'll offer some more detail on this in the Safety section.

Poor handling of machine learning and other radical methods

The NHTSA authors know about the revolution going on in machine learning and the use of trained neural networks in AI. They know it is one of the most heavily researched areas in robocars today, or they should. Yet many of these regulations are so tied to more conventional thinking that I fear they could preclude the use of many machine learning techniques, or at least make them more difficult.

As I have written, machine learning based approaches create a black box the developers don't fully understand. You don't know why it makes the decisions it does. If it makes wrong decisions, you can add new training data until it doesn't, but you don't know why the fix worked or how universal it is. That's scary, and one can understand how the regulations might discourage that. At the same time, neural networks are so powerful that we might find ourselves in a situation where we can choose between two approaches:

- A traditional, transparent approach which has X accidents per million miles, and we understand why; and
- A machine learning approach that has 60% of X accidents per million miles, but we don't know how or why.

Which is the right choice? What if it's 10% of the accidents of the transparent system?

Further, if they might block today's hot new method, what will they do with tomorrow's?

Overly detailed regulation vs. safety levels

These regulations contain long lists of things that cars should do. They are, to put it bluntly, a beginner's checklist of things a car developer might want to watch out for in building their system. This approach is not complete, nor will it ever be complete -- there will be many things not on the governmental lists which need to be done.

Writing lists now in regulations (rather than in advisory documents or research papers) could lead to the dangerous thought that the lists are complete. They will certainly become a checklist for teams.

Some people believe, however, that the safety goals should be expressed in a different way: the rules should demand that a system meet or exceed human safety levels by some amount, without saying how. You can write a regulation that says "be able to handle a police officer redirecting traffic" and nobody is going to argue with the need for that, but there are arguments against the government being the one to write it down. (First of all, that's already in all vehicle codes as far as I know, so it would already be required of all vehicles.)

The problem, at least in 2016, is better expressed as "Be this safe, and you figure out how to do that." Describe levels of unsafe events in broad categories like:

- Failure to avoid unexpected incursion into your lane
- Unplanned and incorrect departure from your lane
- Improperly close approach to another road user
- Impact with a physical object
- Impact with a person or vehicle with a person

Then put a multiplication factor on these elements for speed. Set up a score and charge the teams with meeting that goal. Don't tell them how to meet it.
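To make the idea concrete, here is a minimal sketch of what such a performance-based score might look like. All the category names, weights and the baseline target are invented for illustration; nothing here comes from the NHTSA document.

```python
# Hypothetical safety score: weight each category of unsafe event,
# scale by speed, and compare the fleet's total per million miles
# against a target. Every number and name below is invented.

# Severity weight per category of unsafe event (invented values).
CATEGORY_WEIGHTS = {
    "lane_incursion_not_avoided": 1.0,
    "unplanned_lane_departure": 2.0,
    "close_approach_to_road_user": 3.0,
    "impact_with_object": 5.0,
    "impact_with_person_or_occupied_vehicle": 20.0,
}

def event_score(category: str, speed_mph: float,
                reference_speed_mph: float = 25.0) -> float:
    """Score one unsafe event, scaled up at speeds above the reference."""
    speed_factor = max(speed_mph / reference_speed_mph, 1.0)
    return CATEGORY_WEIGHTS[category] * speed_factor

def fleet_score(events: list[tuple[str, float]], miles: float) -> float:
    """Total weighted score per million miles driven."""
    total = sum(event_score(category, speed) for category, speed in events)
    return total * 1_000_000 / miles

# Usage: a fleet logs two events over 10 million test miles.
events = [
    ("unplanned_lane_departure", 25.0),  # weight 2.0, speed factor 1.0
    ("impact_with_object", 50.0),        # weight 5.0, speed factor 2.0
]
score = fleet_score(events, 10_000_000)
HUMAN_BASELINE = 2.0                     # invented target score
meets_goal = score <= HUMAN_BASELINE
```

The point of this shape is that the regulator only sets the weights and the target; whether a team hits the target with lidar, neural networks, or some method nobody has imagined yet is entirely up to the team.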

Constant updates and timelines

The regulations require "entities to provide a Safety Assessment at least four months before active public road testing begins on a new automated feature." Furthermore, they ask for an update report for any update that will "materially change the way in which the vehicle complies (or take it out of compliance) with any of the 15 elements of the Guidance (e.g., vehicle’s ODD, OEDR capability, or fall back approach)." (The ODD is the operational design domain, which is to say the roads the vehicle works on, and the OEDR is the perception system and the resulting path changes due to it.)

NHTSA seems unaware of just how much of a burden this could be on the development process. Every team I know is constantly revising their software and developing new capabilities and functionality. They certainly won't wait four months to put something on the road after building it.

Teams are also constantly improving and adding functions, both to the perception systems, and to the set of roads that they can safely handle. These regulations don't require an update just to add a road to the map, but they do require it for the ability to handle a new type of road.

This simply isn't workable during testing and development, and would be tough even after deployment, since testing and development never end.

This is also one of many burdens on those attempting to use neural networks in their perception systems and especially their path planning systems. With these networks you train them rather than program them, and you only know if you have added a capability after you test it!

Going forward

In the next few days, I will release analysis of the different portions of the regulations. Then I will look at the plan for states and NHTSA's future plans. NHTSA hopes other countries (particularly in NAFTA) will copy these regulations, and the influence of the USA is strong, so this may well happen. There will be countries which don't copy this approach, and which have faster robocar development as a result. Will the USA realize what to do before it's too late? Once regulations like this are passed, they are very hard, if not impossible, to un-pass. Nobody wants to be the person who removed a safety regulation, because if an accident can be connected to that removal, you will get blamed. You must be very sure to get it right, because going back is very difficult.

You can read part three of the NHTSA series.