Yesterday we saw Arizona kick Uber's robocar program out of the state. Arizona worked hard to provide very light regulation and attracted many teams, but now it understandably fears political blowback. Here I discuss what the government might do about this and what standards the courts, the public or the government might demand.

Waymo / Jaguar

Waymo's big announcement today was a partnership with Jaguar to base its next vehicle on Jaguar's expensive electric I-PACE, and to buy a large fleet of them. I think it's a surprising choice. While the luxury of such vehicles is nice, and electric makes sense, I somehow suspect that for a taxi people prefer vehicles like the minivan they now use, with high seats, easy entry and automatic doors. Less green, though.

Making a right turn

Some folks who have been investigating the video (I hate to watch it myself) have suggested that the car shows signs of starting a turn, and that the right turn indicators might be on. This provides some context that might offer an explanation, though not an excuse, for the system failure. In other words, very sloppy code, planning to exit the lane it was in, erroneously decided it did not need to treat a pedestrian in its soon-to-be-former lane as an obstacle to avoid. We're still at the point of speculation, and still waiting for Uber to release the real logs of what transpired in their spirit of full cooperation.

What should the government do?

Some have reacted to this tragedy by calling for more regulation. It does appear that Uber has lived up to its reputation as a "cowboy" and put the public at unnecessary risk. That is my standard for when regulation can make sense -- when it is shown that companies can't be trusted to act reasonably without it.

At the same time, it's not at all clear that any regulatory body would be better at writing safety rules than the companies are, or that the field would not change so quickly that the rules became obsolete before long.

As such, I hold to my existing position of relying on the regulation that already exists in tort and traffic law. It's already illegal to hit people, and always will be. There is evidence to suggest Uber will fail the usual tests in the courts on what good and best practices are, and on what reasonable duties of care are. If so, it will be punished, and quite severely. In fact, I think that Uber's self-driving program may receive the "death penalty" because of this incident, in that both the public will not trust it and management will shut it down or entirely revamp it.

If Uber receives a severe punishment in traffic or civil courts, and gets blocked from operation in other states, this will provide a strong message to all other players. A stronger one than NHTSA regulations might offer. If this does indeed become so strong a penalty that Uber's self-drive project gets an effective death penalty, I can't imagine the need for anything stronger than that. The public regulators might have a better moral sense than Uber, but they won't be better at writing rules on how to keep safe from a technical standpoint. Certainly not rules that will still make sense in 2021.

That's why companies like Waymo and Zoox actually hired former heads of NHTSA, the National Highway Traffic Safety Administration. While obviously these men help their employers navigate the regulatory environment, part of their role is to help the companies design safety protocols. (They are actually not allowed to do any lobbying or other professional interaction with their former colleagues for 3 years.)

Many may not appreciate that this is the norm in regulating automotive safety technologies. Pretty much all the technologies out there -- seatbelts, airbags, anti-lock brakes, stability control, blind-spot warning, adaptive cruise control, forward collision warning, lane-keeping and more -- were developed and deployed entirely without regulation, and then sold for years, even decades, before regulations were applied. When the regulations did come, they typically said things like, "This technology is so good, we're going to require every car to have it at some basic level." It would be highly unusual for regulation to describe how to build the technology before it is out with customers.

I know that some people will feel that some regulation should have been there, but at least on the surface (we don't know enough yet) there are few reasonable regulations I can think of that would have stopped Uber. While Uber's car did not perform to reasonable minimum standards, I don't think Uber deliberately put a sub-minimum car on the road.

There is one area I think regulation might have helped, and that's on the number of safety drivers. It is essential that teams be able to reduce to one, and then zero safety drivers in time, but we might consider regulations that had something to say about the switch from 2 to 1. (The switch to zero is already considered in many regulations.)

Problem is, Uber had racked up a lot of miles, more than most teams out there. So a rule saying you need a certain number of miles before dropping to 1 is hard to apply here. Their reported intervention rate is low, so we could look into requiring 2 drivers until the intervention rate reaches a certain level. Unfortunately, as discussed yesterday, there are lots of different types of interventions and they are treated differently by the different teams. It will be very hard to define a universal standard for what one is. Worse, as noted, pressure to reduce intervention counts could actually be the cause of accidents by making safety drivers reluctant to intervene.

One likely but expensive method requires detailed simulators from every team, which can tell if an intervention was truly needed or was just cautionary. We would also need to consider the question of interventions due to software fault alerts (which sound alarms and don't need a second safety driver) compared to interventions due to problems on the road (where two safety drivers can play a useful role).

In general, when considering the need for regulation, one should examine what things the companies might be motivated to lie about or be unsafe about. If there are high liabilities for accidents -- as I believe there are and will be -- there is low motive to lie. You're only lying to yourself, since you will certainly pay dearly for any accident which is your fault, and fault will be readily apparent.

This leaves the issue of companies being reckless in order to lower costs of testing and development. While this is a risky gamble for them, it is nonetheless a gamble that perhaps Uber was willing to take. Courts tend to punish such attitudes harshly, but intent can be hard to prove.

I would consider the following set of safety driving regulations. They would be based on measuring "miles per safety-necessary intervention." Teams can use simulation to distinguish safety-necessary interventions (i.e., ones where, without the intervention, a violation of the vehicle code or of others' right of way would have occurred) from the rest. If they don't have such simulation tools, they can do a human analysis, but "inconclusive" will be counted as safety-necessary.

Low-maturity software, whose current major revision is below some threshold of miles per safety-necessary intervention, must have two safety drivers on duty while testing. Higher-maturity software, above the threshold, may have only one safety driver while testing in the types of road situations for which it has reached that threshold. When only one safety operator is present, that operator's attention to the road must be measured, and the operator taken off duty if it drops too low. In addition, solo safety drivers must have regular breaks to avoid fatigue, and no task of the safety driver may require looking away from the road for more than a short glance. Fully unmanned operation (zero safety drivers) would be covered by other rules. Inherently unmanned vehicles (i.e. cargo robots) may be monitored from a chase vehicle with appropriate local-radio takeover mechanisms, and robots below a certain amount of kinetic energy (i.e. low mass and speed) would have lower requirements.
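As a concrete sketch, the staffing rule proposed above might look like this in code. The threshold, the data shapes and the classification labels are all my illustrative assumptions, not anything from a real regulation:

```python
# Hypothetical sketch of the proposed safety-driver staffing rule.
# The threshold value, the Intervention type and its labels are
# illustrative assumptions for this sketch only.
from dataclasses import dataclass

MILES_PER_NECESSARY_INTERVENTION_THRESHOLD = 10_000  # assumed maturity bar


@dataclass
class Intervention:
    cause: str    # "road" or "software_fault"
    verdict: str  # "necessary", "cautionary", or "inconclusive"


def required_safety_drivers(miles_driven: float,
                            interventions: list[Intervention]) -> int:
    """Return 2 or 1 safety drivers per the sketched maturity rule."""
    # Software-fault alarms don't call for a second driver, so only
    # road interventions count toward the maturity metric here.
    # Per the proposal, "inconclusive" counts as safety-necessary.
    necessary = sum(
        1 for iv in interventions
        if iv.cause == "road" and iv.verdict in ("necessary", "inconclusive")
    )
    miles_per = float("inf") if necessary == 0 else miles_driven / necessary
    return 1 if miles_per >= MILES_PER_NECESSARY_INTERVENTION_THRESHOLD else 2
```

Note that counting "inconclusive" as safety-necessary puts the burden on teams to build analysis tools good enough to clear their own record.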

The minimum levels

In not braking for a pedestrian who crossed three lanes and entered the car's lane, Uber's vehicle, I have argued, performed well below the minimum standards for such a vehicle. We don't know if it generally performs below that level, or if some very unusual event made it fail in only this situation. For now, though, it seems pretty bad. There are so many ways that the car's sensors and systems should have been able to detect and react to this pedestrian.

At a basic level, I believe we should expect a vehicle can do the following:

Detect and brake for an obstacle in its lane when that obstacle is clearly visible or clearly moving into its lane on urban streets. Broadly, this requires perceiving an approaching obstacle sooner than the 0.7-second reaction time of high-performing humans, and with room to stop given the stopping distance in current road conditions. Ideally, perception should take place with a margin above this, allowing less than full braking to be applied. Where practical, on clear city streets, swerving should also be available, but only if there is high confidence it will not worsen the situation.

At 25 mph, less than 60 feet is needed -- well within the range of all LIDARs, radars, stereo cameras and more.

At 40 mph, for human drivers, this means a need to perceive 140 feet (42 meters) out. This is well within the range of typical LIDARs, and even within the capability of widely separated stereo cameras (depending on illumination). It's also well within the range of radars. The typical range of low-beam headlights for human eyes is around 160 feet.

At full highway speeds of 75 mph, the distance is 108 meters -- this seriously pushes the limits of near-infrared LIDAR (which is to say most LIDARs), especially on a person in black clothing. It's also beyond stereo cameras, but not radar or 1.5 micron long-range LIDAR.
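As a back-of-envelope check of these figures, the distance needed is reaction distance plus braking distance. Assuming the 0.7-second reaction time above and a hard-braking deceleration of about 0.6 g (my assumption; the figures in the text imply somewhat different decelerations at different speeds, so this sketch lands in the same ballpark rather than on the exact numbers):

```python
# Rough perception-distance arithmetic: reaction distance + braking distance.
# 0.6 g is an assumed hard-braking deceleration, not a figure from the text.
G = 32.2           # gravity, ft/s^2
REACTION_S = 0.7   # high-performing human reaction time, per the text
DECEL = 0.6 * G    # assumed hard-braking deceleration, ft/s^2


def perception_distance_ft(speed_mph: float) -> float:
    """Distance at which an obstacle must be perceived to stop for it."""
    v = speed_mph * 5280 / 3600       # mph -> ft/s
    reaction = v * REACTION_S         # distance covered before braking starts
    braking = v * v / (2 * DECEL)     # kinematics: v^2 / (2a)
    return reaction + braking


for mph in (25, 40, 75):
    print(f"{mph} mph: ~{perception_distance_ft(mph):.0f} ft")
```

This yields roughly 60 ft at 25 mph, 130 ft at 40 mph and 390 ft (about 119 m) at 75 mph -- the same order as the figures above.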

In reality, while you may gain some distance thanks to faster reaction times, these stops require full hard braking, which presents a problem if somebody is riding your tail, illegal though that is.

(Understand that I suspect Uber falsely believed that their vehicle did perform to those levels and was unaware that it would not.)

However, there are several exceptions to this:

If the road is curved, cars are not expected to slow to a speed low enough to be able to brake for any stopped object that suddenly appears around a corner. While they could do so, human drivers almost never slow to this speed, for better or worse.

If an obstacle enters the lane suddenly, such as a pedestrian jumping into the street, it is not expected that anybody, human or robot, can surpass the laws of physics.

On limited-access freeways or other locations where pedestrians are forbidden and/or fenced off, a vehicle may exceed the maximum speed necessary for such a full stop, particularly if following another vehicle, and in particular so as not to be an impediment to traffic.
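To put the curve exception in numbers, here is a rough sketch of the fastest speed at which a car could still guarantee a stop within its sight line around a blind curve, using the standard middle-ordinate sight-distance approximation. The curve radius, obstruction offset, reaction time and deceleration are all assumed values:

```python
# Sketch: how slow must you go to guarantee a stop on a blind curve?
# All parameter values below are illustrative assumptions.
import math

G = 9.8            # gravity, m/s^2
REACTION_S = 0.7   # assumed reaction time, s
DECEL = 0.6 * G    # assumed hard-braking deceleration, m/s^2


def sight_distance_m(radius_m: float, obstruction_offset_m: float) -> float:
    """Available sight distance on a curve: S ~= sqrt(8 * R * m)."""
    return math.sqrt(8 * radius_m * obstruction_offset_m)


def max_safe_speed_ms(sight_m: float) -> float:
    """Solve v*t + v^2/(2a) = S for v (positive root of the quadratic)."""
    a, t = DECEL, REACTION_S
    return -a * t + math.sqrt((a * t) ** 2 + 2 * a * sight_m)


# Assumed tight curve: 100 m radius with a wall 2 m off the lane edge.
S = sight_distance_m(100.0, 2.0)
v = max_safe_speed_ms(S)
print(f"sight {S:.0f} m -> max ~{v * 3.6:.0f} km/h to guarantee a stop")
```

Even a fairly tight blind curve permits a guaranteed-stop speed in the 60-plus km/h range under these assumptions, which helps explain why human drivers rarely slow for this case at all.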

The highway rule matches human activity. Humans routinely overdrive their headlights on the highway. Because pedestrians and cyclists are not just forbidden from highways but physically fenced off, we all drive like they can't be there. (Pedestrians who do try to cross highways are very frequently killed because of this, but we have come to accept that rule.)

In addition, on the highway it is quite normal to drive at very fast speeds behind another car that blocks our view of what's in front. We rely on that other car to detect things, and we follow it with a headway just long enough to be sure we won't hit it, given our reaction time to its brake lights.

I would call these minimum levels "good practices." "Best practices" would be a level higher than this. In time, we will ask production cars deployed on the road to reach that higher level. During testing, with safety drivers, it is only necessary to ask for good practices. In fact, in the earliest phases of development, with very attentive safety drivers, I believe it is OK to deploy a vehicle that does not even meet these minimums, because the attentive human does. But you must take good care to ensure that human truly is that attentive.