The newsletter "The Information" has reported a leak from Uber about their fatal accident. You can read the article but it is behind a very expensive paywall. The relevant quote:

The car’s sensors detected the pedestrian, who was crossing the street with a bicycle, but Uber’s software decided it didn’t need to react right away. That’s a result of how the software was tuned. Like other autonomous vehicle systems, Uber’s software has the ability to ignore “false positives,” or objects in its path that wouldn’t actually be a problem for the vehicle, such as a plastic bag floating over a road. In this case, Uber executives believe the company’s system was tuned so that it reacted less to such objects. But the tuning went too far, and the car didn’t react fast enough, one of these people said.

This is true as far as it goes. As I explain in my article on robocar sensors, you face the problem of false positives (ghosts) and false negatives (blindness). Generally, the rule is that you must never get a serious false negative, because if you do, you might hit somebody. So you have to build your system so that it will not miss another vehicle or pedestrian. You can sometimes miss them, i.e., not spot them in every frame, or not identify them when you first start perceiving them, but you must not fail to spot them for an extended period of time, particularly as they get close to you.

If you do that, you are going to get some false positives -- ghosts that aren't really there. Or things that are there, like blowing trash or birds, that you should not brake for. If you brake every time there is blowing trash or a bird, you have a car people find unusable, which is also no good.

If you can't do both -- never miss important obstacles on the road, and very rarely brake when you don't need to -- you are probably not ready to go out on public roads. You can, but only with diligent safety drivers.
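The tradeoff above can be sketched in code. This is a deliberately toy illustration (not anything resembling a real perception stack): a single confidence threshold decides whether to brake, and moving it trades false negatives against false positives. All numbers and the `should_brake` function are invented for illustration.

```python
# Toy sketch of the false-positive / false-negative tradeoff.
# A single confidence threshold decides whether to brake; lowering it
# eliminates misses (false negatives) at the cost of phantom braking
# (false positives). All values here are illustrative, not real data.

def should_brake(obstacle_confidence: float, threshold: float) -> bool:
    """Brake whenever perception confidence in a real obstacle meets the threshold."""
    return obstacle_confidence >= threshold

# Simulated detections: (confidence, is_real_obstacle)
detections = [
    (0.95, True),   # pedestrian, clearly seen
    (0.40, True),   # pedestrian, partially occluded
    (0.35, False),  # plastic bag blowing across the road
    (0.10, False),  # sensor noise
]

for threshold in (0.3, 0.5):
    false_neg = sum(1 for c, real in detections
                    if real and not should_brake(c, threshold))
    false_pos = sum(1 for c, real in detections
                    if not real and should_brake(c, threshold))
    print(f"threshold={threshold}: false negatives={false_neg}, "
          f"false positives={false_pos}")
# threshold=0.3 misses nothing but brakes for the bag;
# threshold=0.5 never phantom-brakes but misses the occluded pedestrian.
```

The point of the sketch is that no single threshold makes both numbers zero; a deployable system needs to be far better than this, which is why the text says you need diligent safety drivers until it is.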

It's not out of the question that a prototype car might have some occasional blindness problems. That's why it's a prototype, and why you have the safety drivers. Before you go out on the road, you need to make this extremely rare, though. So if the Uber fatality had involved some rather unusual situation, one might accept this sort of failure as within the realm of normal operations.

The problem is, this was not an unusual situation. This was a person simply walking across the road. The only thing slightly unusual is that she was walking a bike. But that's not that unusual. She was walking the bike behind her, so her full 3D image was seen by the LIDAR.

A typical situation where you might get blindness would be if one sensor detects the obstacle and the others don't. Say the LIDAR sees something, but there are only a few points returned, while the camera and radar see nothing. That's the kind of situation where your tuning might decide there is nothing there. If you're good, you might mark the area as one for special investigation in future frames, in case it's something right on the edge of your sensor range.

A woman standing in the road should be seen by all the sensors, and seen more and more clearly the closer she gets. This is what is perplexing -- the report by police that the vehicle never slowed at all. Even if you can imagine the sensors had a problem at 50m or more (they should not have), they have little excuse for not seeing her quite well at 40m, 30m and less. Yes, if you don't brake until 20m out, you will still hit her, but not nearly as hard.

It is more probable that instead what we have is the system seeing her and classifying her as not an obstacle -- which as noted you do for things blowing in the wind, or for birds. But she's no bird or trash bag. She's a lot larger, and a lot steadier in her path. She doesn't look like a plain pedestrian because of her bicycle, but again, that's not that unusual a thing. And more to the point, key to the algorithm above is "If you're not sure you can ignore it, stop." That means you fail safe -- if you definitely identify it as a bird, keep driving. But if you can't figure out what it is, don't hit it. She did have some trash bags on her bicycle -- though they are mostly blocked by her body. An unlikely but possible error would be seeing those trash bags, positively identifying them as trash bags floating in the air, but missing that they are on a bicycle. I doubt it.
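The fail-safe rule described above -- ignore an object only when you are positive it is ignorable, treat everything uncertain as an obstacle -- can be sketched minimally. The class names, confidence values, and `treat_as_obstacle` function are all hypothetical, not anything known about Uber's software.

```python
# Minimal sketch of the fail-safe classification rule: only ignore an
# object when it is *positively* identified as something ignorable.
# Anything uncertain fails safe and is treated as an obstacle.
# Labels and thresholds are illustrative assumptions.

IGNORABLE = {"plastic_bag", "bird", "blowing_trash"}

def treat_as_obstacle(label: str, confidence: float,
                      min_confidence: float = 0.9) -> bool:
    # Ignore only with a high-confidence positive ID of an ignorable class.
    if label in IGNORABLE and confidence >= min_confidence:
        return False
    # Everything else -- including "I can't figure out what it is" -- fails safe.
    return True

# A pedestrian walking a bicycle, even if poorly classified, stays an obstacle:
print(treat_as_obstacle("unknown", 0.4))       # True  -> slow/brake
print(treat_as_obstacle("plastic_bag", 0.95))  # False -> keep driving
print(treat_as_obstacle("bird", 0.5))          # True  -> not sure enough, fail safe
```

Note the asymmetry: the default answer is "it's an obstacle," and ignoring something requires affirmative evidence, which is the opposite of tuning that errs toward dismissing detections.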

Their sensors should have seen her too many ways -- dimly in the LIDAR at 100m but clearly from 60m onward. Decently in the radar (but easily mistaken for the returns from stationary objects as she was moving perpendicular to the car.) Clearly in motion detection on the camera images and in parallax detection. Reasonably clearly with stereo vision with a long baseline after about 40m (too late, but in time to slow a lot.) And while neural network computer vision is still a research area, she should still have been tagged by that in plenty of time, even in the dark.

Error rates and safety drivers

If your car misses something once every 100 miles, but your safety drivers catch 999 out of 1,000 misses, then you will only have a problem every 100,000 miles. That's just on the limit of possibly acceptable, because human drivers have a small collision every 100,000 miles or so, and one that gets reported to police every 500,000 miles or so. If you can attain better, like once in a million miles for the two combined, you are not putting the public at zero risk, but at a risk that's less than that of a typical car sent out on the road.
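The arithmetic behind these figures is simple enough to spell out. The numbers are the article's illustrative ones, not measured data:

```python
# Combined error rate of car + safety driver, using the article's
# illustrative numbers. A miss reaches the road only when the car
# misses AND the safety driver fails to catch it.

car_miss_every_miles = 100      # car misses something once per 100 miles
driver_misses_one_in = 1000     # safety driver lets 1 of 1,000 misses through

incident_every_miles = car_miss_every_miles * driver_misses_one_in
print(incident_every_miles)  # 100000 -- roughly the human small-collision rate

human_small_collision_miles = 100_000
print(incident_every_miles >= human_small_collision_miles)  # True, barely
```

This also shows why the combination matters: improving either factor by 10x gets you to the once-in-a-million-miles figure the article calls genuinely better than a typical human-driven car.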

How good the safety drivers actually are depends on how good the car is. If you drive with adaptive cruise control, you probably need to adjust steering fairly often, even if it's good with the brakes. As such, you remain fully attentive, and cruise control is considered safe for consumers. Once the car gets down to one error every 100 miles, safety driving gets a bit more boring, and it's harder to stay diligent. That's one reason most teams use two drivers.

Uber, it has been rumoured, was driving with an intervention needed every 13 miles. But they did it with one safety driver, who clearly was not doing her job acceptably, as the famous video shows her looking away from the road for 5 seconds before the accident.

While the safety driver could have been better -- obviously -- this is not the sole source of the problem. If Uber's systems were so poor that they would misclassify something as basic as a pedestrian walking a bicycle, they were not ready to go out with less than two alert safety drivers.

This type of alleged misclassification is still very strange for a team as experienced as Uber's. "Pedestrian walking bicycle" is not an obscure thing. In their testing they should have encountered it very often, as well as plain pedestrians. We will need to see what explanation there is for deciding that's a non-obstacle to figure out what this means.

Warning the safety driver

I have no knowledge of Uber's practice in this area, but it is also possible for the system, when it has uncertainty about the situation, to issue an audible warning to the safety driver. When you have two safety drivers, one usually is working as "software operator" and is watching the diagnostics coming from the system, but this watch is not constant, and with only one safety driver, you could only deliver audible alerts.

For example, when the perception system is deciding it has a false positive, it could judge probability and make an audible alarm to tell the safety driver, "pay particular attention here." Of course, the safety driver is supposed to be paying attention at all times, but no human is perfect.

Such a system would have to be "tuned" as well. If it is beeping all the time, it would quickly get ignored. There is also the risk that if it is pretty good, it might further encourage the safety driver to feel they can safely look away from the road for 5 seconds, as Uber's driver did. It should be combined with a driver gaze monitoring system.

I do believe you could make the system beep when it is in the state suggested in the leak: "We have identified an obstacle but concluded it is not one we should slow for." That state should be rare enough that a beep for that would not be overdoing it. It seems that Uber did not have any alerts for the safety driver, however, certainly not in this case.
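A sketch of that alert condition, for concreteness. This is speculative -- I have no knowledge of Uber's software, and the `maybe_alert_driver` function and its inputs are invented for illustration:

```python
# Hypothetical sketch of the alert rule suggested above: beep only in
# the rare state the leak describes -- an obstacle was detected but the
# system decided not to slow for it. Because that state is rare, the
# beep should not get tuned out by the driver.

def maybe_alert_driver(detected_obstacle: bool, will_slow: bool) -> bool:
    """Return True when an audible alert should sound for the safety driver."""
    return detected_obstacle and not will_slow

print(maybe_alert_driver(detected_obstacle=True, will_slow=False))  # True: beep
print(maybe_alert_driver(detected_obstacle=True, will_slow=True))   # False: car is already reacting
print(maybe_alert_driver(detected_obstacle=False, will_slow=False)) # False: nothing detected
```

The second case is the point made in the final paragraph: when the car itself slows, the deceleration is its own alert, so no beep is needed.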

Naturally, if a car decides to slow because it is uncertain, the safety driver feels the deceleration, and would normally look forward if otherwise distracted.