A crash today involving a Waymo van is getting attention, coming in the same area just a short time after the Uber fatality. Waymo will not be assigned fault -- the driver of the car that hit the Waymo van veered out of his lane into oncoming traffic because somebody else was encroaching on the intersection. There were only minor injuries, but the crash involved higher energy than Waymo's prior crashes.

Update: Waymo says the vehicle was not in autonomous driving mode.

Waymo has released the video view from their car to make it very clear the accident was not their fault, even if the car had been driving itself.

This does, however, cause people to ask, "Could the Waymo car have done more to avoid being hit?" This question was also asked recently when a stationary Navya shuttle was hit by a truck that was backing up. In that case the Navya could have backed away to avoid being hit, as a human driver probably would have done.

It is my hope that at some point in the future, robocars will start to gain superhuman abilities, not just to drive safely but to avoid being hit by reckless drivers. That day is not any time soon, though. People forget how hard the problem of building a robocar is, and this particular task is not very high on priority lists. Once the higher-priority items have been well resolved, work on this will start to happen.

One reason teams will be reluctant to work on this is the fear of making things worse. From a liability standpoint, just sitting there is the low-risk choice. It's hard to blame a car for just sitting there if it had the right to do so. If a car starts backing up, swerving, or zooming away, you move into less tested and less charted territory. In some cases, like the Navya event, the move probably was relatively simple, and we might see some efforts made. We don't have much information on the geometry of the Chandler event, but it's easy to imagine situations like this where trying to move could make things worse. It might change the angle of the crash and alter the damage or injuries. Other cars might also suddenly move, causing other problems. The apparently safe path out might involve leaving your right-of-way, or doing things you've very rarely tested because they just don't happen very often.

This is just the sort of thing to test in simulators, as I was discussing earlier today.

Even when it all looks good, it can still go bad. Imagine you're stopped at a light in the #2 position and you see somebody barreling up behind you, not going to stop in time. You could elect to veer out of the lane if there's room, but that means the car in the #1 position is hit hard, without you there to buffer the crash. That pushes them into the intersection, where something far worse could happen -- it might change a fender-bender into a fatality, and it's very hard to predict.

To really get out of accidents, you have to understand the situation in all directions, have reasonable models of what other drivers will do to avoid the accident, and also be confident in your skills in very rare driving situations. This is one of the very few situations where V2V communication could actually do something, but it's such a rare situation that it's not worth doing V2V just for this.

Humans get a bit of a pass in what they do when somebody is heading for them. We know humans have limited ability to do instant physics and strategy in fractions of a second. Machines are not so limited, so we can be more bothered when they don't do it.

One situation where the calculations might be easier is a multi-car pileup. If you see a car braking hard in front of you while somebody else is tailgating you, braking hard yourself means you will be rear-ended. You could instead time your braking to be the absolute least you can get away with, just kissing the bumper of the car in front, to minimize the impact from the rear. That's fairly sure to make things better, though I am sure one can think of times when it could go wrong.
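The timing here is basic kinematics. As a minimal sketch (the constant-deceleration model, function name, and numbers are my own illustrative assumptions, not anything from a real robocar stack), the gentlest constant braking that still stops you within the available gap follows from v² = 2ad:

```python
# Illustrative sketch: pick the gentlest constant deceleration that still
# stops just short of the car ahead, to soften a rear-end hit from behind.
# The constant-deceleration model and the numbers are assumptions.

def min_braking_decel(speed_mps: float, gap_m: float) -> float:
    """Smallest constant deceleration (m/s^2) that stops a car traveling
    at speed_mps within gap_m meters. From v^2 = 2 * a * d."""
    if gap_m <= 0:
        raise ValueError("no gap left to brake in")
    return speed_mps ** 2 / (2.0 * gap_m)

# Example: 20 m/s (~45 mph) with 40 m of clear road ahead.
a = min_braking_decel(20.0, 40.0)
print(f"needed deceleration: {a:.1f} m/s^2")  # 5.0 m/s^2
```

At 5 m/s², this is well below a hard emergency stop (roughly 8-9 m/s² on dry pavement), so the car behind gets the longest possible time and distance to shed its own speed.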

We could also imagine, in this case, the car being truly superhuman by predicting not the impact of the silver car, but the black car's running of the red light. You do want to notice this (and not enter the intersection until it's clear), but you could imagine a very smart car noting that the silver car may be forced to swerve and might enter your lane. You could take pre-evasive action, including speeding up (and then braking hard for the car running the red) or other steps. This is very speculative, and the uncertainty cones for all the vehicles are large, but one could imagine thinking about it. Waymo will probably put this accident into their simulator, and then could try some things out -- it would impress the public a great deal. If the road were more crowded, it would be harder to do such tricks safely.
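To make the "uncertainty cone" idea concrete, here is a toy sketch, entirely my own assumption rather than anything Waymo does: project the other car forward with a constant-velocity model, widen a lateral uncertainty band around the prediction as time grows, and flag the first moment the band could overlap your lane.

```python
# Toy forward-projection sketch: flag when another car's predicted path
# could cross into our lane within a short horizon. The constant-velocity
# model and the widening "uncertainty cone" are illustrative assumptions.

def may_enter_lane(y0: float, vy: float,
                   lane_min_y: float, lane_max_y: float,
                   horizon_s: float = 3.0, dt: float = 0.1,
                   spread_mps: float = 1.5):
    """Project the other car's lateral position forward. At each step,
    grow an uncertainty band (spread_mps * t) around the prediction.
    Return the first time the band overlaps our lane, or None."""
    steps = int(horizon_s / dt)
    for i in range(1, steps + 1):
        t = i * dt
        py = y0 + vy * t                  # predicted lateral position
        half_band = spread_mps * t        # cone widens with time
        if py + half_band >= lane_min_y and py - half_band <= lane_max_y:
            return t
    return None

# Example: a car 3 m to our left, drifting toward us at 1 m/s,
# with our lane spanning y in [-1.5, 1.5].
t_hit = may_enter_lane(y0=3.0, vy=-1.0, lane_min_y=-1.5, lane_max_y=1.5)
```

In this toy example the cone first touches the lane around 0.6 seconds out, long before the car itself would arrive; a real planner would use far richer motion models, but the shape of the decision is the same.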

Because robocars, even when they can't avoid an accident, will know it is coming, this offers a few other special options. They could sound alerts to passengers and tighten their seatbelts. They could fire some airbags just in advance of impact instead of just after it, if that would help. They could even deploy airbags outside the vehicle to reduce the crash forces. In an extreme case, we might imagine a car that can quickly rotate its passenger compartment, so that the passenger is facing away from the impact and pulled back into her seat, allowing far more force without injury and avoiding the need for an airbag. That would be a pretty fancy passenger compartment, and it would only work if the impact is coming from a single direction.