Self-Driving Has A Robot Problem

Why the Perfect Driver must pass the Turing Test

Roboticists have put more than 10 years and 10 billion dollars into self-driving cars. But the promises of just a few years ago have given way to doubt, as self-driving has failed to reach commercialization and internal milestones slip further into the future.

I suspect that the entire self-driving industry has been limited by a fundamental flaw in design.

From The Information:

“there were debates about whether [Waymo should] try to mimic typical human driving behavior that doesn’t strictly follow the law. But the team decided that ‘it doesn’t matter how a human would do it; we would need to do it like a perfect driver.’”

While the perfect driver is a noble aim, ignoring how people drive is a mistake. Driving does not happen in a vacuum or a lab, but on roads full of other people driving. It is a fundamentally social act. Human behavior and the perfect driver are not independent ideas, but in fact inextricably linked. Building a perfect driver without considering human behavior is impossible, akin to trying to talk without considering language.

With its roots in the DARPA Urban Challenge, the industry has approached driving as an individual robotics task — one car navigating a large obstacle course. In reality, driving is a complex social interaction, with millions of people working together to get home safely. Stability in this system requires successful interactions between people — you can’t drive safely without “buy in” from every other car around you. The perfect driver must drive and respond in a way that is predictable and familiar to other people on the road. As people are the only drivers on the road, “how a human would do it” is in fact the only acceptable way to drive. The Perfect Driver must pass the Turing Test, proving itself indistinguishable from other human drivers on the road.

Robotics misses the human part of driving — the interaction between people that is essential to safety and stability on our roadways. Fortunately there is a new and better way to do it.

Driving is a Team Sport, Not an Individual Medley

We drive in coordination with other people — every action we take on the roads generates a response from other drivers, and their responses in turn generate another action. This recursion reflects a dynamic system, complicated by all the nuances of human interaction.
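The action-response loop above can be sketched as a small coupled system. The follow-the-leader model below is a hypothetical illustration (its parameters and update rule are invented for this sketch, not any production stack): each car adjusts its speed in response to the car ahead, so a single braking event ripples backward through every driver behind it.

```python
# A toy sketch (hypothetical, illustrative only) of driving as a coupled
# dynamic system: each car's action feeds back into its neighbors' actions.

def step(speeds, gaps, dt=0.1, sensitivity=0.5):
    """One tick of a toy follow-the-leader model.

    speeds[i] is car i's speed; gaps[i] is its distance to the car ahead.
    Each car speeds up when its gap is wide and slows when it is tight,
    so one driver's braking propagates through the whole line.
    """
    new_speeds = []
    for i, v in enumerate(speeds):
        if i == 0:
            new_speeds.append(v)  # lead car holds its speed
        else:
            # respond to the neighbor's latest action via the gap
            target = speeds[i - 1] + sensitivity * (gaps[i] - 20.0)
            new_speeds.append(v + dt * (target - v))
    # gaps change as a consequence of everyone's responses (the recursion)
    new_gaps = [g + dt * (v_ahead - v)
                for g, v_ahead, v in zip(gaps[1:], new_speeds[:-1], new_speeds[1:])]
    return new_speeds, [gaps[0]] + new_gaps

# The lead car is suddenly slower; the followers' reactions propagate back.
speeds = [20.0, 25.0, 25.0, 25.0]   # car 0 leads
gaps = [0.0, 20.0, 20.0, 20.0]      # car 0 has no car ahead
for _ in range(50):
    speeds, gaps = step(speeds, gaps)
```

Even this toy version shows why the system only stays stable when every car responds to the others in a predictable way: freeze any one car's responses and the cars behind it have nothing sensible to react to.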

The optimal behavior of any single vehicle must therefore consider its impact on the entire system. A “perfect driver” must not only keep itself safe, but also keep the system intact, with everyone else safe as well. We should not avoid an accident ourselves, only to cause one for our neighbor.

By analogy, driving is a lot more like basketball than chess. In basketball, we continuously move around the court, dynamically responding to the other nine players to play the game. We don’t take turns making static movements on a frozen board. And this explains exactly why robots are not very good at driving. Robots are good at chess, but bad at basketball.

Programming a dynamic system in the natural world is perhaps the single greatest challenge in computer science. No one has ever done it, and it will probably never be done. This also happens to be why testing self-driving in simulation is useless: a simulation is only as realistic as the behaviors programmed into its actors, and it is impossible to write the correct behavior for every actor in every situation — there are simply too many. We are already seeing the limitations of programming as these cars enter the real world.

Again from The Information:

“Waymo’s prototypes sometimes respond to [other drivers’] maneuvers by stopping abruptly in ways that human drivers don’t anticipate. As a result, human drivers from time to time have rear-ended the Waymo vans.”

Robotics ignores human behavior, only to push a stable system into chaos. I doubt that anyone can write enough rules for robots to ever “fit in” in a way that is safe and acceptable to every other car on the road. We must rethink the process from the ground up.

The Perfect Driver

The perfect driver starts with people. Our roads demand predictable and familiar human behavior from every participant to keep every car and driver safe.

Recent breakthroughs in imitation learning have now made this possible. It starts with observation: collecting all of the macro and micro behaviors that make up human driving. We can then build a model that imitates those behaviors in software, creating a driver that behaves like a real person. Once we successfully imitate baseline human behavior, we can then take the next step to improve upon human driving. We first eliminate all the passive human errors, like non-observation (e.g. texting, falling asleep), and then identify and eliminate the active human errors that cause accidents or close calls.
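The observe-then-imitate step described above can be sketched as behavioral cloning: log (state, action) pairs from human drivers, then fit a model that predicts the human action from the state. Everything below is a hypothetical illustration under simplifying assumptions — a two-feature state, a linear policy, and a synthetic stand-in for logged human data — not Ghost's actual system.

```python
import random

random.seed(0)

def human_driver(gap, speed):
    # Stand-in for logged human behavior: ease off when close to the car
    # ahead, speed up when far; drift toward a comfortable cruising speed.
    return 0.3 * (gap - 20.0) - 0.2 * (speed - 25.0)

# Step 1, "observation": collect demonstrations of human driving.
# State here is just (gap_to_lead_car, own_speed); action is an acceleration.
data = []
for _ in range(1000):
    gap = random.uniform(5.0, 40.0)
    speed = random.uniform(10.0, 35.0)
    data.append(((gap, speed), human_driver(gap, speed)))

# Step 2, "imitation": fit a linear policy a = w1*gap + w2*speed + b
# to the demonstrations by stochastic gradient descent on squared error.
w1 = w2 = b = 0.0
lr = 0.0001
for _ in range(200):
    for (gap, speed), action in data:
        pred = w1 * gap + w2 * speed + b
        err = pred - action
        w1 -= lr * err * gap
        w2 -= lr * err * speed
        b -= lr * err

# The cloned policy now approximates the demonstrator on unseen states.
```

A real system would use richer perception features and a far larger model, but the shape is the same: the training target is human behavior itself, which is what makes the resulting driver predictable to the humans around it. The later refinement steps — filtering out demonstrations where the human was inattentive, then the ones that led to close calls — amount to curating which (state, action) pairs the model is allowed to imitate.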

Self-driving is a hard problem, but we have been making it harder by ignoring the most obvious clue. Human drivers have been giving us the answer key to safe self-driving for more than 100 years. Imitating their behavior is the first step on the path to perfection.

John Hayes is the CEO and cofounder of Ghost. He founded Pure Storage in 2009, taking the company public in 2015 (NYSE: PSTG). Follow him on Twitter @ghosthayes. Learn more about Ghost at www.driveghost.com.