The Philosopher Automobile

On the Morality of Self-Driving Cars

There have been a number of #thinkpieces regarding the “complicated” morality of self-driving cars. And they all say the exact same thing.

They claim that these cars will have to make complex moral decisions. For example, if there are two pedestrians in the street, the car might choose to crash and kill its passenger to avoid hitting them. After all, two is more than one. It’s the Trolley Problem. Literally, just the Trolley Problem. Your freshman year, philosophy-major roommate found it compelling.

It’s also ignorant and dangerous.

I’m going to keep saying this until everyone gets it: We will not program cars to make complicated calculations about who gets to live and die. The cars will behave according to intelligent and predictable rules. This is not the problem it is being made out to be.

Before we even talk about the possibility of Judge Dredd-mobiles, let’s talk about the baseline safety of a world with self-driving cars. They won’t speed, they won’t tailgate, they won’t run red lights, they won’t turn early, they won’t slam on the brakes, they won’t get distracted or drunk or angry. Cars are only dangerous because humans operate them.
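To make "intelligent and predictable rules" concrete, here is a minimal sketch of what a rule-based emergency response could look like. Everything in it is illustrative (the names, the 5 m/s² deceleration figure, the 2-meter margin are all made up); the point is what the function *doesn't* take as input: no ages, no head-counts, no occupations, nothing to weigh one life against another.

```python
from dataclasses import dataclass

@dataclass
class Obstacle:
    distance_m: float  # distance ahead, in meters
    in_lane: bool      # does it actually block the car's path?

def emergency_response(speed_mps: float, obstacles: list[Obstacle]) -> str:
    """Same rule, every time: brake in-lane. No swerving, no triage."""
    # Illustrative stopping distance at an assumed 5 m/s^2 deceleration,
    # plus an assumed 2 m safety margin.
    stopping_distance = speed_mps ** 2 / (2 * 5.0) + 2.0
    threats = [o for o in obstacles if o.in_lane]
    if not threats:
        return "continue"
    if min(o.distance_m for o in threats) > stopping_distance:
        return "brake_normally"
    return "brake_maximally"
```

The design choice is the argument: the controller's behavior is a fixed, auditable function of physics (speed, distance), not a valuation of the people involved.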

And just as now, a future with automated vehicles would have designated safe spaces, signs, and signals for other forms of traffic. Pedestrians will know when and where they can be, and bike lanes will be functional for the first time in human history without greedy assholes sitting in them with their four-ways on.

There. You’ve just eliminated almost all vehicular accidents.

I’m going to be honest and say I don’t know what the remaining accidents will look like, and I’m not sure anyone outside of the development labs does either. But if these cars have perfectly vigilant sensors, safe and predictable behavior, and reaction speeds many times faster than humans… well, what does that leave? A boulder falling on your car?

I’m not sure a scenario where self-driving cars are forced to choose between two deaths is even plausible (and that’s before assuming these cars form a vast interconnected network sharing real-time information).

Regardless, the thinkpieces claim that this small number of theoretical mystery accidents will contain some even smaller number of scenarios where cars are forced to choose between murdering one person or another. Yes, a self-driving car that chooses whom to kill is a terrifying idea.

It’s also so stupid that no engineer in their right mind would ever do it.

The Utilitarian answer to the Trolley Problem seeks to maximize value. It chooses two lives over one life, infants over senior citizens, “productive” members of society over criminals. Why? Because humans often think that way. We have bias and prejudice that we construe as justice. When we see the dark possibility of philosopher cars, it is merely a reflection of our own cruelty.

We cannot afford to imagine a future where car accidents become car homicides, where our automobiles value one human life over another. No one — pedestrian or passenger — is safe in such a world.

The most moral choice for self-driving cars is not to choose. They will be so much better at driving than we ever were that accidents will be virtually eliminated. We should not take it upon ourselves to turn that tiny fraction of remaining accidents into murders.

Let’s stop muddying the discourse with dramatic musings for the sake of Content when what we’re really talking about is the greatest safety feature since the invention of the seat belt.