I had been enjoying a quiet happy hour with my friend Linde. He was professing his love for Ayrton Senna da Silva, the Brazilian Formula One champion, recounting how Senna’s death at the track had moved him to tears. Our neighbor had started eavesdropping, and then interrupting. He knew all there was to know about Senna, all about Formula One, the Monaco Grand Prix, Porsches. But his salmon polo shirt, pleated khaki shorts, and boat shoes advertised that this bloviator had never gotten grease under his fingernails or raced at the track.

Linde has a more encyclopedic knowledge of all things supercar and motor racing than anyone I know. Tiring of the onslaught, he parried, “Dan here is writing a book on robot cars.”

“You know why I’ll never have a robot car?” asked the bloviator, turning his attention to me. “At some point, my car is going to have a choice between hitting a semi and hitting a minivan full of kids. It’s going to run me into the truck. I don’t want to die.”

Sated, he spun from his stool and revved his 911 into the night.

“What an ass,” I said, imagining a Wittgenstein on wheels.

“Yeah,” Linde agreed. “Porsche guys are like that.”

I stand by my assessment of the ass on the bar stool, but he is not alone in considering the darker side of robot cars. Some very smart people—with backgrounds in business, philosophy, and mechanical engineering, in Silicon Valley and elsewhere—also warn against looming dangers.

“The ethics of saving lives with autonomous cars is far murkier than you think,” writes Patrick Lin, director of the Ethics and Emerging Sciences group at California Polytechnic. Lin has been the most prolific advocate of the idea that driverless cars are a revolutionary kind of moral actor. His comments have appeared in the Atlantic, in Wired, on NPR’s “All Things Considered,” and in an elaborately animated TED-Ed web video in which Lin narrates a philosophical thought experiment. To paraphrase: Your driverless car trails an overloaded truck on a three-lane highway, when the trucker’s load suddenly tumbles off the back. Quick as a flash, the car—let’s call her Porsche—assesses the situation. She can swerve left into an SUV, swerve right into a motorcycle, or stay straight and collide with the boxes just ahead. A teraflop later, the car has taken into account who is riding in the SUV (the children are our future), whether or not the biker is wearing a helmet, and what’s in the boxes. What will Porsche do?

It’s a cute cartoon. Lin’s voiceover is at once soothing and foreboding. But it ignores the obvious question: Why was Porsche tailgating the truck in the first place?

Homicidal speculation also comes from the home of the guillotine. “Autonomous vehicles need experimental ethics,” suggests Jean-François Bonnefon of the Toulouse School of Economics. “Are we ready for utilitarian cars?” Bonnefon, a research psychologist, calls for supporting research psychologists, who can determine our preparedness for Benthamite Buicks.

These questions are too juicy for anyone at the bleeding edge of business and technology to ignore. And in the eyeball-chasing business, all subtlety is lost. From Digital Trends: “Should your self-driving car kill you to save a school bus full of kids?” “Who will a driverless car be programmed to kill?” asks a Fast Company headline. Wired has given Patrick Lin plenty of space for his opinion pieces, headlined by such warnings as, “The Robot Car of Tomorrow May Just Be Programmed to Hit You” and “Here’s a Terrible Idea: Robot Cars With Adjustable Ethics Settings.” But more sober outlets are riding the story. The CBC reports, “Computers could decide who lives and dies in a driverless car crash.” Even the Cornell Journal of Law and Public Policy came up with a horror-movie headline: “Who Lives and Who Dies? Just Let Your Car Decide.”

Philosopher Jason Millar claims to have originated the idea of the ethically challenged self-driving car in a 2014 paper on robotics. As a grad student he proposed “The Tunnel Problem”—a formulation that has done well online thanks to its simple name (supposedly an analog to the Philosophy 101 “Trolley Problem”).

In “The Tunnel Problem,” Millar’s driverless car (let’s call her Porsche again) is fast approaching a narrow tunnel, the entrance of which is blocked by a child who has fallen in the roadway. The car can either kill the kid or hit the wall of the tunnel, killing the driver (who is really just a passenger).

Millar insists programmers need to build such scenarios into their code. I imagine them writing something like this:

    if (kid_in_tunnel > 16) {
        kill kid_in_tunnel;
        print "We are sorry for your loss.";
    }
    else {
        kill ass_in_Porsche;
        print "Serves you right, schmucko.";
    }

Millar wants programmers to ask, “How should the car react?” But again, there’s a better question: “Why was the car going so damn fast in the first place?”

Only by looking past that better question can anyone imagine that driverless cars introduce novel ethical puzzles. Simply put, car ethics have been with us as long as cars have. When the horseless age began more than a century ago, venture capital and the rapid technological development of the automobile moved so quickly that government couldn’t, or wouldn’t, keep up. In fact, government didn’t step in even when industry asked for regulation. Both the National Association of Automobile Manufacturers and the American Automobile Association lobbied in the House and the Senate for national legislation to create minimum manufacturing standards for vehicles as early as 1902. Several bills were introduced, but none made it out of committee. In 1919, the chief engineer of Willys-Overland (a forgotten nameplate that rivaled Ford and Chevrolet in its day) expected the government to begin testing the safety of vehicles in the near future. He was right, although that future was distant. Legislators, in a decades-long sin of omission, failed to act. By the time they created the first standards, nearly 1.5 million Americans had died in crashes.

Without a federal umbrella, cities and states were left to build a new regulatory apparatus to cope with aspects of motor traffic both grisly and mundane. There was no such thing as a parking citation or a speeding ticket. When speed limits were enacted, police on foot had little hope of catching speeders. When the police finally got their own cars, traffic citations so overwhelmed the lower courts that municipalities had to create a parallel system of administrative justice. States passed rules of the road, but most did not issue licenses until the 1930s. Even then, they did not administer tests.

Planning for limited-access highways also began in the 1930s when concerns over road fatalities spiked. Twenty years passed before these plans were realized in a significant way with the Federal-Aid Highway Act (known better as the Interstate and Defense Highway Act) of 1956, which authorized $25 billion for the construction of more than 41,000 miles of limited-access highways. To combine speed with safety, engineers used grade-separated “interchanges” to replace “intersections.” Controlled only by traffic lights or stop signs, intersections are inherently dangerous. With no physical barrier to preclude collisions, they depend entirely on driver behavior. Red-light running killed 709 people and injured over 125,000 more in 2014. Interchanges—think cloverleafs and overpasses—physically separate opposing traffic. No matter how badly drivers behave, they stand little chance of dying at an interchange. From this perspective, the Interstates are the most high-minded roads in the nation.

Yet the Interstates were no panacea: they were built where land was cheap and the power to resist meager—namely in poor, and especially African-American, neighborhoods. The demise of the African-American Overtown section of Miami is typical: according to historian Raymond Mohl, “One massive expressway interchange took up twenty square blocks . . . and destroyed the housing of about 10,000 people”; “by the end of the 1960s, Overtown had become an urban wasteland dominated by the physical presence of the expressway.”

The social compact of the roads is literally cast in stone. Reworking the 1.5 billion tons of concrete in the Interstates would take an army of earthmoving equipment and an endless summer of road construction. Renegotiating the moral framework of the automobile itself might be less dusty, but it would take just as long. The US fleet tops a quarter billion vehicles and takes about a human generation to refresh. A gas guzzler sold in 2016 could still be on the road in 2046. Leaving aside the moral calculus behind the amount of CO2 each vehicle emits, our corporations build—and our government allows—machines capable of truly homicidal speeds. The Audi A6 tops out at 130; Fiat Chrysler’s Dodge Challenger can hit 182. (Linde tells me he’s only made it to “a buck thirty” in his.) GM has three cars that can close in on 200 mph. At that speed, even a driverless car might panic. It’s worth noting that electronics govern the top speeds on all modern cars, which is to say that lowering the upper limit would take little more than editing a line of code.
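To riff once more on the tunnel-problem pseudocode, that edit might look something like this (the constant name and both numbers are my invention, not any automaker’s actual firmware):

    max_governed_speed_mph = 155;  /* the status quo for many performance sedans */

becomes

    max_governed_speed_mph = 85;   /* the highest posted limit anywhere in the US */

One assignment statement, and the homicidal top end disappears from the spec sheet.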

Governments mandate certain safety features, but car sales are engineered so that optional safety features go first to those who will pay the most. Mercedes-Benz offered air bags on a high-end model a full decade before the US government began to require them. The latest life-saving features, such as automatic braking and lane-departure warnings, have arrived first at the top of the market. As a Mercedes-Benz executive asked rhetorically in 1971, “How can we sell some cars that are safer than others, even by option?”

And what about pedestrian safety? In 2016, even the lowliest new vehicle on the American road protects its passengers with seat belts, air bags, and crumple zones. But these improvements do nothing for what safety experts call “vulnerable road users”: those traveling on motorcycles, bicycles, or shoe leather. Protecting a 150-pound pedestrian from a 4,000-pound car may seem a tall order, but it’s an old idea. Both European and American inventors pursued this mitigation technology a century ago. E. J. Pennington’s Cleveland Motor Company offered a cowcatcher on its 1895 model. Various nets and baskets were designed to deploy automatically in the case of a collision. Quaint as the “Man Sweeper,” the “Protector,” and the “Man Catcher” of the 1920s seem today, the European safety agency demands modern versions of such features. Front ends must be designed to clip a pedestrian low and then “catch” them on the bonnet. Automakers have responded with lower, softer bumpers and higher bonnets (hoods). They’ve even developed bonnet and windshield air bags. These changes will eventually make their way to passenger cars sold in the US. Meanwhile, the uniquely American large pickups and SUVs sport ever higher, more aggressive, and decidedly pedestrian-unfriendly front ends. Not a single light truck model comes with a cowcatcher. Not even as an option!

More than one startup CEO has insisted that their company’s real mission is not to get bought by an auto company for a billion dollars but to save the 30,000 lives now lost on the roads each year. Whatever their level of sincerity, these CEOs neglect the fact that we can already significantly reduce the death toll without ethically challenged autonomous vehicles.

Take “Vision Zero,” the multinational road-safety project that undermines all speculation about how autonomous vehicles will make moral choices. Its simple assertion is this: No amount of mobility or economic efficiency is worth a single human life. “No level of fatality on city streets is inevitable or acceptable,” states New York City’s Vision Zero Action Plan. First proposed in the late ’90s in Sweden, the birthplace of the three-point safety belt, Vision Zero has since been taken up by Canada, the US, and several European nations. New York is one of ten US cities pursuing this vision of zero traffic fatalities. At root, the approach involves renegotiating the balance between mobility and safety, individual freedom and collective good. It is a thumb on the scale in favor of life.

Unfortunately, Vision Zero lacks the gee-whiz appeal of the driverless car. Instead of a photogenic Porsche speeding toward a tunnel, the poster child of Vision Zero is heavy traffic moving at 20 miles per hour. Also, its supporters lack the financial resources of the global corporations—based not just in Silicon Valley and Detroit but in Stuttgart, Toyota City, and Beijing. Nevertheless, its advocates are right to consider the entire sociotechnical machine in which the car itself is just a cog.

I’m optimistic about our robot car future. It will be really cool. But make no mistake that the development of driverless cars will flow from the same combination of forces that have carried us from the Model T to the Tesla. For some 120 years those forces have favored not mobility precisely, but automobility: a system that melds moving from place to place with industrial production and consumerism. Promoters of autonomous vehicles promise that they will defeat those forces, will wipe the slate clean. History suggests that they might also be consumed by them. To paraphrase Marx, “Cars make their own history, but they do not make it as they please.” Robot cars will be neither moral nor immoral in the narrow sense premised in the thought experiments now being conducted and sold as valuable. They will not exist outside of the current automotive ecosystem. They will instead enter an automotive landscape that instantiates myriad ethical choices made in the past and rehearsed daily. Nonetheless, society may one day conclude, as per Vision Zero, that no one should die on the altar of consumption and mobility. Except maybe an ass in a Porsche.

If you like this article, please subscribe or donate to support n+1.