There is an old thought experiment called the Trolley Problem that has become central to the development of autonomous cars. In the context of self-driving cars, it sets up a scenario in which an autonomously operated vehicle approaches, say, a nun herding a group of orphans from a burning hospital. There is no time to stop and no room to maneuver around the group. The car must therefore choose whether to run over the nun and orphans, likely killing them, or swerve into the burning building, likely killing the passengers. What should the car do?

On October 7th, Christoph von Hugo, manager of driver assistance safety systems at Mercedes-Benz, inadvertently became the first significant player at a car manufacturer to take a position on the Trolley Problem. According to von Hugo, the self-driving car should run over the nun and the children. Here's his statement from the Paris Auto Show, as quoted in Car and Driver: “If you know you can save at least one person, at least save that one. Save the one in the car. If all you know for sure is that one death can be prevented, then that’s your first priority.”

To be clear, this is not Mercedes's official position on the Trolley Problem. In fact, M-B's parent company, Daimler, issued a statement that walked von Hugo all the way back to, presumably, a dark woodshed in Stuttgart. But if that were its position, it would make perfect sense coming from the company that has been making arguably the safest and most popular luxury sedan in the world since the 1950s (that would be the Mercedes-Benz S-Class). It would also answer the obvious yet unspoken question in the minds of everyone for whom the S-Class is the gold standard: In an autonomous future, will the S-Class (and its competitors) protect its passengers, or sacrifice them for some idea of a greater good? Mercedes's survival absolutely depends on protecting its passengers above all others, because no one will get in an S-Class that doesn't.
As for the consequences, well, let the insurance companies figure it out, just as they do with human drivers today. Autonomous automotive altruism only has one outcome: dead brands.

Mr. von Hugo's position was a brave one, because sometimes educating consumers is more difficult than developing products for them. Once that statement is out there, how does one convince a clickbait-driven media that Mercedes aren't killer cars? And how does one appeal to first-world luxury customers who use high-ticket purchases to advertise their moral superiority?

His statement also highlighted a truth painful to armchair critics: in the real world, there is no Trolley Problem. There never was one.

For clarity, we turn to science fiction. The absurdity of the Trolley "problem" is best explained in the 2009 Star Trek reboot. Young Captain Kirk, faced with an unwinnable training simulation called the Kobayashi Maru, wins by hacking the simulation itself. "I don't believe in the no-win scenario," he later explains to Spock.

That most car crashes are mistaken for unforeseeable, no-win scenarios is largely a function of the language used to describe them. We call them "accidents," even though 56 percent of all incidents involve a single vehicle, and many agencies agree that almost all crash incidents, up to 94 percent, stem from driver error somewhere down the line. It doesn't help that most people consider themselves good if not great drivers, even though they aren't.

Every time we call a car crash an "accident," it reinforces the idea that the blame lies not with driver error (the most likely scenario) but with forces beyond one's control. We have even reached a point where drivers blame the weather or road conditions, as if those were acts of God rather than factors a driver must take into account from behind the wheel. When it comes to car crashes, most of us have been learning the wrong lessons, if we've learned anything at all.

But back to the Trolley Problem, or rather the lack of one. Consider the rarity of Trolley Problems in real life.
When was the last time you heard of a human driver forced to choose between the burning hospital and the nuns and orphans, or something with equally clear choices and similarly dire stakes? Let's suppose such a problem did occur in the real world. In order to choose, the driver would have to understand that he had a choice at all, which means the driver must:

1. Properly assess the situation (this assumes both very good eyesight and near-instantaneous information processing)
2. Know the exact braking distance of the car, factoring in degradation of that distance based on the current condition of the tires, brake rotors, pads, and fluids
3. Calculate that the braking distance exceeds the distance to either the group or the building
4. Know the handling characteristics of the car during emergency maneuvers
5. Calculate that the car cannot avoid either the group or the building
6. Weigh his options
7. Make a moral choice

There's no guarantee that Lewis Hamilton or Ken Block, let alone a very well-trained civilian, would be able to make that choice in real time. In the real world, whether the nuns and kids go splat or the driver tries to pull a Hey Kool-Aid! through a flaming brick wall is immaterial, because the "choice" will probably not have been a choice at all, but simply a panicked reaction based on instinct.
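To see why the braking-distance step alone is hopeless for a human in real time, here is a minimal sketch of the calculation involved. It assumes a textbook constant-friction stopping model; the friction coefficient, reaction time, and speeds are illustrative values, not figures from any real vehicle or from this article:

```python
def stopping_distance(speed_ms: float, mu: float = 0.7, g: float = 9.81,
                      reaction_time_s: float = 1.5) -> float:
    """Total stopping distance: distance covered during the driver's
    reaction time, plus braking distance v^2 / (2 * mu * g).
    mu is an assumed tire-road friction coefficient (dry pavement)."""
    reaction = speed_ms * reaction_time_s
    braking = speed_ms ** 2 / (2 * mu * g)
    return reaction + braking

def can_stop(speed_ms: float, distance_to_obstacle_m: float,
             mu: float = 0.7) -> bool:
    """True if the car can stop before reaching the obstacle."""
    return stopping_distance(speed_ms, mu=mu) <= distance_to_obstacle_m

# At 50 km/h (~13.9 m/s) with 40 m to the obstacle:
print(can_stop(13.9, 40.0))  # prints True (~35 m needed)
# Same speed, 30 m to the obstacle:
print(can_stop(13.9, 30.0))  # prints False
```

Even this toy version needs the car's speed, the tire-road friction, and the driver's own reaction time as inputs, quantities a panicked human cannot estimate, let alone compute with, in the second or two available.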