The Trolley Problem Will Tell You Nothing Useful About Morality

You are on an asteroid careening through the cosmos. Aboard the asteroid with you are nine hundred highly-skilled physicians, who have been working on developing a revolutionary medication that will cure every disease in the known universe. The asteroid’s current trajectory is taking it straight toward the Planet of Orphans, where all intergalactic civilizations have dumped their unwanted offspring, of which there are now 100 trillion, all living, breathing, and mewling. If you detonate the asteroid, all of the doctors will die, along with the hope for curing every disease in the universe. If you do not detonate the asteroid, the doctors will have time to develop the cure and send it hurtling toward the Healing Planet before you crash into and destroy the Planet of Orphans. Thus you face the crucial question: how useful is this hypothetical for illuminating moral truths?

The “Trolley Problem” is a staple of undergraduate moral philosophy. It is a gruesome hypothetical supposedly designed to test our moral intuitions and introduce the differences between Kantian and consequentialist reasoning. For the lucky few who have thus far managed to avoid exposure to the Trolley Problem, here it is: a runaway trolley is hurtling down the track. In the trolley’s path are five workers, who will inevitably be smushed to a gory paste if it continues along its present course. But you, you have the power to change things: you happen to be standing by a switch. If you give the switch a yank, the trolley will veer onto a different track. On this track, there is only one worker. Do you pull the switch and doom the unsuspecting proletarian, or do you refrain from acting and allow five others to die?

Most people announce that they would pull the switch, thus extinguishing one life instead of five. But usually someone in the class will dissent, and say that pulling the switch is wrong because there is a difference between killing someone intentionally versus letting them die through circumstances beyond your control. A discussion will ensue about the action/inaction distinction. Then variants will be introduced: what if you could save the five people by pushing an obese man in front of the trolley? What if the obese man was evil? This leads to further scenarios: what if you were a doctor in a remote country who could save five dying people by killing one and harvesting his organs? What if you were part of a group of Jews hiding in a basement in 1941 while the Gestapo searched your house, and your baby started to cry—would you be justified in smothering it to death to save a dozen others? (Existential Comics has nicely lampooned the tendency of trolley hypotheticals to quickly spiral out of control with more and more elaborate sets of conditions and caveats.)

If all of this sounds incredibly stupid, with no obvious relationship to any moral problem that an ordinary human is likely to encounter, that’s because it is. And yet it is an “iconic philosophical thought experiment,” one which “has occupied the attention of brilliant minds, from academic ethicists to moral psychologists to engineers.” In psychology, literally hundreds of studies have tested people’s responses to the trolley problem with the aim of usefully understanding human moral intuition. On social media, trolley problem memes have even become unexpectedly popular.

The persistence of the trolley problem in philosophy and psychology tells us a lot more about the state of those fields than it does about ourselves and our moral choices. Here we have a hypothetical that is essentially on par with The Asteroid And The Orphans, being treated as a helpful window into moral questions.

It’s very obvious what would happen if any of us ever encountered a “trolley problem” in real life. We would panic, do something rash, and then watch in horror as one or more persons died a gruesome death before our eyes. We would probably end up with PTSD. Whatever we ended up doing in the moment, we would probably feel guilty about it for the rest of our lives: even if we had somehow miraculously managed to comply with a consistent set of consequentialist ethics, this would bring us little comfort. In fact, the total lack of insight that a real-world trolley problem would provide is illustrated well in this scene from NBC’s The Good Place. Michael, a demon taking an ethics course, decides that he could better understand the trolley problem if it were less abstract, and plunges his bewildered professor into a realistic version of the scenario, an actual trolley rattling toward five oblivious track-repairmen. What happens, predictably, is panic followed by horror followed by the spattering of guts. “What did we learn?” Michael says to his blood-soaked teacher after the trolley has come to a stop. The answer, as we can all see, is nothing.

By thinking seriously about the trolley problem, i.e. considering what the scenario being described actually involves, we can see why it’s so limited as a moral thought experiment. It’s not just that, as the additional conditions grow, there are not any obvious right answers. It’s that every single answer is horrific, and wild examples like this take us so far afield from ordinary moral choices that they’re close to nonsensical. The trolley problem may not be much different from playing “Marry-Fuck-Kill,” or asking horrible questions like “If you had to kill one of your parents, which one would it be?”; “If you had to bomb a factory or stab a nun, which would you do?”; “If someone paid you enough money to vaccinate 13,000 children against malaria, would you commit a sex crime?” It’s not that it’s impossible to have discussions about these scenarios; on the contrary, people could spend hours debating them, and there’s a dark temptation in the human subconscious to contemplate these kinds of sinister ideas. It’s that the answer to the “What did we learn?” question will be the same regardless of which answer we choose: “I learned that I have kind of a sick mind.” That should be the major revelation that comes from realizing that we’re willing to dispassionately discuss which person we would murder, and how much value to place on individual human lives. To encourage someone to think about these questions is to encourage them to be a worse and more callous person, and what the trolley problem largely shows is that it’s very easy to temporarily become a psychopath if your professor says doing so will be intellectually useful.

In real life, very few people face trolley problems, unless their job is literally to program collision avoidance algorithms for driverless cars. (Actually, around the Current Affairs offices in New Orleans, we do encounter trolley problems, though most of them are of the “large log stuck on the tracks” variety.) The first thing every student introduced to the problem notices is how implausible it is, and a major problem for teachers is that students have to suppress their laughter and take the problem seriously. Psychologists, who had long been using the problem to try to usefully analyze moral instincts, are now beginning to conclude that it “doesn’t tell us as much about the human condition as we might hope,” since it is—astoundingly enough—“too silly and unrealistic to be applicable to real-life moral problems.”

But the trolley problem is not just a pointless exercise. It could also be a damaging one, because of the way in which it gets students to start thinking about moral questions. The first limitation of the trolley problem is that it places us in a situation of forced decision-making, where all the future outcomes of our choices are completely certain, and all of them are bad. (The trolley problem, by the way, also encourages people to be confident that they can predict outcomes, setting aside the uncertainty that characterizes all actual tough decision-making.) Unless you are a very particular kind of strict utilitarian, who truly believes that killing one innocent person is “good” if five other people get to live, the trolley problem is not a “moral quandary” that asks you to choose between one option that is, say, good but difficult, and another option that is, say, bad but easy, thus testing the strength of your willingness to do the right thing in adverse circumstances. Rather, you are in a situation where any choice you make will result in people’s deaths: any decision-making pathways that would allow you to reduce the likelihood of people being hurt (can you shout to the workers to move? can you throw yourself down onto the track to slow the trolley’s progress?) have been presumptively closed off. The thought experiment is designed to place us into a situation that has already unfolded. We are helpless victims of our conditions, who face a binary choice with two horrendous outcomes. Our choice does not occur, as human moral choices actually do, as part of a chain of decision-making. Literally everything has been decided for us by an unseen external force, except who will die, which is conveniently left up to us.

It’s not just the fact that trolleys are outmoded—or that, since they travel at about 10 mph, the workers would probably see the damn thing coming and just move out of the way—that makes this a highly unrealistic hypothetical. It’s that in the actual world, decisions do not occur in this kind of vacuum, and it’s just as important to pay attention to the factors that structure individual choices as to the nature of those choices. For example, we can ask whether it’s morally justified for me to steal a block of cheese in order to feed my starving, cheese-addicted child. (It is.) But if we focus on hashing out that question, debating how individuals should balance their obligation to follow the law with their obligation to their loved ones, we miss the far more crucial one: why am I even in this situation? The whole reason I am faced with an unpleasant set of choices is that I live in a highly unequal society in which children are deprived of the basic cheeses they need in order to survive. If we zero in on the question of what I should do once my choices have been set for me, we fail to ask whose actions caused me to have those particular options available to me, a.k.a. How Did I End Up On This Fucking Trolley To Begin With? If I am forced against my will into a situation where people will die and I have no ability to stop it, how is my choice a “moral” choice between meaningfully different options, as opposed to a horror show I’ve just been thrust into, in which I have no meaningful agency at all? Let’s think a bit more about who put me here and how to keep them from having diabolical power over others. (Some might say this makes the trolley problem the perfect philosophy question for the “neoliberal” era, since it reduces everything to individual choice and tells us there is no alternative to existing power structures. Since the word “neoliberalism” is banned from the pages of Current Affairs, though, we ourselves would not say this.)

The “who should have power over lives” question is often completely left out of philosophy lessons, which simply grant you the ability to take others’ lives and then instruct you to weigh them in accordance with your instincts as to who should live or die. Now, it’s true that, for example, an emergency-room doctor or a rescuer frantically excavating a collapsed building after an earthquake may have to make some very difficult decisions about how to apportion their time and resources: but realistically speaking, assuming the lifesavers are making a good-faith effort to help as many people as possible without prejudice, a thousand mundane and logistical factors (who came in first? who do I actually have the right tools to save? how soon will reinforcements arrive?) will dictate how these tough choices are made, not abstract metaphysical calculations. But what about situations where people are making high-level life-or-death decisions from a distance, and thus have the leisure to weigh the value of certain lives against the value of certain other lives? Perhaps the closest real-life parallels to the trolley problem are war-rooms, and areas of policy-making where “cost-benefit” calculations are performed on lives. But in those situations, what we should often really be asking is “why does that person have that amount of power over others, and should they ever?” (answer: almost certainly not), rather than “given that X is in charge of all human life, whom should X choose to spare?” One of the writers of this article vividly recalls a creepy thought experiment they had to do at a law school orientation, based on the hypothetical that a fatal epidemic was ravaging the human population. The students in the room were required to choose three fictional people out of a possible ten to receive a newly-developed vaccine. (Disturbingly, this was presented as a “negotiation” exercise, and was intended to see how effectively the exercise participants could arrive at “consensus” within their groups.) The groups were given biographies of the ten patients: some of them had unusual talents, some of them had dependents, some of them were children, and so on. Unsurprisingly, the exercise immediately descended into eugenics territory, as the participants, feeling that they had to make some kind of argument, began debating the worthiness of each patient and weighing their respective social utilities against each other. (It only occurred to one of the groups to simply draw lots, which would clearly have been the only remotely fair course of action in real life.) This is a pretty good demonstration of why no individual person, or small group of elites, should actually have decision-making authority in extreme situations like this: all examinations of who “deserves” to live rapidly become unsettling, as the decision-maker’s subjective judgments about the value of other people’s lives are given a false veneer of legitimacy through a dispassionate listing of supposedly-objective “criteria.”
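Notice, incidentally, how little there is to the fair procedure: drawing lots requires no biographies, no criteria, and no negotiation at all. For the curious, here is a minimal sketch of such a lottery in Python (the patient names and function are our own hypothetical illustration, not anything from the exercise itself):

```python
import random

def allocate_by_lottery(patients, doses, rng=None):
    """Allocate scarce doses by drawing lots: every patient gets an
    equal chance, and nobody's 'social utility' is weighed at all."""
    rng = rng or random.Random()
    if doses >= len(patients):
        return list(patients)
    # random.sample draws `doses` distinct patients, uniformly at random
    return rng.sample(patients, doses)

# Ten hypothetical patients, three doses -- the whole eugenics debate
# is replaced by one line of chance.
patients = [f"patient_{i}" for i in range(1, 11)]
chosen = allocate_by_lottery(patients, 3, rng=random.Random(0))
print(chosen)  # three distinct names, each patient equally likely
```

The point is not the code, of course, but what it makes vivid: the only defensible allocation rule fits in a few lines precisely because it refuses to rank human lives.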

Of course, if you ask a question like “Why Does This Person Deserve The Power of Life And Death Over Others?” in your philosophy class, people will think you’re missing the point. With philosophical hypotheticals, you’re supposed to set aside the question of why things are the way they are, and just accept that the choices are the choices. That’s exactly what we should never do, though: in order to assign moral responsibility for a particular set of bad outcomes, you need to understand who had agency in causing things to be the way they are. In the case of a trolley accident, probably the least important question is what the person who was frantically trying to stop the trolley should have done, since all of their options are bad and they will be improvising. We should spend far more time inquiring into the trolley company’s lackadaisical attitude to safety precautions, and whether it’s morally justified to cut down on voluntary brake inspections in order to decrease operating costs and maximize quarterly profits. It sounds as if we’re “arguing with the hypothetical,” of course, which you’re never supposed to do. But philosophical hypotheticals are often based on questionable premises that shape our thinking, and unless we point out and dispute those premises, we may end up passively endorsing them in ways that alter our moral worldviews. There is nothing more dangerous than accepting a bad premise.

Not only does the trolley problem distract us from structural questions; it isn’t even good at highlighting the areas where individual choices do matter. It is deliberately set up as a “no-win” situation, in order to convince us that moral questions are hard and there are no easy answers. (Although there is a right answer.) But actually, once you get away from the world of ludicrous extremes in which every choice leads to bloodshed, large numbers of moral questions are incredibly easy. The hard thing is not “figuring out what the right thing to do is” but “mustering the courage and selflessness to actually do it.” In real life, the main moral problem is that the world has a lot of suffering and hardship in it, and most of us are doing very little to stop it. If we must put everything in terms of trolleys, the closer parallel would be: a trolley is bearing down on five people, and someone says to you “If you give me your money, I can save them.” You immediately get quite uncomfortable because you wanted to buy something, and come up with a sheepish rationalization like “Uh, well, I can’t just go around giving money to anybody who asks me, it’s not sustainable.” (Then everyone dies.)

The trolley problem conveniently requires no actual sacrifice from you, even though “doing the right thing” almost always entails sacrifice in reality. It allows us to avoid the real discomfort that would come with facing questions like this honestly, by making moral questions seem somewhat irrelevant and futile. Nothing could be better for rationalizing the pursuit of an immoral and selfish life. The trouble is, not only does the trolley problem discourage us from examining the historical realities and top-down impositions that limit our choices, it also leads us to believe that, within a system not entirely of our own making, it’s not possible to have a set of choices where some options are more moral than others. For example, just because our country’s housing policies are terrible, landlord-tenant relationships are complex, and homelessness has many contributory causes that are outside any individual’s direct control, does not mean that “supporting a low-income housing development in my nice neighborhood” and “opposing a low-income housing development in my nice neighborhood” are morally neutral and equivalent options. Many “systemic,” “structural” problems are simply not anything like the trolley problem, where all choices produce horrific results no matter what (and are therefore not choices in any meaningful sense). In most cases, there is actually a significant extent to which human agency perpetuates the existence of bad systems, and the exercise of human agency in a different direction would start to change those systems, especially if more people found the willpower to make the right choices. It’s fundamentally unfair to assign moral weight to a decision made in a structurally “no-win” situation, but most structural situations are not “no-win” situations, or are not “no-win” situations in the stark and exaggerated terms of the trolley problem.

There are plenty of moral questions we don’t discuss nearly enough: Is there a moral obligation to help refugees? Is being rich in a time of poverty justifiable? Do you have an obligation to speak out about sexual harassment? What should you do if you know someone is being abused but they explicitly ask you not to say or do anything about it? Are there any justifiable reasons for the existence of borders? Does capitalism unfairly exploit workers? Should you lie to protect an undocumented person? (The correct answers are: Yes, No, Yes, It’s Complicated, No, Yes, Yes.) And you also have to choose which to discuss, because our time is finite; every moment you’re talking about one ethical dilemma you’re not talking about another. One of the hardest moral quandaries is in determining what our priorities should be: in a world filled with a million injustices, do you just pick one at random to address? It’s only because we spend so little time thinking about which questions probably matter more than others that anyone can think trolley problems are a comparably effective use of time.

The trolley problem is repulsive, because it encourages people to think about playing God and choosing which people to kill. It is as irrelevant as the Asteroid-Orphans Dilemma, because “who would you murder in extreme situation X?” is not even a distant parallel to the issues that will likely come up in your own life. It warps human moral sensibilities, by encouraging us to think about isolated moments of individual choice rather than the context in which those choices occur. It is escapist, in that it allows us to comfortably drift into the realm of the implausible and ridiculous, so that we do not have to confront disturbing truths about our real-world moral failings. And it encourages a kind of fatalism, where everything you do will inevitably be a disaster and moral questions seem hard rather than easy. If you want to actually be a better person, you can start by never wasting a second of your life contemplating trolley problems.