You're walking alone along a trolley track when you see five people tied to it farther down the line. You suddenly hear the distinctive horn of a trolley behind you, the panicky pace of the blasts warning that this one's a runaway. Unless you do something, the five people ahead of you will be killed. Good news: You are near a switch that will allow you to divert the trolley before it hits them. Bad news: On the other track, there's a single person, also tied to the rails.

So, what do you do?


This is the famed Trolley Problem, a thought experiment first mentioned in 1967 by philosopher Philippa Foot. (I should say allegedly, as Foot's role as “founding mother” is a matter of some debate.) It has been used in a wide range of studies to shed some light on what decisions are “right” or “wrong.” There's even a word for it: Trolleyology.

In survey after survey, respondents—regardless of race, gender, or social upbringing—overwhelmingly choose to hit the switch and kill the single person to save the other five. (The only real outlier seems to be China, where just 52 percent of respondents feel it is “morally permissible” to flip the switch.) The switch, for whatever reason, just feels right. Five is a larger number than one, so it makes logical sense that we'd feel a moral imperative to change the trolley's trajectory.

But things start to get really interesting when The Fat Man is introduced.

You're walking on a bridge above a trolley track when you see five people tied below. In the distance, you hear the same distinctive frantic horn of a runaway trolley. This time, though, there's no other track to divert the train onto. Rather, there's a stranger standing on the bridge beside you, a man of such girth that if you push him off the bridge and onto the track, his body will stop the train and save the five people tied further down the track.

So, what do you do?


This variation was first developed in 1976 by moral philosopher Judith Jarvis Thomson to illustrate some problems with Foot's initial scenario. (Thomson's disagreements with Foot didn't stop there: The former was a proponent of a woman's right to receive an abortion, the latter was not.) In theory, it's the same calculus as the first scenario: one life versus five. But one of the key differences is the distinction between “kill” and “let die.”

“It's wrong to kill the fat man, because you're using him as a means to an end,” says David Edmonds, author of Would You Kill the Fat Man? The Trolley Problem and What Your Answer Tells Us About Right and Wrong. “You intend to kill the fat man.”

The difference is a puzzler, and perhaps best explained by comparing best-case scenarios. In the first problem, if you switch the trolley to the one-person track, and that person somehow loosens their restraints and escapes, you'd be happy about that outcome. Everyone put in danger by the runaway trolley would be safe. But this isn't the case in The Fat Man problem. “If you imagine he's wearing a rubber suit and bounces off the track and runs off to a nearby pub, you wouldn't be delighted at all,” Edmonds says. “The whole point of pushing him is that he gets in the way of the train.”

Survey results back this up: About 90 percent of people feel it's OK to divert the train toward the lone person, while nearly 90 percent don't approve of pushing the fat man.

This nuance has been the focus of countless studies attempting to more clearly explain our moral sensibilities. Researchers have varied how the fat man is described, on the theory that the wording itself sways responses. They have also offered the test to bilingual speakers in two different languages, with respondents more likely to kill the fat man when taking the test in their non-native tongue. “The suggestion is that in your first language, you're not having to mediate the language, so your emotion is entirely in control,” Edmonds says. More recently, surveyors have introduced headphones and 3-D imaging during the question sessions, adding even more realism to the study in hopes of eliciting a true answer.

It should be noted that these two problems cover only a narrow slice of the wide spectrum of theoretical scenarios involving trolleys and innocent people. The Fat Man is simply the first of many, many, many updates to the original, the most over-the-top probably being Frances Kamm's “Tractor” design, which adds a runaway tractor as a second threat to The Five and The Fat Man. “When it comes to these outlandish [scenarios], it's almost impossible to have a strong intuition about them,” Edmonds says. “They're so weird, you can never imagine them.” Why has all this money and time gone into figuring out if you'd save fictional people from a fictional trolley? Because, as technology continues to improve, the stakes are getting higher.

Many philosophers, neurologists, and linguists explain the disparity of the results by pointing out that the first scenario is less visceral than the second. Pulling a lever adds a level of detachment to the proceedings, whereas actually pushing a man—who, again, is enormous enough to stop a trolley—turns it into a personal act. You have to build up energy, maybe even take a running start, to muscle this innocent and surely distraught fellow over the edge.

In the fat man's case, blood is directly on your hands.


“The psychology seems to be quite clear,” Edmonds says. “It's easier to press a button on a joystick in the deserts of Nevada and blow up a compound in the mountains of Northern Pakistan than it is to put the bayonet between somebody's eyes. The more divorced we are from the outcomes of our actions, the easier it is to take lives.”

One way these morals will ultimately manifest themselves is through a less violent, and more insidious, technology: the driverless car. “[Driverless cars] are going to encounter things and make choices without humans being around,” Edmonds says. “At that moment, they're going to need some kind of ethical programming.”

Situation A: A driverless car's brakes fail, and it's about to run over five people. The car can swerve, but if it does, it will hit one innocent person. Situation B: A driverless car's brakes fail, and it's about to plow into 10 bicyclists. The car can swerve to avoid them, but if it does, it will careen off a mountain ridge and kill the person inside. Situation C: A driverless car's brakes fail, and it's about to run off a bridge and into a giant crowd of people. But, in this case, the car can swerve slightly and run into our beloved Fat Man, who will prevent the fall.
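The three situations above can be sketched as a toy, purely utilitarian decision rule. This is a hypothetical illustration only—the function name and casualty counts are my own assumptions, not any manufacturer's actual logic:

```python
# A toy utilitarian swerve rule: all names and numbers here are
# illustrative assumptions, not any real vehicle's decision logic.

def should_swerve(deaths_if_straight: int, deaths_if_swerve: int) -> bool:
    """Swerve only if doing so strictly reduces the expected death toll."""
    return deaths_if_swerve < deaths_if_straight

# Situation A: staying straight kills five; swerving kills one bystander.
print(should_swerve(5, 1))   # True: the rule diverts, as most people would
# Situation B: staying straight kills ten cyclists; swerving kills the passenger.
print(should_swerve(10, 1))  # True: the rule sacrifices the car's own occupant
# Situation C: staying straight kills a crowd; swerving kills the fat man.
print(should_swerve(50, 1))  # True: the rule treats this exactly like Situation A
```

Note what the sketch cannot express: a bare body count returns the same answer for Situation C as for Situation A, collapsing the kill/let-die distinction that makes most humans refuse to push the fat man.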

However we ourselves actually feel about steering into the Fat Man, we'd probably be OK with the idea if we were reading a news story about a driverless car doing it. “Although everybody thinks it's wrong to kill the fat man, if it was a robot [making the decision] most people seem to be slightly utilitarian,” Edmonds says. The problem with that thinking is that it's not, and has never been, the computer making decisions. Computers can't make decisions. A computer is, as Paul Ford put it, “a clock with benefits.”

Rather, the decision has been made—is being made—by programmers working on how computers should—and will—respond to these ethical conundrums. Maybe it's time to worry less about how we're going to respond to these fictional trolley scenarios, and more about how developers at Google and Apple are going to approach the fat man.

The Sociological Imagination is a regular Pacific Standard column exploring the bizarre side of the everyday encounters and behaviors that society rarely questions.