Imagine that you’re driving through a residential area when your brakes fail. Directly in your path is a group of five jaywalkers. The only place to swerve is onto the sidewalk, where a pedestrian is waiting for the signal to change.

Who do you run over, the five jaywalkers or the one law-abiding citizen?

Such stark choices are rare, if they occur at all, and in a world of human drivers they would be made in milliseconds. But in a future where cars drive themselves, the choices will be coded into the operating systems of millions of cars, highlighting a paradox of a technology that is expected to save countless lives: The cars may also have to be programmed to run people over.

“There is a common misconception that because it’s an automatic system it’s automatically infallible, and will simply brake in time when a critical situation develops,” says Leon Sütfeld, the lead author of a paper on the ethics of autonomous vehicles published Wednesday in Frontiers in Behavioral Neuroscience. “This unfortunately just isn’t realistic. A self-driving car is subject to the same laws of physics as a manually driven car.”

Writing code for autonomous vehicles will require us to take our moral intuitions – those nebulous and often contradictory feelings that color our perceptions of human behavior – and package them into precise instructions for millions of cars we set loose on our roads. That raises what philosophers call Big Questions: Can you quantify morality? Whose set of morals do we use?

Globally, there are an estimated 1.25 million traffic fatalities each year, about 40,000 of them in the United States. And in the US, 94 percent of traffic deaths are attributable to human error. Eliminating human error on our roads would be a boon to public safety.

But before the public is comfortable having software take the wheel, consumers and regulators will need assurances that the cars are programmed with the moral responsibility that comes with a driver's license. This risk-management programming is not just for the one-in-a-million Trolley Problem event where a crash is unavoidable, but for the routine operation of the vehicle.

“I just don’t see a lot of these forced-choice scenarios occurring in actual traffic,” says Noah Goodall, a researcher at Virginia’s Department of Transportation who specializes in the ethics of autonomous vehicles. “The idea with this kind of work is to figure out how people assign values to different objects.”

In an effort to measure those values, Mr. Sütfeld, a doctoral candidate at the Institute of Cognitive Science at the University of Osnabrück, Germany, and his colleagues asked 105 participants to don head-mounted virtual-reality displays that placed them in the driver’s seat of a virtual car traveling down a two-lane road. A variety of obstacles, including adults, children, dogs, goats, trash cans, and hay bales, were placed in the lanes, and drivers had to pick which obstacle to strike and which one to spare.

The participants were given either one second or four seconds to decide. The one-second trials showed little consistency, suggesting that participants didn’t have enough time to deliberately choose what to strike. But when the time constraint was eased, a pattern emerged. In the four-second trials, drivers were more likely to spare humans over animals, children over adults, pedestrians over motorists, and dogs over livestock and wild animals.

These consistent choices, say the researchers, could be used to develop a one-dimensional “value-of-life” scale for determining whose safety autonomous vehicles should prioritize. Such a scale has an advantage over more sophisticated models, such as those that rely on neural networks, in that it is straightforward and transparent to the public, potentially leading to quicker acceptance of driverless vehicles.
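To make the idea concrete, here is a minimal sketch of how such a one-dimensional scale might be applied when a collision is unavoidable. The category names and numeric scores below are hypothetical assumptions for illustration, not figures from the study; only the ordering loosely follows the preferences reported above.

```python
# Hypothetical "value-of-life" scores on a single one-dimensional scale.
# The numbers are illustrative assumptions, not values from the study;
# the ordering loosely follows the reported preferences (humans over animals,
# children over adults, dogs over livestock, inanimate objects lowest).
VALUE_OF_LIFE = {
    "child": 1.0,
    "adult": 0.9,
    "dog": 0.4,
    "goat": 0.3,
    "trash_can": 0.0,
    "hay_bale": 0.0,
}

def choose_lane(left_obstacle: str, right_obstacle: str) -> str:
    """Return the lane whose obstacle scores lower on the scale,
    i.e. the lane the car would steer into if a crash is unavoidable."""
    left_value = VALUE_OF_LIFE[left_obstacle]
    right_value = VALUE_OF_LIFE[right_obstacle]
    return "left" if left_value < right_value else "right"

# Example: an adult in the left lane, a dog in the right lane.
print(choose_lane("adult", "dog"))  # -> "right": the scale spares the human
```

The appeal of such a lookup table is exactly the transparency the researchers describe: anyone can read off why the car made its choice.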

But a strict hierarchy may not be enough to capture the moral complexity of balancing risks while driving.

“If human well-being is always a priority, does that mean a self-driving car may not avoid a dog that runs into the street, if there is an ever so little chance of mild injury to a human in the process?” asks Sütfeld. “We would argue that there needs to be a system that is able to make reasonable decisions even in complex situations, and categorical rules often fail this requirement.”
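Sütfeld's objection can be sketched in the same style: a categorical "humans always first" rule compared with a risk-weighted decision that weighs the probability and severity of harm on each side. All probabilities, severities, and weights below are invented for illustration.

```python
# Illustrative comparison of a categorical rule versus a risk-weighted rule.
# Every number here is a hypothetical assumption, not a value from the study.

def categorical_decision(p_human_injury: float) -> str:
    """Categorical rule: never accept any risk to a human, however small."""
    return "hit_dog" if p_human_injury > 0.0 else "swerve_around_dog"

def risk_weighted_decision(p_human_injury: float,
                           human_injury_severity: float,
                           p_dog_death: float,
                           dog_value: float = 0.4,
                           human_value: float = 1.0) -> str:
    """Weigh expected harm on both sides instead of applying a fixed hierarchy."""
    expected_harm_if_swerve = p_human_injury * human_injury_severity * human_value
    expected_harm_if_hit = p_dog_death * dog_value
    return "swerve_around_dog" if expected_harm_if_swerve < expected_harm_if_hit else "hit_dog"

# A 1 percent chance of a mild human injury versus near-certain death of the dog:
print(categorical_decision(p_human_injury=0.01))
# -> "hit_dog": any nonzero risk to a human rules out swerving
print(risk_weighted_decision(0.01, human_injury_severity=0.05, p_dog_death=0.95))
# -> "swerve_around_dog": the tiny expected harm to the human is outweighed
```

The point of the contrast is the one Sütfeld makes: a strict hierarchy answers the question before the risks are even compared, while a risk-weighted rule can still favor humans without treating every trivial risk to them as decisive.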


Iyad Rahwan, a professor at the Massachusetts Institute of Technology who researches the ethics of self-driving cars, cautions that no formula will be truly satisfying for everyone.

“There is too much focus on identifying the correct answer to the rare ethical dilemmas that a car might face,” says Professor Rahwan. “I think there is no right answer in an ethical dilemma, almost by definition. Instead, we need to come up with a balance of risks that is acceptable. We need a social contract that constitutes an acceptable solution to an ethical dilemma that is unsolvable in any objective sense.”