Faced with two deadly options, the public want driverless vehicles to crash rather than hurt pedestrians – unless the vehicle in question is theirs

In catch-22 traffic emergencies where there are only two deadly options, people generally want a self-driving vehicle to, for example, avoid a group of pedestrians and instead slam itself and its passengers into a wall, a new study says. But they would rather not be travelling in a car designed to do that.

The findings of the study, released on Thursday in the journal Science, highlight just how difficult it may be for auto companies to market those cars to a public that tends to contradict itself.

“People want to live in a world in which everybody owns driverless cars that minimize casualties, but they want their own car to protect them at all costs,” Iyad Rahwan, a co-author of the study and a professor at MIT, said. “And car makers who offer such cars will sell more cars, but if everybody thinks this way then we end up in a world in which every car will look after its own passenger’s safety … and society as a whole is worse off.”

Through a series of online surveys, the authors found that people generally approve of cars that sacrifice their passengers for the greater good, such as sparing a group of pedestrians, and would like others to buy those cars, but they themselves would prefer to ride in a car that protects its passengers at all cost.

Several people working on bringing self-driving cars to market said that while the philosophical and ethical question over the two programming options is important to consider, real-life situations would be far more complex.

Brian Lathrop, a cognitive scientist who works on Volkswagen’s self-driving cars project, stressed that in real life there are likelihoods and contingencies that the academic example leaves out.

“You have to make a decision that the occupant in the vehicle is always going to be safer than the pedestrians, because they’re in a 3,000lb steel cage with all the other safety features,” said Lathrop, who was not involved in the new study.

So in a situation in which a car needs to, say, slam into a tree to avoid hitting a group of pedestrians, “obviously, you would choose to program it to go into the tree,” he said.

A spokesman for Google, whose self-driving car technology is generally seen as being the furthest along, suggested that asking about hypothetical scenarios might ignore the more important question of how to avoid deadly situations in the first place.

The problem seems to be how to get people to trust cars to consistently do the right thing if we’re not even sure we want them to do what we think is the right thing.

The study’s authors argue that since self-driving cars are expected to drastically reduce traffic fatalities, a delay in adopting the new technology could itself be deadly. Regulations requiring self-driving cars to sacrifice their passengers could move things forward, they write. But, in another catch-22, forcing the self-sacrificing programming could actually delay widespread adoption by consumers.

Susan Anderson, an ethicist at the University of Connecticut, and her husband and research partner, Michael Anderson, a computer science professor at the University of Hartford, believe the cars will be able to make the right call.

“We do believe that properly programmed machines are likely to make decisions that are more ethically justifiable than humans,” they said in an email. “Also, properly programmed self-driving cars should have information that humans may not readily have,” including precise stopping distances, whether to swerve or brake, and the likely degree of harm.

How to get those cars “properly programmed”? The Andersons, who were not involved in the study, suggest having the cars learn from or be given “general ethical principles from applied ethicists”.