The widespread use of self-driving cars promises to bring substantial benefits to transportation efficiency, public safety and personal well-being. Car manufacturers are working to overcome the remaining technical challenges that stand in the way of this future. Our research, however, shows that there is also an important ethical dilemma that must be resolved before people will be comfortable trusting their lives to these cars.

As the National Highway Traffic Safety Administration has noted, autonomous cars may find themselves in circumstances in which the car must choose between risks to its passengers and risks to a potentially greater number of pedestrians. Imagine a situation in which the car must either run off the road or plow through a large crowd of people: Whose risk should the car’s algorithm aim to minimize?

This dilemma was explored in a series of studies that we recently published in the journal Science. We presented people with hypothetical situations that forced them to choose between “self-protective” autonomous cars that protected their passengers at all costs, and “utilitarian” autonomous cars that impartially minimized overall casualties, even if it meant harming their passengers. (Our vignettes featured stark, either-or choices between saving one group of people and killing another, but the same basic trade-offs hold in more realistic situations involving gradations of risk.)

A large majority of our respondents agreed that cars that impartially minimized overall casualties were more ethical, and were the type they would like to see on the road. But most people also indicated that they would refuse to purchase such a car, expressing a strong preference for buying the self-protective one. In other words, people refused to buy the car they found to be more ethical.