This particular dilemma of robotic morality has long been chewed on in science fiction books and movies. But in recent years it has become a serious question for researchers working on autonomous vehicles who must, in essence, program moral decisions into a machine.

As autonomous vehicles edge closer to reality, the dilemma has also become a philosophical question with business implications. Should manufacturers create vehicles with varying degrees of morality programmed into them, depending on what a consumer wants? Should the government mandate that every self-driving car be programmed to protect the greatest good, even if that’s not so good for the car’s own passengers?

And what exactly is the greatest good?

“Is it acceptable for an A.V. (autonomous vehicle) to avoid a motorcycle by swerving into a wall, considering that the probability of survival is greater for the passenger of the A.V., than for the rider of the motorcycle? Should A.V.s take the ages of the passengers and pedestrians into account?” wrote Jean-François Bonnefon, of the Toulouse School of Economics in France; Azim Shariff, of the University of Oregon; and Iyad Rahwan, of the Media Laboratory at the Massachusetts Institute of Technology.

At the heart of this discussion is the “trolley problem.” First introduced in 1967 by Philippa Foot, a British philosopher, the trolley problem is a simple if unpleasant ethical thought experiment.