Later in his talk, he returned to this, however, through the work of Xin Xiang, an undergraduate researcher who wrote a prize-winning thesis in his lab titled “Would the Buddha Push the Man off the Footbridge? Systematic Variations in the Moral Judgment and Punishment Tendencies of the Han Chinese, Tibetans, and Americans.”

Xiang administered the footbridge variation to practicing Buddhist monks near the city of Lhasa and compared their answers to Han Chinese and American populations. “The [monks] were overwhelmingly more likely to say it was okay to push the guy off the footbridge,” Greene said.

He noted that their responses were similar to those of clinically defined psychopaths and of people with damage to a specific part of the brain called the ventromedial prefrontal cortex.

“But I think the Buddhist monks were doing something very different,” Greene said. “When they gave that response, they said, ‘Of course, killing somebody is a terrible thing to do, but if your intention is pure and you are really doing it for the greater good, and you’re not doing it for yourself or your family, then that could be justified.’”

For Greene, the common intuition that it’s okay to use the switch but not to push the person is a kind of “bug” in our biologically evolved moral systems.

“So you might look at the footbridge trolley case and say, okay, pushing the guy off the bridge, that’s clearly wrong. That violates someone’s rights. You’re using them as a trolley stopper, et cetera. But the switch case, that’s fine,” he said. “And then I come along and tell you, look, a large part of what you’re responding to is pushing with your hands versus hitting a switch. Do you think that’s morally important?”

He waited a beat, then continued.

“If a friend was on a footbridge and called you and said, ‘Hey, there’s a trolley coming. I might be able to save five lives but I’m going to end up killing somebody! What should I do?’ Would you say, ‘Well, that depends. Will you be pushing with your hands or using a switch?’”

What people should strive for, in Greene’s estimation, is moral consistency that doesn’t flop around based on particulars that shouldn’t determine whether people live or die.

Greene tied his work on moral intuitions to the current crop of artificial-intelligence software. Even if AI systems never encounter problems as simplified as the trolley and footbridge examples, they must embed some kind of ethical framework; even if their designers don’t lay out specific rules for when to take certain actions, the systems must be trained with some kind of ethical sense.

And, in fact, Greene said that he’s witnessed a surge in people talking about trolleyology because of the imminent appearance of self-driving cars on human-made roads. Autonomous vehicles do seem likely to face some variations on the trolley problem, though Greene said the most probable dilemma would be whether the cars should ever sacrifice their occupants to save more lives on the road.