What can science say about morality? Traditionally, the distinction between good and evil has been the terrain of philosophy and of religion. But in recent years, scientists have begun to explore the complex subject of morality, with surprising results. Might morality serve an evolutionary purpose? Is it even unique to humans?

Molly Crockett is an American neuroscientist best known for her work on morality, altruism and decision-making. She is Associate Professor of Experimental Psychology at the University of Oxford and is currently working on how harm aversion affects our decision-making processes. She spoke to the IAI about how neuroscience is changing the way we think about morality.

Could you outline your thesis on morality – what forms does it take and what evidence is there to show that certain elements of morality are actually instinctive?

It’s clear from research on both humans and animals that we have a very deeply rooted aversion to harming others, and this aversion infuses our moral judgment and also our moral behaviour. There has been work showing that even very small infants dislike puppet characters who harm other puppet characters. We see what looks like harm aversion in non-human animals, such as primates and rats. These findings suggest that harm aversion is very deeply ingrained, and so could possibly be innate. And because we share it with other animals, it doesn’t seem unique to humans.

We have done some work recently which shows that people will, on average, spend more money to prevent a stranger from being harmed by electric shock than to prevent themselves from receiving pain. This study cannot say anything about whether harm aversion is innate or unique to humans, but we have shown a very striking level of altruism in the lab when it comes to making decisions about harm.

In terms of morality being instinctive, surely you cannot know if an action is moral until you reason whether it’s good or not?

There are different perspectives on this. Certainly some philosophical perspectives argue that moral truth is arrived at through reason. But given that we see the building blocks of morality – things like empathy and harm aversion – in babies and animals that clearly lack the ability to engage in sophisticated reasoning, morality must depend on more than just reason.

A recent study claimed to find an evolutionary basis for selflessness because it plays a part in human cooperation, suggesting that there is a form of self-interest in any act of selflessness. Do you believe in altruism for its own sake?

I’d say it’s still an open question. There’s some really nice work by David Rand and colleagues where they actually looked at the transcripts of people who won the Carnegie Medal for heroism – these are people who risked their lives to save someone else. If you talk to these people, it looks like the thought process behind those decisions was really minimal: they didn’t really think about it in terms of the potential benefits to themselves; they just did it. That suggests that selfish motives, if they’re there, are likely not conscious. Even though it’s true that a lot of people will behave altruistically for the sake of their own reputation, we know that people sometimes give anonymously, and that they will help others even when no one is watching. Whether that implies a certain level of selfishness, in the sense that they behave altruistically because it just feels good to them – well, I suppose that’s a valid point, but kind of a pointless argument. There’s a really nice piece by Jamil Zaki on edge.org about this very issue: at a certain level, any behaviour is going to be motivated, and people do the things that they want to do. So you could boil it all down and say there’s no such thing as altruism, because I get some personal satisfaction from helping someone else. It just seems to me to be an unproductive argument.

If everyone has a different concept of good and evil, how do we go about testing the instinctive nature of morality?

One way to test instinct is to look at cross-species comparisons between humans and non-humans, and to study the development of babies – that’s all work that’s being done at the moment. You can also look at how much time people take to make moral decisions, and you can make the argument that if people react more quickly, this is a more instinctive response than if people are slower. There’s some nice work, again from David Rand, suggesting that cooperation is intuitive and instinctual, because people are more likely to cooperate if you force them to make a decision quickly. But our recent work actually showed that people who take longer to decide are more likely to make the moral choice. I think it depends on the specifics of the choice involved, and more research needs to be done to tease out these mechanisms.

What are the limits of neuroscience in studying morality, selflessness and altruism, and how does an interdisciplinary approach help us find answers?

Neuroscientists who study the neurobiology of altruism are interested in very different questions from psychologists who study altruism: neuroscientists are primarily interested in the brain and how it makes decisions. That is potentially very valuable information if you are trying, for example, to develop brain-based treatments for disorders like psychopathy, which is traditionally associated with very low levels of altruism. The limits come from the fact that our understanding of the brain is currently still very poor, and a lot of the work on morality hasn’t necessarily tested very high-level questions, because morality is such a complex phenomenon. It’s going to be quite some time before this high-level theoretical work bridges with lower-level descriptions of how the brain functions at the circuit level. This will have to happen if we are to build a complete picture of the neurobiology of morality.

How does your work combine neuroscience with philosophy?

We’ve taken a really interdisciplinary approach: we collaborate regularly with philosophers, who bring a unique perspective to the study of morality in the lab. They can suggest approaches grounded in centuries of thought in moral philosophy. We also bring methods from neuroscience and behavioural economics to create quantitative measures of how much people care about avoiding harm to others versus themselves. We’re excited about these methods because we hope that they will provide a link between behaviour and the brain.

How would you convince someone who is skeptical that evolution plays a part in morality?

I think evolution and culture are not mutually exclusive – of course morality is cultural. But if you accept that behaviour comes from the brain, then you have to acknowledge the importance of the brain in producing any kind of behaviour – whether it be moral or immoral.

Image credit: Jeroen Kransen