Patricia Churchland is a neurophilosopher. That’s a fancy way of saying she studies new brain science, old philosophical questions, and how they shed light on each other.

For years, she’s been bothered by one question in particular: How did humans come to feel empathy and other moral intuitions? What’s the origin of that nagging little voice that we call our conscience?

In her new book, Conscience, Churchland argues that mammals — humans, yes, but also monkeys and rodents and so on — feel moral intuitions because of how our brains developed over the course of evolution. Mothers came to feel deeply attached to their children because that helped the children (and through them, the mother’s genes) survive. This ability to feel attachment was gradually generalized to mates, kin, and friends. “Attachment begets caring,” Churchland writes, “and caring begets conscience.”

Conscience, to her, is not a set of absolute moral truths, but a set of community norms that evolved because they were useful. “Tell the truth” and “keep your promises,” for example, help a social group stick together. Even today, our brains reinforce these norms by releasing pleasurable chemicals when our actions generate social approval (hello, dopamine!) and unpleasurable ones when they generate disapproval.

You’ll notice that words like “rationality” and “duty” — mainstays of traditional moral philosophy — are missing from Churchland’s narrative. Instead, there’s talk of brain regions like the cortex.

Rooting morality in biology has made Churchland a controversial figure among philosophers. Some think that approach is itself morally repugnant because it threatens to devalue ethics by reducing it to a bunch of neurochemicals zipping around our brains. A number of philosophers complain that she’s not doing “proper philosophy.” Other critics accuse her of scientism — overvaluing science to the point of seeing it as the only real source of knowledge.

I talked to Churchland about those charges, and about the experiments that led her to believe our brains shape our moral impulses — and even our political beliefs. A transcript of our conversation, edited for length and clarity, follows.

Sigal Samuel

How does a neuroscientist even begin to piece together a biological basis of morality?

Patricia Churchland

One insight came from a rather unexpected place. There are these little rodents called voles, and there are many species of them. With montane voles, the male and female meet, mate, then go their separate ways. But with prairie voles, they meet, mate, and then they’re bonded for life. Neuroscientists asked: What’s the difference in their brains?

There’s a special neurochemical called oxytocin. It gets taken up by neurons via special receptors. You can vary the effect of oxytocin by varying the density of receptors.

Scientists found that in the brain’s reward system, the density of receptors for oxytocin in the prairie voles was much higher than in montane voles. And that changed the portfolio of the animals’ behavior. It turns out oxytocin is a very important component of feeling bonded [which is a prerequisite for empathy].

Sigal Samuel

In your book, you write that our neurons even help determine our political attitudes — whether we’re liberal or conservative — which has implications for moral norms, right?

Patricia Churchland

Yes. There was this experiment that totally surprised me. Researchers rounded up a lot of subjects, put them in the brain scanner, and showed them various non-ideological pictures. If you showed subjects a picture of a human with a lot of worms squirming in his mouth, you could see differences in the activity levels of a whole series of brain areas. There were much higher levels of activity in subjects who identified as very conservative than in those who identified as very liberal. Just that one picture of worms squirming in the mouth separated out the conservatives from the liberals with an accuracy of about 83 percent.

Sigal Samuel

That’s incredible. And these brain differences, which make us more inclined to conservatism or liberalism, are underwritten by differences in our genes. So what proportion of our political attitudes can be chalked up to genetics?

Patricia Churchland

These characterological attitudes are highly heritable — about 50 percent heritable. But of course that means learning also plays a significant role. So genetics is not everything, but it’s not nothing.

That may mean some of us find certain norms easier to learn and certain norms harder to give up. Do I have a tendency to want to be merciful if I’m on a jury? Or do I not? And would I react differently if I had slightly different genes? The answer is probably yes.

Sigal Samuel

I suspect that answer would make a lot of people uncomfortable. Some feel that rooting our conscience in biological origins demeans its value. When you say in your book, “your conscience is a brain construct,” some hear “just a brain construct.”

Patricia Churchland

Well, there does not seem to be something other than the brain, something like a non-physical soul. So I think it shouldn’t be that much of a surprise to realize that our moral inclinations are also the outcome of the brain.

Having said that, I don’t think it devalues it. I think it’s really rather wonderful. The brain is so much more extraordinary and marvelous than we thought. It’s not that I think these are not real values — this is as real as values get!

Sigal Samuel

So how do you respond when people critique your biological perspective as falling prey to scientism, or say it’s too reductionist?

Patricia Churchland

I think it’s ridiculous. Science is not the whole of the world, and there are many ways to wisdom that don’t necessarily involve science. Aristotle knew that. Confucius knew that. And I know that.

The word “reductionist” is, I guess, an attempt to be nasty? But I just think of a reduction as an explanation of a high-level phenomenon in terms of a lower-level thing. It’s explaining the causal structure of the world. So if that’s reductionism, I mean, hey! … I think it’s wrong to devalue that.

Sigal Samuel

It sounds like you don’t think your biological perspective on morals should make us look askance at them — they remain admirable regardless of their origins. How do you think your biological perspective should change the way we think about morality?

Patricia Churchland

It might make us slightly more humble, more willing to listen to another side, less arrogant, less willing to think that only our particular system of doing social business is worthy.

If we don’t imagine that there is this Platonic heaven of moral truths that a few people are privileged to access, but instead that it’s a pragmatic business — figuring out how best to organize ourselves into social groups — I think maybe that’s an improvement.

Sigal Samuel

One challenge your view might pose is this: If my conscience is determined by how my brain is organized, which is in turn determined by my genes, what does that do to the notion of free will? Does it endanger or at least modify it?

Patricia Churchland

It depends. If you thought having free will meant your decisions were born in a causal vacuum, that they just sprang from your soul, then I guess it’d bother you. But of course your decisions aren’t like that. I think of self-control as the real thing that should replace that fanciful idea of free will. And we know there are ways of improving our self-control, like meditation.

Our genes do have an impact on our brain wiring and how we make decisions. So you might think, “Oh, no, this means I’m just a puppet!” But the thing is, humans have a humongous cortex. One of the things that’s special about the cortex is that it provides a kind of buffer between the genes and the decisions. An ant or termite has very little flexibility in its actions, but if you have a big cortex, you have a lot of flexibility. And that’s about as good as it gets.

I think it’s better at the end of the day to be a realist than to be romantically wishing for a soul.

Sigal Samuel

Speaking of the animal kingdom, in your book you mention another experiment with prairie voles, which I found touching, in a weird way. Can you describe it?

Patricia Churchland

I think it’s a beautiful experiment! You have a pair of prairie voles that are mated to each other. You take one of them out of the cage and stress it out, measure its levels of stress hormone, then put it back in. The other one rushes toward it and immediately grooms and licks it. If you measure its stress hormones, you see that they’ve risen to match those of the stressed mate, which suggests a mechanism for empathy. The [originally relaxed] vole grooms and licks the mate because that produces oxytocin, which lowers the level of stress hormone.

Sigal Samuel

So in your view, do animals possess morality and conscience?

Patricia Churchland

Absolutely. I think there’s no doubt. The work that animal behavior experts like Frans de Waal have done has made it very obvious that animals have feelings of empathy, they grieve, they come to the defense of others, they console others after a defeat. We see one chimp put his arm around the other. We see one rodent help a pal get out of a trap or share food with a pal.

We don’t have anything they don’t have — just more neurons. The precursors of morality are there in all mammals.

Sigal Samuel

To get into the philosophical aspects of your book a bit, you make it pretty clear that you have a distaste for Kantians and utilitarians. But you seem fond of Aristotle and Hume. What is it about their views that gels better with your biological perspective?

Patricia Churchland

I think what’s troubling about Kant and utilitarians is that they have this idea, which really is a romantic bit of nonsense, that if you could only articulate the one deepest rule of moral behavior, then you’d know what to do. It turns out that’s not workable at all: There is no one deepest rule. We have all kinds of rules of thumb that help us with a starting point, but they can’t possibly handle all situations for all people for all times.

Aristotle realized that we’re social by nature and we work together to problem-solve and habits are very important. Hume in the 18th century had similar inclinations: We have the “moral sentiment,” our innate disposition to want to be social and care for those to whom we’re attached. And then there are the customs that we pick up, which keep our community together but may need modification as time goes on. That’s just much more in tune with the neurobiological reality of how things are.

Utilitarianism — seeking the greatest happiness for the greatest number of people — is totally unrealistic. One of its principles is that everybody’s happiness must be treated equally. There’s no special consideration for your own children, family, friends. Biologically, that’s just ridiculous. People can’t live that way.

Sigal Samuel

It seems to me like you need some argumentative fill to get from the “is” to the “ought” there. Yes, our brains are hardwired to care for some more than others. But just because our brains incline us in a certain direction doesn’t necessarily mean we ought to bow to that. Does it?

Patricia Churchland

No, it doesn’t, but you would have a hard time arguing for the morality of abandoning your own two children in order to save 20 orphans. Even Kant thought that “ought” implies “can,” and I can’t abandon my children for the sake of orphans on the other side of the planet whom I don’t know, just because there’s 20 of them and only two of mine. It’s not psychologically feasible.

Sigal Samuel

I’m curious if you think there are some useful aspects of previous moral philosophies — virtue ethics, utilitarianism — that are compatible with your biological view. It strikes me that the biology is sort of a substrate and these different approaches to ethics can emerge out of that and be layered on top of it.

For example, you describe virtues like kindness as being these habits that reduce the energetic costs of decision-making. And as for the utilitarian idea that we should evaluate an action based on its consequences, you note that our brains are always calculating expected outcomes and factoring that into our decision-making.

Patricia Churchland

Of course we always care about the consequences. But the important thing is that’s only one constraint among many. Moral decision-making is a constraint satisfaction process whereby your brain takes many factors and integrates them into a decision. According to utilitarians, it’s not just that we should care about consequences; it’s that we should care about maximizing aggregate utility [as the central moral rule].

Sigal Samuel

Right. I think we’d have to take a weakened version of these different moral philosophies — dethroning what is for each of them the one central rule, and giving it its proper place as one constraint among many.

Patricia Churchland

I think that would be terrific! And my guess is that the younger philosophers who are interested in these issues will understand that.

The really established philosophers want nothing to do with the idea that the brain has anything to do with morality, but the young people are beginning to see that there are tremendously rich and exciting ideas outside the hallowed halls where ethics professors hide. The world of neuroscience has become quite hard to ignore.

Reporting for this article was supported by Public Theologies of Technology and Presence, a journalism and research initiative based at the Institute of Buddhist Studies and funded by the Henry Luce Foundation.
