In my review of the book Strange Fruit, I said I would expand on the question of whether any moral or ethical system can be justified, with reference to the book A Very Bad Wizard by Tamler Sommers (2009). Sommers interviews nine scholars on the subject of morality and ethics. The following excerpts show how evolutionary psychology undermines any justification for a universal moral doctrine.

[Philip Zimbardo] That’s a good point. You said it right. Our life is organized around a bunch of heuristics to say “Under ordinary circumstances, when the majority of people see something a certain way, it’s probably the way to see it.”



[Frans De Waal] The interesting thing about my position is that it’s really the old Darwinian position: human morality is an outflow of primate sociality. That’s how Darwin saw it—it’s an outgrowth of the social instincts. It’s also very close to a Humean position and to Adam Smith. It’s a moral sentimentalism—the view that emotions drive morality…. Natural selection can produce the social indifference you find in many solitary animals. But it can also produce extremely cooperative, friendly, and empathic characteristics…. But if you look at the neuroscience literature on human empathy, it’s obvious that it’s an automated reaction. That’s a strong counterargument to the claim that empathy is a contrived, culturally influenced trait.



[Tamler Sommers] Of course; it’s still their family, right? As you note, the dark side of our nature is that we favor the interests of our “in-group,” especially the family, although that can be broadened a little. But the farther that goes, the less we care about others, and the more we’re willing to act violently toward them and neglect them.



[Frans De Waal] I think that human morality evolved as an in-group phenomenon, to strengthen the in-group and increase its cohesiveness. This was partly needed for competition with other groups. So what you did to the other groups didn’t matter. You could hack them to pieces, that would be perfectly fine, as long as you didn’t hack each other to pieces—within the in-group. And that’s a really interesting thing. The worst side of human nature, which is really intergroup violence between religions and between ethnic groups or nations, this side is also linked to the evolution of morality. And that’s also why if people now argue that we need to expand morality and have universal human rights, and that we need to care about people elsewhere in the world, they have a big challenge ahead of them…. Given that we are wealthy as a nation, in that sense, we ought to care about others. But as soon as there’s a crash in our economy, like in the ’20s, say, something really serious, will we still care about distant people? Human caring is predicated on affordability…. Our first priority is the survival of ourselves and our close kin.



[Michael Ruse] My position is that the ethical sense can be explained by Darwinian evolution—the ethical sense is an adaptation to keep us social…. I think ethics is an illusion put into place by our genes to keep us social.



[Joseph Henrich] People have trouble with this because they believe that our way of viewing reputation is the Way everyone thinks about reputation. But among the Machiguenga they don’t have these kinds of obligations to other families. They have obligations to their extended kin units and that’s pretty much it…. So one of the things about the evolution of complex societies is that there had to be a shift away from focusing only on your family and your kind, to focusing on these larger groups. So people say they value their families, but actually people in our society value their families a lot less than, say, the Machiguenga—who are entirely devoted to their families and don’t allocate labor to society…. [Iraqi Chaldeans] have an ethnic identity, which is tied to their religion and language. They have strong ties to family and community, and so they have these small grocery stores [in Detroit], which are highly successful. But they avoid hiring non-Chaldeans, average Americans. And they have quite different norms about giving to charity. It’s a big reputational hit if you don’t give to Iraqi charities. They don’t seem to care about other charities.



[Tamler Sommers] Is it like with the Machiguenga, where if you give to a non-Chaldean charity that money could be going to help Chaldeans?



[Joseph Henrich] Right, so it’s actually bad to give to non-Chaldean charities. And they support political candidates who are Chaldean, which goes along with their sense of identity.



[Jonathan Haidt] There are a couple of watersheds in human evolution. Most people are comfortable thinking about tool use and language use as watersheds. But the ability to play non-zero-sum games was another watershed. What set us apart from most or all of the other hominid species was our ultrasociality, our ability to be highly cooperative, even with strangers—people who aren’t at all related to us. Something about our minds enabled us to play this game. Individuals who could play it well succeeded and left more offspring. Individuals who couldn’t form cooperative alliances, on average, died sooner and left fewer children. And so we’re the descendants of the successful cooperators…. So I think that with morality, we build a castle in the air and then we live in it, but it is a real castle. It has no objective foundation, a foundation outside of our fantasy—but that’s true about money, that’s true about music, that’s true about most of the things that we care about.



[Stephen Stich] Batson is a great psychologist but his nomenclature leaves something to be desired. What this unfortunate name refers to is the egoistic hypothesis that says you help people because seeing their distress, being aware of their distress, causes you to be distressed. And your motive in helping people, the deep motive, the underlying motive, is to alleviate your own distress—your “aversive-arousal”…. The term “moral realism” is used in many different ways. As I use it, it is a label for a family of theories. What they have in common is the view that moral judgments or moral beliefs are either true or false, correct or incorrect, and that some moral beliefs at least are true. My work has focused on a subset of moral realists, who argue that we should expect convergence or agreement in moral judgment under some type of idealized condition, like full agreement over relevant nonmoral facts. So for example, you and I might disagree about a moral matter if we disagree on a factual matter; we might disagree on the right policy for dealing with global warming if we also disagree about what’s causing global warming. One important group of moral realists, which includes many of the moral theorists associated with Cornell University, believe that if you were to completely eliminate all factual disagreement, you’d eliminate most moral disagreement as well…. Yes, right. The central idea is that morality is in some important ways analogous to science. In science, one expects there to be plenty of disagreement on the hard questions, but one also expects convergence over time on an increasingly large number of issues. That, of course, is what we’ve seen in disciplines like astronomy, chemistry, and biology, and it’s what moral realists expect in morality as well. And it’s here that I think that the empirical evidence is crucially important. Because what my collaborators and I have been arguing is that this isn’t true. 
Our view is that of course you’d eliminate some moral disagreement if you eliminated factual disagreement, but there would still be a great deal of moral disagreement left, because moral disagreement does not arise only from disagreement over nonmoral facts. So these are the targets that we have taken aim at, these so-called “convergentist moral realists.” We think that empirical work tends to undermine the claim that convergentists make, although of course the issue is far from settled.



[Tamler Sommers] Could you describe what a norm is on this account?



[Stephen Stich] Norms, as we conceive of them, are mentally represented rules specifying how people should or should not behave. They serve to trigger emotions and moral judgments—probably the emotions play a very important role in the production of the judgments. So the crucial bit here is that there is a component of the mind which is in one respect like the language faculty. (Only in one respect, let me stress.) It’s an innate part of the mind whose function is to acquire information from the environment and to store it and use it. And we believe that, as in the case of language, once those rules are in place, it is very hard to dislodge them. In particular, learning a bunch of facts is not going to do it. So, to use a crude analogy, when you learned English as a child you internalized a set of rules in the part of the mind devoted to storing language competence. You can learn facts until you’re a very old man, but that won’t stop you from being an English speaker. Similarly, we claim, once you take on board the norms of the surrounding culture, there are no facts you can learn that will get those norms out of the part of the mind devoted to storing norms…. None of them is correct, no matter what culture you’re in. So that’s why I don’t like to call it relativism. In some sense it’s more radical than relativism. My view is that norms and the moral judgments to which they lead aren’t in the business of being true or false.