0:33 Intro. [Recording date: December 23, 2014.] Russ: I want to mention that as we have done in the past, we'd like to know your top episodes of the year. To participate, go to econtalk.org, where you will find a link in the upper left-hand corner to a survey that will give you a chance to tell us a little bit about yourself, give us some general feedback if you'd like, as well as voting for your 5 favorite episodes of 2014. That survey will stay up through early February of 2015; and I will announce the results some time in mid- to late February.

1:05 Russ: Now, on to today's guest, Joshua Greene, Professor of Psychology at Harvard U. and the Director of the Moral Cognition Lab there. He is the author of Moral Tribes: Emotion, Reason, and the Gap Between Us and Them, which is our topic for today's episode. So, this is a fascinating, thought-provoking, and very ambitious book. It's got an enormous amount of stuff packed into it--ideas, claims for making the world a better place, some fantastic thought experiments. We'll try to do justice to the book. I want to start with what you call our tribal nature. You argue that we have evolved to be fairly effective cooperators within our tribes, but not so good cooperators with other tribes. Explain what you mean by that--what you mean by 'tribes' and the tragedy of common-sense morality. Guest: Right. So it begins with a question of, well: What is morality, to begin with? And what I--along with a lot of other recent commentators, and some people, in some sense, going all the way back to Charles Darwin--think morality is fundamentally about is our social nature. And more specifically about cooperation: that is, what we call morality is really a suite of psychological tendencies and capacities that allow us to live successfully in groups, that allow us to reap the advantages of cooperation. But these tendencies that make up morality come primarily in the form of emotional responses that drive social behavior and that respond to other people's social behavior. I think a natural starting point is a story familiar to an economist: this is the tragedy of the commons, which I can talk about a little bit, if you want. Russ: Yeah, go ahead. Guest: So, the tragedy of the commons is a parable told by the ecologist Garrett Hardin. He tells the story of a bunch of herders who share a common pasture, and these are rational, self-interested herders who ask themselves, 'Should I add more animals to my herd?' 
And they think, 'Well, if I add more animals, that's more animals that I have at market, and that's good. That's the upside. What's the downside? Not so much downside: we're all sharing this common pasture.' And so they say the benefits outweigh the costs, and they add more and more animals to their herds. So then when they all do this, there's not enough grass to support any of the animals, and they all die, and everybody is worse off. And that's the tragedy of the commons. It's basically a parable about the problem of cooperation, which is really the problem of how do you get people to put collective interest over self-interest. Russ: With the key point that by doing so, they'll be better off. Guest: Correct. Russ: Their self-interests will actually be served. So it's not a literal sacrifice. It's a sacrifice in the short run, for a longer-run benefit. Guest: That's right. If it's a repeated game then it's in everybody's long-term self-interest. I think that that's right. In the short term it's a conflict between self-interest and collective interest, but in the long term, a cooperative system is one that makes everybody better off. Although at any given moment it may be possible for someone, at least in a short-term way, to benefit themselves at the expense of the group. Russ: Absolutely. Guest: And so the idea is that our minds are designed to help us solve this problem. And you can think of us as having psychological carrots and sticks that we apply to ourselves and that we apply to other people. So, a psychological carrot that we apply to ourselves to be cooperative would be feelings of love and friendship and goodwill that motivate us to say, 'Hey, it's not just my sheep that matters--everybody else's sheep, or at least some other people's sheep, matter too.' That motivates you to be cooperative. Or you could have negative feelings that act as a stick for yourself, like shame and guilt. 
I would feel ashamed of myself if everybody else limited the size of their herds for the greater good and then I cheated. And we have positive feelings that reward other people--so you have my gratitude if you keep your sheep in line. And we have negative feelings that punish other people--you'll have my contempt and my anger and my disgust if you grow your herd as much as you feel like without regard for the rest of us who share the pasture. So the idea is these feelings, these psychological carrots and sticks that we apply to ourselves and other people, that's the core of morality and that's what makes basic cooperation within a group possible. Russ: And just to mention Adam Smith, in The Theory of Moral Sentiments he says, "Man desires not only to be loved, but to be lovely." And so there's some self-regulating impulse to do the right thing, because you want people to respect you. And those carrots and sticks are flying around with all of our social interactions. So, it worked pretty well; and we had Pete Boettke on EconTalk talking about the work of Elinor Ostrom--she got the Nobel Prize. She explains that within small groups, they often devise norms and other voluntary, non-coercive ways to limit the tragedy. But the problem you are fascinated by--which I am, too--is when two tribes come along and they don't share the same morality. So, talk about the tragedy of common sense morality, as you describe it. Guest: Right. So, this is my sequel to Hardin's parable. And one version goes like this. So, imagine that there's this large forest. And all around this large forest are many different tribes. And these different tribes are all cooperative, but they are cooperative on different terms. So, on the one side you might have your communist herders who say, Not only are we going to have a common pasture; we're just going to have a common herd, and that's how everything gets aligned. Everything is about us. 
And on the other side of the forest you might have the individualist herders who say, Not only are we not going to have common herds; we are not going to have a common pasture. We are going to privatize the pasture, divide it up; and everybody's responsible for their own piece of land. And our cooperation will consist in everybody's respecting each other's property rights. As opposed to sharing a common pasture. And you can imagine any number of arrangements in between. And there are other dimensions along which tribes can vary. So, they vary in what I call their proper nouns, so that is: Which leaders or religious texts or traditions have authority to govern daily life in the tribe? And tribes may respond differently to threats and outsiders. Some may be relatively laissez faire about people who break the rules. Other people may be incredibly harsh. Some tribes will be very hostile to outsiders; others may be more welcoming. All different ways the tribes can achieve cooperation on different terms. They are all dotted around this large forest. And then the parable continues: One hot, dry summer, lightning strikes and there's a forest fire and the forest burns to the ground. And then the rains come and suddenly there is this lovely green pasture in the middle. And all the tribes look at that pasture and say, 'Hmmm, nice pasture.' And they all move in. So now we have in this common space all of these different tribes that are cooperative in different ways, cooperative on different terms, with different leaders, with different ideals, with different histories, all trying to exist in the same space. And this is the modern tragedy. This is the modern moral problem. That is, it's not a problem of turning a bunch of 'me-s' into an 'us.' That's the basic problem of the tragedy of the commons. It's about having a bunch of different us-es all existing in the same place, all moral in their own way, but with different conceptions of what it means to be moral. 
And so, if our basic psychology does a pretty good job of solving the me-versus-us problem of having basic cooperation within a group, the modern problem, both I think philosophically and psychologically is: What kind of a system and what kind of thinking do we need to regulate life on those new pastures of the modern world, where we have many different tribes with many different terms of cooperation, many different moral systems?
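[Editor's illustration: The incentive structure in Hardin's parable, as the guest describes it, can be made concrete with a toy model. This is a sketch with arbitrary, illustrative numbers, not anything from the book: each animal is worth a fixed value minus damage proportional to total crowding, so adding an animal always pays for the individual herder, yet when every herder reasons that way, all are worse off.

```python
# Toy model of Hardin's tragedy of the commons (illustrative numbers only).
# Each animal is worth `value` minus damage proportional to total grazing
# pressure, so one herder's private gain imposes a cost shared by everyone.

def herder_payoff(my_animals, others_animals, value=10.0, damage=0.1):
    """Payoff to one herder: animals owned times per-animal value,
    where per-animal value declines with total grazing on the commons."""
    total = my_animals + others_animals
    return my_animals * (value - damage * total)

# Ten herders, five animals each, so the other nine hold 45 in total.
status_quo = herder_payoff(5, 45)    # 5 * (10 - 0.1*50) = 25.0
# Adding one animal helps *me* if nobody else adds one...
defect_alone = herder_payoff(6, 45)  # 6 * (10 - 0.1*51) = 29.4
# ...but if every herder adds one, everyone ends up worse off than before.
all_defect = herder_payoff(6, 54)    # 6 * (10 - 0.1*60) = 24.0

print(status_quo, defect_alone, all_defect)
```

The model reproduces the parable's logic: unilateral expansion is individually rational at every point, which is exactly why the collectively best outcome is unstable without the psychological carrots and sticks the guest describes.]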

9:07 Russ: Before we go further, I want to just ask you an aside question that I thought about as I was reading the book, which is: You argue that we evolved morality to help us solve these kinds of problems. Why do we have different moralities? And in particular--we'll probably come back to this later on--I'm more of a bottom-up guy than you are; you are a top-down guy, more than I am. You concede in places that bottom up is good; and I of course concede in certain places that top down is good. But overall, we have a philosophical difference. And you identify that difference to some extent with the northern and southern tribes--the northern tribes being more individualistic-- Guest: Right: metaphorically northern and southern. Yeah. Russ: And southern tribes being more collectivist. As you point out, there's obviously lots of gray areas in between. Why do you think there are such different ideologies to start with? Why am I a bottom-up guy, and why are you a top-down guy? And you talk a lot about the fact that, of course, we both think that we are right. And we both think we have evidence for why we are right. But, given that the world's a complicated place, how do we get that difference to start with? Why don't we both have the same morality toward how we solve problems? Guest: Well, I'm not sure exactly what you mean by bottom up and top down, but actually, I think the leading scientific explanations are at least what I would call pretty bottom-uppish. So, a couple of examples here. Joe Henrich and colleagues, for example, have collected evidence from small-scale societies all around the world and found quite a bit of variation in terms of how people cooperate--in the "lab," that is, having them play standardized economic games, and then also in their everyday life. So, take the Lamalera of Indonesia. These are people who make their living by hunting whales in collective hunting parties. So, their livelihood depends very much on cooperation. 
And sure enough, when you have them play public goods games, prisoner's dilemmas--the kinds of economic games that model the tragedy of the commons--they are exceptionally cooperative. You have other societies where people hunt individually--I hope I'm getting this right, but the Machiguenga of, I believe, Peru, but certainly in South America--they hunt as individuals and individual families; and when they play these economic games, they are much less cooperative. Which is not to say that they are not cooperative people, but they tend to cooperate within the family as opposed to across families, at least economically. Now, if you live in a place where there are whales to be hunted, then there are advantages to having a cooperative way of life. If you live in the Amazon, where there aren't whales to be hunted and the way you get food is by just going off on your own and finding what you can, then that lends itself to a more individualistic society. There is a paper that came out a couple of years ago--or actually maybe it was just this year--by Kensayama[?] and colleagues, going back to some ideas by Richard Nisbett and colleagues, arguing that there are big differences between cultures that cultivate wheat and cultures that cultivate rice: that the more collectivist cultures of Asia are ultimately driven by the original rice-based economies there. Rice cultivation can be incredibly productive, but it requires a lot of intense cooperation. And Nisbett has also, for example, cited evidence about more individualistic tendencies among people who live in herding cultures, where it's a mountainous region and you are not going to be growing crops on the ground but instead are going to be herding sheep, let's say. That ends up leading toward more individualistic societies. So, I'm not sure we actually disagree on this. Russ: I don't think we do. At all. 
I'm trying to get a more nuanced view, which I think is in the book, which is: The tribe we're in is not just a result of evolution. It's also cultural and depends on our situation. Guest: Oh, absolutely. Absolutely. Yes. No, I think that what we're born with is a set of options. It's a lot like language, right? All humans, all healthy humans are born with the capacity for language. But whether you end up speaking English or Chinese or something else is going to depend on the environment, the linguistic environment into which you are born.
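[Editor's illustration: The cross-cultural experiments mentioned above use games like the public goods game. Below is a stylized version with hypothetical parameters--the actual field protocols of Henrich and colleagues differed: each player keeps whatever they don't contribute, while the common pot is multiplied and split evenly, so free-riding dominates individually even though universal contribution maximizes the group total.

```python
# Stylized public goods game (hypothetical parameters; the field
# experiments cited in the conversation used different stakes).
# Each of n players starts with `endowment`, contributes some amount,
# and the pot is multiplied by `multiplier` and shared equally.

def public_goods_payoffs(contributions, endowment=10.0, multiplier=2.0):
    """Return each player's payoff: what they kept, plus an equal
    share of the multiplied common pot."""
    share = sum(contributions) * multiplier / len(contributions)
    return [endowment - c + share for c in contributions]

# Four players. Full cooperation beats universal free-riding for everyone:
print(public_goods_payoffs([10, 10, 10, 10]))  # [20.0, 20.0, 20.0, 20.0]
print(public_goods_payoffs([0, 0, 0, 0]))      # [10.0, 10.0, 10.0, 10.0]
# ...yet a lone free-rider among cooperators does best of all:
print(public_goods_payoffs([0, 10, 10, 10]))   # [25.0, 15.0, 15.0, 15.0]
```

Because the multiplier (2) is smaller than the number of players (4), each contributed unit returns only 0.5 to the contributor--which is why average contribution levels in these games are a meaningful measure of a society's cooperative norms.]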

13:45 Russ: Let's talk about the two Trolley Problems and what you learn about morality from those, because obviously there's lots of variations on the Trolley Problems that you talk about in the book. But talk about the two basic ones and talk about what you mean by automatic mode and manual mode, which I found very interesting. Guest: Okay. So, before I get to trolleys specifically, let me say a little bit about how I think this connects to the first set of questions you asked about the tragedy of the commons and the tragedy of common-sense morality. Because one of the main ideas of the book is: we have two kinds of problems; we also have two kinds of thinking. And that our gut reactions, our intuitions, what I call our automatic settings, which I'll explain in a moment, do a good job of solving the original tragedy of the commons, but they create the tragedy of common-sense morality. That is, our gut reactions about how we ought to live make it harder for us in many ways to live in a pluralistic world. So, let me give you my metaphor, which is familiar to people who have read--well, at least the idea is familiar to people who have read Daniel Kahneman's book, Thinking, Fast and Slow, and a lot of the research on dual-process decision-making. My preferred metaphor for this is the digital SLR (Single Lens Reflex) camera--so, a camera like the one I got many years ago now; it has the automatic settings on it. So just for everyday use, if you are taking a picture of a mountain from a mile away in broad daylight, you put it in landscape mode and click, point and shoot, you've got your shot. Or if you are taking a picture of somebody up close in indoor light, then you put it in portrait mode and click, you've got your shot. And it also has a manual mode where you can adjust by hand the f-stop and everything else. And I ask: why does the camera have these two different ways of taking photos, your automatic settings and your manual mode? 
And the idea is that this allows you to navigate the tradeoff between flexibility and efficiency. So, the automatic settings are very efficient--point and shoot--and they are good for the kinds of situations that the manufacturer has already anticipated, like taking a landscape picture or taking a standard portrait picture. But the manufacturer also knows that there are going to be situations that the manufacturer isn't going to specifically anticipate; and so the manufacturer also gives you a manual mode where you can adjust everything yourself. The manual mode is very flexible, but it's not very efficient. So you can do anything with it, but you have to know what you are doing; it takes time; you might make a mistake. And this design of having both overall makes a lot of sense, because most of the time you can get by just pointing and shooting, and that's what you really want. But occasionally you want to have the flexibility to put the camera in manual mode and get exactly what you want, depending on [?] conditions-- Russ: And if you don't you are going to get a really bad picture sometimes. I think that's the-- Guest: Right. Exactly. So the idea is that the human brain has the same design: that we have automatic settings, and we have our manual mode. Our automatic settings are our gut reactions, our largely emotional responses to situations, especially social situations, that tell us: That's good, that's bad, this is what you ought to do, this is what you ought not to do. We also have a manual mode; we also have the ability to step back and think in an explicit, deliberate, what you might call--in a somewhat loaded sense--rational way about whatever it is that's facing us. And we might override some gut reaction we might have because we'd say, well, in this case, even though it feels like we should do this, it actually makes more sense to do that. 
So, with this idea in mind of the tension between our automatic settings and our manual mode, our gut reaction and our slow, deliberate thinking, I'll introduce, as you said, the Trolley Dilemma. This is the philosophical problem that got me interested--well, really got me started in my research as a scientist. So, one version of the Trolley case goes like this. You've got a trolley headed towards 5 people, and you can save them, but they are going to die if you don't do anything. If you hit a switch you can turn the trolley away from the five and onto another track, but unfortunately there's still 1 person there. And if you ask most people, 'Is it okay to turn the trolley away from the 5 and have it run over the 1 person?' depending on who you ask and how you ask it, about 90% of people will say, 'Yes.' Russ: Better that one person dies than five. Guest: That's right. The tradeoff is between 5 lives and 1, and the particular mechanism is hitting the switch that will turn the trolley away from the five and onto the one. Parallel case, which we'll call the Footbridge Case: This time the trolley is again headed towards 5 people, but now you are on a footbridge over the track, in between the oncoming trolley and the 5 people. We stipulate the only way that you can save them now is to end up killing somebody. So, there's this large guy, wearing a large backpack, who is right next to you. And you can push him off of the footbridge and he'll land on the tracks and he'll die--he'll get killed by the trolley--but it will stop the trolley from running over the 5 people. Now, to cut down on the number of angry emails that you get from people, I have to make some stipulations clear. We are stipulating, first, that you cannot jump, yourself. The only way to save the 5 is-- Russ: You're not big enough. Guest: That's right. Not big enough. You cannot jump, yourself. And yes, this will definitely work. 
And I know you've all been to the movies and sometimes you are able to suspend disbelief, and I ask you to do the same thing here. And we ask our participants, when we do these experiments, to do the same thing; and in general they don't have any problem doing this. Here, one of the questions is: Is it okay to push the guy off the footbridge, use him as a trolley stopper to save the 5 people? Most people say no. There are some populations where people are more likely to say yes. But in general, take an American sample, somewhere between about 10% and 35% of people will say that it's okay to push the guy off the footbridge; most people will say that it's not okay. So, interesting question: What's going on? Why do we say that it's okay to trade 1 life for 5 when you can hit a switch that will divert the trolley away from 5 and onto 1, but it's not okay to push the guy off the footbridge--even if we assume that this is going to work and if we assume that there's no other way to achieve this worthy goal. Most people still say that it's wrong. We're coming up on a decade and a half of research on or stemming from this moral dilemma. And we've learned a lot. It seems that it's primarily an emotional response to that physical action of pushing the guy off the footbridge. And you can see, for example, in a part of the brain called the amygdala, which you might think of as a mammal's early-warning alarm system that something may be bad, needs attention, maybe not a good idea--you see that alarm bell going off in this basic part of the mammalian emotional brain. And the strength of that signal is correlated with the extent to which people say that it's wrong to push the guy off the footbridge or whatever it is. 
You also see increased activity in the dorsolateral prefrontal cortex, which is the part of the brain that's most closely associated with explicit reasoning, or anything that really requires a kind of mental effort, like remembering a phone number or resisting an impulse of some kind or explicitly applying a behavioral rule. That's sort of the seat of manual mode. And these two signals from different parts of the brain--one a kind of automatic response to the action and the other reflecting the balance of costs and benefits--duke it out in the brain; and in some people they go one way and in some people they go the other way. And if you give people a distracting secondary task, then it slows down their utilitarian judgments--that is, the judgments where they say that it's okay to kill 1 to save 5. If you give people more time, they are more likely to give a utilitarian judgment. People who give more reflective answers to tricky math questions are more likely to say that it's okay to push the guy off the footbridge. If you give people a drug that in the short term heightens certain kinds of emotional responses--the drug used in the experiments is Citalopram, which is an SSRI (selective serotonin reuptake inhibitor), kind of like Prozac--people are more likely to say that it's wrong to push the guy off the footbridge. If you give people an anti-anxiety drug--Lorazepam is the one used in the study I have in mind--they are more likely to say that it's okay to push the guy off the footbridge. And so there's a lot of evidence, from a lot of different kinds of experiments--brain imaging, behavioral manipulations, pharmacological manipulations, looking at patients with different kinds of brain damage--that all supports this kind of dual-process story. 
That is, that there's a gut reaction that's saying, 'No, don't push the guy off the footbridge'; and then a more conscious, explicit, calculating response that says, 'Well, but you can save 5 lives; don't you think that makes sense?' And--well, I could go on.

23:13 Russ: Talk about how you might want to exploit or use those differences--and I just have to say as a footnote: There are a lot of experiments in economics that make all kinds of different claims about behavior, and one of the aspects of these experiments of course--it's really a big one in the footbridge example--is that this is a very alien experience for most people. And I think the challenge in interpreting, part of it, is the fact that, if it happened every day--if people were constantly shoving people over footbridges--maybe people would have different responses. Guest: Absolutely. Russ: There's a grappling uncertainty issue. And even though you say don't be uncertain, I think that's the automatic part maybe that's kicking in, not necessarily the morality. But let's put that to the side. It's definitely true that we have some gut reactions about some things and then some more pensive and thoughtful reactions about others. What's the implication of that for these tragedies of common-sense morality, these philosophical, ideological moral differences between tribes and groups? Guest: So, there are a few dots I think that need to be connected. So, if you sort of follow the arc of the book, the first part is about the two tragedies and their different structure. And then the next part is about morality fast and slow in general. Initially it's just illustrating the idea that our moral thinking involves a tension between gut reactions to certain types of actions that are generally bad but maybe not always bad, and then a kind of cost/benefit thinking that can either be selfish, or it can be impartial, as in the case of the third-party observer saying, 'Well, isn't it better just to save more lives?' What I propose as a solution to the tragedy of common-sense morality is a much maligned and poorly named philosophy which many of your listeners will be familiar with, known as utilitarianism. Russ: Oooooh. Guest: Boo. [?] Russ: That was 'oooh.' Just suspense. 
It wasn't necessarily--I have an anti-utilitarian streak, but I have a pro-utilitarian one, also. So, I'm ambivalent. That was just 'oooh.' Go ahead. Guest: Okay. So, I think utilitarianism is very much misunderstood. And this is part of the reason why we shouldn't even call it utilitarianism at all. We should call it what I call 'deep pragmatism,' which I think better captures what utilitarianism is really like, if you really apply it in real life, in light of an understanding of human nature. But we can come back to that. The idea, going back to the tragedy of common-sense morality, is: you've got all these different tribes with all of these different values based on their different ways of life. What can they do to get along? And I think that the best answer that we have is--well, let's back up. In order to resolve any kind of tradeoff, you have to have some kind of common metric. You have to have some kind of common currency. And I think what utilitarianism does, whether it's the moral truth or not, is provide a kind of common currency. So, what is utilitarianism? It's basically the idea that--it's really two ideas put together. One is the idea of impartiality. That is, at least as social decision makers, we should regard everybody's interests as of equal worth. Everybody counts the same. And then you might say, 'Well, but okay, what does it mean to count everybody the same? What is it that really matters for you and for me and for everybody else?' And there the utilitarian's answer is what is sometimes called, somewhat accurately and somewhat misleadingly, happiness. But it's not really happiness in the sense of cherries on sundaes, things that make you smile. It's really the quality of conscious experience. So, the idea is that if you start with anything that you value, and say, 'Why do you care about that?' and keep asking, 'Why do you care about that?' you ultimately come down to the quality of someone's conscious experience. 
So if I were to say, 'Why did you go to work today?' you'd say, 'Well, I need to make money; and I also enjoy my work.' 'Well, what do you need your money for?' 'Well, I need to have a place to live; it costs money.' 'Well, why can't you just live outside?' 'Well, I need a place to sleep; it's cold at night.' 'Well, what's wrong with being cold?' 'Well, it's uncomfortable.' 'What's wrong with being uncomfortable?' 'It's just bad.' Right? At some point if you keep asking why, why, why, it's going to come down to the conscious experience--in Bentham's terms, again somewhat misleading, the pleasure and pain--of either you or somebody else that you care about. So the utilitarian idea is to say: Okay, we all have our pleasures and pains, and as a moral philosophy they should all count equally. And so a good standard for resolving public disagreements is to say we should go with whatever option is going to produce the best overall experience for the people who are affected. Which you can think of, as shorthand, as maximizing happiness--although I think that that's somewhat misleading. And the solution has a lot of merit to it. But it also has endured a couple of centuries of legitimate criticism. And one of the biggest criticisms--and now we're getting back to the Trolley cases--is that utilitarianism doesn't adequately account for people's rights. So, take the footbridge case. It seems that it's wrong to push that guy off the footbridge, even if you stipulate that you can save more people's lives. And so anyone who is going to defend utilitarianism as a meta-morality--that is, a solution to the tragedy of common-sense morality, a moral system to adjudicate among competing tribal moral systems--if you are going to defend it in that way, as I do, you have to face up to these philosophical challenges: Is it okay to kill one person to save five people in this kind of situation? So I spend a lot of the book trying to understand the psychology of cases like the footbridge case. 
And you mention these being kind of unrealistic and weird cases. That's actually part of my defense. Russ: Yeah, there's some plus to it, I agree. Guest: Right. And the idea is that your amygdala is responding to an act of violence. And most acts of violence are bad. And so it is good for us to have a gut reaction, which is really a reaction in your amygdala that's then sending a signal to your ventromedial prefrontal cortex and so on and so forth, and we can talk about that. It's good to have that reaction that says, 'Don't push people off of footbridges.' But if you construct a case in which you stipulate that committing this act of violence is going to lead to the greater good, and it still feels wrong, I think it's a mistake to interpret that gut reaction as a challenge to the theory that says we should do whatever in general is going to promote the greater good. That is, our gut reactions are somewhat limited. They are good for everyday life. It's good that you have a gut reaction that says, 'Don't go shoving people off of high places.' But that shouldn't be a veto against a general idea that otherwise makes a lot of sense. Which is that in the modern world, we have a lot of different competing value systems, and that the way to resolve disagreements among those different competing value systems is to say, 'What's going to actually produce the best consequences?' And best consequences measured in terms of the quality of people's experience. So, that's kind of completing or partially completing the circle between the tragedy of the commons, that discussion, and how do we get to the Trolleys.
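[Editor's illustration: The "common currency" idea the guest defends can be stated as a one-line decision rule. This is a minimal sketch with entirely hypothetical welfare numbers on a single shared scale; the measurement problem that comes up later in the conversation is simply assumed away here.

```python
# Utilitarian choice as a common currency: sum each affected person's
# welfare under every option and pick the largest total. The welfare
# numbers are hypothetical; producing them is the hard part in practice.

def best_option(welfare_by_option):
    """welfare_by_option maps option name -> list of per-person welfare
    values on one comparable scale. Returns the option whose impartial
    sum of welfare is highest."""
    return max(welfare_by_option, key=lambda name: sum(welfare_by_option[name]))

# Impartiality in action: everyone's welfare counts the same, so an
# option that is broadly good (total 15) beats one that is great for
# a single person but bad for the rest (total 11).
print(best_option({"policy A": [5, 5, 5], "policy B": [9, 1, 1]}))  # policy A
```

The rule encodes both utilitarian components named above: impartiality (each person's number enters the sum with equal weight) and experience as the metric (the numbers stand in for quality of conscious experience).]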

31:06 Russ: Yeah. So, there are some things about the utilitarian idea that are deeply appealing, and you do a beautiful job making the case for it. And you spend a lot of time conceding there are problems with it and then giving what you think is the best answer; and I found those very interesting. Not totally persuasive, but provocative. I want to raise a couple of issues and let you respond. So, the first is that: I think part of the reason that people have problems with pushing that guy off the bridge is: there's an arrogance involved. Which makes me nervous, as a northern herder in your example. Guest: Right. Russ: So, I like the idea of going around saving lives. And people make lots of claims for--the death penalty saves lives; it doesn't take lives, it saves lives. And there are a lot of different claims that people make. Ultimately most of those claims come down to empirical claims, somewhat supported by evidence but not totally, completely ironclad, about how x leads to y. And one of the main themes of EconTalk is that I'm [?] humble about that connection between x and y. And I'm thinking, you go out there pushing people off of footbridges, you're actually a dangerous person. You are not a moral person. You're going to run amok. Guest: I agree. I think what you are essentially doing is making a good, deep-pragmatist, long-term utilitarian argument against being too quick to implement what might narrowly seem to be a utilitarian solution. Russ: And that's really, by the way--that's a nice way to put it. That's really what economists do, by the way--often what economists say: 'Not so fast.' Right? Guest: Right. So, I think it depends on the case, right? When it comes to--take something like physician-assisted suicide. Right? You might have a kind of footbridge sort of reaction: I think the American Medical Association, and a lot of people, too, have this reaction, which says, it's just wrong for you to intentionally and actively end the life of a patient even if they want to. 
Right? It pushes--I'm willing to bet it pushes that amygdala button. Russ: Yeah, big time. Guest: Right? But, you might say, 'But the greater good is served by not forcing people who are suffering and who have no quality of life and no hope of a better life to go on and suffer and wait for the disease to kill them, instead of dying their own way.' Now, on the one hand, there's something, I think, to that caution that says, 'Well, wait a second. This could go terribly wrong'--if we have doctors who are too quick to say, 'Oh, you want to die? Oh, here you go.' Russ: It's a slippery slope argument. Guest: Yeah. So, on the one hand you want to be careful and you want to listen to that amygdala signal that says you are playing with fire here. But at the same time, you don't want to give it an absolute veto. And so I think that kind of skepticism about overly ambitious social policy is a good skepticism. At the same time, I think it is often possible to do things that feel wrong but that actually end up making things better. Russ: For sure.

34:35 Russ: So, let's talk about the basic idea. You actually--in the book you sum it up in three words: maximize happiness impartially. And of course by happiness you don't necessarily mean--although it could include--dancing at a party while drunk or gorging on ice cream. It's a richer concept. Sometimes we call it flourishing here on the program. Or I think the fancy name is eudaimonia. I don't know if I'm pronouncing it correctly. I think that's Aristotelian. And there's a whole very rich menu of stuff that gives us a feeling of pleasure, of utility, of satisfaction, deep tranquility, serenity, etc. And we're going to be open about it--we're not going to try to narrow down that definition. So I'm with you there. So, for me, as an individual--just me--I face tradeoffs all the time about satisfaction and pleasure and happiness. How long should I stay at work? Should I watch the football game instead of helping my kids with their homework? These are all questions that we face every single day as individuals, and we do our best, and sometimes we make mistakes that we regret; and we understand that: life isn't perfect. And morality to some extent, and self-help books, are trying to help us navigate those tradeoffs. The problem I have with your tradeoff is--and I understand the desire for a common currency across these tradeoffs--that they are across different people. And I can't measure happiness. Even if I could, I'm not sure that I can imagine an entity that would come up with the right desire to make those tradeoffs. So, we think about this in a political context, which is naturally what you do in the book. So, here we are in the United States. We're in this pasture. We're all here together. We have very different philosophies. Unfortunately, not only do we disagree; even if we agreed, you and I, on the right way, say, to adjudicate our dispute, we don't really have a mechanism for implementing it. We think we do.
We call it democracy. But it's a very imperfect mechanism that often exploits our differences for the benefit and gain of individuals. So it's not obvious to me that it's even a good idea to say, Let's pretend we could decide what is the greatest happiness across these 330 million people, let alone the 7 billion, and then hope that somehow it'll get implemented. Is that really a practical solution to our political problems? Guest: No, I don't think that there is any alternative. I think that we are living with someone's attempts to adjudicate these tradeoffs of values, and we can either just accept what the powers that be put in front of us, or we can vote our conscience and try to change them, or vote our conscience and say, yes, I endorse this. I think that what you're objecting to is the difficulty of the problem, not an inherent problem with the solution, if you want to call it that, that I'm proposing. I think it's easier to think about these things with a concrete example. So, take the case of raising taxes on the wealthiest Americans. Now, I know that this is controversial. But let's suppose that government spending can provide good stimulus to the economy and can increase employment and make things better off for the people who are employed as a result. Okay, so you have to do a tradeoff. You would have to say: How much do the wealthiest people lose by having their incomes reduced by some amount--someone who is making half a million dollars a year might pay, instead of 30% in taxes, 40% or something like that--versus the benefits that go to people who now have jobs as a result of expansion of the public sector, or children who have a better shot at living the good life because of increased commitment to early childhood education, etc.? There are a lot of empirical assumptions or questions here. But if we can at least agree on the empirics, then there's the question of: Okay, is this tradeoff worth it?
I don't think there's any way to avoid asking that question, and I think that in a lot of these cases it's actually pretty clear--that, for example, taking people who are already very wealthy and reducing their income somewhat doesn't really do much to their happiness. Whereas if you provide opportunities to people at the bottom of the scale, that actually can make an enormous difference in their lives. So, you know, I think that the alternative is to just say, let it just evolve the way it evolves without consciously thinking about this as a social problem. But I don't think that that's a better alternative. Russ: Well, that's because you're a southerner. I'm a northerner, and as a northerner, I say, if we get the government out of this, the private sector--charity and other means--will act to help poor people. They'll take money from rich people. They do give it voluntarily--maybe not so much as we'd like; certainly not as much as they'd give if they were forced to give. But the real issue I have--and this is my meta-meta morality, I guess, and I think it's an interesting thought experiment--the real problem I have is that the empirical assumptions that you need to make for some reason don't appeal to me. And they do tend to appeal to people who are the collectivists. Right? So, you just gave a couple; we could think of 10 more: better schools, better pre-schools, more training programs, greener this, reduce carbon dioxide emissions, stimulate the economy, reduce unemployment. And most of those things everybody agrees would be good if they happened. But strangely enough--and this is, to me, a different kind of tragedy--the people who are from the north, us individualists, we seem to think that the empirical evidence is very unconvincing. Whereas the people who are in the south seem to find it extremely compelling. Guest: Right.
Russ: So, what it comes down to, what I would fear, is a pretense--a pretense that we are doing something scientific by just looking at the outcomes rather than arguing about our principles. 'We're just going to see what works the best.' But that's kind of an illusion, I worry. What do you think? Guest: But why--I see this problem on both sides. I think that both sides-- Russ: I do, too. Guest: --interpret the evidence. The evidence in social science is almost always ambiguous. And both sides interpret the evidence so as to support the kind of social policy that they intuitively favor. I think that's a problem on both sides. Russ: I agree. Guest: But, you know, I think it's not an impossible task to sort out the fact from the bias. And the signal-to-noise ratio may be lower than we'd like, but I still think that there is a signal there. I think one thing that we can do--and this is one of the major practical points in the book--is to not think of these social problems in terms of rights when we are really trying to have an honest discussion about them. Russ: Yeah, I really like that, by the way. Even though I've probably made those rights arguments. I thought this was fantastic. Go ahead. Guest: And I use the language of rights as well; I think it has its place, as I also argue in the book. But if something becomes a matter of rights--take capital punishment: it's the public's right to see justice done, which means having the person killed; or capital punishment is a violation of human rights, as Amnesty International says--if you make something about rights, then it essentially leaves the realm of the empirical, because we can essentially use the language of rights as a front for whatever our automatic settings say, for whatever our amygdala says. Right? Russ: Yep. Guest: And so, one way to try to make progress from both sides is to say, Okay, we're not going to discuss these problems in terms of absolute rights.
Because we have no way of figuring out what rights people really have in some ultimate metaphysical sense. And instead we can ask, which kinds of policies actually work. A lot of these things are difficult because we can't do controlled experiments--we're not rats living in a lab. We're people living in a society where it's almost impossible to do controlled experiments with things like the death penalty. Russ: Or a stimulus. Guest: But we can look at other countries that don't have the death penalty and say, well, do they have rampant murder problems? Or, is there something fundamentally different about those societies that's making them relatively murder-free compared to the United States? I think that the empirical battle is winnable, but it's 10 steps forward and 9 steps back.

43:56 Russ: So, let me phrase the challenge in a different way. You concede[?] at one point in the book--you reject it, but you concede[?] it--that people think we're already doing this. We favor the policies that work out the best, or that create the most happiness, or that are good for most people, or the "best policies." And isn't part of the problem that we're pretending about what we're really arguing about? It's all rhetoric. We all have our stories to tell: as Ed Leamer says, we're pattern-seeking, story-telling animals. So we cherry-pick our data. And all this utilitarian stuff is really just giving me a different rhetorical frame. I'm not really going to make progress. So tell me something cheerful. Guest: Uh, so, let's take the case of prison policies and things like solitary confinement and other exceptionally harsh treatments that exist in American prisons. You're seeing a lot of this in the news now. For a long time people on the Left have been saying these practices of exceptionally harsh punishment in prisons are not doing anything to help anyone; they don't deter crime very much because most would-be criminals are not paying attention to this level of detail. Russ: Worse. Could be worse. Guest: It makes things miserable for the prisoners. Russ: Could be worse. Guest: Sorry? Russ: Yeah, it could be worse for society. It reduces their ability to come out and do something productive. Guest: Exactly. Right. And what you're seeing now is people on the Right who are coming around to say, Look, this is not productive; this is not helping. This is a place where we're actually, I think, just beginning to see a consensus on Left and Right, at least on certain flash-point issues like solitary confinement and things like that. And it's really driven by evidence. Russ: That's a good example. And I'd use the drug war as another example.
It's hard for--there are a lot of people who see it as a rights-based issue: people should not have the right to harm themselves. And when they see the effect of the drug war, they start--some, not all--but some people do change their minds based on the fact that they actually don't think it's making the world a better place. It's not reducing necessarily even the amount of drugs being taken; it's corrupting the police; etc. So, I don't mean to argue that empirical evidence or reality doesn't come into it. I'm just a little worried about the bigger, overarching claim.

46:48 Russ: Let me ask you a couple of different challenges. This is a little bit like ask the doctor; these are hard ones. Uber, the car-sharing, taxi-ish service you can use on your iPhone, recently got in trouble in Sydney, Australia during a crisis situation, and it's happened with other natural disasters: there's an increase in demand somewhere, and the Uber algorithm raises the price. Which draws more drivers into the area. And as an economist, whether I'm a southerner or not--or northerner or not, I mean--that kind of--I love that. I see more people getting out of town. A lot of people can't see it. They don't care, even. They see that it's just wrong to take advantage of people and they think Uber is immoral. And to me it's amoral; and in fact, it's good. So, why do you think people have that reaction to so-called price gouging? Guest: So, I actually haven't followed the details of the Uber situation, and I would say, whether or not I think it's a good or bad thing will probably turn on facts that are not much discussed in the case. So, I think the kind of standard [?] response to price gouging is, you know, there's a flood and the people who are selling buckets are suddenly selling them for a thousand dollars each. And the idea is, you are exploiting those people; you are making it harder for people to deal with their emergency and they could be losing an awful lot. Because you're saying, this is a chance where I could make an extra buck. And so from a utilitarian perspective, you are saying, okay, so you get a little extra money selling your stuff and the other person's house gets flooded--or I should have said fire. In a fire there's a person selling buckets. And the other person's house is burning down, and you're concerned about making a few extra dollars taking advantage of someone in need. There I think the utilitarian analysis clearly says, Price gouging is terrible. 
You are taking a little gain for yourself relatively speaking, because someone is desperate and they are trying to save their house, which is worth much, much more to them. If that's what's going on, then I think price gouging is bad, and it might be good to have regulations. Russ: And that's a world where there's a fixed number of buckets. Guest: Exactly. Russ: And a fixed number of buckets [?]-- Guest: Now, what's going on with [?] Uber, is all of these people saying, 'You know, I'm willing to work overtime', essentially: 'I'm willing to add extra travel capacity; but I'm not willing to do it for my usual price. I'm willing to do it for a little bit more; but fortunately there are people who are willing to pay for it.' I actually think that that is, overall, a better thing. So if it's actually increasing the availability in a time when people need it, that's better. Now, it would be better still if people said, 'You know what? I'm willing to do this as a kind of partial public service where I will get paid for it but I'm not going to increase my rate even though I could.' That would be even better. But we naturally compare it to Uber at the usual price instead of someone staying home and not driving at all. So, when I said that I think it depends critically on facts that aren't normally discussed, I would say it really depends on whether or not the alternative is not providing the service, as opposed to providing the service at the usual price. Russ: So, I'm going to concede my utilitarian side here, agree with you in the following way. Which is, I think one of the things that's often missing from these conversations, and it's missing from some of the moral dilemmas in the psychology literature that you cite, is an awareness of what Hayek called the knowledge problem--the fact that knowledge is dispersed and it's very hard to get it in the real world into people's heads quickly. 
So in the case of Sydney, a lot of people didn't know that there was a crisis going on. A lot of people didn't realize there were hundreds, maybe thousands of people that wanted to get to the airport. And maybe if they knew they would have volunteered to help them. They would have done a bunch of things. But that app alerted dozens or hundreds of drivers that there are a bunch of people who needed help. Guest: Right. Russ: And that price played an incredibly important role. Guest: Yes. Russ: So, my utilitarian side, where it agrees with you, is that I actually am naive enough to think that if more and more people understood that phenomenon, they would be more understanding of higher prices in crises. That's my idealistic, utilitarian side. Guest: Yeah, nope, I agree. I think it's well said.

51:31 Russ: So, I want to take an example you use that I found really interesting; I think all of us have to think about it, whether we are utilitarian or not. It's an example you take from Peter Singer. You say: you are out strolling in the park and you come across a shallow pond, and a small child has stumbled into it and is going to drown. You can wade in and save the child, but you are going to ruin your $500 suit. And most people say you are morally obligated to wade in. You have to give up the $500 suit to save the child. The problem is that it's much more difficult to then say: Instead of buying the $500 suit, you should have sent the money to a charity in Africa to save a child's life, and maybe two children's. So, talk about that issue from the utilitarian perspective and how you respond to it. Guest: Right. So, I think that Peter Singer had one of the most important insights of the 21st century. Which is the nonobvious moral equivalence of those two cases that you describe. Which is of course controversial, but I think he's basically right. And I think that this is reflected in our intuitive morality, both as a result of our biology and our cultural experience. So, you know, we evolved both biologically and culturally to live in relatively small groups in which we cooperate: we solve the tragedy of the commons with the people who are immediately around us. And so when you imagine seeing that child right in front of you, that pushes those emotional buttons that say: You have to do something; this is a person who counts, this is a person who is or is likely to be a member of your community. But we didn't evolve to cooperate with or even care about people on the other side of the world. And so, from a biological perspective, the mystery is not why we are indifferent to far-away suffering, but why we care even about the people in front of us.
But, I argue, as many people argue, that this is what morality is about: it makes you willing to pay that cost, at least in the short term, to benefit somebody else. But overall we all end up better off if we all have these moral impulses. So, I think that this is essentially a limitation of our intuitive morality, that through some combination of biological and cultural shaping, it pushes our emotional moral buttons when we have the child right in front of us, but children or even worse, adults, on the other side of the world don't push our buttons in that way. I think that if we're looking to construct a meta-morality, that is, to have a kind of moral standard that can work for the whole world, as opposed to just the tribe, then it's going to require valuing the lives, valuing the wellbeing of distant people as much as we value the wellbeing of people who are nearby. Maybe not in our hearts, but at least in terms of the kinds of policies that we feel that we can publicly justify. Russ: Yeah. And my first thought when I read your example is that this knowledge problem, which is--when I give the $250, the $500 to the charity, I'm not sure it's really going to make a difference. And of course that could just be my rationalization for why I can be selfish and hold my head high. So, I think your book makes us think about those issues in a very thoughtful way, and I think one of the biggest lessons of the book is: Slow down. You are so sure that you know what the right thing is: step back and be open. And this is a theme of Jonathan Haidt's also, who you cite and who has been a guest on the program. It's hard, but try to put yourself in the shoes of somebody else's morality. It's a very productive thing to do. Guest: Yep. Think slow when it comes to morality is I think one of the major points of my book. Russ: Unless you are on the footbridge. Then you have to think fast or it's too late. Let me raise a different set of issues. 
As an economist I think sometimes about the minimum wage and how controversial it is, and the arguments on both sides. And deep down I do like to think that it's a utilitarian issue: What is really best for poor people and low-skilled people--does this really help them or does it hurt them? And both sides have evidence, of course. Guest: Right. Russ: And just as an aside, you make a great point that a lot of people are northerners--individualists--because they are selfish. And it gives them cover. But what I think you failed to point out is that southerners sometimes like to run people's lives, and like power. Each side has its own sort of evil twin, evil cousin, that if we're not careful-- Guest: I have to say, though--I think things are not quite as symmetrical on that point. I really do think that selfishness is pretty basic and pervasive for humans and for other animals. I think that the idea of the liberal who inherently wants to run other people's lives--I actually think that that's a myth. I think that that's a bogeyman. I think that there are certainly plenty of misguided liberals and liberal policies, people who think something is going to help but it actually ends up making things worse. But I don't think that a desire to run other people's lives is actually a major force behind either well-guided or misguided liberalism. That's my take on it. [?] Russ: Well, I like your asymmetry point. The problem is that centralizing power can lead to totalitarianism, and it often is justified because it's benign. And in my opinion, it's rarely benign. Guest: That I agree with. I would say the rank-and-file liberal voters, let's say, I don't think are particularly interested in running other people's lives.
But I agree that there is a strong tendency towards mission creep and that once individuals have a certain power to do something, then they have an incentive to maintain and expand that power. I think that that's absolutely right.

57:58 Russ: So, what I was going to say, though, before I digressed: minimum wage is an important thing, I think. A more important problem is: What do we do about people who are struggling to acquire skills or who have trouble finding a job? And it's of course a very complex problem. We have some empirical evidence; it's hard to argue about. Let's take a bigger problem--what I would call a bigger problem--which is the problem of what you might call the 'bottom billion.' People who are not just struggling to express themselves, or making less than they otherwise would, but are near death. Really tragic, horrible situations around the world. I don't see that very much as a common-sense morality problem; it's more of a power problem, a zero-sum game where people are taking money from other folks and keeping a system going that is good for them and not so good for the rest of the people. Guest: These are autocratic regimes--perhaps the best example would be North Korea, right? Russ: Correct. Guest: Where you have a powerful elite who are basically holding a lot of human potential hostage. Russ: Right. Do you agree with that? Guest: Hmm? Yeah. I think that a lot of the world's worst situations are the result of corrupt politics. You know. As opposed to real, sort of moral disagreement among communities. Russ: Yeah. I think, other than global warming, which potentially threatens the planet--although I'm a skeptic, to some extent--of course. [?] You're not. As we would expect. Many of our worst problems are not meta-morality problems; it seems to me either we don't know what's going on fully, or there's something more basic going on. So, again, I like your attempt to avoid conflict; I'm not sure it's the central problem. Guest: But even then, though, we as third parties face a moral question, right? Which is: Do we intervene? And if we do intervene, how do we intervene?
Do we use force to overthrow an oppressive regime? Or do we impose economic sanctions as a result of what we see as human rights abuses? And there are disagreements within our own community about how, if at all, we should respond to people who are being oppressed by bad political arrangements. So I think that in a sense the powerful economies of the world could get together and pretty much do what they want to most of the world's nasty autocratic regimes, and it's a question of: why don't we? And is it wise restraint or a lack of moral will or something in between that prevents us from doing that? Russ: I agree with you there; to me it's not so much the utilitarian argument as the consequentialist argument, relative to, say, a rights-based argument. People will argue we need to intervene in this situation because it's just the right thing to do: Those people over there, their rights are being violated; we have to help them and we have the power to do so. And I look at it and say, Well, we've tried this 9 times; it worked 1 of them. That's not good. Maybe we should be more cautious.

1:01:32 Russ: Let's close with a philosophical issue which is really beautiful in the book--you have an incredible thought experiment. And you use it to argue for utilitarianism. I wasn't persuaded by it, but I loved it. I thought it was great. So, imagine we could create a world of three different kinds of species. Species 1 is Homo selfishus. Species 2 is Homo justlikeus. And Species 3 is Homo utilitus. Tell us what those three species are and what you come down with--what's your argument? Guest: Right. Yeah. So, let's provide a little bit of background for this. I think that one of the big problems philosophically with utilitarianism is the Peter Singer problem, and seeing where it goes. That is, what utilitarianism essentially says is that, at least if you are an ideal utilitarian, you'll turn yourself into a happiness pump. That is, you will just use whatever resources you have to make the world as happy as possible. And what that means in practice is using all of your resources to alleviate the misery of people who are in the worst possible shape. Right? And so there's nothing left for you personally, nothing left for your friends and your family--it's all just going to the bottom 1%. And so, how do you make sense of that? Because that seems to be above and beyond. And it seems to be a point against utilitarianism if it's overly demanding. So it's essentially a question of how a utilitarian or deep pragmatist deals with this over-demandingness objection. And my answer is to say: Look, instead of putting the blame on utilitarianism, why don't we put the blame on ourselves, but accept that there are limits to how much we are going to do about it. So, when I have a birthday party for my son and my daughter instead of just giving the money to charity, the question is: can I really justify that in utilitarian terms? And in a sense, I can't. But at the same time, you have to operate within the limitations of your own mind and your own species.
We didn't evolve for universal benevolence. And so it's not, I think, really in the cards for us to try to go there, at least not directly. Nevertheless, I think that we can step back and recognize that there is something better about universal benevolence; and that's what this thought experiment is about. So, what we say is: suppose that you are a god, or God, or just in charge of the universe, and you can create a new species. Homo selfishus is a species of people who only care about themselves--themselves and a few other people--and they do everything they can to amass as many resources for themselves as individuals and don't care about anybody else. And this ends up being a Hobbesian nightmare, and obviously this is not a very good world to live in. So we'll say, okay, we are not going to create that species. The real contenders are what I call Homo justlikeus and Homo utilitus. Homo justlikeus is: we care a lot about ourselves and the people with whom we have close relationships--our close friends and our family--and to some extent about people with whom we have a certain shared identity. And most of the world we care about in a distant kind of way, but not enough to make much of a sacrifice. So, we can know that there are children who are dying of preventable diseases, and we say, 'Well, I'd like to help, but instead I'm going to renovate my kitchen because I'd like it to look nicer.' And in that world, a lot of people are very happy, but there's an enormous amount of preventable misery that doesn't get prevented, because people aren't willing to make any kind of sacrifice for people with whom they don't have a kind of personal connection. And then Homo utilitus is this species where everybody loves everybody, or at least everybody is willing to make sacrifices for the wellbeing of other people. And in that world, you might imagine mindless drones who have no personality or no personal relationships.
But I think that's the wrong way to think about it. The right way to think about it is in terms of some of the people who are a bit more heroic than most of us--someone who's willing to donate a kidney to a stranger, or someone like Wesley Autrey, who was willing to dive in front of a subway car to save a guy who was having an epileptic seizure from being crushed by a train. If you had a world full of people like that, who have friends and family and take care of themselves but who are willing to make sacrifices for other people when other people are in great need, I think the world would be a lot happier. And even if we don't have it in us to make those sorts of sacrifices--we fallible humans--we can see that that would be a kind of better species, the kind of species you would choose to make if you were in charge of the universe. And so the idea is, if you actually step back from the limitations of our human values, we can see that even as we are unwilling to abandon the selfishness and the parochialism of our commitments, there would be something admirable, or more ideal, about expanding our concern, even if we don't see ourselves doing that any time soon. Russ: Yeah. I just want to comment: it's interesting that in Jewish law you are obligated to give 10% of your income to charity. You can give up to 20%, but beyond 20% you are discouraged, because you risk becoming poor yourself. And that's again a kind of-- Guest: I didn't know about that. Russ: A limit. So that's kind of a utilitarian, consequentialist motive in there.