0:36 Russ: Intro. [Recording date: January 11, 2012.] Ideas in new book; you frame the book around an interesting thought experiment to help us understand the nature of prosperity. What's that thought experiment? If a society's sole objective was to maximize general prosperity and it could choose the moral beliefs of the people that comprise it, what kind of moral beliefs would it pick? What would they look like? What kind of characteristics would they have? Guest: The reason for doing that is I had become disenchanted with the progress that we have been making as a profession on what's commonly now known as the Development Puzzle. Russ: Which is? Guest: Basically, economics did really well through the 19th century, the beginning of the 20th century, working out the essential logic of the price system. And that was a huge triumph, a great gift to mankind. And I think we basically got that right. But as Thomas Kuhn has pointed out, when you have a new paradigm, you always say that things are great; you start to answer a lot of questions; but over time you start to peter out. The usefulness of the paradigm starts to peter out. And that happened with the neoclassical paradigm. So, what then happened? Well, in the 20th century, institutionalism was re-resurrected, I should say--it was already there to some extent--to fill in the gaps. The basic insight there was while there's nothing wrong with neoclassical economics and our understanding of markets per se, we have to recognize that they exist in a context; that they rest on an institutional foundation, as it were. And once we did that, then a whole bunch of puzzles became solvable. We were able to make some real progress, including but not limited to Development Economics. We certainly made a lot of theoretical progress. That kind of work resulted in Nobel Prizes for people like Ronald Coase, Oliver Williamson, Doug North. I would argue, though, that that has begun to lose steam. 
We have found that when you drop institutions into less developed countries very often they either do nothing or they are subverted and co-opted and become vehicles of opportunism themselves. So, something else must be missing from the story. Barry Weingast, who is a political scientist at Stanford, has a great way of putting the problem. He said if you needed a copy of the U.S. Constitution, you could always go to South America because there's a ton of photocopies of it floating around in the form of their Constitutions. Yet you don't get a United States down there. And you can't make the standard argument that they don't have the requisite conditions, because as recently as the late 19th century, Argentina had higher per capita income than we did. So, they have all the stuff that they need, and they even have, superficially, a Constitution, and so on and so forth. So, much of the institutional apparatus is there. Or apparently there. And yet they don't get what we get. Because apparently--they have a court, but maybe it's not quite like ours. They have laws, but legislation doesn't quite work in terms of how it's enforced. Etc. So, there is a puzzle, still, which is fundamentally we don't fully understand why some countries do much better than others. Russ: Right. And you are trying to fill that gap. Guest: That's what got me interested in this area in a broad kind of way.

4:59 Russ: So, this thought experiment is to think about what role moral values might play in helping to create prosperity, and you focus on the issue of trust in dealing with strangers in large group situations because that's necessary for specialization. Is that correct? Guest: It is. The way I approach the whole thing is to say: Look, if we are trying to figure out what kind of moral beliefs would do the best job supporting the development and operation of a market system, the first thing we have to do is figure out what exactly needs to be going on to have a well-functioning market system. That stuff's all well known. Basically, Adam Smith is right about this. The issue of distribution is important but not nearly as important as the issue of having enough stuff to divide up in the first place. It really comes down to specialization. Societies that are able to effectuate dramatic specialization through very large scale production are those that are going to have levels of productivity that are many orders of magnitude greater than other societies. And we've known this for a long time, although it's surprising how few younger economists are really aware of how dramatically different the level of productivity is when you allow specialization. I shouldn't say nobody is aware, but many economists don't have the pin factory example memorized, for example. Which I require my Principles students to do, because it is such a shocking increase in productivity. But be that as it may, the question then is: that's what it takes to make it work, but does specialization present any kind of problem? Obviously if it were really easy to effectuate tremendous gains from specialization, everybody would do it. But not everybody does. What's the problem?
Well, when you have dramatic specialization to increase productivity like that, you are going to invite a problem of localized knowledge that is quite similar to the local knowledge problem that Hayek addressed across the whole of society. As you know, Hayek argued that the price system solves a problem, and the problem that is solved is reconciling the localization of knowledge. Because we have a price system, we don't have to know what each other is doing or why. All we have to do is pay the market price, and as a result, we'll pay the full social opportunity cost of using that resource. And that effectuates efficient coordination across the whole of society, even though we don't have to know that much about each other--because everything we need to know is already embodied in that price. That was a fabulous argument. But I would argue that when you look inside firms, which is where all the stuff gets created in the first place, we have a similar kind of local knowledge problem. The larger a firm is and the more complex its production is, the more likely it is that there are people who know things that nobody else knows. Or even can know. And as a result, if people in that situation are not able to take full advantage of that knowledge, we are just throwing away a tremendous amount of efficiency, much like we would be if we didn't have market prices across all of society. Russ: The problem within the firm: there was a big fad in business schools in the 1990s--I don't know if that fad is still going--in management and the business literature about the capital stock of knowledge within a firm--that there was a lot of the specialized, localized knowledge you are talking about embodied in the individual workers.
But they would come and go, so how do you preserve the knowledge that the firm has at any point in time, to make that more efficient despite the reality of turnover? I don't know if they made much progress with that; obviously there is one move toward using prices within a firm; I don't think that's been terribly successful. But it's certainly true that at any point in time within any large organization, whether it's a business or a non-profit, there is an immense amount of specialized, sometimes localized, knowledge that isn't written down anywhere. It's just embodied in the heads of the people who happen to be employees at the time. And how to get that used effectively is a major problem for any successful organization. Guest: Right. And that's kind of a stock conception of it, and that's certainly correct. But the problem is every bit as daunting in a flow sense, which is what Hayek would have emphasized. That is, things are changing constantly; the problems of today are different than yesterday's. And they just come at you constantly. And the person who is in the best position to answer those questions is the one who has a great deal of localized knowledge regarding how a particular area of the firm works. What I introduce in the book is a form of opportunism that has never really been codified in the past. It's what I call third-degree opportunism. That's opportunism of the following form: there's an action set, and other people in the firm--the firm owners or CEOs or whatever--may know only a proper subset of that action set; but a person who is on the ground, as it were--I like to say a middle-level manager--knows a much larger set of actions than that subset.
And if an action that is profitable but not the most profitable is known to the person with local knowledge but not to the others, and the person who possesses the local knowledge knows this, he might pick an action that is good enough but is not the best. And that would not be consistent with maximizing profits and not be consistent with efficiency. I call it third-degree opportunism. It's a very daunting problem because it's a problem that gets worse the bigger firms are. Because the bigger firms are, the more specialized the knowledge, and by definition the more likely you've got a situation in which an individual has an informational advantage over those he would have to answer to or coordinate with. Russ: You are not talking here about these other phenomena, which I'm going to mention, such as shirking, where obviously sometimes an employee can work less hard than his boss might know about and enjoy some leisure on the job. That's not what you are talking about. You are talking about a very specific kind of opportunism, right? Guest: Exactly. I'm talking about a form of opportunism that cuts to the very heart of whether a firm is run in entrepreneurial fashion or in bureaucratic fashion. This is a fundamental tradeoff, because if you aren't able to delegate managerial responsibilities via what we call relational contracts--in other words, contracts that are flexible enough to give discretion to those who possess local knowledge--if we can't do that, then we are throwing away all of the efficiency gains, what I would call Hayekian gains, that would come from fully exploiting localized knowledge. However, relational contracts that would confer that kind of discretion would by definition open themselves up to opportunistic exploitation that constitutes what Bob Frank would call a golden opportunity.
The reason why is by definition if nobody else can know what the optimal action is then there is no way you can be in a sense caught cheating because no one else knows what the counterfactual could have been. Only you know that. So, this is kind of an inescapable problem associated with the efficient use of local knowledge within firms. And I think it's a very deep problem; it's a very fundamental problem; and it cuts right to the heart of production and to the heart of the difference between a bureaucratically run firm and an entrepreneurial firm.
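The action-set argument above can be made concrete with a small sketch. This is not from the book; it is a minimal illustration in Python, with all action names, payoffs, and effort costs invented. The point is only that the manager's choice looks optimal against everything the principals can conceive of, so the shortfall is invisible to them.

```python
# Hypothetical sketch of third-degree opportunism. All names and numbers
# are invented for illustration.

# Payoff to the firm from each possible action.
firm_payoff = {
    "routine_fix": 100,       # known to everyone
    "vendor_switch": 120,     # known to everyone
    "process_redesign": 200,  # known ONLY to the local manager
}

# Principals (owners, CEO) can conceive of only a proper subset of actions.
known_to_principals = {"routine_fix", "vendor_switch"}

# The manager's private effort cost of each action (also invented).
effort_cost = {"routine_fix": 5, "vendor_switch": 10, "process_redesign": 100}

def opportunistic_choice(actions):
    """What a self-interested manager picks: firm payoff net of own effort."""
    return max(actions, key=lambda a: firm_payoff[a] - effort_cost[a])

def efficient_choice(actions):
    """What pure profit maximization would pick from a given action set."""
    return max(actions, key=lambda a: firm_payoff[a])

chosen = opportunistic_choice(firm_payoff)                    # "good enough"
best = efficient_choice(firm_payoff)                          # true optimum
principals_benchmark = efficient_choice(known_to_principals)  # their best guess

# The chosen action matches or beats anything the principals know about,
# so the forgone profit (200 - 120 here) can never be observed as cheating.
print(chosen, best, firm_payoff[best] - firm_payoff[chosen])
# → vendor_switch process_redesign 80
```

Because the principals' best conceivable action coincides with what the manager delivered, no audit can reveal the counterfactual; that is what makes this a golden opportunity rather than ordinary, catchable shirking.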

13:35 Russ: But it goes beyond that. There are so many transactions where--and you talk about this in the book--you and I are going to make a deal; there's going to be a contract between us. Not a handshake deal; there is a contract. But it's impossible for the contract to specify all the possible conditions, including conditions where I might do something on your behalf without your knowing it's even possible. Because you don't have that localized knowledge. And I always think, when I think about these kinds of problems, about selling or buying a house, where we have this unbelievably important asset being exchanged for money and we have this unbelievable set of paraphernalia and bells and whistles surrounding it--title and page after page of contractual agreement on all sides, about what we are going to do on each other's behalf. But despite all of that, we leave much of it unspecified, because it's too costly to specify everything; and more importantly, we can't anticipate everything that could happen. And so inherently at some point there is either trust or there is recourse to legal action; and of course legal action is really unpleasant. So, obviously the more trust that's involved the better, because we avoid the complexities of legal action and all of its costs. But you have to trust the other person to a certain extent. And how do you generate that trust? Especially in the situation I want to focus on, because it's the center of the book for the rest of the way out, which is: one of the parties knows something nobody else knows, and knows that by either taking an action or not taking an action, good or bad will occur. How do you get that person to do the right thing?
And if you can do that, if you can get a world where people do the right thing even when they are not observed or monitored, you can really exploit these potentials for specialization, trade, and exchange; and you won't be able to exploit them if that trust isn't there. Is that a good summary? Guest: I agree with that totally. I think that's basically correct. My particular approach doesn't really view it as what do we do to make that happen, although I have ideas, of course. I basically am working backwards. Russ: What's necessary to be true. Fair enough. Let's turn to that. Obviously there are many other ways you can check opportunism generally. There are repeated dealings, reputation, police, rules, monitoring of various kinds. But we're going to focus on the kind that is most difficult to monitor or observe; that's really important to keep in the back of your mind as you are listening to this. Because obviously markets and societies find ways to deal with many of the problems associated with opportunism. This particular kind is special. Guest: It's special, but it is more frequent and more fundamentally important than one might first suspect when one first thinks about these things. Part of the reason why is that most of the cost is unobserved. Most of the cost takes the form of economic organizations that don't exist, or institutions that don't exist. So, I would argue that the preoccupation with incentive-compatibility mechanisms is the result of a kind of survival bias. In other words, you study what's there to see; and for most of human history, what we have observed are institutions that exist to solve the kinds of problems, like shirking and so forth, that are pretty frequent--precisely because being able to trust people you don't know is something that has been extremely rare throughout human history. It's even rare today, but if you go back 500 years or so, I would say it was essentially nonexistent.
Nobody had the kind of moral beliefs that would be required to get you to a condition of genuine and generalized trust at the same time. Russ: So, something has changed and part of your argument is going to be, although you don't deal with this in depth in the book: that change helped facilitate the explosion in our standard of living. Guest: That's right, and actually, I'm writing another book now that deals exactly with that issue. That's a huge issue all by itself.

18:31 Russ: Let's go back to the moral issue now, which is: What's necessary to create behavior on the part of individuals to turn down, reject, and resist the chances to be opportunistic when nobody is watching? What do we need? There are a couple of things that you need. Guest: Number one, the person's predilection to be trustworthy cannot be merely an exercise in incentive compatibility. Which is what most economists want to do. They want to model trust behavior and trustworthiness as an exercise in incentive compatibility. Russ: Explain what you mean by incentive compatibility. Guest: It's the idea that it's an exercise in enlightened self-interest, because it's in your own best interest to behave in a trustworthy manner. The most common example is to say: Markets breed honesty and honesty breeds markets. Suppose you've got a guy and he's a car mechanic. If he behaves in an untrustworthy way, it gets back to the customers; he has less business. If he behaves in a trustworthy way, he gets rewarded for that by virtue of having more business. And so on and so forth. So, that's an example of the kind of argument that most economists like to make about trust. Which is: It's no big deal; it's easy to explain. It's in your own best interest to be trustworthy anyway. That's all well and good, but the problem is, if that's all there is to trust, then trust is going to fall down exactly where the word is most meaningful. This is such an empty approach that Toshio Yamagishi, who is a pretty famous social capital theorist and sociologist in Japan, says this isn't even trust at all. We should call it assurance; that's all it is. Russ: I agree. I don't trust you; I just know you are going to act as if you were trustworthy. Not the same thing. Guest: And Oliver Williamson is very dismissive of a great deal of the trust literature; he would say that this is what he would call calculative trust, which is a contradiction in terms anyway.
So, that approach falls down in any situation in which a genuine golden opportunity is possible. Russ: Explain that again. Guest: A golden opportunity is a situation in which the person who may or may not behave in an opportunistic way believes there is zero probability of being caught. In any way, shape, or form. They can do it and they can get away with it, perfectly. Russ: And this terminology comes from Robert Frank. Guest: Yes, Bob Frank first introduced that phrase, I believe, in 1988 in the book Passions Within Reason. That's the first place I ever saw it. You've got to be able to deal with that. And so, Frank's argument--and I think he was absolutely right, although he was kind of dismissed at the time--was that the only way to bust out of that is for trustworthiness to be based on moral taste. If it's in any way an exercise in rational behavior, it's not going to work for a golden opportunity. So, the thing that's producing the trustworthiness has to be in a sense pre-rational, antecedent to the rational calculation problem. So, he said it had to be moral taste. It was a heretical thing to say when he said it, and people have largely dismissed it. And I think that's been a huge mistake. Russ: They dismissed it because economists generally don't like arguments based on taste. They prefer arguments based on prices, incentives, institutions, as we talked about. But this is basically saying you'd better have a taste for being good. Or for not doing a bad thing. It had better be part of your makeup, to solve that. And that is an unappealing argument methodologically. It could be true--which is the problem--but it's unappealing methodologically, partly because you don't want to be in a position to say: Well, the way we'll make the world a better place is we'll get people to be better people. Most economists are uncomfortable with that kind of logic. But that doesn't mean it's not true. Guest: This one's also uncomfortable with it.
I don't like arguments that are grounded in taste, but nature doesn't care what we like. The explanation just is what it is. If it is indeed the case that tastes carry the day, then it's incumbent upon us to move forward with that as our working theory. It turns out things are not quite as bad as people think, and we can circle back to this later when we talk about culture. But anyway, you were asking what we need: Well, first of all it needs to be taste. That's where Bob left it. He just said it's got to be taste. I pushed the ball down the field by asking: if it's got to be taste, then what kind of moral taste? And then I worked through the thought experiment to discover, first and foremost: suppose the reason you think something is wrong is the harm it does to other people--which is, by the way, what I would call harm-based moral restraint, and that is kind of the foundation for why most of us are reluctant to be opportunists. If the harm that's done is the only reason you won't behave opportunistically, then the problem is, if you are in a situation where you think nobody is going to be harmed by your opportunism, you'll still be opportunistic. And just think about it for a minute. That is not a big problem in a very small-group society, where you live in hunter-gatherer bands or small tribes. The number of people involved is fairly small, so even if we don't get caught, we do know that our actions might measurably harm someone we care about; or maybe we don't care about him, but we don't want to feel like we hurt somebody. Russ: By the way, we should mention: guilt is a lot of what we are talking about here. Talk about that for a second. Guest: Guilt is the mechanism through which all of this works; and the question is how do you put guilt to work? You put guilt to work by having moral values that actuate it.
The point of my book is that moral values are important, but even more important is how they are structured, because otherwise you are not going to get guilt triggered in the right sort of way. Russ: And this point about small versus large I found very interesting, because basically what you are saying is that guilt is going to be triggered by empathy. When I realize that I'm harming someone, I'm going to feel bad about that, which I think is a universal truth. We may differ in how bad we feel about harming others and differ dramatically in how we emotionally react knowing we've hurt someone; but the insight you have, which I really like, is: you might be wrong, but if you don't believe you are hurting anyone--either because you don't perceive it or the harm is so small, spread out across many people, as it would be in a large group--the guilt is going to be very small. And you give the example, which I thought was very good, of a false insurance claim. Explain how that would work. Guest: The basic idea is that usually when we do something opportunistic in a small group, somebody gets hurt and we feel guilty about it. But the greater the number of people in the group over which the cost of that harm is divided, the more likely it is that there will not be a single human being who is harmed and whom we can therefore empathize with, and therefore sympathize with, and therefore feel guilty about having harmed. Russ: Or, if they are harmed, it's by such a small amount they might not even perceive it. Guest: At some point we don't even have to make that qualification. If I exaggerated my income tax deduction, if I got $1000 more back from the government than otherwise, there is not a single person on the planet who is harmed. There isn't. We don't even need to quibble. We are talking about way less than a penny per person in the United States. People can't even perceive that. It's not even there. Noise swamps it by orders of magnitude.
So, no one is harmed. And that's why many people who seem to be nice guys and seem like they would never do anything to hurt you or your family or anybody, very generous, good people, might cheat on their taxes. Russ: Or inflate their expense account at work. Guest: Exactly. And that's a fundamental problem. It's a problem everywhere, but it's an especially big problem in countries outside the West. Outside the West, if people feel like they are not hurting anybody, they really feel like they can just do whatever they want as long as they don't get caught. So, you are only left with incentives to combat opportunistic behavior. So, the point of that is that harm-based moral restraint is not enough to deal with the empathy problem; and the empathy problem is fundamental because it's a problem that gets worse the larger the group size is. And you are going to be an impoverished society if you can't sustain very large institutions, large markets, large firms. Bigness is the key. Smith is right, and getting big means that our hardwired sense of moral restraint is going to fall down on the job. Russ: Because that's a small group thing. Guest: Right. Because we are a small-group species.
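The arithmetic behind the tax-deduction example is easy to check. Here is a quick back-of-the-envelope in Python, using a rough 2012-era U.S. population of about 310 million (my approximation, not a figure from the conversation), and contrasting it with a hunter-gatherer-scale band of 50:

```python
# Per-person harm from a $1000 overclaim, at two group sizes:
# a small band versus the whole United States.
overclaim = 1000.0
for group_size in (50, 310_000_000):
    per_person_cents = 100 * overclaim / group_size
    print(f"group of {group_size:>11,}: {per_person_cents:.5f} cents each")
```

In a band of 50 the same $1000 is a very visible $20 a head; spread across the country it is roughly three ten-thousandths of a cent per person, far below anyone's perceptual threshold, which is exactly the guest's point about group size diluting harm-based guilt.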

28:42 Russ: Let me raise Immanuel Kant here for a second. The only thing I understand about Kant--which I think is an important thing, though--is the categorical imperative. In the categorical imperative he says that when you are trying to decide whether an action is wrong, you should imagine what would happen if everyone did it, if it were common practice, rather than just you doing it. And that's his way of solving this problem, right? I always use the example: sampling all the fruit in the grocery store, or reading all the books in the bookstore while drinking coffee, which most people say: Well, that doesn't hurt anybody; it's no big deal. And to some extent that's true; but if everyone did that instead of buying the fruit or the books--just consumed them while they were there--there wouldn't be grocery stores or bookstores; and I consider those immoral acts. When I tell people that, they get mad at me. But I think that's correct. And that's one way to solve the problem. But you don't deal with that. Or do you? Guest: No, I do. In the book, after I completely work out the whole position, I compare the moral foundation to what other philosophers have had to say, and one of them is Kant. I think what Kant was doing was giving a rigorous voice to changes in moral beliefs that were already underway. In other words, I don't think he's somebody who brought about these changes. I think he's somebody who is simply echoing them. They are already in the culture, and he is codifying them and making them rigorous. I think that people who like Kant or know Kant are going to say: principled moral restraint, which is the thing that I'm going to say solves the empathy problem, makes a lot of sense to me. Principled moral restraint is the idea that I'm not going to take this particular negative moral action, not because of the harm that it does, but because I believe it's wrong in and of itself. Russ: Even though it would benefit me.
Even though it's in my own self-interest. Aside from the guilt. With the guilt aside, say: it's in my financial interest to do this, and I'm not going to get caught, but it's morally wrong, so I'm not going to do it. Guest: Right. And many economists balk at this. Not to pick on Oliver Williamson, but he and I have argued about this over the years a great deal. And I would say to Oliver: Suppose you are at a 7-11, and there's only one person working and he has to duck into the bathroom. And suppose you knew the security camera wasn't working, so you knew with certainty you could steal a candy bar and get away with it. You are not going to steal that candy bar. I know you are not. You know you are not. And you know that I know that you are not. Russ: And it's not because: Well, maybe the camera really is working. That's not the reason. It's just that you don't think it's right. Guest: I think that's Oliver exhibiting principled moral restraint without realizing it. Russ: Well, I think a lot of economists are uncomfortable--it comes back to my methodological point. I think everyone accepts that as true. And I think there are economic ways of looking at it: if the candy bar would save your child's life, even though it might be wrong, you might be more likely to steal it than just to appease a sugar craving for a few minutes. I'm willing to accept the idea. Guest: Sure, but there's a qualitative difference between stealing the candy bar to save your child's life--saying, I know that stealing it is wrong but I don't care, I'm going to save my child's life--versus not believing it's even wrong in and of itself. Russ: I agree with you. Guest: There's an example that can tease that out. Russ: I'm just agreeing with you that people do act that way, they feel that way, they refuse opportunities because they think they are wrong; but economists may be uneasy about invoking that for methodological reasons.
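The contrast between calculative ("assurance") trust and principled moral restraint can be captured in a stylized decision rule. This is a hypothetical model, not the guest's or Frank's formalism, and all numbers are invented: a purely calculative agent is deterred only by expected punishment, so a golden opportunity (zero detection probability) always defeats him, while an agent whose guilt fires regardless of detection is not defeated.

```python
def takes_opportunity(gain, p_caught, penalty, guilt):
    """Stylized rule: act opportunistically only if the gain exceeds
    expected punishment plus the guilt the act itself triggers."""
    return gain > p_caught * penalty + guilt

# Purely calculative agent (guilt = 0): deterred only by detection risk.
golden = takes_opportunity(gain=2.0, p_caught=0.0, penalty=50.0, guilt=0.0)
watched = takes_opportunity(gain=2.0, p_caught=0.5, penalty=50.0, guilt=0.0)

# Principled moral restraint: guilt fires even at zero detection probability.
principled = takes_opportunity(gain=2.0, p_caught=0.0, penalty=50.0, guilt=10.0)

print(golden, watched, principled)  # True False False
```

The first case is calculative trust failing exactly where trust matters most; the second is Yamagishi's "assurance"; the third is the candy-bar case: the theft is free, and the agent still declines.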

33:01 Russ: So, principled moral restraint is obviously, undeniably a way to solve the opportunism problem; but you have more to say about it than that. Guest: Well, it's a necessary but not sufficient condition for solving the opportunism problem. It solves the empathy problem, but there's another problem. Russ: The empathy problem meaning that you might have trouble feeling that there are actual people being hurt. Guest: Right. Even if you solve the empathy problem, you have another problem, and that is that someone could feel guilty about undertaking a negative moral act, let's say an opportunistic act--they feel extremely guilty about it because they possess principled moral restraint; maybe somebody's hurt, maybe somebody's not, but that's beside the point in this case. So they have principled moral restraint; there's no issue there. But they may also feel guilty about not being able to take a positive moral act--one they could only take if the negative moral act is undertaken. This is what I call the greater-good rationalization problem. And it is really a huge problem, because this is a device that many advocates use to rationalize their actions in ways that after a while we come to take as reasonable, but not so long ago we would have viewed as patently wrong. Russ: So, give an example. Guest: In the United States today, the conversation begins far downstream of whether it's legitimate to take money from other people to solve some kind of social-justice problem. If you go back to, say, 1870 and you say: I've got an idea. We should have the government take a bunch of money from these people and give it to those people, because these people have a lot and those people don't have very much--the vast majority of Americans in 1870 would have said: You can't do that. That would have been self-evident. Although it would be nice to do this nice thing, that would be an inappropriate use of government power.
It absolutely would not fly. But over time our sense of what's normal becomes a new normal--a popular phrase now--and those kinds of things are just taken as the way it is. Russ: And there are a whole bunch of rationalizations for why that's a good idea. But certainly you are right--there is only a small group of people who would view that as immoral. Guest: Now. Russ: Correct. Guest: But the greater-good rationalization problem is a fundamental problem for trust, because if I believe that you might feel more guilty about failing to help someone you could help by cheating me, even though you like me, I still can't trust you. I have to believe that you are the kind of person who would say: Even though you could cheat me a little and help your nephew a lot, you don't do that sort of thing. You don't even think in those terms. That's not on your radar screen. I need to believe that about you to trust you completely--in other words, to genuinely trust you because I've reached a rational conclusion that you are genuinely trustworthy. Russ: Does that play out in the firm example? Guest: Yes. Russ: How would that play out in that example? I understand it where, let's say, you are really wealthy; I'm buying some property from you; I'm going to use the property for some good cause; and I might convince myself that it's okay to cheat you. And I accept that that's a problem. How would that play out in the firm? Guest: Suppose you are a middle manager in the firm, and you are working really hard, and you work really hard because you believe you are going to be well taken care of by the firm. You are going to get your just deserts. The firm is going to shoot straight with you. You can trust them completely. Things might not work out, because you might make a mistake; you might make the wrong call about this or that investment; but that's different. It's never because the firm might cheat you. Intentionally.
Or the firm might reach the following kind of conclusion: there's a reduction in demand for the product; they've got to let some people go; if they let you go, you'll probably be able to find another job somewhere; you are pretty talented. But there's this other guy who doesn't work real hard at all, not that gifted, certainly doesn't put in the kind of time that you have; but he has somebody in his family with a pre-existing condition, and if they cut him loose, it will be much more difficult for him to get health care. So, in an effort to be nice to that other person, you end up being fired. Now, even if in some ultimate moral system that's the right outcome, it doesn't matter. Your willingness to do that as the firm's CEO affects my behavior and my willingness to trust you, and therefore to make firm-specific human capital investments in your firm, because that could happen to me. That's a quick and dirty example. Russ: That's interesting, but is that third-degree opportunism? Is that undetectable? Guest: Well, it doesn't have to be undetectable. I'm saying this is another issue. There is a hierarchy to the argument; the pieces don't all have to be nested in the same stream of subsets. You asked for an example; I gave you a quick and dirty one. Russ: So, let me just stick with that for a minute. Is the worry then that I as the employee--if I can't trust you, I'm not going to make the investments in myself in the firm that I would otherwise make? There's loss of output there. Guest: Yes. That is true; and that's what I said. But that's not the real point. The real point, fundamentally, is: I'm not able to trust you because you are willing to reduce my welfare, knowingly, even though it's not the appropriate thing to do by the rules of the game--we spelled it out--because you think there's a greater good that can be achieved by doing this to me.
Russ: So, the basic point of this, then, is that because of this greater good possibility, people may do things that violate negative prohibitions. And so what do you have to say about that? Guest: I certainly feel like you've engaged in a negative moral act, because I've lived up to the terms of the contract. By any objective reading of the contract that spells out my employment relationship to you, and the other employees' employment relationships to the CEO, I shouldn't be fired; he should. I got cheated.

40:23 Russ: But you want to find a mechanism for reducing even that. Guest: My point is that you will not have a social norm of unconditional trustworthiness if you don't also deal with the greater good rationalization problem. And the solution to that problem--just as principled moral restraint solved the empathy problem--is lexical primacy of the obedience of moral prohibitions over the obedience of moral exhortations. Russ: Explain that. Guest: There are two types of moral statements out there. There are those that exhort us to take positive moral actions, to do things that most people would say are good and right and proper. And then there are prohibitions against negative moral actions; we are being prohibited from doing things that harm or are inherently wrong. If moral beliefs don't just list a bunch of moral values but also impose a logical structure on those values--such that the obedience of moral prohibitions comes first and foremost, and the obedience of moral exhortations only counts toward the value of a person's morality if they've satisfied the moral prohibitions--then that person will never trade a greater good kind of outcome against opportunism. So, you don't need to worry about being a victim of their opportunism--about their feeling justified in cheating you because there was some positive moral act they felt morally compelled to do, because in their moral belief system that utilitarian comparison compels them to cheat you. You don't have to worry about that because they don't have that kind of moral belief system. They have a moral belief system that says: First, don't engage in opportunism. Russ: So, this is basically: the ends never justify the means. And the emphasis there is never.
And if you know you are dealing with somebody like that, that would be really good, because you know they wouldn't exploit you, justifying it in their mind that there's something better coming. This would make politics very different, I would just say as an aside. Guest: Oh, absolutely. Maybe most people aren't that way, Russ, but you do know people who are that way. Russ: I do. I think it's good to be that way. Let me just make an aside on something we haven't talked about yet, which is--and this is very Smithian--the role of self-deception. For me, once you open up the argument that maybe this is for the greater good, you start down a slippery slope of justifying what you are doing that's really for you, while telling yourself: It's not for me. Of course not. It was for my nephew. Guest: I discuss that explicitly in the book. Russ: I don't remember that part. Where does that come in? Guest: It would be in the chapter on duty-based moral restraint. Russ: Sorry I missed it. That's to me the danger. Many people would argue that under "modern morality" you take the greater good into account. That is the moral action. But my view is that's a slippery slope. Guest: Right. I have a section titled something like: When greater good rationalization becomes self-serving rationalization. And I give some examples of how easily that can happen. Very concrete examples. But I don't remember any of them off the top of my head.

44:07 Russ: So, let's summarize here, because we've gotten into some interesting, complicated stuff. Let me see if I know where we're at, which is: If we lived in a world where we knew that everyone believed that the end never justifies the means, we would know that the person on the other side of the transaction--whether it was within a firm, or across firms with exchange, especially with strangers--could be trusted. And that would allow us to engage in transactions we otherwise either couldn't engage in or could only engage in at great cost, because of the other ways we try to solve that problem. And you are saying there are some that could not be solved in any other way--those would be the golden opportunity ones. Guest: Right, but you are only half right. Russ: What am I missing? Guest: You have to have both principled moral restraint and a solution to the greater good rationalization problem at the same time to produce a condition of duty-based moral restraint. Let's go to the greater good rationalization problem. Suppose by doing a particular thing that's a negative moral act, on paper, I can do this wonderful thing. That's the greater good rationalization problem. But even if I possess lexical primacy, so I'm not subject to greater good rationalization, if I don't possess principled moral restraint, the action I would have to undertake in order to facilitate the positive moral act won't be regarded as a negative moral act if nobody gets harmed. Russ: Correct. Agreed. Guest: Both have to be in play in order to have duty-based moral restraint. And duty-based moral restraint is not even enough to give us the moral foundation. But I figured we would wander to the next stage.

46:14 Russ: So, at this point--I didn't feel this way when I was reading the book--but at this point in the conversation, I'm starting to think: This is hopeless. This is too high a level of moral foundation to expect in our fellow citizens. Guest: Actually, that's something I talk about explicitly in the book. There's a new movement in our society--it's a cottage industry--in character and morals education of children. And I'm sure you've seen some of this in your own life; if you talk to people in public schools, they'll have these character and moral education programs. And what you'll find is that the people who teach in these programs create the impression in children's minds that moral dilemmas are everywhere. Everything is complex and hard and there's no such thing as black and white; everything is a shade of gray. Russ: You buy an apple and it's from New Zealand and you've killed the planet. You'd better study it; you'd better look at it; good luck. Guest: Right. And they make the argument that people like Dave Rose and Russ Roberts are unsophisticated and incapable of the sufficiently nuanced reasoning required to be a truly moral person. My response to that is: Utter nonsense. The problem is, what they are working from is an implicit theory of morality that's actually very, very old. It is just rank utilitarianism. It's a perfectly good approach to morality if you live in a very small group. It gives you the most efficient outcomes if you live in a small group. No doubt about it. And that's what you are hardwired to believe. And that's why they have such an easy time persuading people they are right: they are trying to persuade people of a type of moral belief system that people are already hardwired to be receptive to. The problem is, all of these nuanced analyses and all of these exceptions and conundrums amount to bringing a knife to a gunfight.
They are taking a very small-group sense of morality and applying it to the modern world. What I would say is: the moral foundation is a much more complex set of moral beliefs; but people don't have to know the theory behind the set of beliefs in order to abide by the beliefs. Let me give you a simple illustration from the book. Suppose you had a guy who was a mechanic, and this mechanic had a set of tools like a little kid's Craftsman starter set, but he's working on a 2010 Honda Accord. There are going to be some real problems, because advanced engines are very complex and they require many specialized tools, and so on and so forth. As a result, the only way you are going to get that car repaired is if that mechanic is extremely smart, clever, and creative, and with bandaids and duct tape can somehow get these tools to do the job. Because the tools are not up to the job. The tools are simple, and therefore it requires a very brilliant and thoughtful mechanic to deal with the complex car. Instead, suppose you had a guy trying to make the same repair on the same car, but he has the full complement of tools provided by the factory--really advanced stuff, lasers, you name it, the whole nine yards. It would be a mistake to infer that the car is not complex because it's so easily repaired with the appropriate tools. The fundamental problem is, these people are making the argument--and you are making the argument--that the moral theory is too complex. What I would argue is that what's actually going on is that their theory is simple and it's being applied to a complex situation. The moral foundation's theory is itself adequately nuanced to deal with the very society that it gave rise to, so the actual execution of the theory is quite simple. In other words, the rules of thumb one needs to follow in order to abide by the moral foundation are actually very simple. Russ: I agree with that.
Guest: The complexity of demonstrating that the theory works has nothing to do with the complexity of executing it. Russ: This comes back to my philosophy professor, Dr. Smyth, who, in trying to summarize pragmatism and the thought of Charles Peirce--and it's a very Hayekian insight--summed up one aspect of it as: Your grandmother is right. Meaning your grandmother has a bunch of rules of thumb about right and wrong--don't do this, do this, do that--and if you ask her why or why not, she doesn't have an explanation. She just says: That's always the way it's been and that's the right thing. You are suggesting that if we live that way--or rather, the fact that we have lived that way for a long time--is part of the reason we are so successful as a culture. And as an economy. Guest: Yes, and living that way doesn't require twenty hours of schooling. It requires many years of continuous reinforcement in order to build the character that produces the moral conviction behind a belief, but the beliefs themselves are pretty simple. Don't do negative moral actions. Just don't do them; and just because nobody gets hurt, that doesn't mean you can do it, either. Because it's not about the person who is getting hurt or not hurt; it's about you. If you steal, even though nobody gets hurt, you are still a thief. So don't do it. Period. Don't even consider it. Don't even run it up the flagpole. That's not that complicated. And then secondly, if somebody says you should do something that you know is wrong, but that it's okay because there's this other good thing over here that you can make happen, you need to realize that that is the language of a charlatan, that it is inappropriate, that you are being sucked in. We don't do things like that. Russ: Some of us try to raise our children that way; some of us do not.

53:17 Russ: Let's move away from the morality. Let's talk about the implications for growth, development, and our standard of living. If this is correct--and much of it seems correct to me--there are two implications. One is: Societies, cultures, that have successfully inculcated the view that stealing is just wrong--don't do it, you never want to perceive yourself as a thief--whether that's done through religion or other cultural means, those societies find it easier to specialize and grow. Societies that haven't inculcated that--again, there's no thing called society that tries to, but societies with individuals who have not adopted those beliefs--are going to find it much more difficult to grow and be successful, because specialization and exchange in large groups is going to be much more difficult. Two questions. Number one: What's the evidence that this is true? It has an appealing casual plausibility to it; might there be some specific evidence that it's true? And the second question: It seems to me--and we've talked about this informally in the last few minutes--that there's been an erosion of that moral imperative in the United States, at least over the last 30-40 years. Do you think that's true, and do you see any signs that it might make a difference in how we behave toward each other? Guest: Well, as far as evidence, we do have empirical work on measured trust across the world, and measured levels of trust do co-vary well with economic performance and general quality of life in societies. That suggests that however it is they are able to achieve this trust, if they can, it does pay off. That doesn't clinch the argument, but it's certainly consistent with the kind of evidence that we would need to see.
Russ: Aren't there people who have done experiments--this reminds me of the experiments where you leave a wallet in the middle of the street. In some cultures, if you find a wallet that isn't yours, you stuff it in your pocket as quickly as you can and hope nobody is looking, and nobody says: Hey, what have you got there? You take the wallet home, take the money, and dump the rest in the garbage. But there are other cultures--and we know this happens--where people find that wallet and return it to a stranger with the money in it. Guest: And if a person were asked to come up with a list of societies where they think most people would act the latter way, they'd probably be right. Their preconceived notions are basically right. And most of those societies are well-developed and prosperous societies. But my point gets behind that point. My point is that in order to get to that condition, moral beliefs have to have a particular kind of structure. If they don't have that kind of structure, you won't have unconditional trustworthiness, and you therefore won't have an environment of trust. Because it will be unsustainable. People will not extend trust if they are continuously punished for doing so. If it's not rational to extend trust, you don't. Russ: Like a sucker; and after a while you'd rather not be a sucker. Guest: Right. Russ: The second question was: Do you sense an erosion of these attitudes in Western society? And one thing you might talk about is: Where do those views come from? Folk wisdom? Religion? Does it matter? And where are we headed? Guest: Robert Putnam has documented a pretty much across-the-board reduction in measured levels of trust. He's focused on social capital, but he does measure trust directly. Eric Uslaner has also done this. From 1950 until the present, it's pretty grim. In the United States, the downward slope is clear.
Measured levels of trust and trustworthiness are both going down through time. Russ: I was going to interject--I don't believe in the Great Stagnation, which we interviewed Tyler Cowen about on this program--but this could be an underlying cause of it, if you believe in it. It does raise the question: we've been a pretty successful economic society since 1950, so you have to explain why, despite that erosion, we've done so well with large-scale specialization in organizations. Guest: Well, Adam Smith once said: There's a lot of ruin in a nation. Russ: True. Guest: Charles Murray has made a kind of similar argument, that in Scandinavia things are moving in the wrong direction. But they've built up a huge pile of cultural capital. They are going to have to make a lot of withdrawals from their cultural account before they get close to the margin. But in my view this reduction in measured trust does comport well with changes in moral beliefs in our country. I've detected it over the course of my own life. The kinds of things that people say now, or said five years ago, would have been laughable even when I was a kid. Let me give you a quick and dirty example. When Jesse Jackson was caught moving funds from the Rainbow Coalition, which he directed, to a woman he had impregnated--and he was caught dead to rights; there was absolutely no way of defending the behavior; he got nailed--many people said: Well, but you've got to look at the whole picture and all the good he's done; give him a break. I really do believe that in 1950, if someone had said that publicly, they'd have just been laughed at by virtually everyone. Are you kidding me? The guy basically engaged in overt fraud.

1:00:09 Russ: I don't know. It's an interesting thing. This is part of the challenge of this kind of research agenda--and I don't say this as a criticism, because I think what you are trying to do is extremely ambitious and it's very interesting. But there is still a remarkable amount of moral sanction and shaming and other activity directed at people who are fraudulent or who cheat. Just look at baseball. There's some variance in how people respond to the steroid issue, but so many people just say: Well, they cheated; it's wrong; end of story. Well, everyone else was doing it. Doesn't matter; it was wrong. Well, they didn't really enforce it. Doesn't matter; it was wrong. Guest: You are conflating two very different functions of the brain, though. You are conflating conviction born of a deep belief with a habit of mind. Are you familiar with Kohlberg's stages of development? Russ: I am. I don't like them. Guest: Many people who are just very mechanical about their moral beliefs really are behaving in a simple-minded kind of way: Cheating? It's just cheating; you don't do it. It's observationally equivalent: you can't distinguish somebody who abides by the moral foundation in a self-aware, unyielding way, with deep conviction, from a person who--whatever the rule is and for whatever reason--simply accepts it and employs it. Let me give you a counterexample to what you are talking about. There have been numerous studies of cheating by high school and college students. This has been looked at over and over again. The amount of cheating has never been zero, of course, but it has gone up dramatically in the last 25 years. Moreover, in the past, when students explained why they cheated, they almost never excused the cheating; they never downplayed the moral import of it. They would say it was wrong but they had to do it.
Today, though, increasingly--I don't remember the proportions, but they are shockingly high--most students report cheating at least once; and a shockingly high proportion of those who report cheating at least once say: What's the big deal? In other words, they make an argument that is very consistent with the absence of principled moral restraint. Because their argument is: I cheated; so what? Nobody got hurt. I didn't take anything from anybody. Nobody's worse off. The teacher's not worse off; I'm certainly not worse off; nobody in the class is worse off; what difference did it make? And the answer of course is, at that margin it makes no difference at all. But my point is that it's indicative of a shift in moral beliefs themselves, the way we organize our thoughts, and it's very frightening. Russ: But you are suggesting--what your book suggests--is that this change in our view of what is right and wrong, assuming it really is happening, and I think it could be, is going to affect our economic activity because of the change in trust. Guest: Well, yes. And I think it already has to some extent. Look at the kinds of loan behavior that went on in the financial crisis. I know people in the real estate business, both on the mortgage side and on the house-sales side, and it was just amazing what was going on. Many people knew that what was going on was wrong, and that they just shouldn't be doing it. But they thought: well, nobody's dying because of this. Russ: Or: the ultimate people who are going to pay for this--it's going to be spread out over a large group; a corporation will lose the money, or it might be taxpayers; in fact it's good, because I'm putting a person in a house. I'm thinking of the person who convinces somebody to take out a loan without requiring the documentation or the credit rating that would be necessary; both parties wink and say: Hey, this is good.
Guest: And that con worked okay, and almost nobody ever got hurt, as long as house prices kept going up. The standard argument was: well, worst comes to worst, if you can't handle it, you sell it. What's the big deal? But prices go down, and there's the big deal. Russ: The fundamental question for this approach is--of course, these kinds of things happened in the past--have people changed the way they feel about them? Did they feel differently in 1880, 1920, 1960 versus today? And the answer is: Maybe; I don't know. Guest: Well, there are tricks--obviously we can't go too far back--but there are tricks to teasing out these things with survey data that matches up to trust-experiment data. I'm working with some economists around the world to develop variations on well-known trust experiments that can be put together with data generated from self-reported information about how people feel about various kinds of moral statements--they order them and categorize them. From how they categorize these things in a survey instrument, we can infer how close they come to the moral foundation; then we can match that up to how they performed in the trust experiment, and see whether people whose self-reported valuations comport with the moral foundation actually are more trustworthy.