0:36 Intro. [Recording date: April 21, 2011.] Game theorist; recently wrote an afterword for the 60th anniversary edition of von Neumann and Morgenstern's classic work on the theory of games. You wrote it from the position of a skeptic. Excerpt from the essay: "So, is game theory useful in any way? The popular literature is full of nonsensical claims to that effect. But within the community of game theorists there is sharp disagreement over its meaning and potential usefulness...." So, this is kind of a shocking thing to write as an Afterword. What kind of reaction have you gotten from your colleagues? First of all, I liked the way you read it. The response has been very positive, though it might have been negative a few years ago. I am a skeptic, it's true, and I'm not ashamed of that--in fact, the opposite. I think being a skeptic, as an academic, is the fuel of academic life. One of the problems of economic theory in general is that there was too little skepticism and too much feeling that we are on the right track toward fulfilling goals that, personally, I did not find believable. Second, there is one point I emphasized in the part you read, which is the fact that there are many game theoreticians who do want game theory to be applicable, applied. From the beginning of my academic life, I have never felt any obligation to do something useful, something applied. I don't think that's a sin--not at all. If something I do paves the way to the moon or to curing cancer, I would be very happy, but I don't believe that's the case; and I think in academic life in general we should look for a much more modest usefulness--more from an abstract point of view, from an intellectual point of view, and not in the sense in which most people understand the word useful. Now, when you ask about the response of the people around me--well, people have known me for many years and I did not hide my views about this.
It's true that I was probably less clear about my views when I was very young. I remember the first time, around the early 1980s, I was participating in some conference, and somebody told me he had done some experiments that were not very consistent, he thought, with the model I wrote about bargaining. And I didn't understand what he wanted from my life--namely, of course the model does not have to predict anything; it does not have to be verified in life; it's not a model like a model in physics. And it was not supposed to be useful in any way. I called it a model of bargaining because I wanted to emphasize that there were many models of bargaining; my model is just one story; there could be many stories; and if it's useful, it's only in an indirect way. So, I think in time people got to know me better and understood that these are my views. It's true that only in the last 10 years or so, especially since the late 1990s, have I expressed my views more coherently, more clearly, about the role, the usefulness or the non-usefulness, of game theory. Now, I have to say also that some people did not like it at all. It's not that I got threats to my life. But I fully understand why. They feel they should do something useful in life. I say this very positively about them. I think there are people who were educated, brought up, on the idea that they are not allowed just to be professors in universities, just thinking for the sake of thinking; they need to do something that will be useful in the regular sense of the word. For them it's not easy to accept this position. Now, of course, I should also say--especially in the last 15 years or so; it was not like that 20 or 30 years ago--we are also experiencing some economic interests at play. Once game theory became applied, in the sense that there are companies and individuals becoming consultants and giving advice, and getting nice amounts of money in return, then of course interest also plays a role.
We economists are also human beings and economic agents; incentives are something we also take into account. So, I am not accusing anybody of cheating--nothing like that. There are very moral and very nice people around game theory. But I think interests are also involved. Some people feel their work is very useful in a concrete way, whether driven by idealistic positions about academic life or by more human incentives and interests, like other people. So, I don't feel isolated.

10:04 One of the nice things about this position: it's not like every day I get responses or many emails from thousands of people. But it's nice to get very positive responses from young people, from students. It's nice because it's coming from something more pure. In general I believe in young people more than I believe in people my age. Listeners will hear some echoes of a recent interview we did with Freeman Dyson, where he talked about the value of being a heretic and made some similar comments about the reliability of science. We're talking now about social science and game theory in particular; but it is a fascinating thing. In my experience, when you confront people with the idea that their certainty is not as high as they claim it to be, their reaction is very angry and aggressive. They don't say: That's interesting; maybe I should be a little more open. Actually their reaction is that you don't know what you are talking about, you don't understand, you don't know the field. But in your case, they can't say that. You know what you are talking about and you do know the field. Awkward. Some people must be very defensive. Some are; and some will probably not say it to my face, but will probably say some bad things about me. And I will not hate them for that. I understand that this is the game of life. For saying that, I don't need to know game theory. One point I would like to emphasize is that many game theoreticians, and economic theoreticians in general, could make a lot of difference in the world. My argument would be the following. I think what happened is that economic theory attracted many smart people, many very smart people. The difference between game theory and economic theory on the one hand and pure mathematics on the other is that the people who come to economic theory very often also have a very good touch with the world.
Not only is their IQ very high, but to do game theory you also need to be in touch with the world, in the sense of understanding how people argue, what is important in a situation, and so on. Now, that's not something you learn in game theory; but I think if you are a good game theoretician or a good economic theoretician, this is the sensibility you need to come with. I think what's going on is that many of the people who work in game theory and economic theory have this special combination of analytical ability and some touch with the world, which makes them very valuable independent of the theories or the concepts they come up with. I do believe that if you take an intelligent person--like many of the names; I don't want to mention particular names here--and you tell him that for the next year he will not do game theory but just consult for an American government agency or the Israeli army or some private company, that person will give much good advice; and also much very bad advice. But some of the good advice will be quite original and valuable from a social or economic point of view. This does not mean, however--not at all--that the people who describe themselves as game theoreticians, who are game theoreticians, really use the theory itself in any concrete sense. Of course, game theory, like logic, like the Talmud, like philosophy--these are fields that sharpen the brain; and in this respect they train people to contribute something to the world. To be a successful academic, you have to be original, and originality is not something you learn. It's something you are born with, or your mother gave you with her milk. That makes many game theoreticians quite valuable from a social point of view. But the one thing I disagree with is that they really use game theory.
One of the things that motivates me in life is my hatred of pretension, my hatred of the use of authority in cases where there is no real authority to base it on. It's very tempting; in my own life I have also been confronted with this temptation. It's very tempting to say: I know something that you don't know; I am a professor; I am a game theoretician; I know some people who won the Nobel Prize; etc.--and therefore, listen to me. I know something that the layman, the politician, doesn't know. Very seductive. It's tempting, but I think we should resist it, and personally I hope I have succeeded in not being tempted by that. In the last 10 years or so, I have many times talked at public gatherings and public lectures in Israel, and I have been in the newspapers in Israel from time to time; and one thing I emphasize from the first sentence is: Listen, I am talking to you as a citizen of Israel and of the world, but not at all as an expert in anything that is relevant.

17:08 That's a very deep observation. I've been struck by it in the current economic situation. I may have made this joke before, in an earlier program: in the middle of the crisis, people came up to me and said that people must be coming to me for answers. And I always say: Not the smart ones. Because I don't have any answers. I may have an insight or two from the study of economics, but people don't want an insight or two. They want the truth. They want the answer, the solution. And, as you say, there's such a temptation, a seductiveness, to being treated with honor and glory as if you are something special. Exactly; and you use the word solution, which people look to so much. Part of the success of game theory is the rhetoric that was used in game theory; and the term solution--we talk about solutions in game theory. We don't just talk about equilibrium. We talk about solutions. This word, in English or in Hebrew, I think triggers people to believe that here we give something very concrete: the solution to a dilemma, to some conflict. There's a famous example of a shipwreck--I think it's in a novel by Joseph Conrad, Typhoon--where all the jewels and boxes of treasure that people have put on board get all mixed up together. And the problem now is: how do you get people to tell the truth about what was in their box? Everyone will have a tendency to exaggerate. So, there's a very elegant solution--we'll not talk about it here--that will encourage people to tell the truth. The idea is that you don't necessarily give people back what they say, but you compare what each person says, and what everyone says, to the total amount. A beautiful, elegant piece--I think there was a little article on it in the Journal of Political Economy--but the idea that that would work the first time as a solution in the chaos, in the aftermath of a typhoon, when people are traumatized--that they would just all act like the game theorists say they would--is stupid.
There's a logical beauty to it, but no captain would use it. He'd be a fool to use it. I think this is true of many of the so-called game-theoretic solutions. It's like logic. It shows something; we learn something from it, something which might or might not be useful. But to say: I wrote down the model, I have some implementation problem, and I found some mechanism whose Nash equilibrium--or whose only Nash equilibrium, or only whatever-equilibrium--is something I would actually like to implement; to say this is something close to being a recommendation for the world--I wouldn't call it stupid, but I would call it pretentious, without good justification.

20:21 The other thing I'd like to observe about your point, which I think is very true, that people like to be treated with seriousness--it's interesting to me that most economists would generally be insulted if told that most of their insights come from the fact that they are really smart. Yes, it's useful to have studied economics; it hones the mind. I would even go farther than that--it sensitizes you to unseen effects, to the effect of incentives, to the role of market forces which are sometimes difficult to be aware of; and as a game theorist you are sensitized to strategic interactions. And those can be very important. And yet no one says: This had nothing to do with my training. It's just me; I'm smart. That might be a nice identity. But as you point out, people don't want to sell themselves that way. They sell themselves as the keeper of the flame, the person who has this insight from the club, from the cadre of insiders called economists or game theorists. I find that psychologically interesting. I think many people would love to convey that they are smart, but it's very difficult to say: I'm smart, my IQ is such-and-such, listen to me. It's very easy to say: I have 50 papers in Econometrica, and therefore I am an expert; listen to me. In general I don't believe in [?]. Somebody wants to get it, somebody wants to pay for it, somebody wants to listen, somebody would like to cover himself. The worst case for me is when people are presenting political views--in my own country, where some people are supporting their political views with a sort of scientific justification. That only happens in Israel. Doesn't happen in the United States. We're all just scientists here. We have the same problem here. The worst part is, it's dishonest. We pretend we are scientists, but we are not. We are cloaking our biases in scientific rhetoric. I would be softer than you on one point: I would not say it is dishonest.
I would say we are cheating ourselves: the desire of people to make claims which are not just their views, not just derived from their ideological positions, is somehow so strong that people end up persuading themselves. I don't want to impute any evil, any dishonesty; I think some of the people who do it are honest people, fine people, but I just think they are wrong in their positions. I agree with that softening. In our series of podcasts on the Theory of Moral Sentiments, there's the line from Adam Smith, paraphrasing slightly: Man wants to be loved and to be lovely. We want people to think highly of us, and we want to earn that respect and honor. So, we have a tendency to delude ourselves that we've actually earned it when in fact we haven't. The basic impulse is good. It's good to be liked for genuine reasons. But we have a tendency to self-deceive. As you say, it comes with the mother's milk.

24:29 I want to talk about a rather iconic example from the behavioral economics literature that you've critiqued in a paper on behavioral economics; it happens to be an example that came up in the podcast with Dan Pink and may have come up another time--I've forgotten which podcast. The now-famous example of an Israeli daycare center. They had a problem: people were coming late to pick up their kids, taking advantage of the daycare center. As an experiment they instituted a fine, where if you were late picking up your kids you had to pay an additional fee. They found that it actually increased the number of people who came late; and this was seen as a potential refutation of downward-sloping demand curves. I think a lot of the claims for this example are overblown, but I want to get your reaction to the science and objectivity of it. I prefer to talk in general about experimental economics, its methods and assumptions. My interaction with experimental game theorists in economics started when, years ago, I met Amos Tversky. He became quite a friend of mine, until he died in 1996. At one meeting, I told him: Amos, I don't understand what you are doing. What you are doing is completely obvious. I don't understand why you need experiments to support your observations. You understand how people reason; you have interesting observations about that; it's enough that you write those observations down, like philosophers do. Philosophers usually publish without supporting their observations with experiments. The interaction with Amos over a few years, including one paper that we wrote together, taught me a lot about the meaning of experimental economics. I still believe that most of the interesting stuff in experimental economics comes from people with understanding, with good observations, the ability to observe how people think and reason, and of course the ability to make some abstractions from that.
Once they come up with some observation, if the observation is good, usually you hear it and you say: Yes, that's [?]. Introspection, which I believe is the main tool of the experimental economist or the cognitive psychologist, is overall the most important tool in this field. Nevertheless, we require, from a social point of view, that it's not enough to say: I believe this, or I think this and I checked with my friends or my son or my wife; we need a little more than that. And then it becomes quite an art. I think people like Amos Tversky or Daniel Kahneman were like artists for me--namely, they have this wonderful ability to take such an observation, from introspection, from observing other people thinking and reasoning, and somehow invent some sort of experiment that demonstrates the point very well. That's something that is not easy. Once we see the experiment, sometimes we think it's so easy, everybody could do it. But that's not the case. In the paper I wrote with Amos, we went through 20 or 30 pilots until we came to the final version of the experiments that we did. It was not me who did it--it was Amos, with so many years of experience. It was not easy at all to find the right way to ask the question, to demonstrate the point we wanted to demonstrate. The point was very clear; we didn't have any doubts that the point exists in human reasoning. So, my point is the following--it's one of the two. If we say economics is really like philosophy, and it's enough to say some people reason in this way, or there is some elemental reasoning that is common to many of us, then we don't need experimental economics. If we do the experiments, then we need to do them in a very careful way, and it's not enough to claim that the conclusion is correct. In some sense, the conclusion is very clear from the beginning; in some sense you don't need the experiments to believe in the conclusion.
In this particular case that you mentioned, common sense tells us that if you know you don't have to pay anything for some service, then you may actually be more considerate of the interests of the guy who gives you the service. On the other hand, if he charges you something--and it's a little bit annoying that he charges you this extra--then you start to do your maximization. It might be that if the kindergarten teacher does not charge anything, then I will be careful to come on time; and if he charges a lot of money, I will be very careful to come on time; but if he charges peanuts, a few dollars, I might say it's worth it for me to be half an hour late and pay the $10. This is something which, if you ask people in the street, they probably haven't thought about with this abstraction, but if they think about it, they will agree that that's more or less the case. So, if you want to say that's the case--that incentives are not monotone in their influence--then fine. And also that there are non-monetary influences, cultural disapproval, other forces--it's not so complicated; that's part of economics, too. Right; but what I'm saying is that if you want just to say that's the case, then I don't think you need to do any experiments. If you want to do experiments, then you have to judge the experiments and not the conclusions. I think the general mistake that many of us make is we say: Aha, here's somebody who--and I don't refer to this particular case but in general to many papers in experimental economics in the last 20 years--where people were quite impressed by the conclusion, probably because the conclusion was useful for constructing new models. But in some sense you didn't need the experiments to be persuaded.
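The non-monotone reasoning sketched above can be written down as a toy decision model. To be clear, the numbers and the "moral cost" term below are illustrative assumptions, not anything from the conversation: the idea is simply that a parent is late only when the total cost of lateness (the fine plus a social or moral cost) falls below the hassle of arriving on time, and that introducing any fine at all crowds out the moral cost, so lateness can rise at a small fine and fall again at a large one.

```python
def lateness_cost(fine: float) -> float:
    """Total cost of being late: the fine plus a moral/social cost.

    Illustrative assumption: once any fine is charged, lateness feels
    like a priced service, so the moral cost collapses.
    """
    moral_cost = 15.0 if fine == 0 else 2.0
    return fine + moral_cost

def is_late(fine: float, hassle_of_on_time: float = 12.0) -> bool:
    """A parent comes late when lateness is cheaper than rushing."""
    return lateness_cost(fine) < hassle_of_on_time

# No fine: the moral cost alone (15) exceeds the hassle (12) -> on time.
# Small $5 fine: 5 + 2 = 7 < 12 -> late.  Large $20 fine: 22 > 12 -> on time.
for fine in (0, 5, 20):
    print(fine, is_late(fine))
```

With these made-up parameters, lateness appears only at the intermediate fine, which is exactly the "incentives are not monotone" point; no experiment is needed to see that such preferences are coherent.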

33:00 Now, as I said before, if you do the experiments, then I would like to require that they be done in the right way. How do we usually judge experiments? Usually by using statistical tools. You use a statistical test, you run the program, you get t = whatever, and it's better if you get p below 5%; if you get 6% it's a disaster, and if you get 4% you are happy and it's the end of the game. One of the things which, as a profession, we hide or don't take into account: the big concern about results, in my opinion, is not the uncertainty which is measured, but the uncertainty about the way the data are recorded, the uncertainty about the way the experiment is done. My feeling is that economists are not critical enough about this. We don't check the details; it's not politically correct to say to somebody: I don't think you have done the experiment according to the protocol you are reporting. And therefore, without getting into the details of case A or case B, in general I am much more skeptical about the significance of these experiments. Let me be bold; let me shock you again. Most of the facts that are reported--I don't believe the facts. I believe the conclusions, but not the facts. One thing that I have learned in my life is that we are not angels. There are very few angels in the world, and I am not an angel. Working in experimental economics requires, in some sense, being an angel--namely, it's very easy to deceive yourself, to do the wrong calculation and then not check your calculations if they go in your direction. I give lectures about neuroeconomics, and one of the things that I do in these lectures is show some data about eye tracking--some work with friends of mine from the Weizmann Institute and a student from Tel Aviv University--and I show some data, nice pictures, recordings from the eye-tracker. What is eye tracking? The story is that in what is called neuroeconomics, we try in general to understand the way people reason.
We usually try to do it by measuring things that come from your brain, your body, which give us some hints about the way you reason before you make your decision. This is one of the most important, most interesting developments in economics in the last 5-10 years. A lot could be said about the way this developed. Personally, what I did with some friends: we ran an eye-tracking experiment in which we got people to choose between two lotteries--like x dollars with probability p versus y dollars with probability q--and we wanted to see how people reason about it, using the eye tracking, which allows us to follow the eye movements--namely, whether they compare the dollars to the dollars and the probabilities to the probabilities, or whether they made some calculation like multiplying the x dollars by the probability p versus the y dollars by the probability q, etc. So, we did this experiment. I do these experiments mainly because I am interested in the methodology of economics and I wanted to understand better how people do these sorts of things. And one thing I have learned in my life is that the best way to criticize something, and to understand how people do something, is to do it yourself. When you do it yourself, you find the dirty tricks of the field on one side; on the other side, you also become more sympathetic to the difficulties other people face when doing their research. In any case, we have some very nice movies where you can really watch how the eye movements went; and you say: Wow, it's very clear what's going on in his mind--something quite funny, quite excellent. Now, I show these movies to a crowd and then I tell them: Look, the entire data that you saw is wrong. I have shown it already in 20 or 30 lectures, and nobody until now has realized why the data were wrong. People could have seen it, because they had the entire information, the entire background needed to conclude that something was wrong; and they don't see it.
And actually we--my co-authors and I--were watching this movie for days and weeks until some accident caused us to realize that the data were wrong. What happened in this case was that there was a mistake in the program, and everything up/down was reversed. Whatever was up was down and whatever was down was up. And people could have realized it, because all the eye movements started from the bottom. And it's very strange--people starting to look at the screen from the bottom and not from the top; even in a country like Israel, where we write from right to left, we don't read from bottom to top. Nevertheless, we don't criticize enough: we take for granted that what we see is correct--that the way the data were recorded and collected was fine--and we don't search for fundamental mistakes in the way we analyze the data. We do run statistical tests, which are very good tests of the reliability of the data--but only if everything up to the point where you run the statistical test was correct. So, one thing I have tried to encourage, in my research and my lectures, is for other people to be skeptical about the data in experiments: not just to check the statistical validity, but to go further, to the very fine description of the experiment. Details are extremely important and may make a lot of difference.
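The bug described here--every gaze point vertically mirrored--is the kind of thing a cheap sanity check on the raw data can catch. A minimal sketch (the screen size, variable names, and the two-thirds threshold are all assumptions for illustration, not anything from the actual study): correct a flipped y-coordinate by mirroring it across the screen height, and flag traces whose first fixations cluster at the bottom of the screen, since people reading a display normally start near the top.

```python
SCREEN_HEIGHT = 1080  # assumed screen resolution in pixels

def unflip_y(y: float, screen_height: int = SCREEN_HEIGHT) -> float:
    """Undo an up/down reversal: a point recorded at the bottom
    is mirrored back to the top, and vice versa."""
    return screen_height - y

def looks_flipped(gaze_ys: list[float],
                  screen_height: int = SCREEN_HEIGHT,
                  n_first: int = 10) -> bool:
    """Heuristic sanity check: if most of the first few fixations sit
    in the bottom third of the screen, suspect a reversed vertical
    axis--readers normally start near the top of a display."""
    first = gaze_ys[:n_first]
    low_on_screen = sum(y > 2 * screen_height / 3 for y in first)
    return low_on_screen > len(first) / 2

# A trace that "starts from the bottom" should raise suspicion;
# the same trace, mirrored back, should not.
suspicious = [1000, 980, 950, 990, 970, 940, 900, 960, 930, 910]
corrected = [unflip_y(y) for y in suspicious]
print(looks_flipped(suspicious))  # True: likely up/down reversed
print(looks_flipped(corrected))   # False: starts near the top
```

The point of the sketch is the methodological one from the conversation: such a check costs a few lines, runs before any statistical test, and would have exposed the reversal that weeks of watching the movies did not.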

41:25 But this is a general problem. You pointed out very eloquently, in the case of experimental economics--in your paper, which we'll put a link up to--some nice examples of what you might call casualness in the collection of the data, or imprecision in it. The problem I have--and this is true of every type of empirical work, not just experimental economics--is that researchers don't reveal what they actually did. They tell us an ex-post story: we tested this, and it came out, and we were right. You don't know how many times they ran the regression, how many specifications they used, how many times they didn't like the outcome and said: Well, we'd better try it in log-log form instead of in levels. So, in the case of the experimental literature, when I read these pop-psychology books that purport to teach us some new thing about economics, I always ask myself: Can you replicate it? If I did the test, would it come out that way every time? Did you just find it once? How many times did you do it? There's a classic example in a book I like a lot, James Surowiecki's The Wisdom of Crowds--a lot of interesting things in that book. One of them is that when you have people guess something, the mean of the guesses is sometimes close to the right answer even though no one individual guesses accurately. Sometimes true, sometimes not. But for fun, I did an experiment with my class. I filled a big jar with beans, walked around, and asked people how many beans were in the jar, to see if the mean was anything close. It turned out that almost every single person underestimated how many beans were in the jar. Let's say there were 800; most people said 500, 600. But of the 30 people I asked, one person said something like 8000. It was impossible that there were 8000; he just looked at the jar and gave a wild guess, totally off. But when you included that 8000, the mean came out very close to the true number. Now, does that confirm the theory?
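A quick back-of-the-envelope check shows how completely the "wisdom of the crowd" in this anecdote hinges on the single wild guess. The exact guesses aren't in the story, so the numbers below are an illustrative reconstruction using its round figures: twenty-nine underestimates averaging about 550, one guess of 8000, and a true count of about 800.

```python
# Illustrative reconstruction of the bean-jar anecdote: 29 low guesses
# around 550 plus one wild guess of 8000; true count roughly 800.
guesses = [550] * 29 + [8000]

mean_with_outlier = sum(guesses) / len(guesses)
mean_without = sum(guesses[:-1]) / (len(guesses) - 1)

print(mean_with_outlier)  # about 798: looks impressively close to 800
print(mean_without)       # 550.0: nowhere near, once the outlier goes
```

One implausible guess moves the mean by roughly 250 beans, which is exactly why the protocol questions that follow--did I keep every outlier, did I pick and choose--decide whether the "confirmation" means anything.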
And if I had been trying to disprove the theory, I would have said to myself: Do you really think there are 8000 in there? Come on. And then he would have said 800. And then I would have shown that the theory was wrong. And as you point out: in the protocol, was I careful not to do that? Did I keep every outlier, or did I pick and choose? Did I run it three times before I got that result? These are the questions we need to ask all the time of experimental economics and empirical economics, and we usually don't, because we are not angels. We need to confirm our biases. In connection with that, let me add two or three points. One thing that certainly needs to be changed in economics--and I have already heard many people agree with this point--is that we lack a culture of replication. Suppose you do something original; I am not sure about it; I run it again. The chances that I will be able to publish the replication, whatever the results, are very small. That makes it very rare that people replicate; and if they do replicate, usually they have an incentive to confirm the result and then do some sort of secondary experiment--under somewhat different conditions, with some changes--so that they find some other phenomenon; but along the way they have to replicate, to confirm, the original experiment. I think this culture is very bad. There is room, I think, for giving some credit to people who do replications, whether the replication succeeds or fails. Of course it has to be valued less than some big invention, but the profession does not give any incentives for people to do it. Second, there is a big problem with the protocols that experimental economics uses. At the moment, experimental economics requires certain protocols which have become very rigid, and it is extremely difficult--whether you are a famous person or a young person--to publish anything that does not follow these protocols.
The protocols require certain incentives--namely, giving the subjects some money--and require what they call laboratories, which is a nice word for a bunch of computers. The outcome, in my opinion, is devastating for academic production. First of all, the incentives are not serious: there are papers showing that the amounts given do not really incentivize people in a way that makes much difference between this protocol and one where you tell people: you are not going to get any money, but let's imagine that you are in a situation where such-and-such thousands of dollars are at stake, etc. That's because people are very good at fantasies, at putting themselves into imaginary situations; and I actually believe their reports about what they would do--of course, it's not exactly what they would do, but it gives us some information, in my opinion better information than what we get when we pay students $5 for sitting thirty minutes at a computer playing some very boring game. The worst of it is that at the moment it makes it nearly impossible to do very large experiments via the web. In the last 8-9 years I have done a lot of experiments on the web, and I think the results I get are not less valuable--in my opinion even more so, because I am talking about many thousands of subjects and participants. I think the results are not less reliable, at least for many of the games or decision problems I am working with. But it's very difficult to publish them, to draw the attention of the community, because the community requires these sorts of protocols. Now, why does the community require these sorts of protocols? Let me again push a position economists cannot escape: we say that everyone is an economic agent and behaves according to incentives--but somehow not us; firms create cartels--but we, supposedly, are not a cartel. I think the economics profession, at least in certain ways, behaves like a cartel, and a cartel erects barriers to entry.
That's the basic strategy of any cartel. The cartel of experimental economics puts up a barrier to entry, because in some sense it sounds very easy to do experiments. This is one of the misleading facts for graduate students. Many graduate students who don't succeed at doing theory, or at elaborating macro models, or whatever, feel they can finish the Ph.D. quite easily: they will get some money from their supervisor, run some experiment, usually get some results, publish it, and get the Ph.D. What the cartel does is put up barriers to entry to prevent too many people from doing this sort of thing. But these days, people have good access, via the web, to subjects around the world, and can gather information about the way people reason in general--including in economic situations. Much better than putting guys in a lab for 30 minutes.