0:33 Intro. [Recording date: January 8, 2015.] Russ: I want to remind listeners: please go to econtalk.org; in the upper left-hand corner you'll find a link to a survey where you can vote for your favorite episodes of 2014. Now for today's guest, Nassim Nicholas Taleb. [...] Today we are going to be talking about a recent paper of his, co-authored with Rupert Read, Raphael Douady, Joseph Norman, and Yaneer Bar-Yam, on "The Precautionary Principle (with Application to the Genetic Modification of Organisms)," and other general issues related to risk and ruin. Nassim, welcome back to EconTalk. Guest: Hi. I'm always honored to be on your show, but I have to admit it's also a pleasure to have a conversation with you. Perhaps we think too much alike; it may be a problem from a scientific standpoint, but it's always a pleasure. Russ: Well, it looks like two data points, but it may only be one. That's correct. Let's start: what is the precautionary principle, and why is it important? Guest: Okay. There's some water on the floor. Do you drink it? Would you drink it? No. Why do you not drink water from the floor, even if you are thirsty? You are very careful. But you have no evidence that it's poisonous. So you are making a decision without evidence. This is the exercise of the precautionary principle in your daily life. In other words, for things for which you don't have evidence, you stay cautious until you accumulate the evidence; then you can take the risk. Russ: So, it's useful in situations with what you call 'non-evidentiary problems.' Guest: So, technically, the precautionary principle is about decision making--what should be accepted or rejected in situations for which you do not have enough evidence, or do not have evidence yet. In other words, scientific knowledge has not been sufficient to establish a clear-cut answer. It's what you exercise in daily life: 99% [?] of our daily decisions are based on precautionary principles. But there is something much deeper there: as people get more and more into techniques of risk management, they tend to forget that most of the risks we are taking are of a non-evidentiary nature, in the sense that the evidence always comes too late. And this is what we're trying to avoid. This is a very general concept, one that people who knew decision making have always understood throughout history; and the problem of what we call 'scientism,' in the Hayekian sense--the Hayekian/Popperian sense--this idea of using mechanistic tools from science to make claims and techniques, has blinded people to this sort of reasoning. A reasoning that is effectively more rigorous than science, because you have an asymmetry: you may die if you are wrong; and if you are right, [?] very [?]. Russ: And you argue very thoughtfully in the paper that experts are important, but you have to pick the right kind. Guest: Very often, people in a given profession develop expertise about what they are doing. In most domains, they don't quite have a grasp of the risks, simply because their professional knowledge may help them do a lot of things, but, particularly if it's academic, it's not going to help them understand the risks. This we've seen in many domains. Traders, say, understand the risk because they are pretty much risk managers--they are there to be risk managers. But, say, people that we've encountered, [?] for example, understand return, but they don't understand the risk of something.
But what they don't understand, typically, is that the risk belongs to a completely different category. In other words, the tail risk--the risk of ruin--is very different from knowledge. So, for example, your risk can increase while your knowledge is increasing. And we have shown, in the paper and in some derivations elsewhere, how, for example, sometimes you bring in something new, a new technique, for which you understand the benefits are going to be great. And what you do is increase both the benefits and the risk of ruin. So we end up worse off than we started, sometimes trading one problem for another. Is this clear enough, or should I-- Russ: I think-- Guest: Let me continue--yes. Go ahead.

5:46 Russ: I think we talked about this in a previous episode. You have to make a distinction between the process and the consequences of the process. Right? So I think-- Guest: Exactly. So, some people--[?]--they understand biology, okay? They understand very well [?]. And science is not about making claims about risk. Science is about making verifiable and generalizable claims from a given process, claims that someone else can read [?] and continue, improving on a body of work. But it doesn't make claims about risk. So, we notice that neurobiologists--or biologists in general, but the study [?] was done on neurobiologists, across that profession, across the broad field--understand what they are doing; but their claims of evidence are wrong more than 53% [?] of the time, in that the papers get the statistics of the claim wrong. So, a statistician ranks one notch higher than a neurobiologist on a scientific claim. And the error is common. It is, for example, testing whether A is better than B by testing the significance of A and the significance of B, without testing the significance of the difference between A and B. That may be technical for the common person, but it's well known [?] in statistics. And yet more than half of the papers in top journals in neurobiology make that mistake. Russ: Yeah, it's a great point. Guest: So, a statistician--the way these people operate is, they know biology a lot; but there's a cop called a statistician on top of them, who studies the paper and puts a stamp on it. And he typically runs the data himself, or lets them run the data on SPSS (Statistical Package for the Social Sciences) or something, and then gives his approval. So, knowing biology doesn't mean you understand the evidence. Okay? And this is quite good. Now, one notch higher up: understanding statistical evidence doesn't mean you understand statistical risk. And that's [?] how. Many people--we have discussed the [?] problem; I wanted to detail [?] analysis--make statements about some kind of technology that may [?] masses of people. Many of these people think that they have evidence; and then you read their papers and you look at it, and no statistician would ever let you say 'I have evidence that'--this is again the Black Swan problem. A statistician would only let you say, 'I failed to reject the null at x% confidence.' This is what we brought up, and what all of us are doing in our lives. So here you see that statistical evidence, or what we call the mechanism, doesn't say anything about the tail. Russ: Well, that's the distinction-- Guest: The tail. Statistics is: what happened within that band, and do we have enough data to make the claim that this works? It doesn't say anything about what happens if that claim is wrong. They give you, say, a 1% probability or 2% probability or 5% probability of the claim being wrong. But what happens when it's wrong is usually a different business. And that's where risk measurement starts. And that's my profession. Russ: And of course, therefore, Nassim is the expert of experts. You have to be careful. It is a comforting thought for you. Maybe not for us. Guest: No, no, I'm not an expert on experts. Our job is the left tail, which is a sub-specialty. Russ: Yeah, that's true.
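A minimal sketch of the error just described, with invented data: each treatment arm is tested against zero, but the claim 'A works and B doesn't' requires testing the difference between A and B directly. The sample sizes, effects, and seed here are hypothetical, for illustration only.

```python
# Hypothetical illustration of the "difference in significance" fallacy:
# with these invented parameters, A typically tests significant against zero,
# B typically does not, yet the direct A-vs-B test is typically NOT significant.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
n = 30
a = rng.normal(0.5, 1.0, n)   # measured effect under treatment A (true mean 0.5)
b = rng.normal(0.2, 1.0, n)   # measured effect under treatment B (true mean 0.2)

t_a, p_a = stats.ttest_1samp(a, 0.0)    # A vs. zero
t_b, p_b = stats.ttest_1samp(b, 0.0)    # B vs. zero
t_ab, p_ab = stats.ttest_ind(a, b)      # the test that should be run: A vs. B

print(f"A vs 0: p = {p_a:.3f}")
print(f"B vs 0: p = {p_b:.3f}")
print(f"A vs B: p = {p_ab:.3f}  # 'significant' vs. 'not' is not a significant difference")
```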
Guest: But when it comes to the right tail, or benefits, stuff like that--understanding the process, the body of the distribution--we have no specialty; or we may understand some things, but we don't rank higher. So, now I've given you the hierarchy: I said neurobiologists; and I said on top of neurobiologists you have statisticians, saying what the claim was and whether the claim meets the statistical evidence or not; and then higher up you have the left tail, which is a completely different business, as we have discussed. Now, one simple analogy for why people in a profession are sometimes not qualified to talk about the risks of the profession is what we call the Carpenter Fallacy. Suppose you want to understand the risk of ruin of a sequence of bets. It's a standard result in probability. But who would you go to for that problem? Would you go to a carpenter who builds roulette tables? Or would you go to a probability person? The carpenter may claim, 'Hey, you know what, you are insulting me. I know very well how this is built,' and stuff like that. But his knowledge of the carpentry involved in building the roulette table doesn't allow him to make claims as to the probability distribution of what is going to happen--and even less about claims concerning large deviations and long sequences of tail events. You see my point. Russ: I do. Guest: So here we have [?]. This is where we are positioning the precautionary principle: it's about saying that there are people in the business of that very left tail, which is completely different--a different science from yours. Science never really talks about left tails. Only journalists think science talks about that--or bad scientists. And then you need a cop for that. That's it.
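For the 'standard result in probability' alluded to above, here is a minimal sketch of gambler's ruin, with illustrative numbers: even a slightly unfavorable repeated bet makes eventual ruin nearly certain for a small bankroll. The bankroll, target, and win probabilities are invented for the example.

```python
# Gambler's ruin: a player with bankroll k bets 1 unit per round with win
# probability p, stopping at 0 (ruin) or at a target N. Closed-form result.
def ruin_probability(k: int, N: int, p: float) -> float:
    """Probability of hitting 0 before reaching N, starting from bankroll k."""
    if p == 0.5:
        return 1 - k / N
    r = (1 - p) / p
    return 1 - (1 - r**k) / (1 - r**N)

# A small edge against you makes ruin near-certain for a small player:
for p in (0.50, 0.49, 0.48):
    print(f"p = {p:.2f}: P(ruin) = {ruin_probability(20, 100, p):.3f}")
```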

12:12 Russ: So, the way I think about it--what I learned from your paper--is really a distinction between harm and ruin. In one world, you play poker every night; and some nights you lose a dollar, some nights you make a dollar. Some nights you might lose $5. But if you are in a neighborhood poker game, you are not going to lose your entire wealth. You are not going to have ruin. But you are dealing with cases--you are making a crucial distinction between harm, which is 'some nights I might lose a little money,' versus being wiped out. In the case of the globe, you are talking about extinction. Guest: Exactly. So, what happens is that--to frame it with the discussion of the three layers of knowledge, from the biologist to the statistician to the risk analyst--the body of the distribution is principally [?] the job of the statistician. Variations, all these things. It's not part of our job. Our job is ruin; completely different dynamics. And for many probability distributions, there is a complete decoupling between variation and ruin. You remember, when I published The Black Swan, it was in April 2007; if I had received a Mexican peso for every time someone mentioned the Great Moderation to me--that the world is becoming a lot safer because it's less volatile--I would probably own a big strip of land in northern Mexico. And then of course, sure enough [?], the crisis happened; and it was not a change of regime. It was nothing. It was just that they were making claims concerning tail events from observations about the body of the distribution. And for the class of distributions that we work with, with fat tails, these claims cannot be made at all. So the risk can increase while at the same time the variation gets smaller. And this is where Ben Bernanke went wrong, because he was not trained enough in statistics, in fat tails, to understand the risk. Another [?] problem. Russ: Why are-- Guest: Let me steal here--let me steal a metaphor [?] you gave me; actually I've used it before, and I gave you credit the first couple of times and then stopped giving you credit. So maybe I owe it to your listeners: I learned something from you. You remember when you were talking about the difference between a systemic, fat-tailed event and a calamity [?], a small-time calamity--that if a plane crashes, it's a tragedy, because it will kill the people on the plane and it's a great loss--very bad news. But a plane crash will not kill every single person who ever took a plane before. Whereas in some domains, such as finance, for example, banks can lose in a single quarter every single penny they ever made before. So in a fat-tailed domain you have to be very careful, because the tail is absorbing--it's a lot worse. But that's only money. It's a lot worse when we talk about ecology. Vastly worse, because this is not renewable. Go ahead.
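A minimal numerical sketch of this decoupling between variation and ruin, with invented parameters: a fat-tailed Student-t variable scaled to show *less* everyday variation than a Gaussian nonetheless makes a large crash astronomically more likely. Nothing here is from the paper; it is only an illustration of the general point.

```python
# Hypothetical illustration: lower observed volatility, far higher tail risk.
import numpy as np
from scipy import stats

gauss = stats.norm(loc=0, scale=1.0)            # thin-tailed world, sd = 1.0
nu = 3
s = 0.8 / np.sqrt(nu / (nu - 2))                # scale t(3) so its sd is only 0.8
fat = stats.t(df=nu, loc=0, scale=s)            # fat-tailed world, sd = 0.8 < 1.0

print(f"std dev: gaussian {gauss.std():.2f}, fat-tailed {fat.std():.2f}")
for x in (-5, -10):                             # probability of a crash of size |x|
    print(f"P(X < {x:3d}): gaussian {gauss.cdf(x):.2e}, fat-tailed {fat.cdf(x):.2e}")
```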

15:45 Russ: So, talk about the underlying processes. It's a little bit puzzling to an amateur as to why fat tails are so important. So, for example, if I have thin tails, well, it just means that ruin is very unlikely. It's still possible, though. So why are fat tails important? Guest: Fat tails are important because, number one, you don't notice the variation--the variations, as I said, are compressed--so you don't notice that the risk is present. In a thin-tailed domain, evidence can accumulate as to the riskiness of something. If you go to Las Vegas and are there for 3 days, you pretty much understand everything; you can predict anything that can happen. Because thin tails are so tractable, and the law of large numbers operates very quickly. For fat tails, you need a lot more data to know what's going on. And when an event happens, it can hit you big time. And the consequences of the event can be monstrous. Which is why we cannot be casual about fat-tailed domains. Now, we can figure out ex ante that something is fat-tailed: we know that ecology is rather fat-tailed, and that crises in an ecosystem are not systemic, because we have isolation. We do not have a large-scale, generalized--or we did not have that before GMOs (genetically modified organisms), which is why I am worried about GMOs. Russ: Well, the example you give that I think makes that so clear is a forest fire. Forest fires are extremely destructive. But there are all these natural built-in barriers: there's oceans, there's rivers, there's mountains, there's natural firebreaks that keep a fire from being a catastrophic event. But what you're worried about is something that has the potential to cross those barriers. Guest: Exactly. So we can say that nature has not blown up: at least in the history of the process, we have zillions of variations--trillions and trillions of variations on Mother Earth--and it did produce some tail events, but not pronounced enough to cause extinction. So even if we adjust by what we call survivorship bias or some similar principle, we can still make the claim that nature seems to have survived thanks to a mechanism by which calamities stay relatively local. So things don't spread. In other words, a plane crashing doesn't kill every single passenger on other planes, or on every plane before. Things stay confined and isolated. We had that in economic life, of course, until globalization; so what happened is, a crisis took place on the planet in 2008 and there was no place to hide. Or almost no place to hide. In ecology, it's going to be worse. We used to have island separation--island barriers--which effectively produced diversity, because diversity is much higher per [?] square meter on an island than it is on a continent. And we're losing it. And we're losing it through a lot of methods. But we'll come back to that in a minute. Now, I have one other element of fat tails I want to add, so we can inform the rest of the conversation, which is as follows. Many people understand that there is a risk of ruin, and that it could be very small, and that sometimes we've got to take it. Many people understand that. But few understand that the risk needs to be zero. Not small. Why? Because think of what happens in a sequence of risk-taking. If you take a risk of ruin, say with Russian roulette, and survive, what would you do next? You may take it again. So many risks that are very, very small lead, because you've survived them, to a 100% risk of ruin.
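A minimal simulation of the 'three days in Las Vegas' point above, with invented distributions: the sample mean of a bounded, thin-tailed bet settles almost immediately, while a fat-tailed Pareto mean keeps being revised by rare, enormous draws. The tail index and sample sizes are assumptions chosen for the sketch.

```python
# Hypothetical illustration: the law of large numbers is fast for thin tails,
# slow for fat tails.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
thin = rng.uniform(-1, 1, n)          # bounded, casino-style payoff
fat = rng.pareto(1.1, n) + 1          # Pareto, tail index 1.1: mean 11, infinite variance

for k in (1_000, 10_000, 100_000):
    print(f"n = {k:>7,}: thin mean {thin[:k].mean():+.3f}, fat mean {fat[:k].mean():7.2f}")
# The thin-tailed mean is pinned near 0 after the first thousand draws; the
# fat-tailed mean keeps jumping whenever a single extreme observation lands.
```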
Russ: Right, because you get--well, it's a couple of things we've talked about before which I find extremely powerful. One is what you call the Turkey Problem, which you get from Bertrand Russell: every day the turkey is being taken care of by the farmer, and every day he gets additional, new evidence that it's safe. It's fine. He's got a good life. Until Thanksgiving comes and he's killed. And similarly, Value at Risk (VaR) in the financial crisis--it's working; it's fine; we're making profits every quarter; we're very prudent; we're very careful because we have this tool that we use. And--I may have mentioned this before--I have a friend who is skeptical of your work. I won't name him on the show. But he says to me, 'Oh, everybody knows Value at Risk is dangerous.' I say, 'Well, it's true'--in theory. But after a while, if you keep using it, you'll probably get lulled in; if you are not careful, and if you don't have other feedback loops that make you wary, you are very likely to start thinking, 'I've got this licked.' So you fire the Russian roulette gun; the bullet doesn't kill you because it's got a thousand chambers. Or maybe 100,000. But if you live for 40 years, you are in trouble. Guest: Yeah, exactly. So, this is what people fail to get: that ruin is not a renewable resource. It's [?] insurance. Russ: Explain. Guest: Let me explain. If I play Russian roulette, if I play things like that, the probabilities add up. So mountain climbers have a very small probability of dying in any given episode. What happens? Hey, they survived. So they're going to attempt it again. So eventually their life expectancies are going to be much shorter, because they do a lot of it. So, on repetition, you end up with a 100% guarantee of ruin. So you lose--it's a resource that's not renewable. And people fail to look at risk that way: you look at the risk of one episode, not the succession of tail risks taken by the planet. So I have no problem with people taking risks, so long as everything stays local and doesn't [?] the whole human race. Russ: And you mentioned insurance--because it's like the cat having 9 lives? You get another-- Guest: [?] with insurance you have a cash flow. And they have understood the problem very well since Cramer [probably Harald Cramér--Econlib Ed.], the guy who studied insurance. They look at some process that compensates for the risk you are taking, because you are making some money that accumulates in some reservoir--a reservoir that's going to be depleted, but not 100%. So the idea is to calibrate the risk-taking to what you are getting into the reservoir. In insurance you can do that. In ecology, and in many domains, you cannot do that, because the reservoir is not being filled. We are just wasting risk. You see? So what happens in the end is that risk accumulates to a 100% probability of ruin.
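A back-of-the-envelope sketch of this compounding, with invented numbers: because ruin is an absorbing state and, unlike insurance, nothing refills the reservoir, a tiny per-episode probability taken repeatedly grows toward certainty. The 'thousand-chamber' odds and weekly cadence are illustrative assumptions.

```python
# Hypothetical illustration: repeated small ruin risks compound to near-certain ruin.
p = 1 / 1_000                              # per-episode probability of ruin
for years in (1, 10, 40):
    n = years * 52                         # one exposure per week
    survive = (1 - p) ** n                 # ruin is absorbing: you must survive every round
    print(f"{years:>2} years ({n:>4} plays): P(survive) = {survive:.1%}, P(ruin) = {1 - survive:.1%}")
# After 40 years, the 'negligible' 0.1% risk has compounded to roughly 87% ruin.
```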

23:19 Russ: So, let me ask one more general question, and then we'll turn to GMOs and environmental issues more generally. In the article you talk about a contrast between bottom-up, local events leading to thin tails, whereas global, connected, top-down events are going to be fat-tailed. Talk about that. Guest: The best way in, before getting to the statistical taxonomy of these things, is through your next-best economist. Who is your next-best economist, after Adam Smith? Russ: Uh, that would be F. A. Hayek. Guest: There you go. So let's talk about Hayek. You see--by now I can read you. Russ: I got nervous; I got nervous there for a minute. But I got the right answer. I'm relieved. Guest: What was the idea of Hayek? Why did Hayek want distributed knowledge in society--no monopoly of knowledge by anyone? Because he wants the errors to be distributed. He thinks that the system knows more than any individual part of the system. And also because he thinks we cannot forecast: the mind cannot foresee its own advance. That's another profundity--not just that we can't forecast: we can't forecast how we are going to forecast in the future. So really, let's call it Popper/Hayek, because they really worked on that together, and the two friends were brilliant in slightly different domains. So, Hayek was against--what? Against a top-down social planner who thinks he knows things in advance and can't foresee results. First of all because the person makes arrogant claims that may harm us, but also because of mistakes: he's not going to foresee his own mistakes, and the mistakes will be large. So you see where I'm coming from? Russ: Yep. It's Adam Smith's man of system, also--same problem. Guest: Let's continue with Hayekian thought. And this led Hayek to stand against what he calls 'scientism.' Scientism is an unscientific use of science--which I've encountered with pro-GMO people who keep attacking me with scientism, because they say, 'Oh, I'm for science; risk management is science fiction,' and then there's no point, there's nothing wrong. Hayek solved that problem of scientism and false claims, what, 50, 60 years ago. And he effectively is a man who has been vindicated. There's something even more interesting than that about Hayekianism. You know, the opposite of Hayek--the people who did exactly what he was against--were the Soviets. You know? Russ: Yeah. Guest: Now, it so happens that there's a branch of mathematics largely developed by the Soviets, in dynamical systems--one of them just got the Abel Prize--in a tradition started by the Soviet Union, in the heyday of Soviet science, about nonlinear dynamics. And the last one was the billiard-ball fellow, Yakov Sinai, who got the Abel Prize. And he's probably the most crowned mathematician alive today. Now, what are these Soviet mathematicians saying? 'You know what, in a complex system you can't predict.' That's sort of what they said. Financed by whom? A social planner. Russ: Yeah, it's ironic. Guest: But nobody saw the contradiction--that if they are right, then there should be no Soviet system. It's ironic, but let's not laugh too early, because it looks like many people are making that mistake. Russ: Well, it's a common problem. Guest: [?] But the mistake isn't the mistake of thinking that an environment is predictable when it's not.
It's the mistake of not realizing that an idea developed in one domain can apply to another one, once you accept that the two domains have the same operating mechanisms. So, to continue: Hayek effectively looked at nature--he thought of nature, of the organic, directly or indirectly, as operating according to his principle of distributed knowledge. And technologies. And tinkering--away from that central-planning mode. Russ: Well, that's why the latest paper in macroeconomics that claims that such-and-such an intervention is good for the economy, or bad for the economy, is the same as the epidemiologist who claims that drinking coffee or wine or whatever it is, is good or bad for you. And they find some data-- Guest: I would say it's benign to say coffee is good or bad for you. It is a benign claim. And some such claims can be rigorous. But now take a Soviet planner--one who comes to nature. Aha--GMOs. You see where I am coming from?

29:09 Russ: Well, let's talk about that. Guest: So, GMOs. If you look at evolution, if you look at how things get from point A to point B, it's by small tinkering, where mistakes are kept small and local. And you cannot foresee interactions in a given complex system unless you experiment with things. And that's Hayek; that's the mathematics that we have behind us [?], and the entire class of [?]-- Russ: Schumpeter. Yeah. Guest: I don't know about Schumpeter, but I know about the real mathematicians who worked on these problems in dynamical systems. You cannot really forecast interactions in systems that are too complex. And you can explain it to someone--you can explain the limits--with all kinds of incompleteness theorems that we have, or with the simple example of billiard balls. So, the problem is that natural systems--and this is the universality of complex systems--have opacity if you look at them from the standpoint of a social planner. But they are very understandable if we look at them from the perspective of a complex system that has evolutionary attributes. So, what you do is--time counts a lot--you put things together, let them interact; there are some dynamics of interaction; and if the system doesn't blow up, then it's a good system. If it blows up, then it's a bad system. And the system would anyway clean itself automatically using these mechanisms. And small tinkering. Russ: Feedback loops. Guest: Sorry? Yes, feedback loops [?] things. In Antifragile I presented it in terms of different layers. You have a fragile layer at the bottom, like your cells. And then you have a hierarchy above the cells: you have individuals, and then your families, and then society, and so on. And then humanity. And then--oh, species, and stuff like that. So you have hierarchies. And then you have, of course, evolutionary mechanisms at all levels of the hierarchies. So, this is how things work in nature. And I'm not saying anything that isn't accepted by evolutionary biologists [?]--that's how the process of tinkering is understood. And it was actually 'bricolage'--the word 'tinkering' I'm using now comes from bricolage, from the famous Monod and Jacob papers, two Frenchmen who got the Nobel in the 1960s. Now, when we look at GMOs, what are we doing with GMOs? We are skipping steps. A tomato, okay: according to the FDA (Food and Drug Administration), a GMO tomato will be the same as a tomato obtained organically, through natural mechanisms--or through human breeding, even. But it is not the same: you are skipping zillions of steps to get to that tomato. We don't know what it's going to do to other plants in the soil. We don't know what it's going to do to you. We have a lot of unknowns. So, when you have a lot of unknowns like that, you apply the precautionary principle until further notice. So that's where we're going.

32:00 Russ: I interviewed Greg Page, who is the former CEO (Chief Executive Officer) of Cargill. And he accepts the idea that there may be some risk. But he, as you would argue, doesn't think much about ruin. So his view, and I think the view of many people in the industry, and certainly of many scientists, whether they are tainted by self-interest or not, would be: 'Well, look. People are eating these new tomatoes that have, say, the gene of a fish in them--or whatever has been done to them. And they are not dying. And it's hard to understand why you would be worried that there's going to be, say, a mass extinction of human beings from eating a genetically modified tomato.' So, what's the scientific evidence? Guest: No, no--that's exactly what we want to avoid: having to talk about scientific evidence, when the burden of proof is on the GMO people to show us that they understand anything remotely about the tail risk. Which they don't. The tail risk is not someone dying from eating the tomato. That's not a big risk. No. That's not a systemic risk. The big risk is what can happen when you have two things going together. What happens, Soviet style, is a combination of a monopoly of some plants over others--it's too large a system--and, of course, the creation of other species that will themselves be too powerful; and then they may kill the GMOs, or one may kill the other, and you may have huge imbalances in nature. And these imbalances in nature can produce large deviations. This is our point. And we haven't seen any paper looking at the risk from that standpoint. And when people do look at the risk--we looked at their papers--some are using 1960s [?]-error-type reasoning, which of course is too primitive to allow us to reach any conclusion. And when people say, 'Where is the evidence?', tell them, 'Hey, you know, what was the evidence that smoking could cause cancer? What was the evidence that lobotomy was bad? What was the evidence that Teldane, Triludan, Seldane, Ecotrin, [...] were harmful?' The evidence showed up late. Sometimes--in one case--even across a generation. So you have a problem with the reasoning of people invoking evidence when they don't know what they are talking about as far as evidence goes. No statistician would put his stamp [?] on 'we have evidence that it is safe.' They tell you: failure to reject the null at this percentage. And so they sort of agree with us that the tail is not investigated. We haven't seen an investigation of the tail that's properly done. Russ: But as you point out in the paper, if you are not careful, you can invoke that for lots of things. Guest: Exactly. So we are not invoking--take nuclear: we cannot invoke the precautionary principle for nuclear. Why? Because nuclear will stay local. It doesn't mean it's not risky. You may want to ban nuclear for risk purposes; but with nuclear, a Fukushima cannot lead to destruction in India. Or maybe in India, but not in Lebanon. Or maybe Lebanon, but not Cyprus. So you don't have--but if you now have the same crops invading the whole planet, it's too much. Having GMOs on an island is one thing; generalizing them [?] to the planet in the name of science is another thing. And I have heard--listen, I told you earlier it was a Mexican peso; if I had a Lebanese lira, or maybe a Turkish lira--now there is a lot of trading in the Turkish lira [?]--for every time I heard people saying that 'we scientists, with a zillion Ph.D.s,'
'think that these securities are very safe'--people employed by Fannie Mae, Freddie Mac, or Morgan Stanley, speaking in 2007, before the crisis. And they would say, 'No, there is zero probability of a failure in that.' You even saw the Stiglitz-Orszag report about Fannie Mae. Russ: Oh, yeah. Guest: So, the point is, you have to deal with skepticism in that corner of the probability distribution, unless you have some strong feeling, or really very, very robust reasoning, showing that this is not going to cause harm beyond the local [?].