Alan Forrester writes Criticising Taleb’s Precautionary Principle Paper, quoting Nassim Nicholas Taleb (and his co-authors):

The PP states that if an action or policy has a suspected risk of causing severe harm to the public domain (such as general health or the environment), and in the absence of scientific near-certainty about the safety of the action, the burden of proof about absence of harm falls on those proposing the action. It is meant to deal with effects of absence of evidence and the incompleteness of scientific knowledge in some risky domains.

I'm quoting this as context for what I say later. As a side note, I refuted the burden of proof idea in my Yes or No Philosophy.

The purpose of the PP is to avoid a certain class of what, in probability and insurance, is called “ruin” problems [1]. A ruin problem is one where outcomes of risks have a non zero probability of resulting in unrecoverable losses.

Taleb wants us not to use GMOs – genetically modified food like Golden Rice which helps provide more food and vitamins, especially for poor foreigners.

Forrester summarizes David Deutsch in The Beginning of Infinity (BoI) criticizing the Precautionary Principle (PP):

The PP assumes that new innovations will make the world worse and so that current knowledge is basically okay and not riddled with flaws that might lead to the destruction of civilisation. But our knowledge is riddled with flaws that might destroy civilisation. Human beings are fallible so any piece of knowledge we have might be mistaken. And those mistakes can be arbitrarily large in their consequences because otherwise we would know we were right every time we made a decision above the maximum mistake size. In addition, we can be mistaken about the consequences of a decision so a mistake we think is small might turn out to be a large mistake. The only way to deal with the fact that our knowledge might be wrong is to improve our ability to invent and criticise new ideas so we can solve problems faster. Taleb doesn’t address any of these points in his paper. He doesn’t refer to BoI. Nor do any of the arguments in his paper address Deutsch’s criticisms of the PP.

Taleb's argument is a Pascal’s Wager successor. Pascal's Wager says we should believe in God because the downside of being mistaken about atheism is eternity in hell. Meanwhile the downside of being a Christian, if God doesn't exist, is finite: e.g. some wasted Church visits and prayers. Even if the odds God exists are 0.00001%, given the stakes, one should believe in God and try to get into Heaven.

Pascal's trick is to compare an infinite downside (eternity in hell) with a finite downside (decades of having a worse life). The infinitely important issue will always win unless its probability is 0%. (Ignored is the possibility of a rational, atheistic approach to life helping create life-extension medicine that results in immortality.)
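The arithmetic behind Pascal's trick can be made explicit. Here's a minimal sketch (the payoff numbers are my own illustrative choices, not from the source) showing why any nonzero probability of an infinite loss swamps any finite cost:

```python
import math

def pascal_payoffs(p_god: float) -> tuple[float, float]:
    """Expected payoffs under Pascal's framing (illustrative numbers).

    Believing has a finite cost (wasted Church visits and prayers): -1.
    Not believing risks an infinite loss (eternity in hell) with
    probability p_god, and costs nothing otherwise.
    """
    believe = -1.0
    if p_god == 0:
        not_believe = 0.0  # only a probability of exactly 0% escapes the trap
    else:
        not_believe = p_god * (-math.inf)  # any nonzero p yields -infinity
    return believe, not_believe

# Even at one-in-ten-million odds, believing "wins" on expected value:
b, nb = pascal_payoffs(1e-7)
assert b > nb  # -1.0 is greater than -infinity
```

The point of the sketch is that the comparison is insensitive to the probability: shrinking `p_god` by any factor never changes the verdict, which is exactly why the infinite-downside framing does all the work in the argument.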

A "ruin" problem is, like eternity in hell, a problem with infinite downside.

With Pascal's Wager, one can argue that God's existence shouldn't be assigned any probability. Small probability is a bad way to deal with bad explanations, bad logic, bad reasoning, unanswered criticisms, losing the argument, etc.

With Taleb's ruin problems, there is risk above 0%. They aren't myths or superstitions like God or ghosts. They are conceivable scenarios.

Taleb uses his argument like Pascal's Wager, e.g. "No matter how increased the probability of benefits, ruin as an absorbing barrier, i.e. causing extinction without further recovery, can more than cancels them out." No matter how large the finite benefits, ruin always matters more. (Minor note: Taleb should have written "cancel" not "cancels".)

It's questionable that even the total extinction of humanity, or of all intelligent life in the universe, should be assigned infinite importance rather than just very very large importance. But I'll set that question aside.

There's a simple answer to Taleb. Everything he proposes also risks ruin. There are risks of ruin either way.

Taleb proposes, in short, to slow down industrial and scientific progress. He proposes more poverty for longer. He proposes more people being blind for lack of Golden Rice – and therefore they will be inferior scientists and inventors. He proposes more people dying for lack of food, or ending up in jail for stealing food – which gets in the way of being a philosopher, businessman, economist, etc.

Delays to industrial and scientific progress are risks of "ruin". They delay the time until we're a two-planet species (or two solar systems or two galaxies). Every additional day we spend with a single point of failure (one planet) is a risk. Maybe that's the day a meteor, plague, alien invasion or other risk will ruin our planet. We're in a race against ruin. The clock is ticking before the next big meteor or other ruinous threat. The faster we improve our meteor defenses, and our wealth and technology in general, the better position we're in to deal with that ruin risk or any other ruin risk that may come up. There are some dangers that we don't foresee at all; our best defense against the unknown is to have lots of knowledge, lots of control over physical reality, and other general purpose tools and resources.

Slower progress with more poverty and misery is also a ruin risk for the individuals who go blind, starve, die of aging before a technological solution is available, etc.

And greater poverty and misery in the world, with worse science, increases our risk of ruin from violence. Our ruin could come from the resentment of people who want Golden Rice and feel (reasonably, IMO, but it's a risk even if they're wrong) that we're oppressing them. Civilization may be destroyed by Islam, China, Russia or some other war. The sooner everyone lives in a much nicer world (paradise by current standards), the lower our risk of war.

Civilization may be destroyed by the spread of bad ideas. The more prosperity is brought by the use of reason (e.g. science), the more people will be impressed and value reason. Accomplishments help persuade people. The sooner the safer.

Perhaps Taleb thinks the destruction of civilization, and another dark age, doesn't constitute ruin because one day people may reinvent civilization. But the destruction of civilization could result in extinction. It could involve biological warfare which creates a disease capable of killing us all. It could involve nuclear and chemical warfare which kills so many, and renders so much land uninhabitable, that everyone ends up dying. It could involve new weapons technology. If GMOs could ruin us, surely a violent conflict could too, in which people try to cause mass destruction on purpose. If nothing else turns out to be more effective (doubtful, IMO), people could try to create harmful GMOs on purpose as a weapon.

Slower progress isn't safe. Nothing provides any guaranteed safety against ruin. In general, rapid progress is the safest option. The status quo isn't sustainable, as Deutsch explains in the "Unsustainable" chapter of BoI.

I wonder whether Taleb tried to think of ruin problems affecting his own proposal, which means the death of more poor non-white children and many other bad outcomes (even if no such thinking made it into the paper). With Guardian headlines like Block on GM rice ‘has cost millions of lives and led to child blindness’, a reasonable person would give serious consideration to not advocating more of that happening. Does it make sense that denying nutritious food to millions is the safe, no-risk option, while using science to improve their lives is the big risk? That's not impossible, intellectually, but one should make a serious effort to think of counter-arguments. Taleb (in the full paper) didn't. He briefly suggested that maybe the downsides of no GMOs are smaller than some reports claim because they have other causes which GMOs don't solve. OK, but isn't there a risk that the lack of Golden Rice has killed and will kill millions? Nothing he said could reasonably be treated as a reason that risk is zero. So then, did he analyze whether there is any way that really bad stuff could lead to ruin? No, all he did was say:

Most of the discussions on "saving the poor from starvation" via GMOs miss the fundamental asymmetry shown in 7.

But it's only a fundamental asymmetry if there are no ruin risks associated with having governments forcibly malnourish the poor. Taleb (and his co-authors) didn't consider or analyze that.

What do I think of ruin risks? As I've been explaining, the short term affects our long-term prospects, so the two generally don't diverge as much as Taleb believes. In general, I think the right answer will be good in the short and long term, good in the big and small picture. We can make life good now and in the future instead of needing to make big, awful sacrifices to try to create a better future. Our success and prosperity now is what will lead to and create a good future.

Disclaimer: I don’t regard this as productive intellectual discourse. A reader might get the impression that this is the sort of critical debate which is supposed to take place between thinkers. I don't think so. I don’t think Taleb is making a good-faith or productive contribution to discourse. I don’t regard him as a worthy opponent. I think he acted intellectually irresponsibly, he’s not open to discussion or learning new ideas, and the bad philosophical thinking he’s a part of is one of the world’s big ruin risks. I regard my post as similar to debunking a UFO sighting.