Summary

Decision principles that place a strongly asymmetric burden of proof on showing that something is (or is not) harmful can lead to extreme conclusions. Symmetric approaches based on expected values work better. However, high degrees of caution are justified even from an expected-value standpoint when considering novel and transformative technologies.

Introduction

This essay critiques the basic notion behind the precautionary principle. I acknowledge that the precise definition of the principle may indeed consider the points that I make; if that is the case, then this criticism is not directed against technical application of the actual principle but, rather, against popularized and oversimplified conceptions of it.

When I first wrote this piece in 2005, I looked through Google's list of definitions for "precautionary principle" and found a few that stated the idea in the way in which I had always heard it:

"Better safe than sorry" attitude. The idea that, in the face of uncertainty, society should assume that potential problems are real and address them accordingly. (source)

Assumption of the worst-case scenario with respect to actions whose outcomes are uncertain. (source)

In other words, when one does not know how harmful something will be, one assumes the worst until it is proven otherwise.

Counterexamples

This idea may sound very nice initially, but consider a few cases that one might encounter in practice.

Example 1 Suppose that Monsanto has just developed two new pesticides, Pesticide A and Pesticide B. Each has so far been subjected to only one preliminary study, so their actual toxicities remain uncertain. Yet, comparing only these initial results, it appears that A is very likely to be carcinogenic, while B shows no such signs. Following the simple precautionary approach given above, we must assume that both chemicals are seriously carcinogenic, because that is the worst-case scenario and their actual deleteriousness is not yet certain. But this would mean that regulatory agencies ought to spend just as many resources controlling and limiting the use of B as they do A—which is clearly not what actually ought to be done.

Example 2 In 1985, EPA characterized the chemical dioxin as "the most potent carcinogen ever tested in laboratory animals" (source). One source of dioxin is the bleaching of paper with chlorine. Confirmed-safe alternatives for bleaching paper have in fact been discovered, but imagine that they had not been. Suppose someone develops a new chemical that proves as effective and inexpensive in bleaching paper as chlorine. This chemical, too, creates bleaching byproducts, but they are entirely different from dioxins. Before any studies have been performed, the precautionary principle requires assuming the worst possible toxicity—perhaps as bad as the toxicity of dioxin. But in all probability, the new chemical byproducts will not be this baleful. If there is no cost for paper mills to replace chlorine with the new bleaching agent, then they clearly ought to do so, even if it will take years for the new chemical to be studied extensively. Yet the precautionary principle as outlined above rejects this action—or at least makes no recommendation about it.

As these examples show, the problem with the simplified precautionary principle lies in its inordinately strong presumption of harmfulness, even when such harmfulness is not likely. Certainly we should not wait until adverse consequences are proven to take action—to that extent, the precautionary principle is right. But it can be just as shortsighted to assume great harm as it is to assume none at all.

Expected values

The best approach is to calculate an approximate expected value of deleteriousness (see "Why Maximize Expected Value?"); this provides an appropriate ethical tradeoff between risks of action and risks of inaction. Granted, coming up with probability values is an inexact and somewhat arbitrary process (indeed, the frequentist school of probability rejects assignment of such values even in principle), but what is the alternative? It is even more arbitrary to assume one particular result, as the simplified precautionary principle does. It is ultimately more helpful to say that A has a 0.9 chance of being carcinogenic while B has a 0.4 chance than to assume that they both have a 1.0 chance until proven otherwise.
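The contrast between the two rules can be sketched numerically. The probabilities below are the illustrative 0.9 and 0.4 figures from this section; the harm magnitude is a made-up unit for the sketch, not data from any study:

```python
# Hypothetical comparison of the simplified precautionary (worst-case) rule
# with an expected-value rule for two pesticides of uncertain toxicity.

def expected_harm(p_carcinogenic: float, harm_if_carcinogenic: float) -> float:
    """Expected harm = probability of the bad outcome times its magnitude."""
    return p_carcinogenic * harm_if_carcinogenic

HARM = 100.0  # illustrative harm units if a pesticide turns out carcinogenic

# Simplified precautionary principle: assume the worst (p = 1.0) for both,
# since neither chemical's toxicity has been proven either way.
worst_case_a = expected_harm(1.0, HARM)
worst_case_b = expected_harm(1.0, HARM)
assert worst_case_a == worst_case_b  # the rule cannot distinguish A from B

# Expected-value approach: use rough probabilities from the preliminary studies.
ev_a = expected_harm(0.9, HARM)  # A: very likely carcinogenic
ev_b = expected_harm(0.4, HARM)  # B: no signs so far, but still uncertain
assert ev_a > ev_b  # regulate A more stringently than B
```

The point of the sketch is only that the worst-case rule throws away the evidential difference between A and B, while the expected-value rule preserves it, however rough the probability estimates are.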

Being cautious is important for transformative technologies

In the two counterexamples above, the question concerned replacing one known-harmful substance with another substance that was similar in kind to the first. In other cases, when the new innovation is fundamentally different and unknown, being cautious is much more important. This is not because of the precautionary principle but because, even in expected-value calculations, taking a slow and steady approach is more prudent.

For example, consider molecular nanotechnology. Because it could be used in dangerous ways and is unlike past technologies, it is valuable to proceed slowly and carefully. If the technology works, we can have it for billions of years into the future; there is no hurry. But if it goes wrong, it could lead to social dislocation and may increase suffering in the future, possibly indefinitely in the worst cases. Hence, even an expected-value approach recommends a high degree of caution. One reason we cannot be excessively cautious, though, is that altruists need to be able to develop safety measures before other actors outpace them in technological development. That said, rather than fighting an arms race with non-altruists, we should seek ways to coordinate through technological controls in the style of arms control.