In 1986, the German sociologist Ulrich Beck published the first edition of his book, Risk Society.1 Beck's book has attained wide popularity, especially in Europe, for its analysis of the distance between scientific expertise and popular opinion in the understanding of potential harms. Time and again, Beck points out, the public complains that there is a harmful process at work, while the scientists who are appointed by authorities to assess the complaints conclude that there is no objective basis for concern. Too often, though, the scientific conclusions are advanced without involving the public and without taking into account conditions outside of the laboratory. Subsequent experiences, such as the "mad cow" disaster in British livestock, have only reinforced the problem. Public skepticism about genetically modified organisms and the food supply can be traced to the same roots. On both sides of the Atlantic, the example of HIV in the blood supply, the halting management of that crisis, and the resulting loss of public confidence in the system constitute an all‐too‐familiar case for readers of this journal.

These kinds of examples, and the conditions that Beck describes and explicates, helped lead to the development of a policy standard, again especially in Europe, called the precautionary principle. In a 1992 statement, the European Environment Agency offered a succinct formulation of the principle:

[I]n order to protect the environment, a precautionary approach should be widely applied, meaning that where there are threats of serious or irreversible damage to the environment, lack of full scientific certainty should not be used as a reason for postponing cost‐effective measures to prevent environmental degradation.

Further,

The precautionary principle permits a lower level of proof of harm to be used in policy‐making whenever the consequences of waiting for higher levels of proof may be very costly and/or irreversible (emphasis added).2

Paradoxically, technologic advancement itself is the source of the risks that concern us. In other words, scientific expertise has created the very conditions that society now doubts the experts fully appreciate. In Risk Society, Beck defines risk as “a systematic way of dealing with hazards and insecurities induced and introduced by modernization itself.” He goes on, “Risks, as opposed to older dangers, are consequences which relate to the threatening force of modernization and to its globalization of doubt.” The social and political conditions that affect current issues in transfusion medicine are thus part of a much larger “globalization of doubt” about expertise.

Although Beck does not allude to transfusion medicine per se, it is a paradigm case of risk in the sense he writes about, for it has been a modernizing development in medical care, and the risks it brings about would not exist without that modernization. The tensions within a "risk society" are brought to the surface when a new incident reignites doubt, such as concerns about West Nile virus in the summer of 2002. Sensitized by previous failures to satisfy public concern, responsible officials leap into action, often shifting their attention and energy away from known and measurable risks to persuade society that the unknown and often nonmeasurable risk is being taken seriously.

This reaction, while understandable in light of the critique leveled by Beck and many others, should give us pause. Along with policies intended to drastically reduce risk, it is an example of the way that the precautionary principle has been both transformed into a dogma and taken out of its original context. I call these tendencies “creeping precautionism” because the adoption of a precautionary stance may not even be noticed until it is too late, and because a reasonable principle can easily be turned into a self‐defeating ideology.

Consider two recent policies that public health and blood industry experts have cited in conversation as examples that might reflect creeping precautionism: p24 antigen testing for HIV, which may have prevented one case per year at a cost of tens of millions annually; and deferral of donors who have resided in Europe, even though the risk of CJD transmission remains highly theoretical, with no documented case of transmission, and hundreds of thousands of units have been lost as a result. I am no authority on the complex medical and public health issues involved, but suppose for the sake of argument that these policies raise valid questions. How did they come to pass?

Clearly many elements came into play in these policy decisions. One that has not previously been identified is the fact that, as these policies were being formed, the precautionary principle came to have a great deal of influence in environmental health circles. In fact, I believe that we live in a period in which the principle has, without argument and often without conscious awareness, been taken out of context, applied to problems other than catastrophic health concerns, and misinterpreted.

The original context was environmentalism, where ecologic complexities and uncertainties present risks that, for all intents and purposes, may truly be inestimable and irreversible. Those conditions may not apply in other policy contexts. The misinterpretation has been to suppose that precautionism requires that only zero risk is acceptable, or at least something quite close to zero risk. Even advocates of the principle would recognize these subtle shifts in thinking as errors. In particular, it is clear that no action or inaction is risk free, and that any decision has opportunity costs. The opportunity costs of precautionism in the blood industry are, among others, the loss of many usable units of blood and the alienation of potential donors.

I am arguing neither that the policies in question are without merit nor that they should be repealed. Rather, I urge that a discussion begin in the blood services community about whether unwarranted precautionism has in fact crept into policymaking. This discussion should also take into account the peculiar stresses upon the role of expertise in a modern democracy.

To step back for a moment, it is useful to recall the classical origins of the idea of expertise. In his seminal work, The Republic, Plato uses an allegory to characterize the situation of the truly knowledgeable person. He spins a tale about a group of slaves bound to their places since birth, unable to turn their heads, and only able to view a wall in front of them. Behind them are carried ordinary objects, whose shadows fall upon the wall. Knowing nothing better, the slaves assume that the shadows are the real objects.

Now suppose, Plato continues, a slave manages to break his bonds and escape to the mouth of the cave. Unaccustomed to the light, he will at first be blinded by the sun, then realize to his horror that all his life what he has thought was reality was merely illusion. He tries to enlighten (pun intended) his comrades, but they mock him as mad. Discouraged, he takes his place among the slaves again.

The allegory implies that knowledge is painful, hard to achieve, and subject to ridicule by the ignorant. This is the uncomfortable position of the expert. The problem is especially grave in medicine, whose practitioners are supposed to be sensitive to the uninitiated. Yet, too much sensitivity is incompatible with the detachment and decorum that the physician requires to be effective. Sir William Osler, generally regarded as the father of internal medicine, commented on this dilemma of the health care expert over a century ago.

Imperturbability. . . . It is the quality which is most appreciated by the laity though often misunderstood by them; and the physician who has the misfortune to be without it, who betrays indecision and worry, and who shows that he is flustered and flurried in ordinary emergencies, loses rapidly the confidence of his patients.3

I write as a product and professor of the bioethical revolution in medicine, which is characterized by a deep suspicion of expertise, manifested as the insistence on informed consent and truth telling by doctors. This revolution is only about 30 years old in practice and would have shocked my father, a 1917 graduate of the University of Vienna medical school. The emergence of bioethics in the 1970s (my father died in 1974, too soon to experience the sweeping changes in medical ethics) is of a piece with the appearance of the precautionary principle in the 1980s. Both are rooted in skepticism of professional authority, and both have advocated lay involvement in medical decisions.

I do not advocate repudiating the new medical ethics. As the Yale University psychiatrist and law professor Jay Katz has long pointed out, the doctor‐patient relationship was for eons characterized by a doctor‐dominated paradigm that is best left behind. Yet, I worry that the critique of modernism, of expertise, has swung too far. In the clinical setting, I have seen this phenomenon manifested as the reluctance of physicians to give advice to patients who are facing a complex treatment decision. Often I hear complaints that doctors respond to patients’ requests for guidance with the demurrer that it is their decision, that they must exercise their self‐determination. This is a case of turning patient autonomy into a shield behind which doctors can hide. The ethical principle of autonomy should not be an excuse for abandoning the counseling that patients crave. Similarly, in the long run I think it is bad for the profession to give up the moral authority that comes with expertise.

Perhaps those with expertise in public health and blood services should open a conversation about whether, in an honest attempt to learn from the horrors of the HIV experience, they have succumbed to creeping precautionism. I cannot say whether this is the case, only that the signs point to this possibility. On the importance of a philosophical view of scientific expertise, consider Osler again.

A rare and precious gift is the Art of Detachment, by which a man may so separate himself from a life‐long environment as to take a panoramic view of the conditions under which he has lived and moved: it frees him from Plato's den long enough to see the realities as they are, the shadows as they appear. Could a physician attain to such an art he would find in the state of his profession a theme calling as well for the exercise of the highest faculties of description and imagination as for the deepest philosophic insight.4

In our time, scientists have learned about the pitfalls of an aristocratic attitude. The challenge is to balance those lessons with society's continuing need for unshackled expertise. To this end, efforts should be made to develop evidence that quantifies risks, or at least aids in prioritizing them, with the goal of evidence‐based risk assessment. Scientists and physicians should then recognize their social obligation to participate in policy debates in which their expert opinions can help the public balance imminent and more remote risks and evaluate the costs associated with risk‐minimizing initiatives. In this way, perhaps we can all meet the Platonic challenge of doing science in a democracy.