The levees of the Red River in Grand Forks, North Dakota, are built to withstand 51-ft water levels. In 1997, the National Weather Service predicted a flood, but despite a 35% margin of error for previous estimates, it emphasized that the river would crest at 49 ft at most. When the waters rose to 54 ft, wreaking havoc on the area, local inhabitants were shocked and angry. Why had forecasters projected such confidence in their prediction? According to Nate Silver, who describes the incident in The Signal and the Noise, “The forecasters later told researchers that they were afraid the public might lose confidence in the forecast if they had conveyed any uncertainty in the outlook.”1

In hindsight, it's easy to criticize the forecasters. Not only were they wrong, but their unwillingness to admit to uncertainty had grave consequences. Silver suggests that the flood was largely preventable: sandbags could have augmented the levees, and water could have been diverted from populated areas. Looking back, it's hard to see any downside to admitting that the prediction could be off by 9 ft either way. But forecasters faced a trade-off: communicating uncertainty often undermines perceived expertise, but if you don't communicate uncertainty and end up being wrong, you risk losing even more credibility. Management of the Ebola “crisis” in the United States has crystallized this dilemma.

The Centers for Disease Control and Prevention (CDC) has been widely criticized for projecting overconfidence in U.S. hospitals' capacity to manage Ebola. When two nurses at Texas Health Presbyterian Hospital Dallas became infected after caring for Thomas Duncan, critics cited the CDC's assurance that “U.S. hospitals can safely manage patients with Ebola disease.”2 Matters worsened when the CDC attributed the nurses' infections to a protocol breach, only for it to emerge that no established protocol existed. We seemed to hit rock bottom when it was revealed that the CDC had given the second infected nurse permission to fly from Dallas to Cleveland.

Yet the agency faced a seemingly impossible task. Whereas the meteorologists' greatest risk in communicating uncertainty was a blow to their reputation, here the real risk is fear. Beyond its inherent unpleasantness, fear is a risk in itself because it demands a response. In 1976, for example, mandatory vaccinations ordered in response to a single death from H1N1 influenza caused some 500 cases of the Guillain–Barré syndrome, about 25 deaths, and vastly reduced uptake of regular influenza vaccination.1 Mandatory quarantines for Ebola aim to assuage fear but may pose greater public risk than no quarantine if they make it too difficult for U.S. health care workers to provide aid in West Africa.

The challenge, when officials facing uncertainty seek to prevent panic, is that the perception of inadequate understanding by experts is one of several factors known to heighten fear.3 Ebola's characteristics encompass essentially all such factors: most of us incur the risk it poses involuntarily, it is novel and highly fatal, it has potential for unlimited growth, and it ravages people in dreadful ways. Because we can't change these fear factors, it feels imperative to project understanding of the disease, the one factor that seems to lie within our control. Unfortunately, any new health threat comes with uncertainties, in our understanding of both the disease and the transmission risks. Yet experts' admissions of uncertainty about Ebola are frequently twisted into the frightening suggestion that our health authorities don't know what they're doing.

How can we communicate uncertainty without igniting fear and undermining public trust in health authorities? Peter Sandman, who has studied risk communication for 30 years, emphasizes the need to “proclaim uncertainty.” Recognizing that people are drawn to confident projections of any sort, Sandman argues that experts and officials should confidently indicate how uncertain they are. He suggests that when a crisis begins, we remind the public that our knowledge has limits, there will be bumps along the way, and we'll learn from our mistakes. Sandman cites Jeffrey Koplan, who, as CDC director during the 2001 anthrax scare, said, “We will learn things in the coming weeks that we will then wish we had known when we started.” Or as the World Health Organization's David Heymann said during the 2003 SARS (severe acute respiratory syndrome) epidemic, “We are building our boat and sailing it at the same time.”

Sandman advises against over-reassurance in communicating individual risk. If, during a crisis, health officials are perceived as overly reassuring, he says, the public suspects they're insufficiently worried or insufficiently candid and becomes more frightened. If officials instead admit their concern, people sense that the officials are doing the worrying for them so that they don't have to.

But though the notion of a finite amount of worry to go around is appealing, it seems equally plausible that worry could multiply exponentially. Moreover, we don't know whether Koplan's and Heymann's admissions of uncertainty were more effective than the alternative in assuaging fear and preserving trust. How many Americans, for instance, took prophylactic ciprofloxacin during the anthrax scare? Had Koplan been less up-front about his uncertainty, would such antibiotic use and its downstream consequences have been avoided?

Though we have much to learn about how to communicate uncertainty without increasing fear, one key insight from the science of risk perception may help elucidate what does not work to allay fear. My instinct is to tell people who fear Ebola how much more likely they are to be sickened by influenza or heart disease. If fears were guided by facts, such comparisons might help. But when we face an uncertain prospect that we deeply fear, we evince what Cass Sunstein calls “probability neglect”: we tend to conflate the horror of what might happen with the likelihood that it will.4 Unless we can prove there's zero risk, the dreaded event feels exceedingly likely, and thus making probabilistic comparisons may not feel reassuring.

But although facts can't overcome fear, perhaps we can prepare people to accept that the facts will change. Though the scientific process requires a willingness to question what we think we know, changes in our understanding can be perceived as flip-flopping or, worse, purposeful deception. I'm often asked, when revised guidelines replace old dogma with new, “How can we trust you when you're always changing your minds?” Clearly, appreciating this aspect of science is not instinctive.

Closely related to discomfort with changing facts is low tolerance for the bad outcomes that make us question our understanding in the first place. When the two Dallas nurses contracted Ebola, it became clear that whatever was considered adequate protection was not. Though the CDC recognized the error, admitted that it should have dispatched an expert infection team sooner, and implemented nationwide hospital-training efforts, confidence in the agency plummeted.5 People's intolerance for error was probably heightened by a perception of infection control as more intuition than science. But processes for reducing infections take time to get right; even a simple checklist for reducing central-catheter infections took years to study, refine, and operationalize. We must help people grasp that science doesn't happen instantaneously and that learning from errors is a sign not of incompetence but of experts doing their jobs.

Commercial meteorologists deal with the inevitability of error and the need to maintain public confidence by overpredicting rain, a solution they call “the wet bias” — nobody curses them on a sunny day if rain was predicted, but we go nuts when it rains and they said it wouldn't.1 Medicine has no such cushion: overpredicting the risk of a health threat can be more harmful than the threat itself. But the wet bias acknowledges a subtler, relevant fact: we pay much more attention to bad events (rain) than to good nonevents (no rain).

Public fixation on the four patients diagnosed with Ebola on U.S. soil, which has ignited fear and doubt about our health authorities' competence, highlights how easily the few bad events eclipse the countless good nonevents. What if we turned the conversation on its head? Is it not remarkable that, despite some early missteps, only two people have contracted Ebola in the United States and thousands have not? Or that everyone with Ebola who was flown here for treatment survived? We cannot rid the world of uncertainty or the inevitability of bad outcomes. Nor can we calibrate our predictions to protect our reputations rather than to serve public health needs. But we can celebrate the good nonevents.

The quarantine of Morgan Dixon, the fiancée of Dr. Craig Spencer, the first U.S. health worker diagnosed with Ebola after a brief asymptomatic period in this country, has now ended. The lifting of that quarantine should also mean the end of concerns that Spencer — who, while asymptomatic, rode the subway, ran on the High Line, bowled in Brooklyn, and ate West Village meatballs — might have exposed thousands of New Yorkers to Ebola. But the Ebola epidemic in West Africa is far from over. Containing the epidemic requires continued efforts by dedicated international health workers and a willingness to trust that though our health authorities cannot know everything, they will do everything they can to protect us with the knowledge they have.