The profusion of information that keeps emerging about the growing COVID-19 outbreak presents challenges for reporters and the scientists they talk to when researching their stories. Good reporting and science have to distinguish legitimate sources of information from no end of rumors, half-truths, financially motivated promotions of snake-oil remedies and politically motivated propaganda.

While keeping track of the outbreak, we’ve become aware of how hard this vigilance is for even the most energetic and well-motivated scientists and journalists, given the firehose of available information from both traditional sources (public health authorities, journals) and new ones (preprints, blogs).

To help in this effort, we think reporting should distinguish between at least three levels of information: (A) what we know is true; (B) what we think is true—fact-based assessments that also depend on inference, extrapolation or educated interpretation of facts that reflect an individual’s view of what is most likely to be going on; and (C) opinions and speculation.

In category A are facts, such as that the infection is caused by a beta-coronavirus; that the initial viral genome sequences of the virus were very similar; and that human-to-human transmission happens frequently—along with the number of reported cases in various locations, and the like. Multiple lines of evidence, including peer-reviewed scientific studies and reports from public health authorities, support these as facts.

In category B is the vast majority of what we would like to know about the epidemic but don't, because no systematic data exist: the true number of cases in any location; the extent of community transmission outside of China and the fraction of cases spreading undetected; the true proportion of infections that are mild, asymptomatic or subclinical; and the degree to which presymptomatic individuals can transmit the infection.

On these topics, experts can give opinions informed by their understanding of other infectious diseases; infer the consequences of available data (for example, they can infer unreported imported cases from the differences in reported imports in countries with similar travel volumes from infected areas); or perhaps gain insights from information that they have heard about and trust but that has not yet been publicly released. This category includes projections of the likely long-term trajectory of the epidemic. These views benefit from the expert judgment of the scientists who hold them and are worthy of reporting, but they should be distinguished from hard facts.

In category C are many other issues for which the current evidence is exceedingly limited, such as the effect of extreme social distancing on slowing the epidemic. There are also questions that will never be truly settled by data, such as those about the motivations of governments and health authorities. It’s not that these topics don’t matter. It’s just that they’re not accessible to science right now and may not ever be.

At their best, scientists and reporters are trying to do many of the same things—providing accurate information and interpreting it—but with different audiences and timescales. Beyond remembering the three different sorts of information that scientists can offer, how else can scientists and reporters ensure that they are doing this job well? We think several principles can help.

1. Seek diverse sources of information. Because no one has digested everything about the state of the epidemic, different experts will know different things and see different holes in our reasoning. This advice applies to scientists as well as journalists: the best scientists will consult their colleagues and ask them to find weaknesses in their work before sharing it more broadly—especially in a setting like this one, where the representativeness and accuracy of data are necessarily uncertain.

2. Slow down a little. We are all on a deadline of some sort to avoid being scooped. Someone on Twitter recently pointed out that facts about this epidemic that have lasted a few days are far more reliable than the latest “facts” that have just come out, which may be erroneous or unrepresentative and thus misleading. We have to balance this caution with the need to share our work promptly. Indeed, the categories of fact, informed belief and speculation above are fluid, and given the fast-moving pace of information about the epidemic, a question that today can be answered only with informed belief may perhaps be answered with a fact tomorrow.*

3. Distinguish between whether something ever happens and whether it is happening at a frequency that matters. A good example is the question of presymptomatic transmission. If it occurs frequently, it will make control measures that target sick people (isolation, treatment and contact tracing) less effective. It is very likely that presymptomatic transmission happens at some frequency, but the evidence is very limited at present. Knowing that it happens sometimes is of little use; we desperately need evidence on how often it happens. The same is true for infected travelers escaping detection. Of course, some travelers will escape detection, for many reasons. Again, the question is how often this happens—and whether it leads to the establishment of local transmission.

Emergencies like this one lead to extreme pressure on both scientists and journalists to be the first with news. And there are perverse incentives arising from the attention economy we now inhabit—exacerbated by social media—that may provide short-term rewards for those willing to accept lower standards. Accurate reporting should be aware of this risk, seek to avoid contributing to it and rapidly correct falsehoods when they become clear. We have a common responsibility to protect public health. The virus does not read news articles and doesn’t care about Twitter.

*Editor's note: the final sentence of this paragraph was added for clarity after posting.