Our World in Data, an online publication based at the University of Oxford, announced on Tuesday that it had stopped relying on World Health Organization (WHO) data for its models, citing errors and other factors.

The group’s founder, Max Roser, said researchers are now using data from the European Centre for Disease Prevention and Control.

Until March 18 we relied on the World Health Organization (WHO) as our source. We aimed to rely on the WHO because it is the international agency with the mandate to provide official estimates on the pandemic. The WHO reports this data for every single day, and it can be found at the WHO’s site. Since March 18 it has unfortunately become impossible to rely on the WHO data to understand how the pandemic is developing over time. With Situation Report 58 the WHO shifted the reporting cutoff time from 0900 CET to 0000 CET. This means that comparability is compromised because there is an overlap between these two WHO data publications (Situation Reports 57 and 58). Additionally, we found many errors in the data published by the WHO when we went through all the daily Situation Reports. We immediately notified the WHO and are in close contact with the WHO’s team to correct the errors that we pointed out to them.

WHO, an agency of the United Nations, is responsible for international public health. Recent reports suggest US intelligence agencies relied heavily on WHO data in their national assessments of the COVID-19 threat.

The errors and inconsistencies, which Our World in Data documented in a separate report, include discrepancies in nearly a dozen situation reports published by the WHO between February 5 and March 16. Our World in Data researchers said the way the WHO was handling the errors was also a problem.

“The main problem we see with the WHO data is that these errors are not communicated by the WHO itself,” Roser and his colleagues state. “[S]ome Errata were published by the WHO—in the same place as the Situation Reports—but most errors were either retrospectively corrected without public notice or remain uncorrected.”

The lack of good data available during the coronavirus outbreak has been a major source of frustration for economists, statisticians, scientists, and public policy professionals.

A Stanford University epidemiologist and professor of medicine, in a widely circulated Stat article, recently said the COVID-19 pandemic could end up being a “once-in-a-century evidence fiasco.”

“The data collected so far on how many people are infected and how the epidemic is evolving are utterly unreliable,” said John P.A. Ioannidis, who co-directs Stanford’s Meta-Research Innovation Center.

These problems sound a bit like the local knowledge problem F.A. Hayek described nearly 80 years ago, which might explain the wildly inconsistent projections we’ve seen in COVID-19 fatality rates.

Government agencies, like people, are fallible. And the more we centralize decision-making and remove individual choice, the greater risk we face of having central authorities making sweeping decisions without the knowledge they believe they possess.
