Lorraine Daston



I am used to waking up in the seventeenth century. As a historian of early modern science, that’s where I spend a lot of time. But it is strange that everyone else is suddenly keeping me company there.

No, I don’t mean the plague. Fortunately for us, Covid-19 is nowhere near as deadly as the diseases caused by the bacterium Yersinia pestis. From its arrival in Pisa in 1348 to the last great outbreak in Marseilles in 1720, the bacterium killed at least 30 percent of Europe’s population and probably a comparable number along its path from South Asia to the Middle East. That would translate to ninety-nine million deaths in the US alone. No one, not even the gloomiest epidemiologist, thinks Covid-19 will carry off almost a third of the world’s population.

Yet, beyond that tepid reassurance, there’s not much consensus as to just how deadly the virus is. Observed case-fatality rates in places where the disease has spread so far range from 12.7 percent (30.25 deaths per 100,000 inhabitants, the latter a better gauge when testing is still spotty) in Italy to 2.2 percent (3.14) in Germany, although the two countries have comparable (and comparably good) health systems. For the US, the current observed rate is 3.6 percent (5.04); in China, 4 percent (0.24). (All figures from the Johns Hopkins University Coronavirus Resource Center.) There is always variability in how the same bug affects different individuals: age, sex, income, medical care, genetic dispositions, nutrition, and many other factors all play a role. But within large samples of hundreds of thousands of patients, stable averages ought to emerge and converge, at least in roughly similar populations. Why are these numbers all over the map?

That’s what I meant when I said that we’ve suddenly been catapulted back to the seventeenth century: we are living in a moment of ground-zero empiricism, in which almost everything is up for grabs, just as it was for the members of the earliest scientific societies – and everyone else — circa 1660. For them, just figuring out what a phenomenon was (Was heat or luminescence or for that matter, the plague, all one kind of thing?), how best to study it (Collect comprehensive natural histories? Count instances? Perform experiments – if so, what kind? Systematically observe – if so, what exactly, and how long?), why it happened when and where it did, and, above all, what to do with it or about it — none of these basic questions had an agreed-upon answer. It wasn’t just a question of lacking knowledge. We will always lack knowledge, which is why research is never-ending. There was no settled script for how to go about knowing.

Of course, I exaggerate the analogy between then and now. Thanks in no small part to the ingenuity, sagacity, and sheer persistence of thousands and thousands of researchers since the seventeenth century, we are the heirs not only to knowledge (what a virus is, what it does, and how to thwart it) but also to a diverse repertoire of ways of knowing, from well-designed experiments and systematic observations, already being refined and yoked together in the seventeenth century, to chemical assays and statistical analysis to computer simulations. And by researchers, I mean not only natural philosophers in their curled periwigs or professors in their white lab coats but legions of lynx-eyed investigators everywhere, at sea and in fields, in cities and in kitchens, noting events and correlations: the bark that lowers fever; the cloud formation that portends a storm; the lackluster stone that shines in the dark with a cool light. They all helped draft our script for how to go about knowing – a lengthy, intricate, and well-rehearsed script that guides our efforts to understand, among many other things, Covid-19 and its perplexingly various manifestations.

Yet, in moments of radical novelty and the radical uncertainty novelty emits, like a squid obscuring itself in ink, we are temporarily thrown back into a state of ground-zero empiricism. Chance observations, apparent correlations, and anecdotes that would ordinarily barely merit mention, much less publication in peer-reviewed journals, have the internet buzzing with speculations among physicians, virologists, epidemiologists, microbiologists, and the interested lay public. Is it true that more men are dying than women, and if so, in which age groups? Are the differences between observed case-fatality rates real or an artifact of how extensively various countries test for infection (the denominator of the fraction) and/or how causes of death are registered? For example, some countries count the death of anyone who tested positive for Covid-19 as a death due to the virus, no matter what other factors (such as diabetes) might have played a role; other countries use dominant or proximate causes in their classifications; both systems have their pros and cons.

Quite aside from the fog of statistics, there are basic facts yet to be ascertained. Is the disease airborne (and if so, how long can it linger in the air)? Do some antiviral drugs help alleviate symptoms in acute cases – and for whom? How much do ventilators, even when available, prolong the life of patients sick enough to warrant their use? Does Covid-19 cause heart attacks? Medical staff from Wuhan and Hackensack, Seoul and London, Bergamo and New York City are frantically exchanging observations on Twitter about therapies and “curious cases” (a very seventeenth-century term).

At moments of extreme scientific uncertainty, observation, usually treated as the poor relation of experiment and statistics in science, comes into its own. Suggestive single cases, striking anomalies, partial patterns, correlations as yet too faint to withstand statistical scrutiny, what works and what doesn’t: every clinical sense, not just sight, sharpens in the search for clues. Eventually, some of those clues will guide experiment and statistics: what to test, what to count. The numbers will converge; causes will be revealed; uncertainty will sink to tolerable levels. But for now, we are back in the seventeenth century, the age of ground-zero empiricism, and observing as if our lives depended on it.

10 April 2020

Lorraine Daston is director at the Max Planck Institute for the History of Science, Berlin, permanent fellow of the Wissenschaftskolleg zu Berlin, and visiting professor in the Committee on Social Thought at the University of Chicago. Her most recent book is Against Nature (2019). She is a frequent contributor to Critical Inquiry and a member of the editorial board.

Daston’s CI blog post was recently featured in Clifford Marks and Trevor Pour’s New Yorker article “What We Don’t Know About the Coronavirus.”