There’s also a more existential shame. In an era when Big Pharma might have macerated the last drips of wonder out of us, it’s worth reiterating the fact: Medicines are notoriously hard to discover. The cosmos yields human drugs rarely and begrudgingly — and when a promising candidate fails to work, it is as if yet another chemical morsel of the universe has been thrown into the Dumpster. The meniscus of disappointment rises inside you: That domain of human biology that the medicine hoped to target may never be breached therapeutically. Of the several million chemical reactions in the human body, one estimate suggests that only 250 — a fraction of a percent — are currently targeted by our pharmacopoeia (this number changes every year, of course). The rest of our physiology is still impenetrable — invisible to pharmacology, like dark matter.

And then a second instinct takes over: Why not try to find the people for whom the drug did work? In O’s case, the sickest patients in the study had, indeed, developed a response. Couldn’t we justify using O for these patients?

This kind of search-and-rescue mission is called “post hoc” analysis. It’s exhilarating — and dangerous. On one hand, it promises the possibility of resuscitating the medicine: Find the right group of responsive patients within the trial group — men above 60, say, or postmenopausal women — and you can, perhaps, pull the drug out of the rubble of the failed study.

But it’s also a treacherous seduction. The reasoning is fatally circular — a just-so story. You go hunting for groups of patients that happened to respond — and then you turn around and claim that the drug “worked” on, um, those very patients that you found. (It’s quite different if the subgroups are defined before the trial. There’s still the statistical danger of overparsing the groups, but the reasoning is fundamentally less circular.) It would be as if Sacks, having found that the three long-term responders to L-dopa happened to be 80-year-old women from one nursing home, then published a study claiming that the drug “worked” on Brooklyn octogenarians.

Perhaps the most stinging reminder of these pitfalls comes from a timeless paper published by the statistician Richard Peto. In 1988, Peto and colleagues had finished an enormous randomized trial on 17,000 patients that proved the benefit of aspirin after a heart attack. The Lancet agreed to publish the data, but with a catch: The editors wanted to determine which patients had benefited the most. Older or younger subjects? Men or women?

Peto, a statistical rigorist, refused — such analyses would inevitably lead to artifactual conclusions — but the editors persisted, refusing to publish the paper without them. Peto sent the paper back, but with a prank buried inside. The clinical subgroups were there, as requested — but he had inserted an additional one: “The patients were subdivided into 12 ... groups according to their medieval astrological birth signs.” When the tongue-in-cheek zodiac subgroups were analyzed, Geminis and Libras were found to have no benefit from aspirin, but the drug “produced halving of risk if you were born under Capricorn.” Peto now insisted that the “astrological subgroups” also be included in the paper — in part to serve as a moral lesson for posterity. I’ve often thought of Peto’s paper as required reading for every medical student.
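The zodiac joke makes a quantitative point that is easy to demonstrate for yourself. Below is a minimal sketch — in Python, and entirely my own illustration, not anything from Peto's paper — that simulates a trial in which the drug truly does nothing, then tests twelve arbitrary "birth sign" subgroups. With twelve looks at the data and a 5 percent significance threshold, a spurious "Capricorn effect" can surface by chance alone (on average, roughly one subgroup in twenty will cross p < 0.05 under the null).

```python
import random
import math

random.seed(7)

N = 17_000         # trial size, loosely echoing the aspirin study
P_EVENT = 0.10     # identical event rate in both arms: the drug does nothing

# Simulate a null trial: "sign", treatment arm, and outcome are all independent.
patients = [
    (random.randrange(12),           # arbitrary subgroup label, 0..11
     random.random() < 0.5,          # True = treated
     random.random() < P_EVENT)      # True = had an event
    for _ in range(N)
]

def two_prop_z(e1, n1, e2, n2):
    """Two-sided p-value for a difference in event proportions (normal approx.)."""
    p_pool = (e1 + e2) / (n1 + n2)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
    if se == 0:
        return 1.0
    z = (e1 / n1 - e2 / n2) / se
    return math.erfc(abs(z) / math.sqrt(2))

spurious = 0
for sign in range(12):
    grp = [(t, e) for s, t, e in patients if s == sign]
    treated = [e for t, e in grp if t]
    control = [e for t, e in grp if not t]
    if not treated or not control:
        continue
    pval = two_prop_z(sum(treated), len(treated), sum(control), len(control))
    if pval < 0.05:
        spurious += 1
        print(f"sign {sign}: p = {pval:.3f}  <- 'significant' by chance")

print(f"{spurious} of 12 null subgroups crossed p < 0.05")
```

Run it with different seeds and the "lucky sign" wanders — which is precisely the point: a subgroup discovered after the fact carries no evidence that the drug works in that subgroup, only evidence that you looked twelve times.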