
But you'd never know that from reading the press. Take a recent miracle procedure for multiple sclerosis. MS is a degenerative disease with no cure. In sufferers, the immune system attacks the protective layer around the nerves, disturbing the communication between brain and body — and causing a cascade of devastating symptoms: unsteady and jerking movements; loss of vision, bladder and bowel control; and eventually, early death.

In 2009, a breakthrough: a charming Italian researcher, Dr. Paolo Zamboni, claimed to have cured his wife's MS by "unblocking" the veins in her neck. He theorized MS wasn't an autoimmune disorder but a vascular one. The research was counterintuitive, it gave people with the disease hope, and it had an appealing personal tale behind it, involving one man's quest to save his wife. It was catnip for health reporters, who hailed "liberation therapy" as a romance-fueled medical triumph.

Sadly, however, Zamboni's discovery was more hype than breakthrough. What didn't get as much attention as his romantic quest was the fact that his study was small and badly designed. Other researchers who attempted to replicate his findings failed. Soon, anecdotes of patient complications and relapses emerged.




This cycle recurs again and again. An initial study promises a miracle. News stories hype the miracle. Researchers eventually disprove the miracle.

"There's a big, big difference between how the media think about news and how scientists think about news," Naomi Oreskes, a Harvard professor of the history of science, recently told me in an interview. "For you, what makes it news is that it's new — and that creates a bias in the media to look for brand new results. My view would be that brand new results would be the most likely to be wrong."

Most medical studies are wrong

All studies are biased and flawed in their own unique ways. The truth usually lies somewhere in a flurry of research on the same question. This means real insights don't come by way of miraculous, one-off findings or divinely ordained eureka moments; they emerge from a long, plodding process of vetting, replication, and peer-to-peer discussion. The aim is to make sure findings are accurate and not the result of a quirk in one experiment or the biased crusade of a lone researcher.

As science is working itself out, we reporters and our audiences seize on "promising findings." It's exciting to hear about a brand new idea that maybe — just maybe — could revolutionize medicine and stop some scourge people suffer through. We're often prodded along by overhyping scientists like Zamboni, who are under their own pressure to attract research funding and publications.



We don't wait for scientific consensus; we report a little too early, and we lead patients and policymakers down wasteful, harmful, or redundant paths that end in dashed hope and failed medicine.

This tendency could be minimized if we remembered one thing: the overwhelming majority of studies in medicine fail.



Many of the experimental cancer therapies featured in the news may seem like the holy grail of cancer treatment, but they're just the latest in a long line of seemingly "revolutionary" fixes. In fact, there have been more than 200 failures of supposed cancer breakthroughs in recent years.

A highly regarded service that vets new studies for clinicians finds — on average — only 3,000 of 50,000 new journal articles published each year are well-designed and relevant enough to inform patient care. That's 6 percent.

More often than not, single studies contradict one another — such as the research on foods that cause or prevent cancer. The truth can be found somewhere in the totality of the research, but we report on every study in isolation underneath flip-flopping headlines. (Red wine will add years to your life one week, and kill you quicker the next.)



For a study on whether everything we eat is associated with cancer, academics randomly selected 50 ingredients from recipes in The Boston Cooking-School Cook Book. Most foods had studies behind them claiming both positive and negative results.

Researchers cannot always replicate the findings of other researchers, and for various reasons many don't even try. All told, an estimated 85 percent — or $200 billion — of annual global spending on research is wasted on badly designed or redundant studies.
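A quick back-of-envelope check on those figures (my own arithmetic, not a calculation from the article): if $200 billion is 85 percent of annual global research spending, the implied total is roughly $235 billion a year.

```python
# Back-of-envelope check (my own arithmetic, not a figure stated in
# the article): if $200 billion of research spending is wasted and
# that waste represents 85% of the total, what is the implied total?
wasted_billions = 200
wasted_share = 0.85

total_billions = wasted_billions / wasted_share
waste_check = total_billions * wasted_share  # should recover the $200B figure

print(round(total_billions))  # ~235
print(round(waste_check))     # 200
```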



This means early medical research will mostly be wrong until, if we're lucky, it's eventually right. More concretely, only a tiny fraction of new science will lead to anything useful to humans.

There is no cure for our addiction to medical hype

We now live in an age of unprecedented scientific exploration. Through the internet, we have this world of knowledge at our fingertips. But more information means more bad information, and the need for skepticism has never been greater.

I often wonder whether there is any value in reporting very early research. Journals now publish their findings, and the public seizes on them, but this wasn't always the case: journals were meant for peer-to-peer discussion, not mass consumption.

Working in the current system, we reporters feed on press releases from journals and it's difficult to resist the siren call of flashy findings. We are incentivized to find novel things to write about, just as scientists and research institutions need to attract attention to their work. Patients, of course, want better medicines, better procedures — and hope.



But this cycle is hurting us, and it's obscuring the truths research has to offer. (Despite the very early and tenuous science behind liberation therapy, MS sufferers traveled the world seeking it out, and launched political movements calling for resources to fund the treatment.)



For my part, I've tried to report new studies in context, and use systematic reviews — meta-analyses of all the best studies on clinical questions — wherever possible. When scientists or other members of the media prematurely blow up a novel breakthrough, I've tried to convey the reality that it's probably not a breakthrough at all. The more I do this, the more I realize the truth in what Harvard's Oreskes, Stanford's John Ioannidis, and many other respected researchers have reiterated over the years: we need to look past the newest science to where knowledge has accumulated. There, we'll find insights that will help us have healthier lives and societies.



As we turn away from the magic pills and miracle treatments, I think we'll focus more on the things that actually matter to health — like education, equality, the environment.



It's not always easy, and the forces pushing us to the cutting edge are powerful. But I try to proceed cautiously, to remind myself that most of what I'm seeing today is hopelessly flawed, that there's value in looking back.