Our latest Freakonomics Radio episode is called “Bad Medicine, Part 1: The Story of 98.6.” (You can subscribe to the podcast at iTunes or elsewhere, get the RSS feed, or listen via the media player above.)

We tend to think of medicine as a science, but for most of human history it has been scientific-ish at best. In the first episode of a three-part series, we look at the grotesque mistakes produced by centuries of trial-and-error, and ask whether the new era of evidence-based medicine is the solution.

Below is a transcript of the episode, modified for your reading pleasure. For more information on the people and ideas in the episode, see the links at the bottom of this post. And you’ll find credits for the music in the episode noted within the transcript.

* * *

We begin with the story of 98.6. You know the number, right? It’s one of the most famous numbers there is. Because the body temperature of a healthy human being is 98.6 degrees Fahrenheit. Isn’t it?

ANUPAM JENA: So, now I’m going to take your temperature, if you don’t mind just open your mouth and I’ll insert the thermometer. PATIENT: Ah! JENA: Perfect.

The story of 98.6 …

PHILIP MACKOWIAK: … dates back to a physician by the name of Carl Wunderlich.

This was in the mid-1800s. Wunderlich was medical director of the hospital at Leipzig University. In that capacity, he …

MACKOWIAK: Oversaw the care and the taking of the vital signs of some 25,000 patients.

Pretty big data set, yes? Twenty-five thousand patients! And what did Wunderlich determine?

MACKOWIAK: He determined that the average temperature of the normal human being was 98.6 degrees Fahrenheit or 37 degrees centigrade.

This is Philip Mackowiak, a professor of medicine and a medical historian at the University of Maryland.

MACKOWIAK: Well, I’m an internist by trade and an infectious-disease specialist by sub-specialty. So my bread and butter is fever.

There’s one more thing Mackowiak is …

MACKOWIAK: I am by nature a skeptic. And it occurred to me very early in my career that this idea that 98.6 was normal and then if you didn’t have a temperature of 98.6 you were somehow abnormal just didn’t sit right.

Philip Mackowiak, you have to understand, cares a lot about what is called clinical thermometry. And if you care a lot about clinical thermometry, you care a lot about the thermometer that Carl Wunderlich used to establish 98.6.

MACKOWIAK: His thermometer is an amazing key to this story of 98.6.

So you can imagine how excited Mackowiak was when, on a tour of the weird and wonderful Mütter Museum in Philadelphia, the curator told him they had one of Wunderlich’s original thermometers.

MACKOWIAK: I said: “Good heavens, may I see it?” And she said: “Sure, would you like to borrow it?” And I said: “Of course!” And so I was able to take this thermometer back to Baltimore and do a number of experiments.

The Wunderlich thermometer, Mackowiak realized, was not at all a typical thermometer.

MACKOWIAK: First of all, it was about a foot long, with a fairly thick stem, and it registered almost two degrees centigrade higher than current thermometers or thermometers of that era.

Two degrees higher — centigrade? Uh oh!
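A quick arithmetic aside (my own illustration, not from the episode): Celsius differences scale by a factor of 9/5 in Fahrenheit, so a two-degree centigrade calibration error works out to 3.6 degrees Fahrenheit. And notice that a round 37 degrees centigrade converts exactly to 98.6 degrees Fahrenheit, which hints that the famous number’s decimal-point precision is partly an artifact of unit conversion:

```python
def c_to_f(celsius):
    """Convert a temperature from degrees Celsius to Fahrenheit."""
    return celsius * 9 / 5 + 32

# Wunderlich's round 37 C becomes the suspiciously precise 98.6 F:
print(c_to_f(37))                  # 98.6

# A 2-degree-centigrade calibration error scales by 9/5:
print(c_to_f(39) - c_to_f(37))     # about 3.6 degrees Fahrenheit
```

In other words, the two extra decimal digits of apparent precision in 98.6 come free with the conversion of a rounded Celsius figure, not from any measurement that precise.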

MACKOWIAK: In addition to that, it is a non-registering thermometer, which means that it has to be read while it’s in place. So it would have been awkward to use.

Mackowiak noticed something else about the original Wunderlich research.

MACKOWIAK: Investigating further, it became apparent that he was not measuring temperatures either in the mouth or the rectum. He was measuring axillary, or armpit, temperatures, so in many, many ways his results are not applicable to temperatures that are taken using current thermometers and current techniques.

As it turns out, the esteemed Dr. Carl Wunderlich …

MACKOWIAK: … was not the most careful investigator ever to come on the scene.

The more Mackowiak looked into the Wunderlich data, and how the story of 98.6 came to be, the more he wondered about its accuracy. So he set up his own body-temperature study. He recruited healthy volunteers, male and female, and took their temperature one to four times a day, around the clock for about two days, using a well-calibrated digital thermometer in the volunteers’ mouths. What did he find?

MACKOWIAK: Of the total number of temperatures that were taken, only 8 percent were actually 98.6. And so if you believe that 98.6 is the normal temperature, then 92 percent of the time the temperature was abnormal. Obviously that’s not even reasonable.

In his study, Mackowiak found the actual “normal” temperature to be 98.2 degrees. Not a huge difference — and yet, the whole notion of a “normal” body temperature was looking more and more suspect. Why? A lot of reasons. Temperature varies from person to person, sometimes so much that one person’s normal would register as nearly feverish for another person.

MACKOWIAK: It’s almost like a fingerprint.

Temperature varies throughout the day — it’s roughly one degree higher at night than in the morning, sometimes even more. And an elevated temperature isn’t necessarily a sign of illness:

MACKOWIAK: In women it goes up with ovulation, during the menstrual cycle. The temperature goes up during vigorous exercise and this is not a fever.

And so, Mackowiak concluded …

MACKOWIAK: Looking at a rise in temperature as a reliable sign of infection or disease is inappropriately simplistic thinking.

Inappropriately simplistic thinking. It makes you wonder: if the medical establishment believed for so long in an inappropriately simplistic story about something as basic as normal body temperature — what else have they fallen for? What other mistakes have they made? I hope you’ve got some time; it’s a long list:

JEREMY GREENE: You take a sick person, slice open a vein, take a few pints of blood out of them … JENA: Drilling holes into people’s skulls. VINAY PRASAD: It was literally taking someone to hell and back. TERESA WOODRUFF: It would cause a whole series of malformations and probably a lot of fetal death. JENA: Lobotomies. KEITH WAILOO: The overuse of a mercury compound. EVELYNN HAMMONDS: The Tuskegee case. WAILOO: Losing your teeth and having your gums bleed. WOODRUFF: DES and Thalidomide. PRASAD: We use sort of a cement. WOODRUFF: Hormone replacement therapy. WAILOO: The Oxycontin and opioid problem. MACKOWIAK: As a medical historian, it is patently obvious to me that future generations will look at what we’re doing today and ask themselves, “What was Grandpa thinking of when he did that and believed that?” And they’ll have to learn all over again that science is imperfect and to maintain a healthy skepticism about everything we believe and do in life in general, but in the medical profession in particular.

On today’s show: Part 1 of a special three-part series of Freakonomics Radio. We’ll be talking about the new era of personalized medicine; the growing reliance on evidence-based medicine; and especially — pay attention now, I’m going to use a technical term — we’ll be talking about bad medicine.

* * *

We have a lot of ground to cover in these three episodes: medicine’s greatest hits, the biggest failures, where we are now and where we’re headed. In the interest of not turning a three-part series about bad medicine into a twenty-part series, we’re not even going to touch adjacent fields like nutrition and psychiatry. Maybe another time. Let’s start, very briefly, at the beginning.

Nearly 2,500 years ago, you had the Greek physician Hippocrates, who’s still called the “father of modern medicine.” You’ve heard, of course, of the Hippocratic Oath, the creed recited by new doctors. And you know the Oath’s famous phrase — “First, do no harm.” Even though, as it turns out, that phrase isn’t actually included in the Oath. It came from something else Hippocrates wrote.

Nor do many contemporary doctors recite the original Hippocratic Oath; there’s a modern version, written in 1964 by the prominent pharmacologist Louis Lasagna. The pledge begins: “I swear to fulfill, to the best of my ability and judgment, this covenant.” It’s a fascinating, inspiring document — and I think before we go too far, it’s worth hearing some of it …

LOUIS LASAGNA ADAPTATION OF HIPPOCRATIC OATH: “I will respect the hard-won scientific gains of those physicians in whose steps I walk, and gladly share such knowledge as is mine with those who are to follow. … I will remember that there is art to medicine as well as science, and that warmth, sympathy, and understanding may outweigh the surgeon’s knife or the chemist’s drug. I will not be ashamed to say ‘I know not,’ nor will I fail to call in my colleagues when the skills of another are needed for a patient’s recovery. … Above all, I must not play at God. I will remember that I do not treat a fever chart, a cancerous growth, but a sick human being, whose illness may affect the person’s family and economic stability. My responsibility includes these related problems, if I am to care adequately for the sick. I will prevent disease whenever I can, for prevention is preferable to cure. … May I always act so as to preserve the finest traditions of my calling and may I long experience the joy of healing those who seek my help.”

It’s comforting to think about the thoughtfulness, the nuance — the massive responsibility — that doctors pledge before they attempt to diagnose or heal us. How well has that pledge been upheld throughout medical history? We’ll talk to a variety of people about that today, starting with this gentleman.

JENA: My name is Anupam Jena. I’m a health care economist and physician at Harvard Medical School.

So Jena, as both a practitioner and an analytic researcher, is especially useful for our purposes. Because one of the themes we’ll hit today, several times, is that medicine, even though it’s scientific, or at least scientific-ish, hasn’t always been as empirical as you might think — and sometimes, not very empirical at all.

DUBNER: Here is an easy question: can you tell me please the history of medicine, or at least Western medicine, in, I don’t know, three or four minutes? JENA: Let me first answer the meaning of life. DUBNER: Is that going to be easier? JENA: That’ll take about five to six minutes. You know, I would say, how about three words: trial and error. I think if you think about medicine and how it has evolved — let’s just say in the last 100 to 200 years — the sorts of practices that at some point in history people thought were actually medically legitimate included drilling holes into people’s skulls, lobotomies. Even as late as the 1940s and 1950s, lobotomies were thought to actually have a treatment effect in patients with mental illness, be it schizophrenia or depression. The practice of bloodletting, which is basically trying to remove the, quote-unquote, bad humors from the body, was thought to be therapeutic in patients. Things like mercury, which we know is downright toxic, were used as treatments in the past. And that was in a time and place when I think it was very difficult to get evidence — but not only that, there was probably a perception of the field that didn’t allow for the ability to question itself. And in the last 50-plus years, probably 50 to 75 years, I think we’ve seen tremendous strides in the ability of the profession to constantly question itself. DUBNER: So it’s easy to get indignant over the idea of these treatments that turned out to be so wrong. But understanding wellness and illness is hard, obviously. So when you look back at the history of medicine, do those interventions strike you as kind of shameful — you can’t believe you’re in the profession that tried things like that — or is that just part of the trial-and-error process that you accept?

JENA: I certainly wouldn’t call it shameful. The only thing that’s shameful is when someone doesn’t believe that they have the potential for being wrong and they don’t have that desire to inquire further about whether something actually works or doesn’t work. But the idea of trying things, particularly trying things that have a really strong plausible pathophysiologic basis, I think that there is nothing wrong with that. In fact, that’s what spurred scientific discovery and many of the treatments that we have now. DUBNER: So, I have a broad question for you. The human body is, I think you and I would agree, an extraordinarily complex organism. And over history, doctors and others have learned a great deal about it. But if we consider the entire human body — from the medical perspective only, let’s leave out metaphysics and theology and what have you — from the medical perspective, how would you assess the share of the body and its functions that we truly understand and the share that we don’t really yet understand?

JENA: Huh, that’s a tough one. We’ve made a lot of headway, but to put a number on it — I would say maybe 30 percent, 40 percent that we don’t know.

JEREMY GREENE: Ooh, that’s a tough question for me to quantify.

I asked the same question of someone else.

GREENE: My name is Jeremy Greene. I’m a physician and a historian of medicine at Johns Hopkins.

So what’s Greene’s answer?

GREENE: There is a Rumsfeldian answer of the known knowns, known unknowns and unknown unknowns. A different way of answering that question would have to do with what the idea of relevant science of medicine is.

For example?

GREENE: If you take, for example, the moment in the Renaissance, the Vesalian moment, when the opening of cadavers and the description and rendering of the human body in precise three-dimensional chiaroscuro engravings was an exciting area for research: this humanist process of opening up cadavers showed that the innards were not exactly what the ancient Greeks had described. And fast-forward that to the 21st century. How many organs are left still to be discovered now? Probably zero, although who knows exactly? So as a historian, rather than giving you a fixed percent of where we are, I can give you a Zeno’s paradox: we keep on getting close to that finite moment and then reinvent a new, broader room for us to inhabit.

And that’s because there’s been a lot of progress in how we’re able to explore the human body.

JENA: There is the gross anatomy of the body, which you can see with your own eyes.

Anupam Jena again:

JENA: Then go a layer further and we’re now at the microscopic anatomy of the body. So now: what do the cells of the body look like under a microscope when they are diseased?

And now …

JENA: Now go a layer further where you are now trying to understand things about the body that you can’t even see with the microscope. And that’s at, let’s say, the level of the proteins in the cell, or even further down, the level of the DNA that encodes that protein. GREENE: By the end of the 20th century, there’s a very strong genetic imaginary, which really helps to then fuel the excitement behind the Human Genome Project. It’s thought once we know the totality of the human genome, we’ll know all we need to know about bodies and health and disease.

Of course we already know a great deal. And, to be fair, for all the mistakes and oversights in medicine, there’s been extraordinary progress. What are some of medicine’s greatest hits?

EVELYNN HAMMONDS: I’m sure every historian of science medicine would give you a different set of hits.

That’s Evelynn Hammonds. She’s a professor of the history of science and African-American studies at Harvard.

HAMMONDS: The ones that I typically think about are the introduction of more efficacious therapeutics and medicines. KEITH WAILOO: I would put something like the discovery of insulin right up there near the top.

That’s Keith Wailoo. He’s a Princeton historian who focuses on health policy.

WAILOO: It transformed diabetes from an acute disease into a disease that you live with. And to me, that is much more the story of what medicine has been able to do in the 20th century. JENA: The medicine that comes to my mind is statins. Most cardiologists believe that probably statins should be in water by now. I mean, statins are a remarkable drug. They’ve been shown to have benefit in preventing heart attacks and prolongation of life among people who have had heart attacks and the same thing for stroke and other forms of cardiovascular disease. So they are probably, at least in the last 20 years, the biggest improvement. But there are many, many drugs that are like that.

These are, truly, awesome interventions, for which we should all be thankful. One of the most remarkable developments over the past century and a half is the unbelievable gain in life expectancy: in the U.S., and elsewhere, it nearly doubled! It might be natural to ascribe that gain primarily to breakthrough medicines. But in fact a lot of it had to do with something else.

WAILOO: A lot of the advances in mortality and morbidity have come from, really, changes in the nature of social life. Infectious disease as the source of high mortality in the early 20th century began to drop long before penicillin and the antibacterials came along in the mid-century, because of improvements in housing, sanitation, diet, and sort of tackling urban problems that really created congestion and produced the circumstances that made things like tuberculosis the leading cause of mortality. HAMMONDS: For example, if you think about the reversal of the Chicago River — it used to flow into Lake Michigan in the 19th century, and people were dumping their waste into it, and every summer there would be hundreds of deaths of babies and children from infant diarrhea because the water was so contaminated. They reversed the flow of the river so it flowed downriver towards the Mississippi. And that significantly improved the health of the people who lived there.

So we’ve got public-health improvements to thank. And yes, better therapeutics and medicines. Also: new and better ways of finding evidence.

PRASAD: I actually think the technology that really revolutionized how we think is the use of controlled experiments.

That’s Vinay Prasad. He’s an assistant professor of medicine at Oregon Health & Science University. Prasad treats cancer patients. But also:

PRASAD: The rest of my time I devote to research on health policy, on the decisions doctors make, on how doctors adopt new technologies, and when those things are rational and when they’re not rational.

Which means that Prasad is part of a relatively new, relatively small movement to make medical science a lot more scientific:

PRASAD: You know, if you think about medical science, for thousands of years what was medicine but something that somebody of esteemed authority had done for many years, and told others, “It worked for me, so you better do it.”

Even though medical science seemed to be based on evidence, Prasad says …

PRASAD: The reality was that what we were practicing was something called eminence-based medicine. It was where the preponderance of medical practice was driven by really charismatic and thoughtful, probably, to some degree, leaders in medicine. And you know, medical practice was based on bits and scraps of evidence, anecdotes, bias, preconceived notions, and probably a lot of psychological traps that we fall into. And largely from the time of Hippocrates and the Romans until maybe even the late Renaissance, medicine was unchanged. It was the same for 1,000 years. Then something remarkable happened, which was the first use of controlled clinical trials in medicine.

* * *

ANUPAM JENA: Alright, take a deep breath through your mouth, in and out. JENA: Good, okay. One more. JENA: One more.

Anupam Jena is an M.D. and a health care economist.

JENA: Alright, I’m going to lift up your shirt and listen to your heart.

In most developed countries, we tend to think of medicine as a rigorous science, and of our doctors as, if not infallible, at least reliable.

JENA: I think that the typical patient probably does look to their doctor for answers and they value very highly what that opinion is.

But as we’ve been hearing, the history of medical science was often “eminence-based” rather than “evidence-based.” When did evidence really start to take over?

JENA: Evidence-based medicine has become hugely important in the last 25 to 30 years.

The movement is a result, Jena says, of at least two factors: Number one:

JENA: We’re doing more randomized controlled trials and that tells us more information about what works and doesn’t work.

And, number two:

JENA: Improvements in computer technology have now allowed us to study data in a way that we couldn’t have done 30 years ago.

There’s also been a movement to collect and synthesize all that research and all those data:

LISA BERO: So our vision is to produce systematic reviews that summarize the best available research evidence to inform decisions about health.

That’s Lisa Bero, a pharmacologist by training, who studies the integrity of clinical and research evidence.

BERO: I’m also a co-chair of the Cochrane Collaboration.

The Cochrane Collaboration was founded in Britain but is now a global network. The “systematic reviews” they produce …

BERO: … are really the evidence base for evidence-based medicine. And we’ve been a leader in so many ways in developing systematic reviews. We were the first to regularly update these reviews. We were one of the first to have post-publication peer review and a very strong conflict-of-interest policy. And actually we were one of the first journals that was published only online.

Which means that whatever realm of medical science you’re working on, you can access nearly all the evidence on all the research ever conducted in that realm — constantly updated, available on the spot. Compare that to how things used to work — looking up some 5- or 10-year-old medical journal to find one relevant article that may well have been funded by the pharmaceutical company whose drug it happened to celebrate. How is Cochrane funded?

BERO: We are primarily funded by governments and nonprofits.

What about industry money?

BERO: We don’t take any money from industry to support any official Cochrane groups.

Which means, in theory at least, that the evidence assembled by the Cochrane Collaboration is pretty reliable evidence. As opposed to …

IAIN CHALMERS: … a whole variety of things. Opinion. What the doctor had been taught 30 years previously in medical school. Tradition. What they had been told, or advised, to do by a drug-company representative who had visited them a week previously.

That is Sir Iain Chalmers, who co-founded the Cochrane Collaboration. He’s a former clinician who specialized in pregnancy, childbirth, and early infancy. He was a medical student in the early 1960s. When Chalmers observed his elders in practice, he was struck by how much variance there was from doctor to doctor.

CHALMERS: OK, so some doctors — if a woman had a baby presenting by the breech — would do a Caesarean section, without any questions asked, as it were. Or they may take different views about the way the baby should be monitored during labor. Or the extent to which drugs should be used during pregnancy for one thing or another. So lots and lots of differences in practices. The list is as long as your arm. It’s madness, isn’t it?

When he became a doctor himself, Chalmers worked at a refugee camp in Gaza. And, as he discovered …

CHALMERS: Some of the things that I had learned at medical school were lethally wrong.

Like how you were supposed to treat a child with measles.

CHALMERS: I had been taught at medical school never to give antibiotics to a child with a viral infection, which measles is, because you might induce resistance, antibiotic resistance. But these children died really quite fast after getting pneumonia from bacterial infection, which comes on top of the viral infection of the measles. And what was most frustrating was that it wasn’t until some years later that I found that, by the time I arrived in Gaza, there had already been six controlled trials comparing preventative antibiotic prophylaxis with doing nothing.

And those studies suggested that children with measles should be given antibiotics. But Chalmers had never seen those studies.

CHALMERS: So I feel very sad that in retrospect I let my patients down.

This led Chalmers to embark on a years-long effort to systematically create a centralized body of research to help attack the incomplete, random, subjective way that too much medicine had been practiced for too long. He was joined by a number of people from around the world — many of whom, by the way, were more versed in statistics than in medicine.

CHALMERS: So we embarked on these systematic reviews, about 100 of us. And that resulted at the end of the 1980s in a massive, two-volume, one-and-a-half-thousand-page book. At the same time, we started to publish electronically.

And so the Cochrane Collaboration became the first organization to really systematize, compile, and evaluate the best evidence for given medical questions. You’d think this would have been met with universal praise. But, as with any guild whose inveterate wisdom is challenged, however unwise that wisdom may be, the medical community wasn’t thrilled.

CHALMERS: There was a great deal of hostility to it from, I’d say, the medical establishment. In fact, I remember a colleague of mine was going off to speak to a local meeting of the British Medical Association, who had basically summoned him to give an account of evidence-based medicine and what the hell did people who were statisticians and other non-doctors think they were doing messing around in territory which they shouldn’t be messing around in. He asked me before he drove off, “What should I tell them?” I said, “When patients start complaining about the objectives of evidence-based medicine, then one should take the criticism seriously. Up until then, assume that it’s basically vested interests playing their way out.”

It took a long while, but the Cochrane model of evidence-based medicine did become the new standard.

CHALMERS: I would say it wasn’t actually until this century. So one way you can look at it is: where there is death, there is hope. As a cohort of doctors who rubbished it moved into retirement and then death, the opposition disappeared. PRASAD: Yeah, so that’s been the slower evolution.

That, again, is Vinay Prasad, from Oregon Health and Science University.

PRASAD: The very first studies with randomization concerned tuberculosis.

This was in the late 1940s.

PRASAD: And from then really until the 1980s, the end of the 1980s, we did use randomized trials, but they weren’t mandatory. They were sort of optional.

One big benefit of a randomized trial is that you can plainly measure, in the data, the cause and effect of whatever treatment you’re looking at. This may sound obvious but it is remarkable how many medical treatments of the past were conducted without that evidence. Anupam Jena again:

JENA: I think some of the biggest mistakes in the last century, let’s say from 1900 to 1950 — things like lobotomy used to treat mental illness, either depression or schizophrenia — those strike me as being some of the most horrific things that could be done to man without any really solid evidence base at all.

This is one of the trickiest things about practicing medicine day-to-day. Let’s say you’re a doctor, and a patient comes to see you with a persistent headache. You make a diagnosis, and you write a prescription. What happens next? In many cases, you have no idea.

The feedback loop in medicine is often very, very sloppy. Did the patient get better? Maybe. They never came back. But maybe they went to a different doctor. Or maybe they died? If they did get better, was it because of the medicine you prescribed? Maybe. Or maybe they didn’t even fill the scrip. Or maybe they did fill the scrip but stopped taking it because they got an upset stomach. Or maybe they did take the medicine and they did get better but … maybe they would have gotten better without the medicine? Like I said, you have no idea. But with a well-constructed randomized controlled trial, you can get an idea. Vinay Prasad again:
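To make that contrast concrete, here is a minimal simulation sketch. All the numbers are invented for illustration, not from the episode: most headaches resolve on their own, so “the patient took the medicine and got better” tells a doctor almost nothing, while randomly assigning patients to a treatment arm or a control arm and comparing recovery rates isolates the drug’s true effect:

```python
import random

random.seed(42)

BASE_RECOVERY = 0.60   # hypothetical chance a headache resolves untreated
DRUG_BENEFIT = 0.10    # hypothetical true extra benefit of the drug

def recovers(treated: bool) -> bool:
    """One simulated patient: recovery is mostly luck, plus a small drug effect."""
    chance = BASE_RECOVERY + (DRUG_BENEFIT if treated else 0.0)
    return random.random() < chance

n = 100_000  # patients per arm

# The sloppy feedback loop: everyone gets the drug, most get better anyway,
# and the doctor can't tell how much credit the drug deserves.
treated_rate = sum(recovers(True) for _ in range(n)) / n

# The randomized trial adds a control arm; the difference between arms
# estimates the causal effect of the drug.
control_rate = sum(recovers(False) for _ in range(n)) / n
estimated_effect = treated_rate - control_rate

print(f"recovered with drug:    {treated_rate:.2%}")
print(f"recovered without drug: {control_rate:.2%}")
print(f"estimated drug effect:  {estimated_effect:.3f}")  # near the true 0.10
```

The treated arm alone shows roughly 70 percent recovery, a number that says nothing about causation; only the comparison against the control arm recovers the true 10-point effect we built into the simulation.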

PRASAD: The moment I think in my mind that kind of set us on different course was a study called CAST.

CAST stands for Cardiac Arrhythmia Suppression Trial. It was conducted in the late 1980s.

PRASAD: CAST was a study that — one of the things doctors were doing a lot for people after they had a heart attack was prescribing them an antiarrhythmic drug, that was supposed to keep those aberrant rhythms, those bad heart rhythms, at bay. That drug actually, in a carefully done randomized trial, turned out not to improve survival as we all had thought, but to worsen survival. And that was a watershed moment, I think, where people realized that randomized trials can contradict even the best of what you believe. It really doesn’t matter in medicine that the smartest people believe something works. The only thing that really counts is what is the evidence you have that it works.

The rise of randomized controlled trials led to a rise in what are called medical reversals. Vinay Prasad wrote the book on medical reversals, literally. It’s called Ending Medical Reversal.

PRASAD: And you know, what is a medical reversal? Doctors do something for decades, it’s widely believed to be beneficial, and then one day, a very seminal study — often better-designed, better-powered, better-controlled than the entirety of the preexisting body of evidence — it contradicts that practice. It isn’t just that it had side effects we didn’t think about. It was that the benefits that we had postulated, turned out to be not true or not present.

For instance …

PRASAD: In the 1990s we would recommend to postmenopausal women to start taking estrogen supplements, because we knew that women, before they had menopause, had lower rates of heart disease, and we thought that was because of a favorable effect of estrogen. And then in 2002, a carefully done randomized controlled trial found that actually, it doesn’t decrease heart attacks and strokes; in fact, if anything it increases them.

I asked Prasad what first got him interested in studying medical reversal.

PRASAD: So I think I started to get interested in this even when I was a student, and I saw that there were some practices that had been contradicted just in the recent past, but were still being done day in and day out in the hospital. I mean, the example that comes to mind is the stenting for stable coronary angina. A stent is a little foldable metal tube that goes in a blocked coronary artery and the doctors spring it open, and it opens up the blockage. And stents are incredibly valuable for certain things. If you have a heart attack and there’s a blockage that just happened a few minutes ago, and the doctor goes in and opens that blockage up, we’re talking about a tremendous improvement in mortality, one of the best things we do in medicine. But stenting, like every other medical procedure, has something called indication drift where, yeah, it works great for a severe condition, but does it work just as well for a very mild condition? And so over the years, doctors have used stenting for something called stable angina. And stable angina is just that slow, incremental narrowing of the arteries that happens, sadly, to all of us as we get older. But the bulk of stenting was this indication drift, and we thought it worked and made perfect sense. And then in 2007, a well-done study showed that it actually didn’t improve survival, and didn’t decrease heart attacks. Even to this day, studies show that most patients who undergo this procedure believe it will do those things, and in fact it’s been disproven for eight years.

And yet: while stenting for stable angina did decline, it didn’t disappear. The rate of inappropriate stenting, Prasad says, is still way too high. This obviously starts getting into doctors’ incentives — financial and otherwise — and we’ll get into that more in Parts 2 and 3 of this series. As Prasad makes clear, there’s a long, long list of medical treatments that simply don’t stand up to empirical scrutiny. Some common knee surgeries, for instance, where orthopedic surgeons take a tiny camera …

PRASAD: … take a tiny camera, make a tiny incision, and go in there, and actually sort of debride and remove those sort of scuffed and scraped knees. And in fact, people sort of felt a lot better. They had improved range of motion. There’s no argument there. But have you studied it against maybe just taking ibuprofen, or maybe just doing some physical therapy? What if you studied it against making the patient believe that you were doing the surgery, but you don’t actually do it? And, in fact, they’ve done those studies. Those are called “sham” studies. We give the appearance that, you know, we’re going to do this procedure, and the only thing we omit is actually the debridement of the menisci and the cartilage. And in fact, when you do it that way, you find that the entire procedure is a placebo effect. There’s another example where we use sort of a cement that we inject into a broken vertebral bone, and that again was found to be no better than injecting a saline solution in a sham procedure. And the cement itself cost $6,000, so I said, you know, at a minimum you can save yourself $6,000, because you don’t need to use the cement. DUBNER: What would be the incentives for me to do the study that might result in a reversal? Because we know how publishing works — whether it’s in your field, in any academic field, or in the media as well — it’s the juicy, sexy, new findings that get a lot of heat. And it’s the maintenance articles, or the reversal articles, that nobody wants to hear about. So I would gather there are fairly weak incentives to do the studies that would result in reversals — which also makes me wonder if there is a woeful undersupply of such studies, which means there probably would be even more reversals than there are. PRASAD: Yeah, so I think that’s a fantastic question.
One of the things that we did in the course of our research was we took a decade’s worth of articles in probably one of the most prestigious medical journals, the New England Journal of Medicine, and there were about 1,300 articles that concerned things that doctors do. About 1,000 of those articles were about something new, something coming down the pipeline: the newest anticoagulant, the newest mechanical heart valve. And if you tested something new — exactly as you’d expect — 77 percent of those published manuscripts concluded that what’s newer is better. But we also discovered about 360 articles that tested something doctors were already doing, and there, 40 percent of the time, we found that the practice was contradicted, a reversal. DUBNER: I’d love for you to talk about the various consequences of reversals, including perhaps a loss of faith in the medical system generally. PRASAD: So if you find out something you were doing for decades is wrong, you harmed a lot of people; you subjected many people to something ineffective, potentially harmful, certainly costly, and it didn’t work. The second harm we call lag-time harm. Doctors, we’re like a battleship. We don’t turn on a dime. We continue to do it for a few years after the reversal. And the third harm, the deepest harm, is loss of trust in the medical system. I think we’ve seen it in the last decade, particularly with our shifting recommendations for mammography and for prostate-cancer screening, where people come to the doctor and they say, you guys can’t get your story straight. What’s going on? It’s a tremendous problem. And I’m afraid that what we are doing is making people feel like there’s nothing the doctor does that’s really trustworthy. And I’m afraid that’s sort of the deepest problem that we face, this loss of trust. DUBNER: Okay, so how do you not throw out the baby with the bathwater?
What are some solutions to a practice of medicine and medical research that would result in fewer reversals? PRASAD: So that is the million-dollar question. One is medical education. You know, we have a medical education where for two years, students are trained in the basic science of the body. Only in the latter years, the third and fourth year of medical school, are students trained in the epidemiology of medical science, in evidence-based medicine, in thinking not just how does something work, but what’s the data that it does work? And I’ve argued that needs to be flipped on its head: that the root, the basic science of medical school, is evidence-based medicine. It’s approaching a clinical question knowing what data to seek, and how to answer it in a very honest way. So that’s one. The next category is regulation. And this is where you get into, you know, what is the FDA’s role, and what does the FDA do? And I think many people in the community hope that products approved by the FDA are both safe and efficacious for what they do. But you know, we were faced with a problem in the ’80s and ’90s that we had never faced before, which was the HIV/AIDS epidemic. And advocates rightly said that we need a way to get drugs to patients faster, maybe even accepting a little bit more uncertainty. I think that was right. And I think that’s still right for many conditions that are very dire, for which few other treatment options exist, and which sometimes have very low incidence, so it’s very hard to do those studies because very few people have the condition. But what’s happened is that mechanism has been extrapolated to conditions that are not dire, that have very good survival, that don’t have few options but many options, and that many people do have. So we’ve had, again, sort of a slippery slope for what qualifies for this accelerated approval. So I think there are ways in which regulation can be adjusted. And then I think the last thing is the ethic of practicing physicians.
You know, we have to have an ethic where when we offer something to someone and there’s uncertainty, we should be very clear about communicating that uncertainty. I think it’s a tragedy today that, no matter what you think of stenting for stable coronary artery disease, so many people who are having it done believe something that is clearly not true: that it lowers the rate of heart attacks and death. That’s just factually not true, and the fact that many people believe it, I think, speaks to the fact that as doctors, we allow them to believe it. DUBNER: And let me ask you one last question. Having spoken to you for a bit, I have a pretty good sense of what has prevented medicine in the past from being more scientific or more evidence-based. But what do you believe are the major barriers that are still preventing it from becoming as evidence-based as you’d want it to be? PRASAD: So we should be honest about what medicine is. In the United States, medicine is something that now takes nearly 20 percent of GDP, or even more. It’s a colossus in our economy. We spend more on medicine than any other Western nation. We probably don’t get as much from it as what we’re spending. Because it’s such a large sector of the economy, the entrenched interests, the companies and the people who really profit from the current system, are tremendously reluctant to change things. I think we see that with, just for one instance, the pharmaceutical drug-pricing problem we’re having right now. I think no one will doubt that the pharmaceutical industry has made some great drugs. They’ve also made some less-than-great drugs. But does every drug, great or worthless, have to cost $100,000 per year? And I didn’t invent that number. That’s actually the cost per annum of the average cancer drug approved in the United States in the last year: well over $100,000 per year of treatment. I think there’s got to be a breaking point, and people are recognizing that.

* * *

Freakonomics Radio is produced by WNYC Studios and Dubner Productions. Today’s episode was produced by Stephanie Tam, with help from Arwa Gunja. The rest of our staff includes Shelley Lewis, Jay Cowit, Merritt Jacob, Christopher Werth, Greg Rosalsky, Alison Hockenberry, Emma Morgenstern, Harry Huggins and Brian Gutierrez. If you want more Freakonomics Radio, you can also find us on Twitter and Facebook and don’t forget to subscribe to this podcast on iTunes or wherever else you get your free, weekly podcasts.