0:33 Intro. [Recording date: February 26, 2019.] Russ Roberts: My guest is Jacob Stegenga.... His latest book, which is the subject of today's conversation, is Medical Nihilism.... Now, this is an utterly fascinating book that begins with what seems like an essentially untenable claim that can't be true, and then relentlessly makes the case for that claim, so that by the end of the book you wonder if it is true. And I have to confess--as listeners will discover and recognize, I'm sympathetic to some of the arguments in the book. Many of them, in fact. But I'm surprised at how far you got me to come along with you, Jacob. So, let's start with what you mean by this rather daunting term, 'medical nihilism'. Jacob Stegenga: Sure. So, medical nihilism--medical nihilism [pronunciations: medical nee-hilism or medical nai-hilism] is the term that I'm using to summarize the overall argument of the book. So, the book is constituted by many kind of smaller-level arguments, in each chapter. But the overall argument I'm referring to as medical nihilism. And the conclusion of this argument is that we ought to have low confidence in the effectiveness of medical interventions. So, it's a skeptical thesis about how confident we should be in modern medical interventions. Russ Roberts: Well, I'd say 'skeptical' is not the right word. I would say, at least, 'highly skeptical'. Jacob Stegenga: Fair enough. Yeah. It's a very pronounced form of skepticism. It runs deep. It's meant to-- Russ Roberts: like most medical interventions are a bad idea. That's the way I would--or a surprisingly large number are a bad idea, is the way I would describe it. Jacob Stegenga: Right. That's a fair description. Yeah. Russ Roberts: So, that seems to be silly. You concede early on that through most of history this was clearly true.
Many of the cures and interventions of the past--ingesting mercury, bloodletting, and other things--didn't work; didn't improve the patient; in fact often were dangerous and harmful on net. And yet you admit that most people would say, 'That was then. This is now.' And, of course, in the last 50 or 60 years we've seen many, many--and even a little past that, maybe going back to the 1920s in America and the world--you'd say, 'Since then, we've discovered science; and the Enlightenment and the scientific method have given us many, many great and glorious health improvements. And doctors are to be revered, adored, as well as the people who create the devices and pills that we take and attach to ourselves and deal with.' And yet, you argue that even most of the modern ones are not so good. So, first you should probably make--you do concede there are a few, what you call, magic bullets. So, why don't you talk about what a magic bullet is and the ones that you highlight in the book; and then why you think there are so few after that. Jacob Stegenga: Sure. Yeah. There's a lot packed into your question there. It's a really good summary of part of the motivation of the book. So, you're gesturing towards what I call in the book the 'Today is different' response to medical nihilism. So, the idea is we have modern science, we have strict regulation, we have effective pharmaceuticals, so this skeptical thesis is just nowhere near as compelling as it would have been in, say, the 18th century. And so, part of the argumentative burden of the book is to dispel the persuasiveness of some of those premises in the 'Today is different' argument. Also, as you noted, the thesis is not the kind of audacious, radical claim that there's not a single effective medical intervention. Of course there are. I refer to the very best medical interventions as 'magic bullets.'
So, a magic bullet is an intervention which targets the pathophysiological basis of a disease with high specificity and high potency. The term 'magic bullet' comes from the chemist Paul Ehrlich, one of the most important scientists in the early part of the 20th century. He was looking for a cure for syphilis. And the treatment at the time was mercury. So, he was referring to this need for a chemical to bind to the bacterium that caused syphilis--which had recently been discovered thanks to the germ theory of disease. So, he wanted a chemical that would bind to this bacterium, kill it, and only interfere with that bacterium and not the rest of our normal physiology. So, that's where the term comes from. He and one of his colleagues, Sahachiro Hata, ended up finding a chemical with this kind of specificity; and some people call this the first modern antibiotic. And it was later improved on by penicillin. So, antibiotics like penicillin are magic bullets. They target disease entities with high potency and high specificity. The other example of a magic bullet in the book is insulin for Type 1 diabetes. So, the treatment for Type 1 diabetes until the early 1920s was starvation therapy. So, children who were born with Type 1 diabetes would be starved into a coma, and they would live until maybe the age of 15 or 16, and then they would die. When Banting and Best discovered insulin as an intervention for diabetes--they developed an animal model of diabetes, diabetes in dogs--they discovered that you could modulate, radically reverse, the symptoms of Type 1 diabetes using insulin. They just walked across the street to one of these wards with comatose children who were born with Type 1 diabetes and just started jabbing the kids with insulin. And the kids woke up out of their comas. So, it's a magic bullet. Now, penicillin and insulin aren't perfect.
I mean, bacteria develop resistance to penicillin; some people have allergies to penicillin and other antibiotics. The dosing of insulin has to be very, very careful for diabetics. But nevertheless they are pretty miraculous drugs. They either eliminate the disease entity altogether--in the case of antibiotics--or, in the case of drugs like insulin, they really effectively manage the symptoms of the disease without curing the disease.

7:59 Russ Roberts: A part that was so interesting to me, and I learned a lot from the book: there's a large class of pharmaceutical interventions that I would say after reading your book fall into two categories, broadly--the non-magic-bullet categories. One is that they just don't work: They might affect some measure of health, like cholesterol level, but they don't necessarily reduce heart attacks, which is what we of course actually care about. So, there are ineffective drugs that seem to perhaps help but ultimately, we find, don't. The second group, which is really interesting conceptually, are pharmaceutical interventions, drugs, that aren't specific. Because of the complexity of disease, the attempt to cure the bad part leads to too many other things going on at the same time that can't be isolated. So, talk about both of those, and help us understand the role of complexity and the human body in the second case, because it mirrors the way I think about the macroeconomy and attempts to "cure it" in economic policy. Jacob Stegenga: Oh, right, yeah. That's an insightful point. I think there are a lot of, like, conceptual similarities between trying to intervene on a complex physiological system and trying to intervene on a complex social system. So, to articulate one of the arguments in the book: those interventions that aren't magic bullets--what is it about these interventions that makes them not magical? What is it about them such that they fail to live up to the standard that insulin and penicillin set? Just as an aside, I wouldn't necessarily want to say that a drug that's not a magic bullet isn't useful at all. Russ Roberts: Excellent point. Jacob Stegenga: And certainly some listeners to your podcast and some readers of the book will say, 'Wait a second. Statins might be useful.' The empirical evidence shows that statins can lower the risk of heart attacks by a small amount. Say, 1% in an at-risk population.
One percent is better than 0%, so there's certainly some utility to statins. Now, the response to that is that that kind of effectiveness--a 1% reduction in the risk of a heart attack--is a completely different order of magnitude than the effectiveness of insulin and penicillin. Okay, so with that caveat aside, let me answer your question. There are two general kinds of physical reasons for an intervention failing to be a magic bullet. One has to do with the complexity of the target system--as you said. So, many disease entities that we're trying to intervene on have a radically complex causal basis. So, intervening on one node or one causal chain in this massively complicated causal nexus won't lead to the kinds of outcomes that we want, because the causal network can just be robust against external perturbations. Many diseases are like this--the pathophysiological basis of heart disease, or of pretty much all psychiatric diseases, is radically complex. So, that's about the complexity of the disease states. Another reason why many interventions fail to have the specificity or potency that we want is because of the ways in which drugs work on our body. So, drugs work as ligands. A ligand is something that binds to a receptor and changes the way that receptor works in our body. It turns out that there's a one-to-many relationship between ligands--most ligands, most drugs--and receptors. So, a single drug can bind to multiple receptors. It turns out also that there's a one-to-many relationship between an activated receptor and biochemical pathways. So, if you turn up or turn down one receptor, that can modulate multiple biochemical pathways. And also there's a one-to-many relationship between an activated biochemical pathway and physiological effects, depending on which organ or tissue the pathway is in. So, there's this, like, cascading complexity from consumption of a drug to physiological effect.
So, for these two physical reasons--the complexity of diseases and the complex ways in which drugs modulate our physiology--most drugs aren't magic bullets.

13:10 Russ Roberts: The economist F.A. Hayek said that the curious task of economics is to demonstrate to men how little they really understand about what they imagine they can design--a quote listeners are familiar with. Is it conceivable that some of these cascades of complexity will be better understood in the future? And, that our pharmaceutical interventions will be more successful? Or is there a certain level of complexity in the human body that you think cannot be overcome for some of these problems? Jacob Stegenga: This is the question to ask, I think, in response to the arguments that I put forward in the book. So, there's a certain ambiguity in the thesis of medical nihilism. To put it in philosophers' terms, the thesis can be either an epistemological thesis or a metaphysical thesis. The epistemological thesis is: Our methods of science as they are today just aren't good enough for us to get what we want. The metaphysical thesis is stronger. It says: The way our bodies are, and the way the medical interventions work on our bodies is just physically such that magic bullets will be out of reach, in principle, for many diseases. I, myself, sit on the fence between these two positions. But let me try to say a few words about how the development of science could possibly proceed such that we get more and more magic bullets in the future. One obvious way is just to pursue more research for diseases that we have a track record of finding magic bullets for. So, if we go back to the penicillin and insulin case, we can conceive of these really broadly as diseases of deficiency. Like, 'There's not enough insulin in your body, so put more in.' Or, scurvy is like, 'There's not enough Vitamin C in your body, so just put some Vitamin C in your body.' So those are diseases of deficiency. And diseases of infection are cases where there's something in your body that shouldn't be there. And so antibiotics work by just getting rid of those things. 
So, those are pretty basic physical systems that we can intervene on. And so, if we want more magic bullets, we could continue to develop those kinds of interventions. And I think that the most promising and most important line of medical research for the future will be to develop more antibiotics, in part because of the development of antibiotic resistance. So, we really must have in our arsenal more and more antibiotics for the future. Okay. Another way in which we could develop our science so that we are able to develop more and more magic bullets is to learn more about the physiological basis of what I'm calling complex diseases. So, it could be that, say, depression--the way we're talking about depression now is that it's a complex disease. But that might just be a way to mask-- Russ Roberts: ignorance-- Jacob Stegenga: the real nature of the disease. Exactly. It might be a way to mask ignorance. So, it could just be that depression is not one kind of disease, but maybe a hundred kinds of disease--a hundred-some subtypes of disease. So, the reason why SSRIs [selective serotonin reuptake inhibitors] fail to be effective-- Russ Roberts: and those are? Jacob Stegenga: Selective serotonin reuptake inhibitors, like the major class of antidepressants that we use. So, the reason why antidepressants might essentially fail to be clinically significant now is that we are using, you know, a handful of drugs to try to intervene on a hundred different subtypes of depression. But as science progresses and we are able to subtype these kinds of depressions, we'll be able to tailor drugs to those subtypes. And that's one of the promises of personalized medicine. Personalized medicine is supposed to be: getting a bunch of big data, learning more about the physical basis of diseases, and then looking for interventions to target those physical bases.
Now, whether or not you are persuaded by the promise of personalized medicine in a sense depends on--it goes beyond the current empirical facts. So, some people are cup-half-empty. Some people are cup-half-full. You might be optimistic about what the future of science will bring to medicine. Or you might be more or less pessimistic. And I don't have an argument to sway you one way or the other, if you are kind of inherently optimistic or inherently pessimistic. Russ Roberts: In Hayek's 1974 Nobel Prize address, "The Pretense of Knowledge," he suggested we will never acquire the level of knowledge that will allow us to intervene successfully in the macroeconomy. Many economists disagree with that. And we'll put a link up to that speech. I heartily recommend it for skeptics everywhere.

18:17 Russ Roberts: But, coming back to this question of--I want to ask two things about what you just said. One is--let's start with this, because it's a general issue that runs through, I think, some of your claims--a problem with some of your claims. So, many, many interventions--you mentioned SSRIs or antidepressants--people would say, 'Okay, they don't show up in clinical trials. But for me, it's fabulous.' Recognizing that for some people it makes them more depressed--people, I think, recognize that. But for many people, once they get the right cocktail or the right drug, they find they are much more capable of getting along in the world. And they would argue--and they have in the past when this kind of issue has come up on the program--they would say: 'You are dangerous, Jacob, because you are discouraging something that is lifesaving, for some people. Not everybody--okay, we agree with that. But it's made so many people's lives better.' And, of course, many psychiatrists today are not doing cognitive behavioral therapy. They are dispensing drugs. That's their overwhelming practice. And they think they are doing God's work. They think they are saving lives and making people's lives better. And if they don't, they just need to tweak it or find a better variation. So, how do you respond to that? Jacob Stegenga: Good; this is a very important question, and there is a lot that can be said about it. So, in general it raises the following question: What kinds of evidence should we be appealing to when we judge the benefits and harms of medical interventions? In evidence-based medicine, there's been a very, you know, powerful movement in medical research to move towards promoting certain kinds of evidence and downgrading other kinds of evidence. So, the gold standard in evidence-based medicine today is the randomized controlled trial [RCT]; and meta-analyses of randomized controlled trials.
So a meta-analysis is like a bringing together of results from all of the available trials. And, evidence-based medicine did this for good reason. So, the way in which we made causal inferences about the benefits and harms of medical interventions before evidence-based medicine was to appeal to things like expert opinion, background theoretical knowledge-- Russ Roberts: patient[?]-- Jacob Stegenga: anecdotes--yeah, exactly. Case reports. And, the community--statisticians, epidemiologists, regulators--recognized that these forms of evidence were shot through with biases. And so, as medical research progressed through the 20th century and now into the 21st century, the methods for testing the benefits and harms of drugs got better and better and better, insofar as they controlled for more and more of these biases. Okay. Now, what about first-person reports? What about first-person anecdotes? Like, 'This drug worked for me.' Or, 'This drug worked for a good friend of mine,' or 'a patient of mine'? Russ Roberts: My patients. Yeah. Jacob Stegenga: My patients. So, what are we supposed to say about these kinds of cases? The short answer is we should approach first-person reports with a huge amount of cautionary skepticism. And this is for three fundamental reasons, which all work together. The first reason is that diseases have a natural course of progression. That is, they have a kind of a life of their own. So, symptoms get better and worse over time for many diseases. Some diseases have a natural course of progression in which the symptoms gradually decrease until they are gone. This is, for instance, illustrated by the common cold. Some diseases fluctuate, with symptoms varying[?] over time. So, for instance, like bipolar disorder, or depression--symptoms are worse at some times, better at other times. And, people tend to seek treatment from their physician when their symptoms are especially bad.
Now, if you seek treatment when your symptoms are especially bad, then the passage of time alone entails that your symptoms will get better in the future--for these diseases that have a fluctuating severity of symptoms or a gradually decreasing severity of symptoms. So that's problem Number One: the natural course of disease. Problem Number Two is the infamous placebo effect. So, the placebo effect is when the expectation that you'll get better, because you received treatment from a health care professional, in fact causes you to get better: not via the biochemical activity of the drug that you've consumed, but via some sort of mysterious psychological phenomenon that we don't actually understand very well at this point. So, that's Problem Number Two: the placebo effect. Problem Number Three is a well-known fallacy of reasoning that philosophers call 'confirmation bias.' Russ Roberts: Yeah, 'the narrative fallacy,' also. Jacob Stegenga: Is that another word for it? Russ Roberts: Yeah, it is: You tell yourself a story, and then everything fits the story. It's a version of confirmation bias. Jacob Stegenga: Exactly. Yeah. So, confirmation bias in general is paying more attention to evidence that confirms your beliefs and ignoring evidence that disconfirms your beliefs. And we have a massive amount of evidence that shows that typical people suffer from confirmation bias in really big ways; but also physicians, patients, and even, you know, professors-- Russ Roberts: Economists. Jacob Stegenga: Economists. Yeah. So we--the royal "we"--suffer from confirmation bias. So these three problems together--the natural course of diseases, the placebo effect, and confirmation bias--entail that we should treat first-person reports regarding the effects of interventions with a huge amount of skepticism. Now, I should add the following caveat, though.
In medicine, there's been a long tradition of neglecting the patient's reports, because medicine, at least sometimes, has been kind of imperialistic in its attitudes. So, 'The physician is the educated one; they know about your disease; you don't know anything about your disease. You are sick. Maybe you're a woman. Maybe you're disabled.' And, like, the white, upper-middle-class, male physician knows best. And so there's been a tendency to push back against this in medicine: 'Medicine should listen more, should hear the patient, and should respect what the patient is reporting.' I agree with all of that. Medicine should--we, like, the physicians, should be listening very carefully to our patients and respecting what patients report. However, when it comes to causal inference, that's a completely different ballgame. And I think we ought to be maintaining really, really strict evidential standards when it comes to deciding: Did this drug have the following effect?

26:13 Russ Roberts: So, let's talk about side effects generally, because they are related to this issue of complexity. And it gets at something I didn't feel you emphasized enough. So, one of the themes of the book is that many of the things that we think work actually don't. Many of the things that we think work don't work very well. And many of the things that work a little bit have side effects that actually are negative, or offset or roughly counterbalance the good effects. And, you make a very persuasive case--I hope we'll get to it, but if not, I want to say it here because it's very important--that there is a strong set of forces that cause us to underestimate the harm of intervention while overestimating the benefits. However, just because something is harmful doesn't mean you shouldn't--I mean, just because something has side effects doesn't mean you shouldn't do it. It could be that it's worth it, still. So, talk about that issue, first of all, of the importance of side effects. And I just want to complicate it a little bit by mentioning that you don't talk very much about cancer in the book. Cancer treatments--I mean, most people recognize that our current level of cancer treatments are harmful to a person. They are destructive. They often have life-damaging effects, even when the cancer is cured--so-called cured, or in remission. So, we understand that cancer drugs are unpleasant, and basically a form of poison; and we just hope it poisons more of the bad stuff and not so much of the good stuff. But, we also understand that that's not necessarily true. So, talk about this issue of tradeoffs between benefits and costs, and risk and return. And in particular, if you can, add some mention of the cancer issue--because I didn't notice as much about that in the book. Jacob Stegenga: True. Okay. Yeah. So, this is a really important sub-question, and there's a lot going on there.
Just on cancer, I'll say, parenthetically, I'd like to plug a book by one of my colleagues, Anya Plutynski, who has just published a book which is a philosophical study of the science and medical treatment of cancer. It's an excellent book, and one of the few sort of philosophical discussions of cancer. So, that's a book worth looking at. Russ Roberts: Author's name again? Jacob Stegenga: Plutynski, P-l-u-t-y-n-s-k-i. She works at Washington University in St. Louis. Russ Roberts: Got it. Jacob Stegenga: So--right. Almost all, if not all, medical interventions have harmful side effects. But, of course, that doesn't entail--as you said, that doesn't entail that that's an argument against using them, because their benefits might outweigh the harms. And so, at the end of the day, somebody has to decide if a particular medical intervention has benefits that outweigh the harms. And I think that's a definitive, general point. So, the mere presence of harms we just have to accept. And I think the case of cancer drugs is illustrative. I like the way you put that. Medical research is tuned in a variety of ways to hunt for benefits of interventions at the expense of hunting for harms. So, even if we agree that we are going to have to be weighing up the benefits and harms of medical interventions, the actual evidence that we have available to us to do that weighing is systematically skewed towards overestimating benefits and underestimating harms. To actually articulate the argument would take me some detail. We can do that if you want. But that's [?chapter 5?]-- Russ Roberts: We'll get to that. We'll get to that. Keep going. Jacob Stegenga: Okay. So, ultimately we need to do this kind of weighing up of benefits and harms. And this raises a lot of questions. Like, how should we do that weighing? And who should be doing that weighing? And on what evidential basis should we base that weighing?
These are questions that haven't been thought through nearly as carefully as you would expect. So, just to give an example: To get a new drug approved by the FDA [Food and Drug Administration], the main evidential requirement is to have two positive randomized controlled trials in which the drug demonstrates benefits compared to placebo or a competitor drug. And that benefit could be really, really small. Now, of course, there is also a safety assessment at this stage. But the actual evidence that's available to properly assess the safety of an experimental drug is pretty thin at this stage in the research life of a drug. The vast, vast majority of evidence that we get on the harmful side effects of drugs occurs after the drug has been approved for public consumption. And at that point, there's no incentive to do any more careful randomized trials. And so, when you posed this question, you related it back to the issue of whether or not we should trust first-person anecdotes. Now, this is crucially important, because the majority of evidence that we have on the harms of drugs amounts to a collection of first-person anecdotes. So, if a patient thinks that they are suffering a particular side effect from a drug, they may or may not have a conversation with their physician about it. If they do, the physician has to decide if the patient is, like, a reliable reporter of this effect of the drug. And then the physician has to basically upload the harm to a database in which information on harms is collected. And then from that database, scientists try to make inferences about whether or not the drugs are in fact causing such-and-such harms. So, it's a collection of first-person anecdotes. And it's only in rare cases in which, after approval, there's a carefully controlled randomized trial done to test for harms.

33:18 Russ Roberts: But you give many examples in the book of harms that come out later--harms so severe, so obviously harmful, that the drugs are taken off the market. Or where the company is sued, because they knew of the harm and didn't reveal it. It's not like once. It's often, is how I would describe it. It's deeply disturbing. But I--on this issue of side effects--some of these side effects, I never really understood it until I read your book, and maybe I still don't. But, my assumption was: People are different. This drug might make me nauseous [nauseated?], but not you. It might make me tired, but not you. It might make me lose my appetite, but not you. Those are relevant. But the bigger issues are things like: It might stop my heart, but not yours. And some of the reasons that it stops your heart is because of that cascade of complexity you talked about earlier: We don't totally understand. We have this romance about doctors as sort of scientists, hopefully calibrating the impact of this thing I'm injecting. A lot of it's nothing like that. And, it was deeply enlightening--unfortunately. Jacob Stegenga: Thanks for saying that. Yeah. Um, right. So, there are just so many problems when it comes to the detection of harms--like the careful, reliable, controlled experimental study of the harms of drugs. An example--this example is not in the book, but it's kind of a funny example. So, a couple of years ago the FDA approved a drug for what was at the time called 'hypoactive sexual desire disorder' in women. So, basically women with low libido--women who weren't enjoying sex. And, so, they would be diagnosed--this was in the DSM-IV [Diagnostic and Statistical Manual of Mental Disorders, 4th edition]--they would be diagnosed with this disease. And, um, for a long time there was no drug available to treat this disease. Of course, there was a kind of male equivalent in Viagra and drugs like it.
And the financial success of Viagra motivated the hunt for a female version. Um, a drug was tested, called flibanserin. It was initially rejected by the FDA, because the positive effects were really tiny, and there were some noticeable harmful effects. And it reacted poorly with alcohol. And then it was finally accepted because the FDA received some pressure from patient advocacy groups. There was a campaign called 'Even the Score'. And the idea was: 'You men have your drug for sexual desire, so we should have ours, too.' It turns out that that patient advocacy group was funded by the company that made the drug. And okay, this is a kind of long-winded story about harms. There was a study of the harmful side effects of flibanserin. And in that study, the majority of subjects were men. So, it's a kind of funny example of how medical research sets up the conditions under which a physician in the wild--in real life, who is dealing with real patients--has to base prescription decisions on a set of evidence which might not be relevant to the patient that they have in front of them. Russ Roberts: Well, that's a semi-comic example; it's tragi-comic, obviously. The more general cases that you document in the book are the fact that, in clinical trials, there's a natural incentive on the part of the pharmaceutical company to work with healthier people. Work with younger people. Keep out the elderly. Keep out the super-young--children. And yet, once the drug is approved, the target audience expands from the group that was tested to the general population--for a whole bunch of reasons: economic, human, financial. And then, as a result, a lot of the harms show up that couldn't have been observed in the trial, because the trial didn't have the population in it. Jacob Stegenga: Exactly. Yeah. And there's a kind of general and principled way to put this point. So, randomized trials that are designed and performed to get regulatory approval exclude subjects with particular characteristics.
Those characteristics are age--so, elderly people are excluded--people with other diseases, people on other drugs. And we have really good empirical evidence that shows that those very features increase the harms of drugs. So, an 80-year-old on a drug will experience more harms from that drug than a 50-year-old on that drug. So, we know that age, co-morbidity, and so-called polypharmacy--being on multiple drugs--each modulates the harmfulness of a new drug. So, if you exclude those people from trials--I mean, those are the very people that end up taking new drugs: the elderly, people with multiple diseases, and so on. And so we can just make a principled prediction that trials are systematically underestimating the harm profile of medical interventions. In the book I am sort of facetious about this: we talk about the safety of drugs, and there's a lot of talk about the safety profile of drugs. But this is a kind of, like, Orwellian misnomer. Really we should be talking about the harms of drugs.

39:42 Russ Roberts: And so, I found that very difficult to swallow--bad metaphor, we are talking about pills. But, I think it's really important as an economist to come to grips with this, because, as an economist I've always taken the view that: Well, of course all things are--there's no such thing as a safe drug. And this whole FDA thing about safety is just an intellectual sham. Of course, things have side effects. Life's about tradeoffs. And, of course, when you take a drug that's going to help you, there may be some costs, besides monetary costs--which are increasingly small for most patients, these days. That's another problem we've talked about many times here. But the point is that, I take a drug to help me cure some issue I have; and of course it could raise the risk of something else. It could have lifestyle challenges like fatigue or nausea or whatever. And so I've always said this whole idea of safety is a mistake. It's silly. We don't want a perfectly safe drug. If we did, we wouldn't take anything. And yet, what I've learned from your book, which is a bit alarming, is that, that's true; but the data that we have, and our impression of the evidence, is not nearly as clean as we would think it is in evaluating those tradeoffs. In other words, sure, there's tradeoffs. They're just a lot worse than they actually appear to be, because the incentives for collecting the benefits are very high; and the incentives for being honest about the harm are really low. So, what looks like, 'Yeah, there's some cost to this, but it's worth it,' may turn out not to be the case. Jacob Stegenga: Yeah. Exactly. So, that's exactly a component of the argument for medical nihilism. And so, and one way that we could offer a kind of different angle towards the general argument is as follows. 
Over the last generation or so, trials have in fact gotten better and better and better in that, for a whole variety of reasons, they've controlled for various biases when it comes to the detection of benefits. So, in short, the epistemic reliability of trials, when it comes to the detection of benefits, has gotten better and better and better. And a result of this is that the-- Russ Roberts: More benefits-- Jacob Stegenga: Well, actually, no. The result is a measured decrease in the effectiveness of drugs. So, the better trials get, the smaller the effect sizes observed in trials. And so, there's an inverse correlation between trial quality and measured effect size in the trial. Now, you might just extrapolate that into the future: so, no trial is perfect. So if trials get better and better and better, measured effect sizes on the benefits will get smaller and smaller and smaller. Now, if we take the discussion we were just having about harms and apply a similar kind of logic, we know, based on arguments that I've given in the book, that our current evidential basis for assessing harms radically underestimates harms. If our evidential basis got better at detecting harms, we would detect more harms. And we can extrapolate that into the future: So, the better and better and better trials got at detecting harms, the more harmful drugs would look. So, on the one hand, benefits are going down as trials get better; and harms are going up. That's a kind of general and principled argument for medical nihilism.
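Stegenga's inverse-correlation argument can be captured in a toy model: if a trial's measured benefit is the true benefit plus a bias term that shrinks as methodological quality improves, then better trials mechanically report smaller effects. A minimal sketch in Python, with all numbers hypothetical:

```python
# Toy model of the inverse correlation between trial quality and measured effect:
# measured benefit = true benefit + bias, where bias shrinks as quality improves.
def measured_benefit(true_effect: float, worst_case_bias: float, quality: float) -> float:
    """quality ranges over [0, 1]: 0 = poorly controlled trial, 1 = ideal trial."""
    return true_effect + worst_case_bias * (1.0 - quality)

TRUE_EFFECT = 0.05      # hypothetical small real benefit
WORST_CASE_BIAS = 0.30  # hypothetical bias of a badly controlled trial

for quality in (0.0, 0.5, 0.9, 1.0):
    benefit = measured_benefit(TRUE_EFFECT, WORST_CASE_BIAS, quality)
    print(f"trial quality {quality:.1f} -> measured benefit {benefit:.3f}")
```

The same arithmetic runs in reverse for harms: if current methods systematically miss harms, better harm detection can only push the measured harm profile up, never down.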

43:26 Russ Roberts: Well, let's talk about the FDA a little bit, because you give a number of examples in the book where the FDA approves something where there were numerous trials that found no effect. And then there's like a couple that found it, so they approved it. And, it raises the possibility--and I think you explicitly say this--that the FDA is too lenient in approving drugs. Which goes against a long history in economic research of claiming that the FDA is too tough: that the hurdles for drug approval and the costs of drug approval are so large that, Sam Peltzman, for example, a famous study, showed that--'showed'--I retract that word. That's a word I should never use, no one should ever use. It's a study that found--whether it's true or not is a tough question to answer--but, found that thousands of people have died because the FDA took so long to approve drugs that were helpful. You are coming along and saying, 'The FDA is too lenient. There are many cases where the people involved with the FDA decision have a financial incentive either in conducting the trials or in assessing the trials,' and you are concluding the FDA is too lenient. Is that a correct summary of your view? And how would you relate it to the claims by economists that the FDA is too slow in approving important drugs? Jacob Stegenga: Yeah. Good. So, broadly construed, that is my view, although the issue is complicated. And I should say, when I'm talking about regulatory standards in the FDA, I am only focusing on the evidential standards. So, the barrier that a company has to get over when it comes to the evidence. There are a whole bunch of other regulatory standards, like standards that have to do with the actual manufacturing of the pharmaceutical; and I don't know anything about those standards and my argument doesn't touch those. 
So, it could be that for some of those standards, like the--how many times a day does the factory have to be cleaned, or something like that--I've got nothing to say about that. They might be too stringent. But when it comes to the evidential standards, my argument is that the evidential standards are far too low. They make it far too easy to get a drug which has a negative benefit/harm profile approved. So, the evidential standard currently is: typically, a new medical intervention has to be tested in two randomized control trials. And in those trials the drug has to be better than placebo; or better than a competitor drug. And, how much better? That's not part of the standard. According to what kind of statistical inference? That's not part of the standard. As long as it's a Phase 3 RCT [Randomized Control Trial]--which means that there's got to be a certain number of subjects, and there have to have been Phase 2 RCTs, which are a bit smaller--with two positive Phase 3 RCTs, the drug gets approved. And that is far, far too low of a standard. Now, what about the argument from economists that people are dying because drugs aren't getting on the market soon enough? And it's not just economists, I should say. There are also patient advocacy groups that have argued for this. And, the most famous case is during the drug trials for HIV [Human Immunodeficiency Virus], activists were arguing that the FDA was moving too slowly; there was a drug that was potentially a lifesaver and people were dying of AIDS [Acquired Immunodeficiency Syndrome]. And so, they pushed the FDA to hurry up. It's a famous case, in this domain. My--the short answer is that it presupposes that there's a pipeline of many lifesaving drugs that are just getting through the pipeline slowly because the FDA is dragging their feet. Or they're raising regulatory standards too high. 
And so, rather than getting a drug approved in 2 years, it takes 8 years to get a drug approved; and during those intervening 6 years people's lives could have been saved--but they're not. Well, the overall argument of the book is that there's not such a pipeline. Where are these lifesaving drugs? In the last two generations--really, in the last 50 years--there's been a tiny, tiny handful of drugs that have consistently increased the lifespan of people suffering from particular diseases. Gleevec is one example. HIV drugs are another example. There is just a tiny, tiny handful of drugs like this. Now, moreover, for diseases which are clearly lethal, the FDA does have a program which allows prescription of drugs before they've passed this two-positive-RCT standard. Now, there are regulatory and administrative constraints on this program. But the short story is: If a physician has a patient who is dying of a particular disease, like some form of cancer, and they know that there's an experimental drug in the pipeline that can target this disease, even if the drug hasn't been approved by the kind of standard, the two-positive-RCT standard, the physician can nevertheless prescribe the drug. So, this argument doesn't--this argument from economists that the FDA standard is killing people--doesn't carry much weight. For those reasons. I think you could go even further and say the economists' standard would kill orders of magnitude more people, because more harmful drugs would get through the regulatory standard. Thereby killing a lot of people. So, a good example is Rosiglitazone. Rosiglitazone was a drug for Type 2 diabetes. It was on the market for a number of years. In the United States, in fact, last I checked, it was still available. And, in 2007, a meta-analysis was done which suggested that in the handful of years that the drug had been on the market, it had caused something like 70,000 heart attacks. So, you know--so, ultimately, we're faced with a tradeoff. 
The higher the regulatory standard, the fewer drugs that are going to get on the market. Will that entail that more people will suffer or die because of the fewer drugs? It's not so obvious to me. Russ Roberts: Of course, operating in the background which we haven't talked about is the fact that many, many of these drugs are not paid for by the patients. Their incentive to be careful about taking these drugs certainly is there, because they don't want to die, and they don't want to have side effects. But there's often no financial incentive to be careful, because they're not paying for them. That's happening around the world, as well; not just in the United States. It's really fascinating.

51:14 Russ Roberts: Now, we had Adam Cifu on the program talking about his book with Vinayak Prasad, Ending Medical Reversal. The theme of that book is that many, many things that come to market--interventions, not just pharmaceuticals but various innovative techniques for ameliorating pain or repairing damage to the body--work in observational studies, where you take a group of people, you take data and you know something about people who have had this treatment, and you see what happened to them: those studies work out pretty well. But then, when you do the Randomized Control Trial, you find out they actually don't work, because you can then control in a more effective way for the differences between the populations that get the procedure and those that don't. And you discover that actually it either doesn't work at all, or it actually is harmful. And that's also a disturbing book. But I've always thought, until I read your book, that: Well, observational studies--again, that's like the problems of epidemiology and regression analysis in economics, trying to tease out causal relationships in observed data in complex systems--they don't work very well; they are not often replicated. But an experiment--a randomized control trial--that's different. And what you argue in the book is that in both randomized control trials and in meta-analyses, where you aggregate randomized control trials--which would seem to be even better, because you have even more data--there is a problem of what you call malleability. Which is deeply related to the problem of p-hacking. P-hacking is the problem that occurs in observational studies because there's a certain standard of statistical significance: people are biased or fraudulent--but mostly just biased--in making certain decisions along the way. There's too many degrees of freedom for the researcher. 
It's what Andrew Gelman calls 'The garden of forking paths.' There's just too many decisions; and so, through no fault of fraud, they just find out that things work when in fact they can't be replicated. Huge problem in psychology today. We've talked about it with Brian Nosek. But, again, I've always thought, 'That doesn't happen in randomized control trials. It's certainly not in meta-analyses.' So, you remind me that that's not the case. So, talk about why that is. Jacob Stegenga: Yeah. So, first of all, the comparison between observational studies and randomized trials is an interesting illustration of the point we were getting at earlier. Namely, the better that methods get, the less effective interventions look. So, there's a trope in evidence-based medicine which illustrates this, basically. You see this very often in the literature about evidence-based medicine. Physicians will say, 'We were using such-and-such intervention for decades when I went through medical school. And then finally we did a randomized trial. And we learned that that intervention is actually useless.'-- Russ Roberts: Yep-- Jacob Stegenga: So, there's just a countless number of these cases. So, the basic idea is these pre-randomized-trial methods were biased. And they were suggesting that interventions were effective; and then the randomized trials come along and suggest that in fact these interventions are ineffective. So, the better the methods get, the worse that interventions look. Um, now, okay. But does that mean that randomized trials and meta-analyses are perfect? Are they, like, the kind of method that, you know, comes down to us from God and just like-- Russ Roberts: Truth-- Jacob Stegenga: speaks the truth to us? Yeah. Yeah. I mean, there are better and worse randomized trials. And better and worse meta-analyses. But, there are just a whole number of ways in which randomized trials and meta-analyses can be shot through with biases. 
And that's--you know, the arguments that show that make up about a third of the book. So it would take me too long to sort of illustrate all the different ways in which randomized trials can be biased. You mentioned p-hacking. And, there are, there are practices that have the same look and feel as p-hacking, that occur in trials. One is to make a bunch of measurements in a trial and then only report a subset of those measurements. There's an interesting study done by a German regulatory group. What they did was they took a one-year window, and they sorted all of the interventions that had been submitted to the regulatory agency during this one-year window. And they went back to the pre-registration plans of the trials that were deployed to test these interventions. And they just counted the number of outcomes that were planned to be measured in those trials. And then they went to the corresponding publications, and counted how many of those outcomes had data that were then published in the articles. And the publication rate of measured outcomes was about 26%. So, um, the short story is: You can design a trial; measure a hundred things in the trial; and then just publish an article which only reports 20 of those measurements. That's a kind of p-hacking. And so, this kind of malleability exists in trials. Of course, there's publication bias, as well. So, this was, what I was just talking about was publication bias of particular outcomes. But, if you own the rights to a new pharmaceutical and you want to show that it's effective, you can perform 20 trials on that pharmaceutical, and just publish the two trials that show a small, beneficial effect. This phenomenon, publication bias, in medical research, has been extremely widespread. At least in the last generation or so.
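The 'perform 20 trials, publish the two favorable ones' scenario is easy to simulate. The sketch below (hypothetical trial sizes, and a drug with zero true benefit) shows how selective publication can manufacture an apparent effect out of pure noise:

```python
import random
import statistics

random.seed(42)  # fixed seed so the simulation is reproducible

def run_trial(true_effect: float = 0.0, n: int = 100, sd: float = 1.0) -> float:
    """Simulate one two-arm trial; return the estimated effect (treated mean - control mean)."""
    treated = [random.gauss(true_effect, sd) for _ in range(n)]
    control = [random.gauss(0.0, sd) for _ in range(n)]
    return statistics.mean(treated) - statistics.mean(control)

# Run 20 trials of a drug with NO true benefit.
estimates = [run_trial() for _ in range(20)]

# Selective publication: report only the two most favorable results.
published = sorted(estimates, reverse=True)[:2]

print(f"mean effect, all 20 trials:      {statistics.mean(estimates):+.3f}")
print(f"mean effect, 2 published trials: {statistics.mean(published):+.3f}")
```

Nothing here required fraud: every published number is a genuine trial result; the distortion comes entirely from which results see print.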

57:25 Russ Roberts: I was shocked to hear that. I don't understand it. So, explain. My thinking is: There's two pieces to that. One is, you say you are going to measure a hundred things and you only report 26, but aren't there usually like 1 or 2 things that are really important? Like, not getting a heart attack, or the cancer disappears, or--in the case of antidepressants--not being suicidal? I'm not quite sure how I can play that game when I'm trying to tell the FDA I need that drug. Now, the second question is: Doesn't the FDA, when I register a trial, don't they get all that information? How do I get away with doing 20 trials and only publishing two? Jacob Stegenga: Yeah, good; so both good questions. So, um, on the first: Medical scientists have what they call the primary outcome that they are measuring. And, the primary outcome--now we have the pre-registration of trials. As you said, the pre-registration has to happen in some public database where journals and regulators can go and see, like, 'Was this trial pre-registered?' And, in those pre-registrations--descriptions of the experiment that is going to be done--the scientists have to stipulate what the primary outcome is going to be. It turns out that there is second-order empirical evidence that looks at how effective these pre-registration practices are, and the extent to which scientists follow pre-registration plans. The extent to which scientists stick to measuring the primary outcome. And the results are shocking. So, for instance: One group looked at randomized trials in the very best medical journals. These are, like, the Lancet, the Journal of the American Medical Association, the New England Journal of Medicine, the British Medical Journal. These are like the absolute pinnacle of medical journals in the world. And they looked at trials in a particular temporal window--I think it was, like, one year. 
And they compared the publications to the pre-registration plans, and found massive disparities between the pre-registration plans and the publications. Even, like, when it came to the primary outcome. So, switching what they called the primary outcome happened in about half of these trials. So, outcome-switching occurs rampantly in medical research. We might hope that pre-registration plans could be used and enforced. But there's been a lot of wrangling about: What jurisdiction should be responsible for the storing and publishing of pre-registration plans? And then enforcing the sticking-to-them, when it comes to publication or regulation? And, so far, there's just a lot of looseness. So, for instance, journals, for a while said--journal editors got together and said, 'Okay, we're not going to publish trials unless they've been pre-registered.' But it turns out that that wasn't stuck to. So, journals--journals were publishing trials that weren't pre-registered. Um, when it comes to regulation, as far as I know, regulators like the FDA do get access to a large amount of information that doesn't get included in publications. This includes, like, patient-level data, even if that patient-level data didn't end up in the publication. So, regulators can get access to this data. And, in an ideal world, they would be able to use that data in a way that guided their regulatory decisions. Russ Roberts: You are saying they don't regularly do so? Jacob Stegenga: Yeah. Exactly. So, the typical practice is to approve when the two positive RCTs are found. There's-- Russ Roberts: That means that there are 12 others that aren't positive, and they just ignore that? Jacob Stegenga: With the case of Rosiglitazone there had been something like 45 randomized trials done, testing the benefits of Rosiglitazone. And, they were also measuring some harms. And one of the harms was: Does Rosiglitazone cause heart attacks? 
Of these-- Russ Roberts: This is for treatment of-- Jacob Stegenga: Type 2 diabetes. Russ Roberts: Yeah. Go ahead. Jacob Stegenga: Yeah. So, of those 45 trials, about 15 had been published. Anyways, Rosiglitazone was approved for clinical use. An academic came along, Steven Nissen, and tried to do a meta-analysis on the harms of Rosiglitazone. He tried to ask the question, 'Well, does Rosiglitazone cause heart attacks? And if so, by how much?' So, he tried to get all the data from GlaxoSmithKline. And they refused. But, because GlaxoSmithKline had settled a lawsuit about a previous case--Paxil--they had been forced to create a database of all of their trials. And so, via this route, Nissen was able to get access to the data from all of these trials, both published and unpublished. So, not just the 15 published ones, but all 45. So, he and a co-author did a meta-analysis, and they found that Rosiglitazone does increase the risk of heart attack by a really serious amount. They submitted the manuscript of their meta-analysis to the New England Journal of Medicine for publication. And the story has a kind of perversely funny twist. A peer reviewer at the New England Journal of Medicine faxed a copy of the manuscript to somebody at GlaxoSmithKline, and that generated a flurry of internal memos. And a journalist got their hands on one of these internal memos. And that memo said, 'Okay, Nissen has discovered what we at GlaxoSmithKline and what the FDA already know, namely, Rosiglitazone causes an increase in heart attacks by such-and-such percent.' So, this memo was really revealing. It suggested that GlaxoSmithKline had already done their own meta-analysis based on this unpublished data. And, they'd shared that information with the FDA. And, that no regulatory decision had been made after that. So-- Russ Roberts: Really interesting. 
Jacob Stegenga: It's a compelling case in which the regulator had access to either the full set of data, or the, you know, meta-analysis of the full set of data. And anyways, did not change their regulatory stance. Russ Roberts: It's conceivable they shouldn't have. Right? It's conceivable that the benefits--from whatever it did for Type 2 diabetes--outweighed, say, a small risk. You wouldn't want to argue that, because there's a risk of a heart attack, you should never take the drug. Jacob Stegenga: That's absolutely right. I agree with the point that you made earlier: that all drugs have potentially harmful side-effects. Some of those side-effects might be very serious, like heart attack, and death. And, the mere existence of one of these side-effects doesn't entail that the drug shouldn't be approved. That's absolutely right. Yeah. But of course what matters is, um: Can we make a reliable inference about the benefit/harm ratio? Russ Roberts: Yeah. And the point you are making, which is one I'd emphasize, is that making that kind of decision without the full information available at the time is a bad idea.
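The outcome-reporting and outcome-switching problems discussed above have a simple statistical core: measure enough outcomes on a drug that does nothing, and some will cross p < 0.05 by chance, ready to be promoted to 'primary outcome' after the fact. A rough sketch, assuming 100 independent outcomes and a plain two-sample z-test (illustrative only):

```python
import math
import random
import statistics

random.seed(7)  # fixed seed for reproducibility

def null_outcome_p_value(n: int = 50) -> float:
    """p-value for one outcome measured in a trial where the drug has no effect at all."""
    treated = [random.gauss(0.0, 1.0) for _ in range(n)]
    control = [random.gauss(0.0, 1.0) for _ in range(n)]
    se = math.sqrt(statistics.variance(treated) / n + statistics.variance(control) / n)
    z = (statistics.mean(treated) - statistics.mean(control)) / se
    # two-sided p-value via the normal approximation
    return 2.0 * (1.0 - 0.5 * (1.0 + math.erf(abs(z) / math.sqrt(2.0))))

p_values = [null_outcome_p_value() for _ in range(100)]
false_positives = [p for p in p_values if p < 0.05]
print(f"{len(false_positives)} of 100 null outcomes reach p < 0.05 by chance")
```

With a 5% significance threshold, roughly five of the hundred null outcomes will look 'significant' in a typical run, which is why pre-registering the primary outcome, and sticking to it, matters.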

1:05:34 Russ Roberts: Now, I'm a skeptic about empirical work in economics, and I get criticized a lot for it. And I always make it clear that I'm not against evidence. I'm not against data. What I'm against is the overconfidence that economists sometimes have in data that's generated in complex systems. And, in particular, I would argue that the ability of statistics to tease out those effects is problematic. For that, I often get called anti-science. Um, and, of course, my defense is: I'm in favor of science. Good science. Different-- Jacob Stegenga: Do you have in mind, like, empirical economics? Like, the randomized trial movement in the MIT [Massachusetts Institute of Technology] poverty lab--this kind of work? Russ Roberts: That would be one example. It comes up a lot in all kinds of areas. It comes up in, say, evaluating the minimum wage. It comes up in evaluating the effect of government spending on fighting unemployment. In the case of the randomized control trial part of economics it comes up when--people will say, in the effective altruism movement, 'We just have to figure out what works,' as if that was something that we know how to do. We don't. I'm very, very in favor of funding things that work rather than things that don't work. Some of the things that we thought worked evidently don't, despite being shown in randomized control trials to work, in the development literature and anti-poverty. But I want to read two paragraphs from your book that I think say this very well in your case. It's near the end of the book. You write the following: Anti-science sentiments about medicine are widespread. For example, the anti-vaccine movement--prominently associated with a single publication suggesting that the measles-mumps-rubella vaccine can cause autism, a claim which has been thoroughly discredited--has led many parents to not vaccinate their children, putting their own children and others at risk. 
One might worry that the view presented in this book contributes to irrational anti-science sentiments. However, one would have to seriously misinterpret the message of the book to portray it this way. To make the master argument compelling, throughout this book I've appealed to high-quality science. The trouble with so much of medical research is not science per se, but poor reasoning based on low-quality science that suffers from many systematic biases exacerbated by financial conflicts of interest. It's a fabulous summary of what I think your book is trying to do and what I feel good economics should be trying to do. Do you want to add anything to that? Jacob Stegenga: Thanks. Thanks for bringing this quote out. Yeah. I'm often asked a question that motivates this kind of response that I'm giving there. So, some people worry that by being critical of mainstream medicine and the kind of scientific basis of mainstream medicine that I'm lending a hand to those people who want to develop implausible alternatives, like homeopathy or the anti-vaccine movement, or, you know, like different kinds of religious opposition to particular kinds of medical interventions. And, my response is: I don't align myself with any of those movements. The arguments in this book could apply to those movements in a way stronger fashion than they do to mainstream medicine itself. So, this book is about increasing the quality of science in medicine. It's not an anti-science book at all. It's a pro-science book. It's trying to argue that medicine should be more scientific than it is. Russ Roberts: And I should just add, as an important footnote: We spent most of this conversation, almost all of it, on pharmaceuticals. But the argument goes way beyond the pharmaceutical area. Jacob Stegenga: Um, that's a--I'm glad you think so. And I'm sometimes criticized for this among my colleagues. 
So, my colleagues note that I've been focusing on pharmaceuticals: I'm calling the book Medical Nihilism, but most of the examples and most of the arguments are framed around pharmaceuticals. 'Well, what about surgery? What about radiology? What about, say, early detection--screening programs for diseases like cancer?' And I'm not talking about screening programs or surgery or vaccines or radiology: I don't have very many examples of those in the book, at all. And, what I say in response to this line of questioning is: We often give advice to graduate students to pick a focus and not try to be overly ambitious in a book. And, that's part of my strategy here. So, I think that some of the arguments that I make could be extended to domains of medicine that go beyond pharmaceuticals. I'm glad that you think so. So, for instance, certain aspects of surgery, say, or disease screening programs. This is something that I've started to write a little bit about. But, the fine-grained details about how those arguments would go, I think, would be a little bit different. So, for instance, the financial incentives in play in the domain of pharmaceuticals are just so fantastic that I think that nudges the biases more than they would be nudged in another domain of medicine in which the financial incentives weren't quite so astonishing. Russ Roberts: Oh, I don't know about that. When you have a hammer, everything looks like a nail. And if you are a surgeon-- Jacob Stegenga: Yeah-- Russ Roberts: it's shocking to me how often surgery is recommended by surgeons. Strangely enough. So, I think that's important. 
I don't think it's unimportant-- Jacob Stegenga: right-- Russ Roberts: and the point you made earlier--that many things get better by themselves with the passage of time--is very challenging for most of us to remember; and I can't tell you how many people have told me they were improved or cured or helped by Procedure X, and in the back of my mind I'm always thinking, 'Yeah, but it may have gotten better anyway.' Jacob Stegenga: Yeah. So, that general line--I mean, I guess I would want to offer a kind of closing comment. I think some people will read this book or listen to your podcast and think, 'Well, that's an interesting idea, but I'm not totally persuaded.' And that's perfectly fine with me. What I hope is that the big-picture argument in the book, what I'm calling the master argument, and then the particular chapter-level arguments, at least offer people a way to think about different domains of medicine, and particular medical interventions, more critically. And I certainly hope that audience includes physicians and policy makers and regulators. So, while I hope that I convince people that the thesis is persuasive and compelling, short of that, I hope that it at least offers people a set of tools and an argumentative strategy to think carefully and critically about medicine and about the evidence that's available for our most widely consumed medical interventions. Russ Roberts: Yeah. As you say, you are not anti-intervention. You are pro-being careful.