0:33 Intro. [Recording date: May 20, 2019.] Russ Roberts: My guest is... Adam Cifu. He last appeared on EconTalk in 2016 discussing his book written with Vinay Prasad, Ending Medical Reversal, a discussion of how depressingly often health care therapies and treatments that appear to work in observational studies fail to show success and often produce harm when tested in randomized control trials. Today we're going to be talking about his very short and very provocative essay in the American Journal of Medicine that he wrote with Vinay Prasad, John Mandrola, and Andrew Foy. And the title of that essay is: "The Case for Being a Medical Conservative." Adam, welcome back.... I see this conversation as a way to bring together a number of EconTalk episodes over the years: your own, Robert Aronowitz on Risky Medicine, and recently Jacob Stegenga on Medical Nihilism. It follows up on a bunch of episodes related directly or indirectly to the placebo effect, with Gary Greenberg, David Meltzer, and most recently Eric Topol. It also ties in to a number of episodes we've done on pharmaceuticals, most recently with Robin Feldman. And finally, it ties in with Brian Nosek on the replication crisis and psychology and the EconTalk episode with John Ioannidis. For listeners who have missed some of those episodes, we'll link directly to them in the notes for this episode. Now, Adam, your essay opens with these words: We have been called critics, haters, nonbelievers, or our least favorite--nihilists. We prefer the term "medical conservative." We believe this is the ideal approach to patient care. Explain. Adam Cifu: So, when we think about medicine, we think about practicing medicine in a way that we are providing care that we are sure works. 
And, in today's world, with how quickly things move, and how many different interests are involved with medical research, and actually, pitching medical therapies to doctors, we worry that a lot of what gets out there into practice really is not based on good data. And often doesn't work. And so, our paper, which actually reads somewhat like a manifesto, I'm afraid-- Russ Roberts: It does-- Adam Cifu: um, is making the point that, you know, we need to slow down at this point. We need to think about the evidence behind, um, what doctors are offering patients. And we need to consider the cost/benefit of this. And I'm not just speaking about harms that therapies often carry, but actually the financial costs of those therapies. Russ Roberts: Yeah, whether it's worth paying an enormous amount for a very small incremental gain. Adam Cifu: Sure. Russ Roberts: Which our system is very, right now, biased toward adopting. Deeply disturbing. Adam Cifu: It is. And, it's not surprising. I mean, I think--you know, and the reason we say we are not nihilists is because we recognize that, you know, medicine has done incredible things. And if you look at the advancements that medicine has brought to, you know, whatever--not to be too grand, but has brought to humanity over the last hundred years--it's mind-boggling. But the truth is that, if you look over my career--and I've been in practice for 25 years--that there are very few things that have come out over the last 25 years which you can say, 'Wow. This changes everything.' There are a few of them. But the vast majority of things have offered small, incremental advances which some patients might accept and say, 'Yes. This is a therapy that I'll take.' While other people, patients, might say, 'You know, I don't think it's worth taking that medication,' either because of side-effects; maybe because of cost.
Or maybe it just speaks to values: that this is someone who sort of sees their approach to their own health as less is more, and they don't want to take a medication which only has a, I don't know, 1% chance of helping them. Russ Roberts: Maybe a higher percent chance of some side-effects that they are not prepared to face. Adam Cifu: Absolutely.

5:02 Russ Roberts: Now, you mentioned the cost-benefit analysis. Of course, as I like to point out, the incentives here in our system today versus 50, 60 years ago are very different. Typically the patient is paying a small, minimal amount out of pocket. Adam Cifu: Sure. Russ Roberts: So, our incentive as patients to take account of costs and benefits is very distorted. Adam Cifu: Right. Russ Roberts: Our costs are often small. The benefits could be tiny, but they are positive relative to a very small cost. And outweigh them. And so I say, 'Go for it.' In fact, one of the things that disturbs me deeply about medicine in America today, and just as a side note, Adam, I should tell you: Some of my listeners assume I've had some horrible experience--I'm serious--with the medical profession. I've had some surgery that went awry. I've been very blessed. I'm very healthy. Most of my disdain or concern is merely intellectual. But, right now, doctors don't ask me if I want this test, this treatment, this therapy. They just give it to me, because they assume, 'Well'--for a lot of reasons. But I don't have, usually, any voice in that. A lot of the times I want to say, 'Stop!' I have said, 'Stop. I don't want that.' And they'll say, 'Oh, but it's free.' And I'll say, 'Well, to me. But that's wrong.' It's wrong to do a test that's free to me--it seems to me to be immoral to do a test that's free to me with minimal benefits and, I just--let's do without it. And that conversation very rarely takes place. Adam Cifu: I think that you've had a lot of great episodes on the economics of health care. And probably the only thing I can bring to it is to reflect on how it affects the conversations between a doctor and a patient in the room. And I find it so interesting. Because, because cost has been, for the most part, removed, um, we end up having a conversation, patient and doctor, about risk/benefit--you know, what are the side-effects of this medication?
How likely is this to help you? What are maybe the opportunity costs? How much of a hassle is it to come up and get this infusion once a month, whatever? We often don't talk openly about the cost. But I think we are often, both doctor and patient, thinking about it. I'm certainly aware of how much, you know, we, as a society, are spending on things. And it does impact me with my recommendations. And patients, interestingly, although they seldom talk about it, it's often a concern--because even if they are paying just a very small part, that can be difficult for, you know, a large proportion of our patients. And then it's interesting how often I get calls from people who have received their bill and are overwhelmed by the numbers on there. And even if they are not actually paying for it, and it says, 'Medicare is covering this much; your secondary insurer is paying this much; and this is what's left for you to pay'--and it may be a completely affordable thing for the individual--patients are sometimes overwhelmed, and are angry about those numbers. And so I do think that everybody is thinking about it, although it's removed so much from our conversation.

8:25 Russ Roberts: Well, I'm going to reveal something deeply personal here, which is: I have some fungus on my toenails, evidently. I wasn't sure what it was. I went in to a dermatologist to get a diagnosis. And, she wrote me a prescription. And I went to fill it at the Walgreen's across the street. And they said--I looked at the bill, and I thought I misread it. I think it was over $1000 for this little tube of cream. And I said, 'There must be some mistake here.' And, the pharmacist said, 'Oh, no, but don't worry. You won't pay $1200.' I thought, 'Well, that's a start.' But then I said, 'But someone's going to pay $1200.' And that, of course, is a statement that no one has any interest in talking about except economists. And, it turned out--I think it was $15. Or $30, with the co-pay, because of the insurance arrangement. But I thought, 'Is somebody collecting over $1000 of this from my insurance company?' There's an over-the-counter remedy that's $4.82, or $11.50, or it was some tiny amount, that probably--and finally I asked. At one point I think I asked my dermatologist, and the answer was: 'You know, it will clear it up more quickly.' And I just felt--you know, I have to confess, I ended up filling the prescription, because I wanted to do a test. I wanted to do one foot with the over-the-counter and one foot with the other. I never did it. And so I feel bad about that. But that whole interaction is--something is terribly wrong there. Adam Cifu: Yeah. I agree. And it's interesting. Because, the doctors who are spending the time seeing patients--and I mean, I admit, I'm one of them--I don't have the time to think about this. But there is something deeply troubling. And some of it is there is a lot of public money being shifted into the private realm based on these costs.
And, I understand there's something wrong, but day to day when I'm having conversations with patients and I'm just trying to get through my schedule, get people, you know, as much better as I can, I don't have time to ponder it. Russ Roberts: Well, that's why you come on EconTalk, Adam, so you can have this one hour at least--per month, year: every 3 years, I guess, roughly. Adam Cifu: Right. To guarantee that I won't sleep tonight. Russ Roberts: Exactly. Trust me, you'll get over it.

10:53 Russ Roberts: More seriously, we're going to get into a number of the therapeutic and philosophical issues surrounding this. But, before we do: Talk a little bit about your own practice and medical experience. And in particular, having done that, tell me about how much interaction you have with the commercial purveyors of these ideas, for therapies that might not work. In other words, how many times a year, a week, a month do people drop into your office to tell you about some new idea? And so, tell us what your practice is like and a little of your life experience as a doctor, and then how the commercial side of medicine interacts with that. Adam Cifu: Sure. So, I'm a general internist. I'm a primary care doctor. I have, you know, my own quite large panel of patients who I take care of, primarily in the out-patient setting. I do do some work in the in-patient setting as well, with a team of residents, taking care of some even sicker patients who have been admitted to the hospital. I'm at a university practice, so I divide my time between that and time where I teach or, you know, write. And, it's interesting: My interaction with, I don't know, progress or people selling progress comes very much from reading journals. We are quite locked down here, in that we don't have drug detailers coming through the clinic offering us samples and trying to sell their most recent medications. Because, like many teaching hospitals, universities, we feel that that's the wrong way to base our ideas. But, I feel like I'm faced with that on a weekly basis, as I read the journals. And I read really important articles about new therapies, new devices, new interventions which are being studied. And, in today's world, the majority of those articles are funded by industry--by the pharmaceutical companies, by the device manufacturers who are making those products.
And, you know, we're at a place where that's fine: who else is going to study these advances? But it takes a lot of thinking to figure out, 'Wow. Is this randomized control trial, which should be the perfect data: is this okay?' Or, 'Has this been sullied in its design, or how it's being sold to me?' Even in very good, peer-reviewed journals. Russ Roberts: Well, as Brian Nosek and a co-author like to say, or said in an article which I cherish, this line: "Published and true are not synonyms." Adam Cifu: That is very true. Russ Roberts: You say, 'Who wants to do these studies?' It reminds me a little bit of the agencies that assess the risk of various bonds and were blamed somewhat, sometimes, for the Financial Crisis [Financial Crisis of 2008]. Those agencies are paid by the people who sell those products. And, I don't think anybody would take those assessments as truth. I think people understood that, say, Moody's or others who were trying to--even a Triple A or Double A or Triple B rating [AAA, AA, BBB]--had a financial stake in it. And really, the whole thing just kind of kept moving along. One answer to your question, 'Who else could do this?' would be an independent organization. A philanthropic-supported, say, foundation-supported organization. They are very expensive. That's the small hitch there, I guess. Adam Cifu: Yeah. Yeah. It's difficult. Vinay Prasad and I, in our book a few years ago--you know, we made the suggestion that, 'Well, obviously what should happen is that the companies that produce the drugs and devices--sort of, industry--should be forced to put money into, you know, a public resource which would then design and run these trials'--knowing that this would never happen. But, like so much--and certainly the financial watchdogs are a great example--you know, we are really asking industry, to a great extent, to judge the products that they are producing.
And so it's a weird setting where we then sort of take their data, either as peer reviewers or as, you know, the consumers of data. And then try to judge that data to say, 'Is this good data? Have they pulled something over on us?' And, you know, they are very good at pulling things over on us, both in how they choose the patients who are in the studies and in how they design the studies. But, they are also coming out with some wonderful products. Which really do work. And so there's a lot of--there's a lot of teasing out of truth in this process.

16:16 Russ Roberts: Yeah. I think the challenge here, in thinking about better public policy, is to realize that drug companies and others are not--are not--evil, malicious folks. They are just responding to the incentives. Now, some of the incentives they create through their own lobbying. So you have to put a footnote there. But the obvious problem with such a system--and of course it has benefits; it's not all dark--is the same as with the financial sector. They don't just siphon money out of my pocket to pay for their past mistakes. They also sometimes finance wonderful new things. And pharmaceutical companies--and I have many friends in the business--create products--sometimes, not always but sometimes. But the challenge, it seems to me, is pretty simple: How do you get some skin in the game so that we are not relying on people whose skin works in the opposite direction? That is just a recipe for bad policy. Bad outcomes. Adam Cifu: Right. And I would add that, what makes this all the more difficult or concerning is that, although I agree with you completely that, you know, these companies are, mostly, interested in making good products that will help people, there are certainly examples over the last 20 years of some real industry malfeasance, where obviously harmful medications, or medications which just truly don't work, are marketed with companies knowingly suppressing data that shows that. And although I'm an optimist and I think those are the exceptions--boy, you know--you learn about some of those examples and it really worries you when you read just about any other industry-sponsored article. Russ Roberts: So, when the financial sector, say, hides data or creates some kind of fraudulent practice--sometimes someone might borrow money for a house they are going to struggle to afford-- Adam Cifu: Sure-- Russ Roberts: we could have a situation where a set of bankruptcies leads to a recession. There are some horrible things.
But, we're talking here about death. And it strikes me--and I don't want to name names because I'm not well-versed and up on the specifics. But you're free to name names if you want. That, a pharmaceutical company that knowingly suppresses data--data that a reasonable person would say should be a deal killer--and markets anyway: you know, they get fined sometimes, like a billion dollars. Or a hundred million dollars. That's not enough. That's not close to enough. They should be shut down. They should be burned to the ground--not literally. But figuratively. And their scientists scattered to the winds. And their decision-makers maybe should go to jail. I don't know. But--and I want to add here--you have to be very careful, and I'll let you weigh in on this: Hiding evidence that doesn't conform to the finding that they are claiming is sometimes a gray area. Flags get raised. And those flags are sometimes extremely ambiguous. So, after the fact, when it turns out this drug, say, kills people alongside helping them: You know, sometimes there was some evidence that it might be harmful. But it's not like smoking-gun evidence. What are your thoughts on that? Adam Cifu: I think that's true. And, what often happens with these medications is that you are talking about medications which seem to have promise. And the promise may be because they are a version of another therapy which works. And is already on the market. It may be that there is real bio-plausibility: that our models suggest that this does truly work. Um, because these interventions have gotten to Phase 3 Randomized Clinical Trials, you know, they've gone through a lot of vetting at the beginning, showing that they are generally safe, that there's some signal of efficacy. And it's only later, when there are large clinical trials with maybe a broader array of patients, that we see that, 'Huh. This is a whole lot less effective than we thought it was.'
Or, there are adverse effects that are important that we didn't realize were important to begin with. And so, as you say, it's not clear cut. And, I think what often happens at that point is the people who have been working on, um, these drugs, or these devices, throughout say, you know, 'We're sure this works. We know better. And we're going to push this through.' I would hope that if I were in that position I would say, 'Look, we're supposed to be helping people. We have to be sure.' But I also understand that if you've worked on something for 15 years and you are pretty sure that it works, and then you get some negative data--there's a temptation to say, 'Ach, that data is wrong. Let's push through.' Add to that that there is a lot of money involved here.

21:36 Russ Roberts: For sure. As I quoted you before, you said you've been called critics, haters, non-believers--nihilists [two pronunciations, nee-hilists, nai-hilists--Econlib Ed.]. Are people really giving you flak for this perspective? Adam Cifu: People really give us flak for this perspective. Russ Roberts: Fellow doctors? Adam Cifu: Fellow doctors. Um, and--I'm okay with it to a great extent. Because the doctors who give us flak for this are also committed to patient care. And they feel like, 'Look, I know this intervention works. I've adopted this therapy. I've adopted this procedure. I've seen my patients get better. Why are you being such a nitpicker, looking at these trials and saying, "the absolute benefit is small; the cost is huge; maybe we shouldn't be adopting this"?' I get it. Because I think those people honestly think that they are doing the best job for the patient. And they honestly think that what we are doing is slowing down progress. And maybe keeping some beneficial therapies from patients. And it's a struggle. Russ Roberts: I think it's really hard to think objectively about one's own personal experiences. I have shoulder issues on both sides of my body; and I've had a steroid shot in each side. And I got better afterwards. About a year after one of those episodes, I had a very strong, intense pain in the neck and shoulder area and ended up doing nothing. And the other day--it was yesterday, actually--my wife said, 'So, how did you get better? That shoulder thing seems to have gone away.' I said, 'Yeah. I'm great.' She said, 'What did you do?' And the answer, of course, is: Nothing. Time passed; it got better. The nerve which was being impinged on got unimpinged, or whatever was going on. And I think--you know, as Jacob Stegenga points out in his episode--you know, one of the things about life is that a lot of things are cyclical.
They get better on their own and we misread what treatment works and doesn't work. And I'm pretty confident that steroid shot made a big difference--the next day. The next three days, say. Maybe it would have taken two weeks, or six months. Or maybe it wouldn't have gotten better. But I can't tell. And my doctor, who is a wonderful man and an incredibly caring and effective doctor--he believes very strongly that it works, for sure. I understand that. Adam Cifu: Right. Right, right. And there are wonderful examples of that. The thing that, for me, comes to mind most--and I'll go into a little bit of specifics. You know, as a general internist who takes care of a lot of older patients, one of the difficulties that many people have is spinal stenosis, which is a disease, a syndrome, where generally because of osteoarthritis in the lumbar spine there is compression of the spinal cord itself. And it can be a horribly debilitating illness, where people get weakness, pain in their lower extremities--really keeps them from doing the things that they like. And often they come and see me with this. A therapy that I think we know works is a surgery--a laminectomy. Which is a pretty big deal. And since we are often dealing with older people, the risks are there. And some people aren't interested in that. Another therapy is steroid injections--a lumbar epidural steroid injection to try to, you know, shrink the swelling around the spinal cord. There are pretty good randomized control trials of that, which show that it doesn't work. And when I say, 'It doesn't work,' it means: the benefit doesn't reach our level of statistical significance. But, there is a difference in those trials; there do seem to be more people who respond in the treatment group than in the placebo group. So, for the most part I say, 'I'm not going to offer this. This is a therapy which doesn't work.'
But I really do know that--I don't know, one in a hundred patients, you know, they may have a response to that. Do I know that that's not a placebo response? I don't, really. But it's what makes these decisions so complicated. You'll remember a few years ago, there was a really terrible, terrible outbreak of complications of those steroid injections related to a compounding pharmacy which was using non-sterile procedures, and people got terrible meningitis. Some people died from that. And the tragedy of that was obviously that people were harmed. But worse, they were harmed by a therapy which we're not entirely sure works at all. So, the complexity of this decision-making, you know, it is overwhelming sometimes.

26:39 Russ Roberts: I'm reminded of the view that says: Stay away from hospitals, because people die there. And, of course, that can be what we call selection bias in economics. But it also can be true: A hospital is a somewhat dangerous place. Adam Cifu: Yes. Russ Roberts: The modern version of that is: Stay away from surgery: It can kill you. Of course, it can save your life, also. And, I want to turn to this question of what--and for me, I'm definitely a medical conservative. For me, those kinds of treatments are a last resort--desperation. If you can live without them. Of course, sometimes you can't function without them. But, I want to read another quote from your piece. The medical conservative adopts new therapies when the benefit is clear and the evidence strong and unbiased. Cardiac resynchronization therapy for patients with systolic heart failure and typical left bundle branch block, direct acting oral anticoagulants for prevention of arterial and venous thrombosis, and rituximab for lymphoma are therapies that sell themselves. So, there are some miracles. We have a whole bunch of things which, as you say, are glorious for humanity. Of those, you list three things--which is typical in an article, by the way: lists of three are good. Is the actual full list 30? 300? 3000? of new therapies that sell themselves? Meaning, tell me what you mean by that and how long that list really might be. Adam Cifu: Sure. I think when we say, 'sell themselves,' we would say, 'This is a therapy where you'd read a few randomized control trials, and there is a clear benefit with a large absolute risk reduction.' And we are very serious when we talk about that, about thinking about really important endpoints. So, we are interested in things which improve mortality, which improve patient wellbeing. Not things that improve markers of disease, surrogate outcomes--where, you know, who knows if that's important? Um, and generally when you read those articles, you know it.
You say, 'Well, this is important.' And, I think that if I said, 'Let me look back over my career,' I think we are talking, you know, a few dozen of those. Some in the cardiology realm. Certainly some in the hematology/oncology realm. Certainly some in gastroenterology, treatment for Hepatitis C. Certainly infectious disease, treatment for HIV [human immunodeficiency virus]. But the majority of things that we see now--you know, most of the articles that I read are mostly negative--'We tried but we need to go back to the drawing board'--or, are very small improvements over something that we already have: the absolute risk reduction of comparing a new therapy versus the accepted therapy is small. It's one of the reasons that, if you read, say, the New England Journal of Medicine--you know, one of our bibles, I guess, a bible that comes out weekly--one of the journals that publishes a lot of good primary research--many of the trials these days are non-inferiority trials. Russ Roberts: Explain. Adam Cifu: So, that's a trial that compares a new therapy to an accepted therapy. It's not even trying to show that the new therapy is better. It's trying to show that the new therapy is non-inferior to the accepted therapy. Because this new therapy offers some other advantages. Seldom is the advantage that it's less expensive. More often it's that it's better tolerated; it's less invasive; it's easier for the patient to use. And that shows us that we are in a place where, look: Medicine has come a long way. We've had enormous breakthroughs. And so the majority of advances right now are small advances over accepted therapies. Russ Roberts: Yeah: The low-hanging fruit, some of it has been picked. By definition, most of it has been picked. Adam Cifu: Right. And there's a promise that, 'Well, maybe we are going to enter a new phase.' Right? You have a lot of people talk about personalized medicine, and genomic medicine.
Some people are all in on that; some people are not. And there is the potential that: Boy, we'll find a lot of drugs that work on, you know, specific genetic mutations. And we have some of those, mostly in the world of oncology. And maybe we'll get to the point where we enter some sort of new golden age of medicine, where many drugs a year are shown to be enormous breakthroughs. I'm obviously more skeptical about that. I feel like we'll continue to have, occasionally, great advances. But mostly what we'll see are small, incremental steps forward.

31:40 Russ Roberts: That would be great, except that the small, incremental steps forward are--my understanding is they are billed at very large numbers of dollars. And a patent is given out, say, because a drug is absorbed more easily in the stomach, or more comfortably, or you don't have to take it as often. And then that becomes the thing that gets prescribed. It's many times the cost, say, of a generic. It is better for the patient in some dimension. But it seems to me that should not be privileged with the monopoly of a patent. In particular, the way the current system doesn't encourage patients to take that generic is, it seems to me, a really bad thing about what we are living in right now. Adam Cifu: Yes. And I think my co-authors would agree that, you know, when we talk about being medical conservatives, it's not standing against, um, those advances. It's not standing against those medications. It's just recognizing the data. Recognizing what sort of advance these medications bring. What are the costs, economic and in terms of actual patient outcomes. And then being able to have an open discussion with the patient about that. So, rather than blindly accepting, you know, the newest treatment for diabetes, saying, 'Okay. This is the benefit of this new treatment. These are the harms. These are the costs.' Let's have an open discussion with our patients, who are, a) overwhelmingly intelligent people who can make their own decisions; and, b) understand much, much, much better what they want for themselves in their health care than we could ever hope to. Russ Roberts: So, to bring up an example that's been in the news lately--get your thoughts on this. Insulin--you mentioned diabetes. Insulin is quite expensive--the latest, state-of-the-art insulin, at least. As we've talked about on the program before, you can buy insulin at Walmart, I think, for $25 a dose. My listeners tell me, 'Oh, but that's not as good.' I'm sure it's not.
I hear it works more slowly. It has real lifestyle implications for the patient. And you can debate when that enormous increase in cost is worth it. Sometimes a quite large benefit in lifestyle. But no one is making that choice. Or at least no one is encouraged to make that choice naturally. It's going to have to be decided from the top down, because of the nature of the current system. Adam Cifu: Right. It's a wonderful example, actually. An example that, when I precept our residents--when they see their own patients, and I'm there to sort of listen to each case and give them guidance: It's amazing to look at how we've made this transition over the years. Meaning, moving someone to a much more physiologic insulin. Insulin which looks like how our own pancreas works. Where there is sort of a basal level of insulin that is always in the body; and then there are peaks that come, you know, brilliantly, as soon as people eat. And, you know, we've developed--or other people have developed for us, for us practitioners--insulins which do an amazing job of mimicking that. But, they are much more difficult for the patient. They generally involve at least four injections a day. And it's funny that the residents adopt that. Because that's sort of the newest thing to do. It's what's sold to them by the experts in the field. But often we see patients who then come back under worse control on that. Because they can't possibly handle those four injections a day. It's just a lot of injections. Or maybe they are at work half the time, and it's complicated to have your insulin with you, and to be giving yourself a shot when you are in the lunch room on a quick break. And that's a great example of, you know, where we should tailor our therapies towards the patient. And sometimes it's not the most expensive, most advanced therapy which is best for the person.

36:01 Russ Roberts: You say, quote, We resist the urge to conflate benefits of a therapy to a population vs benefit to the individual. And I know that you, and I, and others are very interested in the lack of value in many screening tests: we've talked on the program about prostate screening, mammography, and others where the benefit is shockingly small. Sometimes zero. There are an enormous number of false positives, which alarm the patient and then often lead to tests and further consequences that are dangerous--literally dangerous. Some people would respond to that and say, 'Yeah; but for the one person it has helped, that's what matters.' And, of course, it's more than one--it's whatever that small number is. And, 'We don't know who that is right now, so we should just keep doing this.' Adam Cifu: Right. Boy. And a topic that gets more and more complicated as I practice for longer, know more and more, and get to the age where I should get these screening tests that I've been offering my patients over the years. And, you put it in a very interesting way, because I--let's talk about prostate cancer screening. I have had patients who have probably had their lives saved by prostate cancer screening. I can't tell you that for sure. But I've found aggressive tumors very early on because of screening, tumors which in all likelihood would have caused them problems had I waited for that person to have symptoms. I also have a, you know, stable of men in my practice who I've screened who have gone on to get therapy, who look back and say, 'I think that was a terrible decision. My life is worse because of that decision.' I also have a lot of people who I've probably made more anxious just by delivering them information that they didn't need therapy; this is something we just need to follow.
And there's someone who is absolutely fine without prostate cancer; they remain someone who is absolutely fine without prostate cancer; but now they're someone who worries about prostate cancer. And, the people who are paying for that person's success are obviously different than the person who had the success. Russ Roberts: Well, yeah: that's the personal data point thing we were talking about earlier-- Adam Cifu: Right. Russ Roberts: When I went in for my most recent physical, which was about two years ago, I explicitly told my doctor that I did not want a PSA [Prostate-Specific Antigen test]--which is the prostate screening test. And they did it anyway. Right? And I wanted to say, 'Look: If you are going to do it, at least don't tell me the answer.' Right? Adam Cifu: Right. Russ Roberts: And it's such a symptomatic example of what's wrong with the system that, when I confronted my doctor--not in an angry way: I just mentioned, 'I think I told you not to do this'--he said, 'Yeah. I tell them that, but they do it anyway.' And I'm thinking, 'Really?!' Adam Cifu: Right. Who is 'they'? Russ Roberts: Yeah, exactly. How about me and you? Adam Cifu: Right.

39:29 Russ Roberts: Now, I have friends and relatives in England. And my experience has been, if you are not feeling well in England, and you go to the National Health Service [NHS], nothing happens. They angrily turn you away. In America, they are more prone to say, 'Well, let's do an MRI [Magnetic Resonance Imaging test].' Or, 'Let's try this drug.' And, in England, you basically have to be bleeding or have a bone sticking out of your body before they are going to really intervene. So, I want to ask you--this is kind of an unfair question--do you think the National Health Service in England, and other places like that, are practicing medical conservatism? And is that something we should be striving for? Adam Cifu: Hmm. That is an unfair question. Russ Roberts: Sorry about that. Adam Cifu: They may be practicing medical conservatism, but not by choice and not by design. Maybe I'll pivot on it a little bit. I have to say that what I enjoy most in my practice--and this may sound morbid--is taking care of people who are unwell. Who are coming to me with concerns, complaints, that bring them to me for help. Because I feel like I have a lot to offer those people. And much of what I offer will actually be beneficial to them. I can diagnose problems; I can treat problems. Where I have much more trouble is with those healthy people who come in, who want wellness. You know, who want preventive health care. And, I'm not sure that much of what I--well, I want to say I'm not sure. I am sure that much of what I have to offer is ineffective; and even the things that are effective--you know, we can talk about some cancer screenings--have such a small absolute benefit that it's very, very, very unlikely to help that person who is sitting in front of me. Russ Roberts: Yeah. And, of course, we all want magic cures.
We expect you, of course, to have a cure--either in a device, or in a thing I can swallow, or a shot I can take--that will make me better. And you are my shaman. You are my cure-me wizard. Adam Cifu: And we are the shamans of the 21st century. Russ Roberts: That's right. Yes. Yeah. Okay.

42:00 Russ Roberts: I'm going to read two paragraphs that I loved from the essay, and just let you expand on them. You write the following: Robust critical appraisal may put the medical conservative at odds with "content experts" who may oppose our skepticism on the grounds that it is not informed by deep expertise in the particular issue at hand. Yet the medical conservative remains steadfast in drawing a sharp distinction between content-level expertise and expertise in critical appraisal. These 2 may not go together, and the value of each must be judged on a case-by-case basis. For instance, the expert at placing implantable cardioverter-defibrillator (ICD) devices may or may not be the most reliable expert in answering the question of "when is it best to implant an ICD." Too often, content expertise becomes a synonym for devotion to the prevailing model or theory.



At the core of this tension is that content experts are often enthusiasts for whatever content they are expert in, whereas the medical conservative is enthusiastic only for that which has been proven to improve human health. When genuine benefit exists for an intervention, it easily withstands critical appraisal. No one debates the value of antibiotics for bacterial infection, percutaneous coronary intervention for acute myocardial infarction, or repair of femoral head fractures. React to your quote. Expand on it. Adam Cifu: First, I have to give a nod to my incredible co-authors on this article. As I read those paragraphs, I remember that there were ones in the first draft I saw where I just read them and thought, 'That is perfect, and that so summarizes my thinking: I am not going to touch that paragraph.' And I think this might be one of them. This is an interesting struggle. Because medicine has become so specialized that it's hard sometimes for me, as a generalist. I, you know, I love what I do. I think what I do is very important. And, if there's anything that I feel like I'm an expert in, it's probably appraising data. You know, it's appraising the medical literature and saying, 'Is this something that I would offer my patients?' And that sometimes puts me in the position of arguing with someone who is a true content expert. Who is the person who does these procedures, who spends their entire life seeing these patients, with these problems. And they actually know how to treat those patients better. They understand the disease better. But we will often disagree on the value of an intervention. And it's because--and I'm going to put myself in the right here, which certainly isn't always the case--it's because they feel like: 'I have seen this work.' Right? And, as we know, it may not have worked. It may be just that the patient was going to get better anyway. And that's what happened.
And they are such true believers in the mechanism of how this intervention works that they are blinded when they read the studies and lose track of: 'You know, this is a poorly designed study. The absolute benefit of this is quite small. The risk of adverse effects is quite high.' Whereas I, as a sort of maybe disinterested observer, will look at that and say, 'I don't understand why you are doing this.' On a larger level, this plays out in guidelines sometimes. The U.S. Preventive Services Task Force [a private group, not a U.S. government group--Econlib Ed.] is very proud of the fact that they really don't have content experts working on their guideline recommendations. They have experts in health outcomes, you know, critical appraisal. And they will often run afoul of these specialty societies who are coming out with their own guidelines, which will almost always be more aggressive, more pro-health-care than the U.S. Preventive Services Task Force. Russ Roberts: I can't help but note that the American Society of Civil Engineers often gives the United States a D in infrastructure. But, of course, they would. It doesn't mean we have bad infrastructure. They could still be right. It could be there is a serious problem. But their recommendation alone should be taken with many grains of salt. Adam Cifu: Yes.

46:38 Russ Roberts: The other thing I'm reminded of is Nassim Taleb, Nassim Nicholas Taleb, who has made an analogy I love, which this reminds me of: That, when you are trying to understand gambling at, say, the roulette wheel, you really don't want to consult the carpenter who built the wheel, on the grounds that, 'Well, the carpenter knows more about it than you do. You are just a statistician.' And, obviously, doctors' understanding of statistics is imperfect. I think that's a big challenge, an interesting aspect of medical education and education generally. Adam Cifu: Yeah. Russ Roberts: But one place this comes to mind is, of course, in the riskiness of procedures and testing. You know, with false positives, false negatives, surgeries that not just fail to cure the patient but tragically kill or harm them. Almost all tests have costs. Besides the financial costs, there is often radiation involved, which raises the risk of cancer in the future. One of the things that I've never really seen anyone carefully explain to me is: How do they know that a false positive has happened? Because the claim of a false positive means that we understand what actually does happen: if we really did understand, how would we know it was a false positive? Does that question make sense? Adam Cifu: Yes, it does. It does. It may be too philosophical for me to attack. You know, there is a move these days--and I think where we have difficulty with false positives, and we really do have difficulty in medicine here, is our level of significance. Which, in many realms, is the sort of 5% error rate, right? And I know--I was giving a talk once and my neighbor, who is a particle physicist, was in the audience and came up to me afterwards and chuckled and said, 'How can you accept that level of a false positive?' Right? I was like, 'Well, people are a little bit more expensive to deal with than your silly particles.' So, that's an issue.
And it's actually been nice to see the journals really struggle with how we can bring sort of Bayesian reasoning into our analysis of medical data these days, rather than going by the kind of strict, frequentist statistics. That is: 'Let's actually really consider our pre-test probability. How likely is this therapy to help people, based on what we've seen before? Based on the plausibility of this outcome?'--to see if we can have a better sense of what the real outcome is. And maybe therefore decrease the number of false positives that we're seeing. I am not sure we're there yet, and I'm certainly not a statistician, but the risks are pretty huge, because you have to think, 'So, who sets that pre-test probability?' Right? And we have enough trouble with the people designing the studies biasing the studies to give the answers they want. If we're also giving them a little bit more leeway in the statistics, will that be a problem? But I do think that's promising, and I'm hoping that over the final decade of my career I see more progress in that.
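[Econlib Ed.: The pre-test/post-test reasoning Cifu describes can be sketched with Bayes' rule. The figures below are hypothetical--the 1-in-100,000 prevalence and 8% false-positive rate Russ uses later in the conversation, plus an assumed perfect sensitivity.]

```python
# Bayes' rule for a positive screening test.
# Hypothetical inputs: prevalence 1 in 100,000; 8% false-positive
# rate; perfect sensitivity (assumed for illustration).
prevalence = 1 / 100_000     # pre-test probability of disease
sensitivity = 1.0            # P(test positive | disease)
false_positive_rate = 0.08   # P(test positive | no disease)

# Total probability of a positive test, sick or not.
p_positive = (sensitivity * prevalence
              + false_positive_rate * (1 - prevalence))

# Post-test probability: P(disease | positive test).
post_test_probability = sensitivity * prevalence / p_positive

print(f"{post_test_probability:.6f}")  # about 0.000125 -- roughly 1 in 8,000
```

Even with a test that never misses the disease, the rarity of the condition means a positive result leaves only about a 1-in-8,000 chance of actually being sick.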

50:29 Russ Roberts: Well, I raised the philosophical question, which you artfully ducked, because, when economists or statisticians like to make fun of doctors, they point out: 'Oh, they don't understand Bayes' Theorem. They don't understand that, for a test that shows a person has, say, cancer, the actual risk can be very, very small if the disease is rare and the rate of false positives is high.' And then there will be a little numerical example: 'Let's say the disease occurs once in every 100,000 people; and the false positive rate is 8%; and therefore, it turns out the probability of having the disease given a positive result is incredibly small.' And, besides being good for exam questions, these kinds of observations again make some people feel superior to doctors, which I guess pleases some people now and then. But, how would you know what the false positive rate is? It's a serious question; it's not actually a philosophical question. I do understand that if I do a screening, then do surgery, and I see the thing I went in to remove isn't there, that's a false positive. But do you have any idea how those kinds of numbers actually get guessed at? Adam Cifu: Right. So, I'll push back, because I would say, as physicians, from a diagnostic standpoint, we probably understand this better than anybody. Because we usually do get answers. Right? We see that person who we diagnosed with cancer who goes to surgery, and they don't have cancer. Or we see that person who got a false negative test, and we follow them clinically and realize that they actually do have an important disease. And, actually, you know, on rounds we talk about this all the time. When we think someone has a disease and get a negative test, we will actually sit down and calculate post-test probabilities, to say, 'Look: there's still a 40% likelihood. We've got to go for it and do another test.'
Where it's difficult, though, and where I think doctors don't do a good enough job--and because I'm a doctor I have to defend myself: you know, the stakes are high for us, which is why I'm not sure most people who criticize us have the standing to criticize us-- Russ Roberts: Yeah. We have no skin in the game. Adam Cifu: Right. Exactly. Is that, when you are looking at a therapy--well, let me say there are very few parachutes in medicine, right? There are very few things that have a number needed to treat of one, where you know you would die without this therapy and with this therapy you will get better. Right? Where there are such absolutes that we can truly tell there is a false positive, there is a false negative, in this study. Usually what it is, is that there's a large population; those people will respond differently to a therapy. Some of those people will benefit. Some of those people will not. In the way we practice medicine today, we accept a therapy if it helps people in a statistically significant way. But, you know, the difference therefore between a false positive and a false negative can be a few patients. Right? In a large study. And so, I think we don't understand that. And I think it's why, to go back to being a medical conservative, we feel like the right way forward is to say: Let's slow down. Let's not accept a single article, a single study, of a new intervention. Because we don't know if that article showing that a drug is beneficial is a false positive. We will know it's a true positive if we see it repeated a few times, on multiple different populations, with maybe subtly different study designs. And so, maybe that's my somewhat defensive answer. Russ Roberts: Yeah; I love that--it's great.
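[Econlib Ed.: The "number needed to treat" Cifu invokes is the reciprocal of a therapy's absolute risk reduction. A minimal sketch, with hypothetical event rates:]

```python
def nnt(control_event_rate: float, treated_event_rate: float) -> float:
    """Number needed to treat: reciprocal of the absolute risk reduction."""
    return 1 / (control_event_rate - treated_event_rate)

# A true "parachute": everyone has the event untreated, no one treated.
print(nnt(1.0, 0.0))           # 1.0 -- treat one patient, one benefits

# A more typical preventive therapy: event rate falls from 10% to 8%.
print(round(nnt(0.10, 0.08)))  # 50 -- treat about 50 patients for one to benefit
```

The contrast is the point: a parachute-like therapy helps essentially every patient who gets it, while a typical modern preventive drug must be given to dozens of patients for one to benefit, which is why a handful of patients can separate a true positive from a false one.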

54:42 Russ Roberts: There's been a story in the news recently about the relationship between depression--a mental health issue--and a particular gene. I think it's SLC6A4, for those keeping score at home. In the 1990s there was this excitement that we'd found a genetic basis for depression. According to a recent article in The Atlantic, as well as a fascinating and scorching blog post by Scott Alexander at SlateStarCodex, maybe 450 or maybe even a thousand studies were done confirming this relationship. A new study has come out finding there is no medical basis for that connection. It's a sham. A false positive. And it's not just one false positive--maybe a thousand studies are meaningless-- Adam Cifu: Right-- Russ Roberts: Totally misled us. Not right. Passed all the standard tests. And it's an enormous wake-up call, if in fact this new study is correct. There's always a question whether it's correct. But, assuming it is, it raises--one of the co-authors of the piece, Keller, was quoted in The Atlantic saying something I thought was reminiscent of your point. The Atlantic writes that Keller worries that these problems "will be used as ammunition to distrust science." And here's the quote from Keller about this study: "People ask, Well, if scientists are publishing crap, why should we believe global warming and evolution?" he says. And he continues: "But there's a real difference: Some people were skeptical about candidate genes even back in the 1990s. There was never unanimity or consensus in the way there is for human-made global warming and the theory of evolution." My response to that quote is: 'Well, of course scientists publish crap. They do it all the time. That's the nature of science.'
Good scientists publish science that doesn't stand up over time--social scientists do it with even more regularity. But you point out that, when you write a piece like this, you get accused of being 'anti-science': 'What, all these studies--you are just going to reject them? That opens the door to'--fill in the blank: creationism, anti-global-warming-ism, whatever. Respond to that critique of your approach, and to the Keller, et al., study if you have any thoughts on it. Adam Cifu: It's challenging. But it is true. You know, not only is there a lot of bad science out there, but the science does change. Our treatments change. Evolve. The populations on whom we are testing those treatments change. We are maybe in more hot water than economists, because people are much more interested in medicine and medical outcomes than they are in economic outcomes. You can argue whether that's fair or not. But, you know, every week you read Science Times, say, and many, many new studies are covered. And people read that, and they remember that. And, often, these are little blips in the process of trying to find truth. And we need to accept that. You know, not everything that's published is right. And even good studies are often not right. We have an article which recently came out, which looks at part of this, called 'Should Evidence Come with an Expiration Date?', which responds to some recent studies--probably the aspirin studies are the ones that people know best. A therapy that was clearly shown beneficial in the 1980s and 1990s became adopted into guidelines in the 1990s and 2000s. And then some really well-done studies showed that, at this point in time, the risks of that therapy for a subset of patients really do outweigh the benefits. Nothing has changed in human biology over that time. But the risk profile of patients has changed over that time.
There have also been multiple studies over the last decade about interventions for acute stroke. The first of those studies made it look like this was ineffective: it did more harm than good. But as the devices got better, as the operators got better, as we figured out who to use those interventions on, some of those interventions have appropriately become a standard of care. And you could say, 'Huh, you know: Those first studies were garbage. Scientists don't know what they are doing.' But it's just the process of figuring out how these powerful tools work. In whom they work and don't work.

59:48 Russ Roberts: So, what's the advice you might give the thoughtful patient? Your article is written mainly for your colleagues. And it's, as you say, a bit of a manifesto. It's a delightful read. But for those of us out here in, uh, anxiety-land, as we age-- Adam Cifu: Heh, heh, right-- Russ Roberts: and more things go wrong, and more symptoms show up every day--some of which are irrelevant but some of which are life-threatening-- Adam Cifu: yeah-- Russ Roberts: How should we approach this? Philosophically? And I'm going to add a twist. My father recently had a procedure. He's 88. My thought was: You know, it's probably not worth it. After the procedure, the doctor said, 'Good news. There was nothing really there; and you don't have to come back again for it.' And my dad was happy. He's not a fool, though. Even at 88, he's mentally aware enough to know that: Maybe I shouldn't have gone at all. It turned out well; it could have killed him. Right? But, thank God, it didn't. As his agent, I would have counseled him to just let it go. Take a chance. Easy for me to say; I'm not him. But, as he gets older, that's going to be my call to some extent more than it is now. And: How should I think about that? And, when it's me, how should I think about it? Adam Cifu: Yeah. My advice will, I think, seem obvious. And it may be hard to accept, or at least to put into use. But the most important thing--heh, heh--is to find a physician who you can actually talk to. A physician who has time to engage in a conversation and will, you know, accept your questions. And then, for anything that is recommended, to ask a couple of very simple questions. The first question is probably: What are the alternatives? And 'the alternatives' means: What if I do nothing here? And then asking: If I accept this therapy--if you are suggesting something and I accept it--what evidence do you have that this is going to help me?
And, if you say 'help me,' what does that mean? You really need to know from your doctor: Is this something that may get better on its own? Because if that's the case, maybe it's fine to wait. Especially if the risks of not doing anything are small. That's the case--you know, to go back to your shoulder--for many, many, many orthopedic complaints: many of them get better on their own. And when we study those, often if you send people to physical therapy first, 50% of those people end up not needing the procedure that was recommended to begin with. That may be because physical therapists are amazing, or it may be because time is beneficial. The other thing is to say, 'Okay, so, you are saying that is a good therapy.' What does 'good' mean? What is it going to help? And if the person tells you, 'It is going to make lab value X better, but I have no idea if that's going to make you better': You should pause. And you should think hard about that. Because, um, I and my medical-conservative colleagues are very skeptical of surrogate outcomes. Which have sometimes led us in the right direction, but have sometimes really led us astray. Russ Roberts: I think the challenge of that approach--well, the first part of it I just absolutely love, which is: Choose your doctor wisely and find a medical conservative, or try to find one. I have a medically conservative dentist, which is very rare. When I asked him if I really needed to replace this missing tooth way in the back of my mouth--because there was a risk that the tooth above it would grow down and then go through my jaw and pierce through my throat and into my leg, because teeth continue to grow--I said, 'What are the odds that will happen in the next x years?' And he said, 'We really don't have any idea.' And I thought, 'I appreciate that honesty.' So I skipped it, and he said, 'Good idea.' So far, so good, by the way. But the other problem is when I say, 'What will happen if I do nothing?'
Often that honest, medically conservative doctor will say, 'Well, we don't know that with any precision.' And, of course, patients want--even data-conscious patients want--some data. And when you don't have it--which you often don't--we're kind of left in the lurch there; it's a tough situation. Adam Cifu: Yeah. I kind of hate to fall back on the art of medicine, because it's just such a trope. But I think most of the art of medicine is, you know, figuring out how to talk to your patients. Because there are some people who, if you give them 'I really don't know,' they get angry and will quickly find a doctor who will make recommendations--which may not be the right thing for them. Other people are very happy to accept that. And I hear from some of those people, 'Oh! That's why I continue to see you as my doctor.' And it's very hard to figure out what people want. Because if you ask them directly--we often don't know what we want from our doctor. Maybe until it's too late.

1:04:57 Russ Roberts: You said you had a large practice? Do you have a rough idea what that means? What's a large practice? Roughly how many patients do you have? Adam Cifu: Yeah. So, people in primary care--you know, general medicine, family medicine practices--may scoff at me. But I take care of about 850 patients. That's kind of my panel. And that's based on, you know, people who have seen me in the last 18 months, I think, is our cut-off. If you go into our not-perfect medical record system and look at how many primary care patients I have, I think the number is about 2,500. But many of those people disappeared long ago and no longer see me. And, obviously, every doctor will tell you that of those 850 patients there are probably 100 whom I see all the time and actively manage and know about as well as I know my children. And then there are another 750 who, you know, call me occasionally for a problem or show up every now and then for that wellness-type visit. And the wonderful thing about medicine is that there's a lot of flux. People graduate in and out of that hundred active patients. So, it's fun.