In a world where 85% of doctors can't solve simple Bayesian word problems...

In a world where only 20.9% of reported results fully replicate when a pharmaceutical company tries to investigate them for development purposes...

In a world where "p-values" are anything the author wants them to be...

...and where there are all sorts of amazing technologies and techniques which nobody at your hospital has ever heard of...

...there's also MetaMed. Instead of just having “evidence-based medicine” in journals that doctors don't actually read, MetaMed will provide you with actual evidence-based healthcare. Their Chairman and CTO is Jaan Tallinn (cofounder of Skype, major funder of xrisk-related endeavors), one of their major VCs is Peter Thiel (major funder of MIRI), their management includes some names LWers will find familiar, and their researchers know math and stats and in many cases have also read LessWrong. If you have a sufficiently serious problem and can afford their service, MetaMed will (a) put someone on reading the relevant research literature who understands real statistics and can tell whether the paper is trustworthy; and (b) refer you to a cooperative doctor in their network who can carry out the therapies they find.

MetaMed was partially inspired by the case of a woman who had her fingertip chopped off, was told by the hospital that she was screwed, and then read through an awful lot of literature on her own until she found someone working on an advanced regenerative therapy that let her actually grow the fingertip back. The idea behind MetaMed isn't just that they will scour the literature to find how the best experimentally supported treatment differs from the average wisdom - people who regularly read LW will be aware that this is often a pretty large divergence - but that they will also look for this sort of very recent technology that most hospitals won't have heard about.

This is a new service and it has to interact with the existing medical system, so they are currently expensive, starting at $5,000 for a research report. (Keeping in mind that a basic report involves a lot of work by people who must be good at math.) If you have a sick friend who can afford it - especially if the regular system is failing them, and they want (or you want) their next step to be more science instead of "alternative medicine" or whatever - please do refer them to MetaMed immediately. We can’t all have nice things like this someday unless somebody pays for it while it’s still new and expensive. And the regular healthcare system really is bad enough at science (especially in the US, but science is difficult everywhere) that there's no point in condemning anyone to it when they can afford better.

I also got my hands on a copy of MetaMed's standard list of citations that they use to support points to reporters. What follows isn't nearly everything on MetaMed's list, just the items I found most interesting.

90% of preclinical cancer studies could not be replicated:

http://www.nature.com/nature/journal/v483/n7391/full/483531a.html

"It is frequently stated that it takes an average of 17 years for research evidence to reach clinical practice. Balas and Boren, Grant, and Wratschko all estimated a time lag of 17 years measuring different points of the process." - http://www.jrsm.rsmjournals.com/content/104/12/510.full





"The authors estimated the volume of medical literature potentially relevant to primary care published in a month and the time required for physicians trained in medical epidemiology to evaluate it for updating a clinical knowledgebase.... Average time per article was 2.89 minutes, if this outlier was excluded. Extrapolating this estimate to 7,287 articles per month, this effort would require 627.5 hours per month, or about 29 hours per weekday."





One-third of hospital patients are harmed by their stay in the hospital, and 7% of patients are either permanently harmed or die: http://www.ama-assn.org/amednews/2011/04/18/prl20418.htm





(I emailed MetaMed to ask for the actual bibliography for the following citations, since that wasn't included in the copy of the list I saw. I already recognize some of the citations having to do with Bayesian reasoning, which makes me fairly confident of the others.)





Statistical Illiteracy





Doctors often confuse sensitivity and specificity (Gigerenzer 2002); most physicians do not understand how to compute the positive predictive value of a test (Hoffrage and Gigerenzer 1998); a third overestimate benefits if they are expressed as positive risk reductions (Gigerenzer et al 2007).

Physicians think a procedure is more effective if the benefits are described as a relative risk reduction rather than as an absolute risk reduction (Naylor et al 1992).

Only 3 out of 140 reviewers of four breast cancer screening proposals noticed that all four were identical proposals with the risks represented differently (Fahey et al 1995).

60% of gynecologists do not understand what the sensitivity and specificity of a test are (Gigerenzer et al 2007).

95% of physicians overestimated the probability of breast cancer given a positive mammogram by an order of magnitude (Eddy 1982).

When physicians receive prostate cancer screening information in terms of five-year survival rates, 78% think screening is effective; when the same information is given in terms of mortality rates, 5% believe it is effective (Wegwarth et al, submitted).

Only one out of 21 obstetricians could estimate the probability that an unborn child had Down syndrome given a positive test (Bramwell, West, and Salmon 2006).

Sixteen out of twenty HIV counselors said that there was no such thing as a false positive HIV test (Gigerenzer et al 1998).

Only 3% of questions in the certification exam for the American Board of Internal Medicine cover clinical epidemiology or medical statistics, and risk communication is not addressed (Gigerenzer et al 2007).

British GPs rarely change their prescribing patterns and when they do it’s rarely in response to evidence (Armstrong et al 1996).
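The mammogram result above (Eddy 1982) comes from a standard Bayes'-rule exercise. A minimal sketch of the computation, using illustrative numbers close to those in the classic problem (the exact figures here are assumptions, not taken from MetaMed's list):

```python
# Bayes' rule for the positive predictive value (PPV) of a screening test.
# Illustrative numbers in the spirit of the classic mammography problem:
prevalence = 0.01       # P(cancer) in the screened population
sensitivity = 0.80      # P(positive | cancer)
false_positive = 0.096  # P(positive | no cancer)

# Total probability of a positive result, then Bayes' rule:
p_positive = sensitivity * prevalence + false_positive * (1 - prevalence)
ppv = sensitivity * prevalence / p_positive
print(f"P(cancer | positive) = {ppv:.1%}")  # about 7.8%
```

With a 1% base rate, even a fairly accurate test produces mostly false positives, so the true posterior is under 10%; the physicians in Eddy's study typically guessed around 75%.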





Drug Advertising





Direct-to-customer advertising by pharmaceutical companies, which is intended to sell drugs rather than to educate, often does not contain information about a drug's success rate (only 9% did), alternative methods of treatment (29%), behavioral changes (24%), or the treatment duration (9%) (Bell et al 2000).

Patients are more likely to request advertised drugs, and doctors are more likely to prescribe them regardless of their misgivings (Gilbody et al 2005).





Medical Errors





44,000 to 98,000 patients are killed in US hospitals each year by documented, preventable medical errors (Kohn et al 2000).

Despite the proven effectiveness of simple checklists in reducing infections in hospitals (Pronovost et al 2006), most ICU physicians do not use them.

Simple diagnostic tools, which may even ignore some data, give measurably better outcomes in areas such as deciding whether to put a new admission in a coronary care bed (Green and Mehr 1997).

Tort law often actively penalizes physicians who practice evidence-based medicine instead of the medicine that is customary in their area (Monahan 2007).

Out of 175 law schools, only one requires a basic course in statistics or research methods (Faigman 1999), so many judges, jurors, and lawyers are misled by nontransparent statistics.

93% of surgeons, obstetricians, and other health care professionals at high risk for malpractice suits report practicing defensive medicine (Studdert et al 2005).





Regional Variations in Health Care





Tonsillectomies vary twelvefold between the counties in Vermont with the highest and lowest rates of the procedure (Wennberg and Gittelsohn 1973).

Fivefold variations in one-year survival from cancer across different regions have been observed (Quam and Smith 2005).

Fiftyfold variations in people receiving drug treatment for dementia have been reported (Prescribing Observatory for Mental Health 2007).

Rates of certain surgical procedures vary tenfold to fifteenfold between regions (McPherson et al 1982).

Clinicians are more likely to consult their colleagues than medical journals or the library, partially explaining regional differences (Shaughnessy et al 1994).





Research





Researchers may report only favorable trials, only report favorable data (Angell 2004), or cherry-pick data to only report favorable variables or subgroups (Rennie 1997).

Of 50 systematic reviews and meta-analyses on asthma treatment, 40 had serious or extensive flaws, including all 6 associated with industry (Jadad et al 2000).

Lower-tech knowledge and applications tend to be considered less innovative and are ignored (Shi and Singh 2008).





Poor Use of Statistics In Research





Only about 7% of major-journal trials report results using transparent statistics (Nuovo, Melnikow and Chang 2002).

Data are often reported in biased ways: for instance, benefits are often reported as relative risks (“reduces the risk by half”) and harms as absolute risks (“an increase of 5 in 1000”); absolute risks seem smaller even when the risk is the same (Gigerenzer et al 2007).

Half of trials inappropriately use significance tests for baseline comparison; 2/3 present subgroup findings, a sign of possible data fishing, often without appropriate tests for interaction (Assmann et al 2000).

One third of studies use mismatched framing, where benefits are reported one way (usually relative risk reduction, which makes them look bigger) and harms another (usually absolute risk reduction, which makes them look smaller) (Sedrakyan and Shih 2007).
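The framing effect described above is simple arithmetic. A minimal sketch with hypothetical trial numbers (assumed for illustration, not drawn from any cited study):

```python
# The same hypothetical treatment effect, framed two ways.
control_risk = 0.002  # 2 in 1000 events without treatment (assumed)
treated_risk = 0.001  # 1 in 1000 events with treatment (assumed)

arr = control_risk - treated_risk  # absolute risk reduction
rrr = arr / control_risk           # relative risk reduction
nnt = 1 / arr                      # number needed to treat for one benefit
print(f"RRR = {rrr:.0%}, ARR = {arr:.3%}, NNT = {nnt:.0f}")
```

The same result reads as "cuts the risk in half" (RRR = 50%) or "helps 1 patient in 1000" (ARR = 0.1 percentage points), which is why mismatched framing of benefits and harms is so effective.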





Positive Publication Bias





Positive publication bias overstates the effects of treatment by up to one-third (Schulz et al 1995).

More than 50% of research is unpublished or unreported (Mathieu et al 2009).

In ten high-impact medical journals, only 45.5% of trials were adequately registered before testing began; of these, 31% showed discrepancies between the outcomes measured and those published (Mathieu et al 2009).





Pharmaceutical Company Induced Bias





Studies funded by the pharmaceutical industry are more likely to report results favorable to the sponsoring company (Lexchin et al 2003).

There is a significant association between industry sponsorship and both pro-industry outcomes and poor methodology (Bekelman and Kronmal 2008).

In manufacturer-supported trials of non-steroidal anti-inflammatory drugs, half the time the data presented did not match claims made within the article (Rochon et al 1994).

68% of US health research is funded by industry (Research!America 2008), which means that research leading to profits for the health care industry tends to be prioritized.

71 out of 78 drugs approved by the FDA in 2002 are “me too” drugs that are more profitable because of the patent but not substantially different from existing medication (Angell 2004).

“Seeding trials” by pharmaceutical companies promote treatments instead of testing hypotheses (Hill et al 2008).

Even accurate research may be misreported by pharmaceutical company advertising, including ads in medical journals (Villanueva et al 2003).

In 92% of cases, pharmaceutical leaflets distributed to doctors have data summaries that either cannot be verified or inaccurately summarize available data (Kaiser et al 2004).









I don't plan on becoming seriously sick, but if I do, I think I'll check in with MetaMed just to make sure nobody is ignoring the research results showing that you shouldn't feed the patient rat poison.