March 28, 2019 | By Dr. Ronald Hoffman

As my readers know, I’ve often lamented the lack of health literacy in the general public. I’ve even created a series of Health IQ quizzes so my audience could assess their knowledge.

But one of the reasons for confusion over fundamental health issues (Is moderate drinking healthy? Is coffee good or bad for you? Does consuming eggs up your risk for heart disease?) is that many studies contain fundamental methodological flaws.

Even doctors, who are supposed to studiously evaluate the latest research and interpret its conclusions for patients, are ill-equipped to cut through the confusion. We all had to take basic courses in statistics and data analysis, but the complexity of scientific studies often renders them too tough to properly critique by all but the wonkiest biostatisticians.

In fact, in a celebrated 2005 paper, “Why Most Published Research Findings Are False,” Dr. John P. A. Ioannidis argued that flawed studies are the norm. Among his assertions are “The greater the financial and other interests and prejudices in a scientific field, the less likely the research findings are to be true,” and “The hotter a scientific field (with more scientific teams involved), the less likely the research findings are to be true.”

So what’s a person to do?

Even if you’re convinced that critiquing a scientific paper is above your pay grade, here are some basic ways you can arm yourself against junk science when confronted with health headlines that just don’t add up.

1) Correlation is Not Causation: I recently heard a doctor hold forth on his personal theory of why men are less fertile these days. He’s a urologist, conscious of the relationship between testicular temperature and sperm motility (which is why taking hot baths before sex can offer a partial, though far from complete, contraceptive effect).

He actually authored a study containing neat graphs correlating “global warming” with declining male fertility: a perfect match! The trouble is, even setting aside the debate over climate change, it’s by no means clear that temperature elevation is the sole or even primary cause of men’s declining fertility. What about poor diet and environmental toxicity? Endocrine-disrupting chemicals are a more plausible explanation for sperm problems than overheated gonads.
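A purely illustrative sketch (the numbers below are invented, not taken from any study) shows why two quantities that merely trend over time will always correlate strongly, causation or not:

```python
from statistics import mean, pstdev

def pearson_r(xs, ys):
    """Pearson correlation coefficient, computed from scratch."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / len(xs)
    return cov / (pstdev(xs) * pstdev(ys))

# Two made-up, steadily trending series: one rises, one falls.
years = list(range(1990, 2020))
temperature_anomaly = [0.3 + 0.02 * (y - 1990) for y in years]  # hypothetical warming trend
sperm_count = [100 - 1.5 * (y - 1990) for y in years]           # hypothetical fertility decline

r = pearson_r(temperature_anomaly, sperm_count)
print(f"r = {r:.2f}")  # a near-perfect negative correlation, yet it proves no causal link
```

Any two variables that drift in opposite directions over the same period will produce a “perfect match” like this, which is exactly why a neat graph alone demonstrates nothing about cause.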

2) Poor Placebos: The “Gold Standard” of scientific research is the “double-blind placebo controlled trial” in which neither the researchers nor the subjects know who’s getting the active treatment vs. the placebo. But sometimes, researchers choose a “placebo” that actually may distort a study’s conclusions. A classic example of this is a study which concluded that fish oil wasn’t beneficial for the heart—after a comparison with subjects taking identical olive oil capsules. Trouble is, olive oil is heart-healthy! In this study, both the active treatment and the placebo conferred comparable protection, leading to the false conclusion that fish oil was worthless.

3) Follow the Money Trail: Research has conclusively demonstrated that if a drug company is sponsoring a study, it’s more likely to show that a medication is effective or superior to a rival drug. Many such studies, in fact, are “ghostwritten” by pharmaceutical copywriters, and influential doctors are paid simply to put their imprimatur on them.

Similarly, researchers-for-hire sometimes exaggerate the benefits of supplements. Alternatively, countries can advance their commercial interests with dubious research. This happened recently when Cuban studies touted the cholesterol-lowering effects of policosanol, a derivative of domestically-produced sugar cane. When the same material was retested by a Scandinavian group, it showed no effects on serum lipids.

4) Bias: This is a supremely human trait—even honest researchers are never completely free of unconscious bias. Medical journals are rife with pharmaceutical bias—they are just not hospitable to natural therapies. In some, an “anti-quackery” bias leads them to feature poorly-written accounts of alleged supplement harms.

A recent example is a series of studies claiming lack of efficacy—and even potential harm—from calcium supplements. Upon further examination, these claims emanate from a New Zealand study group that also takes a skeptical view of vitamin D supplementation for bone health. While this work is a valuable caution against overzealous reliance on calcium alone for bone health, other studies directly contradict its conclusion that calcium is worthless. The authors emphatically state that women should not be deluded into believing that natural supplements can forestall the need for osteoporosis medication—a bias that colors their interpretation of calcium data.

5) Wrong Problem: Some research shows that a drug—or even a supplement—can lower blood pressure, drop cholesterol levels, or fix high blood sugars. Admittedly, that may turn out to be beneficial in the long run, but the real test of a treatment, be it a drug, supplement, surgery, or other procedure, is whether it ultimately helps patients enjoy greater longevity or better quality of life, end points which are often not evaluated. What good, for example, is a cancer drug that prolongs life by an average of two months, at the cost of serious side effects, if after two years there is no discernible survival benefit? Yet many drugs receive approval on the basis of research demonstrating an effect that is of no long-term benefit to patients.

6) Rodent Research: Rats and mice have it good. We’ve “cured” Alzheimer’s, spinal cord injury, and cancer countless times in rodent models. But people aren’t rodents (well, perhaps with a few exceptions!). Therefore, studies showing this or that is good or bad in an animal experiment may have human implications—or not. Since rabbits don’t generally eat lard, experiments that show that saturated fat causes atherosclerosis in animals programmed to be vegetarians should not form the basis for far-reaching public health recommendations that humans shouldn’t eat meat!

7) Sample Size: A study of 12 individuals shows that ginkgo biloba cures tinnitus; 6 patients receiving a new brain cancer drug are now disease-free; your aunt Sally took apple cider vinegar for three months and lost 18 pounds. Fine, but these are hypothesis-forming investigations that will require larger trials, with many more subjects, to achieve the statistical significance that raises them beyond the status of mere anecdotes. The trouble is, large trials are expensive, and the advantage goes to the pharmaceutical industry, which can invest the hundreds of millions of dollars necessary to prove its drugs work. Besides, why sponsor expensive research on non-patentable natural substances that other companies can readily copy?
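To get a feel for just how far a 12-person pilot falls short, here is the standard normal-approximation formula for sizing a two-group trial comparing response rates (the effect size and thresholds below are illustrative choices, not from any particular study):

```python
import math

def sample_size(p1: float, p2: float, z_alpha: float = 1.96, z_beta: float = 0.84) -> int:
    """Rough per-group sample size to detect a difference between two proportions,
    at 5% two-sided alpha (z=1.96) and 80% power (z=0.84)."""
    p_bar = (p1 + p2) / 2
    n = ((z_alpha + z_beta) ** 2 * 2 * p_bar * (1 - p_bar)) / (p1 - p2) ** 2
    return math.ceil(n)

# Even a large hypothetical effect -- raising response from 30% to 50% --
# requires roughly 95 subjects PER GROUP, far beyond a 12-person study.
print(sample_size(0.30, 0.50))  # → 95
```

Shrink the effect to something more realistic for a supplement and the required numbers climb into the thousands, which is precisely where the funding advantage of the pharmaceutical industry kicks in.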

8) Subset Effect: Should you take aspirin? Should your child receive a new vaccine? Is coffee right for you? While studies of thousands of individuals may show that something is harmless or beneficial on average, you are an individual, not a statistic. A large study may miss the fact that a tiny but very real subset of the population has unique or paradoxical reactions.

9) Study Quality: Whether it’s done in a test tube, relies on the dietary recall of thousands of individuals with poor memories, or was performed in a developing country with notoriously poor scientific standards, it’s still called a “study”. Was it published in a reputable scientific journal? Are the researchers free of conflicts of interest? Is an overreaching conclusion being drawn from results in a Petri dish? Were controls in place to make sure subjects actually stuck to that diet or consistently took that supplement? These are all considerations when evaluating the quality of research.

10) NNT: The Number Needed to Treat is a useful way to find out whether the true implications of a study are being over-promoted. For example, a cholesterol drug might be shown to reduce heart attacks by a third. Sounds pretty good, doesn’t it? But the claim might be based on a five-year study showing that, among a thousand subjects, 3 got heart attacks when not taking the drug, and only 2 when taking it. The practical implication is that, over 5 years, the Number Needed to Treat to avert just one heart attack is 1,000. Fine if you’re the person who’s spared, but that means 999 other people took the drug unnecessarily, risking debilitating side effects.
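Working from the hypothetical numbers above (3 events vs. 2 events per 1,000 subjects over five years), the NNT arithmetic is straightforward:

```python
# Hypothetical five-year trial numbers from the example above.
control_events, treated_events, n = 3, 2, 1000

control_risk = control_events / n    # 0.003
treated_risk = treated_events / n    # 0.002

# The impressive-sounding headline number: relative risk reduction.
relative_risk_reduction = (control_risk - treated_risk) / control_risk  # ≈ 0.33, "a third"

# The sobering number: absolute risk reduction, and the NNT derived from it.
absolute_risk_reduction = control_risk - treated_risk  # 0.001
nnt = 1 / absolute_risk_reduction                      # ≈ 1000

print(f"Relative risk reduction: {relative_risk_reduction:.0%}")
print(f"Number Needed to Treat:  {nnt:.0f}")
```

The same data honestly support both claims; the relative figure is just the one that makes headlines, while the NNT tells you what the drug means for an individual patient.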

11) Lazarus Effect: This is often invoked to demonstrate that supplements are inefficacious. Lazarus, as you may remember from Sunday school, was raised from the dead by Jesus. A supplement is given—too late to make any difference—to a very sick population, say with established heart disease, Alzheimer’s Disease, or advanced cancer. Naturally, the supplement does very little at that point. Conclusion: Worthless! The problem is, such research does not address the more modest and plausible proposition that a given supplement might help to arrest the progression of early disease or prevent it in the first place.

12) Meta-Analysis Misdirection: These are studies of studies, aggregating previous research and weighting their conclusions based on the objective quality of trials involved. Trouble is, meta-analyses are highly susceptible to bias; it’s easy for authors of meta-analyses to “cherry-pick” studies that buttress their preconceptions about a research problem.

13) Relevance: Despite encouraging hints, does the study in question provide you with practical news you can use? For instance, a recent study showed that the B vitamin biotin might slow the progression of MS. But, on closer examination, the dosage used in the study would require you to take 20 to 60 capsules per day of the highest-potency biotin currently sold in health food stores! The researchers hope to patent a high-dose biotin drug that would be available only by prescription, and only if future trials pan out and the formulation receives FDA approval after years of rigorous review. Recent news on high-dose biotin isn’t so promising.

14) P-Hacking: Also known as data-dredging, p-hacking is the practice of torturing data to squeeze out a favorable “p-value”. The p-value is the probability of obtaining a result at least as extreme as the one observed if chance alone were at work; a value below 0.05 is conventionally taken as “statistically significant”. Researchers sometimes reach that threshold by running many comparisons and reporting only the ones that pan out, or by discarding discordant data that doesn’t support the hypothesis.
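A simple simulation makes the danger concrete: test enough hypotheses on pure noise (here, fair coins) and some will clear the 0.05 bar by chance alone. This is an illustrative sketch, not a model of any particular study:

```python
import math
import random

random.seed(42)  # fixed seed so the illustration is reproducible

def two_sided_p(heads: int, n: int) -> float:
    """Normal-approximation p-value for testing whether a coin is fair."""
    z = abs(heads - n / 2) / math.sqrt(n / 4)
    return math.erfc(z / math.sqrt(2))  # two-sided tail probability

# "Dredge" 100 null datasets: each is 100 flips of a perfectly fair coin,
# so any "significant" result is a false positive by construction.
false_positives = 0
for _ in range(100):
    heads = sum(random.random() < 0.5 for _ in range(100))
    if two_sided_p(heads, 100) < 0.05:
        false_positives += 1

print(f"'Significant' findings from pure noise: {false_positives} out of 100 tests")
```

On average about 5 of the 100 tests will come up “significant” despite there being nothing to find, which is why a lone sub-0.05 result plucked from many comparisons means very little.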

15) Recall Bias: In many observational studies, such as the recent one suggesting egg consumption could increase cardiovascular risk and overall mortality, subjects are quizzed as to their food consumption. But even the best food diaries are prone to omission or exaggeration. Do you even remember what you had for breakfast last Monday?

Hopefully, this article will help you to become a more discerning consumer of health news. Or, let Intelligent Medicine provide you with timely analyses of the latest stories.
