A day almost never passes without someone sending a comment my way about some recent study, plucked by the media from the hundreds published that same day, showing that low-carb diets cause brain fog or decreased longevity or cancer of some type or any number of other conditions any of us would rather not have. These comments always end with the plaintive question: is there any truth to this?

My answer follows: these data come from an observational study and, as such, can’t possibly demonstrate causality.

Since I get these comments so often and give the same answer just as often, I figured it was about time to write a post on what an observational study really is, so that I can simply link to it when I give my standard reply.

I can then add this post to the ones on the glycemic index and relative risk, both of which serve the same purpose. I can simply link instead of explaining what these terms mean each time I have to use them.

Observational studies – a category that includes prospective, cohort, and case-control studies, often described simply as epidemiological studies – are the kind most often reported in the media, simply because there are so many of them. These are the studies in which researchers look for disease disparities between large populations of people with different diets, lifestyles, medications, incomes, etc. If disease disparities are found to exist between groups, the researchers then try to make the case that the difference in diet, lifestyle, medication, or whatever else is the driving force behind the disparity.

We’ve all seen these studies by the score. We read that a large study population is separated into two groups based on blood levels of vitamin C. One group of subjects has high blood levels; the other group has lower blood levels. And since everyone seems to believe that vitamin C protects against the common cold, the researchers decide to monitor these two groups for a year and find that the group with the higher blood levels of vitamin C has the fewer colds. These findings are rushed into publication, and soon we read everywhere that vitamin C prevents the common cold. It all seems so reasonable and so scientific, but the truth is that these studies don’t mean squat. And the researchers who do them know it, or at least should know it. That they do know is evident in the weasel words they use in describing their findings. You’ll read that these data ‘suggest’ or that they ‘imply’ or that this ‘may cause’ that. The non-technically trained public, however, reads these to say that vitamin C prevents the common cold. And usually the media helps to sway opinion by slanting the story in the same direction.

But, you may ask, why aren’t these studies sound? If the group with the higher blood levels of vitamin C had significantly fewer colds, why is it such a stretch to say that vitamin C prevents colds?

I can explain by way of a game I used to play with myself as a child. I’ve never been one to sleep much even when I was a kid. I always stayed up late and I always woke up early. My brain never seemed to slow down. I was always ruminating on something. My way of trying to get to sleep was to try to think of everything that could be thought of. My mind would race, and I would think of my brothers sleeping in the room with me, their beds, my bed, the closet, the tree outside, my dad’s car, the rug on the floor, the moon, and on and on and on. As I thought faster and faster, continuing to compile things that could be thought of, I would finally hit a quitting point. Then I would try to figure if there was anything I hadn’t thought of. Of course, immediately I would think of something. I hadn’t thought of the pigs on my grandfather’s farm. Or I hadn’t thought of the fire hydrant out front. Or my father’s shoes. Or whatever. Then I would start the game again, this time, of course, starting with the pigs on my grandfather’s farm and going from there. I would always fall asleep before I had ever thought of everything there was to think of.

Researchers doing observational studies have much the same problem. They try to think of all the differences between two large populations of subjects so that they can statistically adjust them away, leaving only the observation in question – the vitamin C level in the example above – different between the groups. The problem is that they can never possibly think of all the differences between the groups. As a consequence, they never have a perfect study with exactly the same numbers, sexes, ages, lifestyles, etc. on both sides and the study parameter as the only difference. And so they never really prove anything. In fact, we would all probably be a lot better off if all the researchers doing observational studies had followed my lead and fallen asleep mid-study.
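The confounding problem can be made concrete with a toy simulation (entirely my own construction, using made-up numbers, not data from any real study): give every simulated person a hidden "general health" score that drives both their blood vitamin C level and their resistance to colds, while vitamin C itself does nothing. An observational comparison of the two groups still shows a large difference in cold rates.

```python
import random

random.seed(0)

# Toy illustration: a hidden "health" factor drives BOTH vitamin C
# levels and cold resistance. Vitamin C has no causal effect here,
# yet the high-C group still ends up with far fewer colds.
n = 10_000
high_c_colds, high_c_n = 0, 0
low_c_colds, low_c_n = 0, 0

for _ in range(n):
    health = random.random()                 # unmeasured confounder, 0..1
    high_vit_c = random.random() < health    # healthier people eat more produce
    got_cold = random.random() > health      # healthier people catch fewer colds
    if high_vit_c:
        high_c_n += 1
        high_c_colds += got_cold
    else:
        low_c_n += 1
        low_c_colds += got_cold

print(f"cold rate, high vitamin C: {high_c_colds / high_c_n:.2f}")
print(f"cold rate, low vitamin C:  {low_c_colds / low_c_n:.2f}")
```

The correlation is real and would pass any significance test, but by construction vitamin C caused nothing; an unmeasured third variable produced the whole effect. An observational study can never rule out every such variable.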

But I’m being too harsh. These studies do have some value. Their value is in generating hypotheses.

The observational study demonstrates a correlation. In our example above, the correlation is that higher vitamin C levels correlate (in this particular study) with lower rates of colds. So, from these data, we could hypothesize that vitamin C prevents the common cold. But at this stage that would be just a hypothesis – not a fact.

Once we have the hypothesis, we can then do a randomized, placebo-controlled trial. We recruit subjects and randomize them into two groups that are as equal as possible, especially where vitamin C levels are concerned. Then we give one group of subjects vitamin C and the other a placebo and watch them for a year. At the end of the year (or whatever the study period is), we break the codes and see who was on vitamin C and who was on placebo. We already know how many got colds, so now we compare that to vitamin C intake. If we find that those who took the vitamin C got significantly fewer colds, we can say that our study demonstrates that vitamin C prevents the common cold. If this same study is repeated a number of times with the same outcome, then it can be said to be proven that vitamin C prevents colds. (This study is, of course, hypothetical.)
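The arithmetic at the end of such a trial is simple. Here is a minimal sketch with invented counts (none of these numbers come from any real trial) showing how the two arms would be compared, in the spirit of my earlier post on relative risk:

```python
# Hypothetical counts for the vitamin C trial sketched above --
# these numbers are made up purely for illustration.
vit_c_group = {"n": 500, "colds": 100}    # 20% caught a cold
placebo_group = {"n": 500, "colds": 150}  # 30% caught a cold

risk_treated = vit_c_group["colds"] / vit_c_group["n"]
risk_placebo = placebo_group["colds"] / placebo_group["n"]
risk_ratio = risk_treated / risk_placebo

print(f"risk on vitamin C: {risk_treated:.0%}")
print(f"risk on placebo:   {risk_placebo:.0%}")
print(f"risk ratio:        {risk_ratio:.2f}")  # 0.67: a third fewer colds
```

Because randomization balances the hidden differences between the groups (on average), a risk ratio well below 1.0 here, replicated across repeated trials, would actually support causation – which is exactly what the observational version cannot do.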

But these studies are randomized trials, not observational studies. Observational studies only show correlation, not causation, a fact that everyone doing research and reading about research should have tattooed on their foreheads.

CORRELATION IS NOT CAUSATION

More often than not, observational studies are chock full of all kinds of technical-looking graphs, charts, and tables. Many even have complicated equations and long statistical analyses of the derived data. They are like zombies, however: they give the appearance of scientific life, but they are really scientifically dead. Irrespective of how many scientific baubles are strewn through them, they are nothing but observational studies, worthwhile only as generators of hypotheses. They demonstrate only correlation, not causation.

If you’ll bear with me, I’ll show you a bizarre observational study that was actually performed and that demonstrates everything you need to know about observational studies.

The study was published in 2003 in the prestigious American Journal of Epidemiology. The title of the study is Shaving, Coronary Heart Disease, and Stroke. (Click here for free full text) This study purports to show that the frequency of shaving correlates with risk for developing heart disease, with those men shaving less having a greater risk.

Here’s the finding that initiated this study.

A case-control study comparing the frequency of shaving in 21 men under 43 years of age who had suffered a myocardial infarction and 21 controls found that nine of the cases but none of the controls shaved only every 2 or 3 days.

Someone noticed that nine of the 21 men in a small group of subjects who had had a heart attack – almost half – shaved only once every two or three days. Another group of 21 men of similar age who hadn’t had a heart attack were designated as controls. Upon questioning, it was discovered that none of the men in the control group shaved as infrequently as every two or three days. Thus the first hypothesis was born: infrequent shaving correlates with heart attack.

The researchers had access to a large population of subjects from another ongoing study called the Caerphilly Study. Researchers recruited 2,513 men aged 45-59 from this study and gave them comprehensive medical workups including extensive laboratory testing.

Men were asked about their frequency of shaving by a medical interviewer during phase I. Responses were classified into categories ranging from twice daily to once daily, every other day, or less frequently. The 34 men with beards were not classified. These categories were dichotomized into once or twice per day and less frequently.

The men in the study were followed for the next 20 years with follow-up exams periodically to monitor for history of chest pain, heart attack and/or stroke.

Of the 521 men who shaved less frequently than daily, 45.1 percent died during the follow-up period, as compared with 31.3 percent of men who shaved at least daily.

When the data were further refined it was determined that

The age-adjusted hazard ratios demonstrate increased risks of all-cause, cardiovascular disease, and non-cardiovascular-disease mortality and all stroke events among men who shaved less frequently.
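For what the crude figures are worth, the percentages quoted above work out as follows (this is just my own back-of-the-envelope arithmetic on the quoted numbers; the paper’s age-adjusted hazard ratios are a separate, more refined calculation):

```python
# Crude (unadjusted) figures quoted above from the Caerphilly follow-up.
infrequent_shavers = {"n": 521, "pct_died": 45.1}
daily_shavers_pct_died = 31.3   # the n for this group isn't quoted above

deaths_infrequent = round(infrequent_shavers["n"] * infrequent_shavers["pct_died"] / 100)
crude_risk_ratio = infrequent_shavers["pct_died"] / daily_shavers_pct_died

print(f"deaths among infrequent shavers: ~{deaths_infrequent}")
print(f"crude risk ratio: {crude_risk_ratio:.2f}")  # roughly 1.44
```

A crude risk ratio of about 1.44 – a 44 percent greater risk of death for the infrequent shavers – sounds impressive, until you remember what kind of study produced it.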

So there you have it. Proof that shaving daily prevents heart disease. Or is it?

The researchers doing this study aren’t so stupid that they really think that the act of shaving itself has anything to do with a man’s risk for developing heart disease. In fact, they went to great lengths to show that shaving was merely a marker for other things going on that may well have something to do with risk for developing heart disease or increased all-cause mortality.

The one fifth (n = 521, 21.4%) of men who shaved less frequently than daily were shorter, were less likely to be married, had a lower frequency of orgasm, and were more likely to smoke, to have angina, and to work in manual occupations than other men.

And these are just the differences the researchers found. Had they looked harder, I’m sure they would have found more, just like I did when I played my ‘think of everything that can be thought about’ game with myself as a kid.

But if these researchers had really believed that the data showed that infrequent shaving itself might have been the driving force behind the development of heart disease, they could have designed a randomized clinical trial to test for causality. They could have recruited men without heart disease, randomized them into two groups, and instructed the men in one group to shave daily and the men in the other to shave every third day. Then after 20 years the researchers could tell whether or not shaving protects against heart disease.

But the idea that shaving itself has anything to do with heart disease is so ludicrous that no one would ever do such a study. We can all see that. It’s a ridiculous idea. It should be obvious that the shaving or lack thereof has nothing to do with heart disease or early death; the lack of shaving is merely a marker for the other conditions that are risk factors for heart disease, e.g., short stature, being unmarried, smoking, lower socioeconomic class, and so on. It’s all so easy to see.

But let’s just suppose that we take this same study and substitute the term ‘elevated cholesterol’ for ‘infrequent shaving.’ Now what do we see? Let’s change one of the quotes from above to reflect this change. What then?

Of the 521 men who had elevated cholesterol, 45.1 percent died during the follow-up period, as compared with 31.3 percent of men who had low or normal cholesterol.

We nod our heads sagely. Suddenly we have a study that seems to make sense. But – and this is important – it doesn’t make any more sense than the shaving study. Both are observational studies. We are programmed to think cholesterol is bad and causes heart disease, so this second study appears reasonable to us. It triggers our confirmation bias. We don’t believe for a second that shaving has anything to do with heart disease, so we can easily dismiss those findings. Yet we are more than ready to believe that elevated cholesterol caused those men to have heart attacks. The reality, though, is that the two studies are structurally identical – and neither proves anything.

If you’re interested in a longer, more in-depth article on observational studies, take a look at Gary Taubes’ long piece in the New York Times from a few years ago. I’ve tried to take a slightly different slant than he did so that my post and his article would, between them, cover all the bases.

Cartoon above from: Smith, G. D. et al. Int. J. Epidemiol. 2001 30:1-11