The Framingham Heart Study (FHS) is, without argument and by design, an observational study: researchers recruited a group of people, evaluated them on a handful of parameters, and followed them over time to see what transpired. Ergo, an observational study.

There are actually a host of different types of observational studies, including prospective cohorts, retrospective cohorts, convenience cohorts, retrospective case-control studies, and a handful of others. The names of these types of studies imply rigorous science, but it must be remembered that no matter how scientific the name might sound, none of these forms of study can establish causality (7). They can only serve to generate hypotheses. And the same is true for Framingham. Let’s take a closer look at it.

Framingham, a Study in Bias

When the federal government gave a grant of US$500,000 (about $5 million in today’s dollars) as seed money to start the FHS, it did so because in the mid-20th century between one-third and one-half of Americans were dying from some form of cardiovascular disease, a term encompassing coronary artery disease, stroke, high blood pressure, and congestive heart failure. And strange as it seems today, no one at the time knew how to treat cardiovascular disease or what caused it. The early founders of the FHS decided to recruit citizens into the study from Framingham, Massachusetts, a former farming community that had become a factory town for products including General Motors cars. The citizens were mainly white, working-class people thought to be representative of the majority of the United States (3).

The idea was to enter subjects into the study, give them thorough examinations, and then follow them over time, with follow-up exams every two years. Based on the original exam data and the biennial follow-up data, researchers hoped that as the subjects aged, were stricken with disease, and began to die, they could correlate the diseases that developed with the earlier exam and laboratory findings and begin to get a sense of the cause. If strong patterns emerged from the data, all the better, as that would strengthen the notion of causality (2).

FHS staff recruited around 5,000 subjects who, together with about 300 volunteers, made up the first cohort, and the initial exams began.

At first blush, this seems like a reasonable way to start. But was it?

In order for a study like this to generate accurate data, the subjects need to represent the broader population as closely as possible. In this case, the subjects were mainly working-class people, some recruited and some who volunteered on their own, which creates a handful of problems.

First, Framingham was at the time home to wealthy people and poor people alike, and it is well established that the wealthy tend to be healthier and live longer, while the poor tend to be sicker and die earlier. So the working-class people who made up the majority of the subjects didn't really represent Framingham, much less the rest of Massachusetts or the rest of the country.

Second—believe it or not—people who are recruited into studies are different from those who refuse to participate. As physician and researcher Lars Werkö showed in a similar study in Sweden, "The mortality from cardiovascular disease and other usually not well defined causes for death is several times higher in a Swedish city population among those not answering an invitation for health examination than in those coming to the investigation" (8).

This is a common finding in these kinds of studies. It makes sense that people who are more interested in their own health would respond to an offer of a free comprehensive medical examination.

Third, volunteers are even more interested in their own health than those who were actively recruited, and thus skew the data even further.

Finally, not only is the study population non-representative, but it is also small. Tiny, in fact. Five thousand subjects might seem like a lot, but compared to the more than 160,000 subjects recruited into the Women's Health Initiative, a 15-year study launched in 1991 by the National Institutes of Health, it really isn't. And remember that the FHS investigators continuously subdivided these 5,000 subjects into smaller and smaller subgroups for various phases of the study. The smaller the group, the harder it is to obtain decent, statistically significant data.
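The effect of shrinking subgroups can be made concrete with a back-of-the-envelope calculation. The sketch below uses the standard normal approximation for the 95% margin of error on an observed proportion; the 10% event rate and the subgroup sizes are hypothetical illustrations, not figures from the FHS itself.

```python
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """Approximate 95% margin of error for an observed proportion p
    in a simple random sample of size n (normal approximation)."""
    return z * math.sqrt(p * (1 - p) / n)

# Hypothetical 10% disease rate, measured in subgroups of shrinking size.
p = 0.10
for n in [5000, 1000, 200, 50]:
    print(f"n = {n:5d}: estimate {p:.0%} +/- {margin_of_error(p, n):.1%}")
```

With all 5,000 subjects, a 10% observed rate carries a margin of error under one percentage point; in a subgroup of 50, the same observation is uncertain by roughly eight percentage points either way—wide enough to swallow most of the effect sizes such a study hopes to detect.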