And before any of this can even be attempted, someone’s got to pay for it. Since no pharmaceutical company stands to benefit, prospective sources are limited, particularly when we insist the answers are already known. Without such trials, though, we’re only guessing whether we know the truth.

Back in the 1960s, when researchers first took seriously the idea that dietary fat caused heart disease, they acknowledged that such trials were necessary and studied the feasibility for years. Eventually the leadership at the National Institutes of Health concluded that the trials would be too expensive — perhaps a billion dollars — and might get the wrong answer anyway. They might botch the study and never know it. They certainly couldn’t afford to do two such studies, even though replication is a core principle of the scientific method. Since then, advice to restrict fat or avoid saturated fat has been based on suppositions about what would have happened had such trials been done, not on the studies themselves.

Nutritionists have adjusted to this reality by accepting a lower standard of evidence for what they'll believe to be true. They do experiments with laboratory animals, for instance, following them for the better part of the animal's lifetime — a year or two in rodents, say — and assume or at least hope that the results apply to humans. And maybe they do, but we can't know for sure without doing the human experiments.

They do experiments on humans — the species of interest — for days or weeks or even a year or two and then assume that the results apply to decades. And maybe they do, but we can’t know for sure. That’s a hypothesis, and it must be tested.

And they do what are called observational studies, observing populations for decades, documenting what people eat and what illnesses beset them, and then assume that the associations they observe between diet and disease are indeed causal — that if people who eat copious vegetables, for instance, live longer than those who don't, it's the vegetables that cause the longer life. And maybe they do, but there's no way to know without experimental trials to test that hypothesis.

The associations that emerge from these studies used to be known as "hypothesis-generating data," because an association tells us only that two things changed together in time, not that one caused the other. So associations generate hypotheses of causality that then have to be tested. But this hypothesis-generating caveat has been dropped over the years as researchers studying nutrition have decided that this is the best they can do.

One lesson of science, though, is that if the best you can do isn’t good enough to establish reliable knowledge, first acknowledge it — relentless honesty about what can and cannot be extrapolated from data is another core principle of science — and then do more, or do something else. As it is, we have a field of sort-of-science in which hypotheses are treated as facts because they’re too hard or expensive to test, and there are so many hypotheses that what journalists like to call “leading authorities” disagree with one another daily.