Consider the following two statements:

Scientists have shown in a study that chocolate helps you lose weight.

Scientists lied to us in a study about chocolate helping with weight loss.

Both statements are almost true. But not quite.

In a recently trending article, Dr. Johannes Bohannon discusses how the study in question, which he headed, deliberately made a misleading claim that millions of people believed. The thing is, they didn’t lie. Not really. They made a factual statement: “that people on a low-carb diet lost weight 10 percent faster if they ate a chocolate bar every day”. But in order to make that statement, they were very selective about the data they chose and presented.

So, the lie wasn’t so much in what they said as in what they left out. As Bohannon discusses in his article, the real culprits behind the false claim that “eating chocolate helps with weight loss” were the journalists who, eager to generate reader traffic with sensationalist headlines, completely failed to do any fact-checking, and even concocted their own non sequitur conclusions before publishing.

I encourage everyone to read the article I linked above, in order to learn a bit about “p-value hacking”: the practice of measuring many outcomes, or slicing the data many ways, until some comparison crosses the threshold of statistical significance, and then reporting only that result. But studies misreported in the service of sensationalist journalism are only one way in which dubious “science” can be used against the public. Shady and manipulative marketing strategists use it as well.
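To see why p-value hacking works, it helps to watch it happen. Below is a minimal sketch in Python, using purely simulated data: two groups are drawn from the *same* distribution, so there is no real effect to find, yet if you measure enough unrelated outcomes, some will look “significant” by chance. The figure of 18 outcomes is taken from Bohannon’s description of his study; the z-approximation to the p-value is my own simplification, not the statistical test the study used.

```python
import math
import random

random.seed(0)  # fixed seed so the sketch is reproducible

def two_sample_p(a, b):
    """Two-sided p-value for a difference in means, via a large-sample
    z-approximation and the normal CDF (math.erf). A rough stand-in for
    a proper t-test, good enough to illustrate the point."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    z = (ma - mb) / math.sqrt(va / na + vb / nb)
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

# Simulate a trial with NO real effect: both groups come from the same
# distribution, but we measure many unrelated outcomes and keep only
# those that happen to cross p < 0.05.
n_outcomes = 18  # Bohannon's study reportedly tracked 18 measurements
hits = []
for outcome in range(n_outcomes):
    group_a = [random.gauss(0, 1) for _ in range(15)]
    group_b = [random.gauss(0, 1) for _ in range(15)]
    if two_sample_p(group_a, group_b) < 0.05:
        hits.append(outcome)

print(f"{len(hits)} of {n_outcomes} outcomes look 'significant' by chance alone")
```

Each individual test has only a 5 percent chance of a false positive, but across 18 tests the odds that *at least one* comes up “significant” are closer to 60 percent. Report that one, stay silent about the rest, and you have a headline.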

And so does the government.

For an example that is steadily gaining more exposure in the mainstream, consider diet and nutrition as an area in which bad science, promoted by government, has an immediate impact on the wellbeing of individuals affected by “scientific” policy. Specifically, I would point to the emergence during the 1970s of a “scientific consensus” regarding the dangers of dietary fat.

Although the idea that eating fat is what has made Americans fat and unhealthy persists at a popular level, the “consensus” has begun to disintegrate among scientists. Among many examples, a recent report by the Academy of Nutrition and Dietetics appears to endorse a complete reversal of direction, effectively (if not openly) admitting that U.S. dietary guidelines have been wrong about dietary fat for decades, and emphasizing instead the dangers of eating too much carbohydrate.

Particular scrutiny of the studies behind the national dietary fat recommendations issued in the US and UK in 1977 and 1985 supports this reversal: there was never any strong evidence linking dietary fat to cholesterol and heart disease.

If this is the case, then the question follows: how could so many scientists have been so wrong for so long?

Those in dissent against the “consensus” have had their theories since long before the scientific community began to change its stance in favor of the dissenters. In a 2002 essay published by the New York Times, Gary Taubes, a long-time champion of diet and nutrition reform, points to political expedience as the explanation.

…the N.I.H. spent several hundred million dollars trying to demonstrate a connection between eating fat and getting heart disease and, despite what we might think, it failed. Five major studies revealed no such link. A sixth, however, costing well over $100 million alone, concluded that reducing cholesterol by drug therapy could prevent heart disease. The N.I.H. administrators then made a leap of faith. Basil Rifkind, who oversaw the relevant trials for the N.I.H., described their logic this way: they had failed to demonstrate at great expense that eating less fat had any health benefits. But if a cholesterol-lowering drug could prevent heart attacks, then a low-fat, cholesterol-lowering diet should do the same. “It’s an imperfect world,” Rifkind told me. “The data that would be definitive is ungettable, so you do your best with what is available.”

In other words, the government, faced with the “national crisis” of rising mortality from heart disease, did something that they could pass off as “science”, and policymakers took a stab in the dark on that flimsy basis. Their conclusions led predictably to national dietary guidelines that favored a lot of starch, and in particular, grain products, in spite of the knowledge that this could lead to more problems, such as a rise in metabolic syndrome and Type 2 diabetes. The benefits of those guidelines to the food industry are noted by Taubes:

Surely, everyone involved in drafting the various dietary guidelines wanted Americans simply to eat less junk food, however you define it, and eat more the way they do in Berkeley, Calif. But we didn’t go along. Instead we ate more starches and refined carbohydrates, because calorie for calorie, these are the cheapest nutrients for the food industry to produce, and they can be sold at the highest profit.

In that light, it is easy to imagine a conspiracy between the grain industry and the politicians behind the N.I.H. studies to demonize fat and promote a high-carb diet. Evidence in favor of such a conspiracy may well exist. But even if there was no malevolent intent working behind the scenes, the government definitely screwed up, and certain politically favored business interests definitely profited. And from the perspective of everyone else, the resulting policies were not only ineffective; they were catastrophically counterproductive.

And ultimately, the government used “science” to promote those ill-founded policies, in the same way that Dr. Bohannon used “science” to promote the false idea that chocolate leads to weight loss.

The political class has used the credulity and scientific illiteracy of the people to its advantage since long before the 1970s in America, of course. From Soviet agricultural policies to the eugenics of Nazi Germany, “science” has been the constant tool of modern totalitarianism. And while not every example is as sensational as those presented by the Soviets and the Nazis, the trend persists: whenever a government wants something done, it is easy enough for them to get popular support for the idea by presenting a scientific study that appears to prove that the public would benefit from whatever they intend to promote, or to highlight the dangers of whatever they intend to ban.

Want to strengthen public support for the war on drugs? Promote a study about a purported link between marijuana and schizophrenia, and conveniently fail to distinguish between correlation and causation. Get enough experts talking about anthropogenic climate change, and the people will be in a receptive mood for regulations administered by the world’s worst polluter, the U.S. government. Are children failing to adjust well to the prison-like environments of public schools? Get a school psychiatrist to sign off on an addictive drug so they’ll stop fidgeting and using their brains in non-approved ways.

But science isn’t the problem. The problem is that “scientific” claims, and especially those originating from a “consensus”, are argued from a presumed position of authority. And people, in general, tend not to know how to evaluate such claims.

There’s a good reason for that failure: we are psychologically predisposed to accept authoritative claims simply because it’s easy to do so. A rank or title, a badge, a degree, or a lab coat tends to endow the person making the claim with a sense of trustworthiness, which relieves the person hearing the claim of the psychological burden of determining whether they are being told the truth, or being misled. And accepting a “consensus” allows one to align himself with the “winning side” of any argument without significant mental exertion. (For a poignant demonstration of this phenomenon, google “Milgram experiments”.)

Furthermore, we all tend to rely on experts in areas outside of our specialties, because we cannot all be experts in everything. On one hand, we all benefit from intellectual and academic specialization in the same way that we benefit from other forms of specialization. But on the other, anyone can claim to be an “expert”.

So, the problem becomes clear: given that we can’t all know everything, how do we know whom to trust? When do we yield to the “authority” of the experts, and when do we resist it?

The solution lies in the fact that we are all capable of critical thinking. Not knowing much about the subject at hand does not mean one is incapable of determining the trustworthiness of a particular claim.

Anyone can be made aware of logical fallacies and the way in which they are frequently employed in agenda-driven news reporting (check out the School Sucks Podcast for an extensive and entertaining treatment of this subject). Anyone can evaluate the methodology behind a scientific study, even if they aren’t well-versed in the subject being studied.

Of course, this presents a challenge in its own right, as most of us are victims of a school system that not only fails to encourage critical thinking, but actually tends to suppress it. And as with the possibility of collusion between the grain industry and the N.I.H. regarding the Food Pyramid, one can find evidence of a conscious conspiracy in the early days of the public education system to reduce the American public to unthinking automatons, as well as in modern policies.

But while one can argue about intentions, the results are clear: mainstream pedagogy prepares children to automatically accept claims made by authority figures. It conditions people to get aboard the “consensus” bandwagon, and to ignore all points made by dissenters as inherently untrustworthy. And that works entirely to the benefit of the political class, whose privilege and power are always threatened by any challenge to the status quo.

If there is one lesson that the schools fail to teach that is desperately needed, it is how to know whether an authoritative claim is trustworthy or not. Not every lie is obvious, and not every misleading statement is, strictly speaking, even a lie. Science, properly done, presents facts. But if we accept the government’s interpretation and representation of those facts without question, we put ourselves in jeopardy. We ourselves must take the steps necessary to protect ourselves and our loved ones from “scientific” claims that can be leveraged against us, from public policymakers, manipulative journalism, and the broad overlap between the two.