I once met a politician who told me that he believes water fluoridation is the greatest scam ever perpetrated on the public. I have been confronted by “truthers” who insist the 9/11 attacks were an “inside job” engineered by the Bush administration. Others have regaled me for hours with theories about who really killed JFK and Princess Diana—not to mention the nefarious goings-on of the New World Order, the Trilateral Commission, the Council on Foreign Relations, the Knights Templar, the Freemasons, the Illuminati, the Bilderberg Group, the Rothschilds, the Rockefellers, and the Zionist Occupation Government (ZOG) that secretly runs the United States. In the course of researching a 2012 BBC documentary, I spent a day in Las Vegas with a cohort of British conspiracists during their journey around the southwestern United States in search of UFOs and aliens, and the government facilities where their existence supposedly was covered up. One woman told me about the orange balls of energy hovering around her car on Interstate 405 in Los Angeles. (Fortunately, they were chased away by Black Ops helicopters, she added.) A man challenged me to explain the source of a green laser beam that followed him around the British countryside. I confessed that I had no explanation.

While I’ve been writing about conspiracy theories in Skeptic and Scientific American for decades, it’s only recently that I’ve had occasion to summarize all of this research for a downloadable course called Conspiracies and Conspiracy Theories, produced in conjunction with Audible and The Teaching Company’s Great Courses. This exercise gave me an opportunity to catalog the common characteristics shared by all conspiracy theories—including the fact that almost all forms of conspiracism have a negative valence. Rarely do people believe that there’s a conspiracy afoot to make the world a better or safer place. Conspiracy theories invariably feature nefarious agents seeking to do bad things. This quality is embedded in the definition of a “conspiracy” that I provide to listeners: “two or more people plotting or acting in secret to gain an advantage or to harm others immorally or illegally.”

In recent years, psychologists and political scientists have identified several factors that influence conspiratorial thinking, such as political orientation, race and power (or the lack thereof). These are proximate causes of conspiracism. But lying beyond this, I propose, is a deeper cause rooted in evolutionary pressures that have shaped our brains, disposing us to the pessimism and negative assumptions that are the hallmarks of conspiracy theories.

My friend and colleague Jared Diamond, the UCLA geographer and Pulitzer Prize-winning author of Guns, Germs, and Steel, has identified what he calls “constructive paranoia,” or “the importance of being attentive to hazards that carry a low risk each time but are encountered frequently.” While out in the rain forest with local colleagues in Papua New Guinea one night, Diamond proposed that they pitch their tents under a big tree. “To my surprise,” wrote Diamond, “my New Guinea friends absolutely refused. They explained that the tree was dead and might fall on us.”

At first, Diamond thought them paranoid. Over the years, however, he formed a different opinion: “I came to realize that every night that I camped in a New Guinea forest, I heard a tree falling. And when I did a frequency/risk calculation, I understood their point of view.” If the odds of a tree falling on you any given night are only one in 1,000, but you sleep under trees every night, “you’ll be dead within a few years.”
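Diamond’s frequency/risk calculation is easy to make concrete. Here is a minimal sketch in Python (the one-in-1,000 nightly figure is his; the code itself is my illustration):

```python
# Cumulative risk of a low-probability hazard faced repeatedly:
# each night is safe with probability 0.999, but those odds compound.
P_PER_NIGHT = 1 / 1000  # Diamond's rough odds of a tree falling on you

def cumulative_risk(nights: int, p: float = P_PER_NIGHT) -> float:
    """Probability of at least one fatal event over `nights` trials."""
    return 1 - (1 - p) ** nights

for years in (1, 3, 5, 10):
    print(f"{years:>2} years: {cumulative_risk(years * 365):.1%}")
```

Sleeping under trees every night yields roughly a one-in-three chance of disaster in the first year, and about two-in-three within three years—which is Diamond’s point: keep it up and “you’ll be dead within a few years.”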

I would like to adapt Diamond’s idea to what I call constructive conspiracism: Sometimes “they” really are out to get you, so it pays to be careful.

We all are prone to a psychological force called “negativity bias”—a concept captured by the title of a 2001 paper co-authored by psychologist Roy Baumeister, “Bad Is Stronger than Good.” Hundreds of studies reveal the pervasiveness of this bias in human life. In investing, for example, behavioral economists have identified a phenomenon known as loss aversion, whereby investors experience a financial loss with twice the countervailing emotional force of a gain of equivalent magnitude. (That is, losses hurt twice as much as gains feel good.) Tennis superstar Jimmy Connors once said, “I hate losing more than I love winning”—a sentiment echoed by banned cyclist Lance Armstrong, who once explained that “I like to win, but more than anything, I can’t stand this idea of losing.”

Those of us who experience the same bias in the course of normal life might be inclined to agree with the following generalizations:

Criticism and negative feedback hurt more than praise and positive feedback feel good.

Losing friends has a greater impact than gaining friends.

Bad impressions and negative stereotypes form faster, and are more resistant to change, than positive impressions and stereotypes.

Bad everyday events have a greater impact than good; for example, having a good day does not necessarily lead to a good mood the next day, whereas a bad day often does carry over its consequences into the next day.

Traumatic events leave traces in mood and memory longer than good events; e.g., a single childhood traumatic event such as sexual molestation can erase years of positive experiences.

Morally bad actions far outweigh morally good actions when it comes to our evaluation of others—a lifetime of positive public service can be erased with a single moral failing.

Psychologists Paul Rozin and Edward Royzman were the ones who originally coined the term negativity bias to describe this asymmetry. “Negative events are more salient, potent, dominant in combinations, and generally efficacious than positive events,” the authors explained, noting additional examples:

Negative events lead us to seek out root causes more than do positive events. Wars, for example, generate endless analyses, whereas peace literature is paltry by comparison. Everyone asks, “Why is there war?” Almost no one asks, “Why is there peace?”

Negative stimuli command more attention than positive stimuli. In rats, for example, negative tastes elicit stronger responses than do positive tastes. And in taste-aversion experiments, a one-time exposure to noxious food or drink can cause lasting avoidance, but there is no corresponding reaction to palatable food or drink.

We have more words to describe the qualities of physical pain than we have to describe physical pleasure.

There are more cognitive categories for, and descriptive terms of, negative emotions than positive emotions.

Evil contaminates good more than good purifies evil. As the old Russian proverb says, “A spoonful of tar can spoil a barrel of honey, but a spoonful of honey does nothing for a barrel of tar.”

An evolutionary component to the negativity bias may be observed in the emotion of disgust, which evolved to drive organisms away from noxious stimuli—because noxiousness is an informational cue that such stimuli could harm you through poisoning or disease.

In societies as a whole and in individual lives alike, progress usually is earned in small steps (a farmer waking up early for months to tend his crops), whereas regress can come about in an instant (a single weather event or flock of locusts that destroys everything). In any complex mechanical, biological or sociological system, all parts must work reliably to keep the thing going. If one part or sub-system fails, it can be catastrophic to all others. And so such systems must be run in a way that allows maximum attention to be devoted to problems and threats, while everything else is pushed to the background.

In his 2018 book Enlightenment Now: The Case for Reason, Science, Humanism, and Progress, Harvard psychologist Steven Pinker argued that, in our ancestral environment, the cost of overreacting to a threat was less than the cost of underreacting, so we have become programmed to err on the side of overreaction. That is, we expect the worst.

Pinker traces the blame for our evolved constructive paranoia all the way back to the Second Law of Thermodynamics, which states that the total entropy (or disorder) of a closed system (one not losing or gaining mass or energy, such as through human intervention) cannot decrease over time. It can only remain constant or increase. Systems tend to move from order to disorder, from organization to disorganization, from structured to unstructured. Absent outside intervention, metal rusts; wood rots; weeds overwhelm gardens; bedrooms get cluttered; and social, political and economic systems fall apart. Which is to say, the laws of physics dictate that it is far easier for things to go bad than good (except in regard to those few tasks that leverage entropy to human advantage, such as the passive evacuation of heat from a rolled metal ingot or forged object). As Pinker puts it, “the Second Law defines the ultimate purpose of life, mind, and human striving: to deploy energy and information to fight back the tide of entropy and carve out refuges of beneficial order.”

The ne plus ultra explanation for entropy can be found on that ubiquitous bumper sticker, “Shit Happens.” So-called “misfortunes,” like accidents, disease and famine, typically have no purposeful agency behind them—no gods, demons or witches, intending us evil—just entropy taking its course. But people do tend to seek out hidden sources of agency as a means to explain the presence of misfortune in our lives (for reasons described in the paragraphs below), which is why we attribute many of life’s outcomes to far-fetched conspiracies. While a totalizing immersion in conspiracism can destroy one’s perspective and rational faculties, our susceptibility to conspiracy theories isn’t some stray programming bug that infects our cognition. It’s a systematic habit rooted directly in mental reflexes that served us well in our ancestral environment.

In my 2011 book The Believing Brain: From Ghosts and Gods to Politics and Conspiracies—How We Construct Beliefs and Reinforce Them as Truths, I discussed a quality we all share called patternicity, which is the tendency to find meaningful patterns in data that might well be completely random.

To explain why we evolved this feature in our thinking, let’s start with a thought experiment: Imagine you lived three million years ago on the plains of Africa as a small-brained bipedal primate that was highly vulnerable to the region’s many terrifying predators. You hear a rustle in the grass. Is it just the wind or is it a dangerous animal? If you assume that the rustle in the grass is a dangerous predator, but it turns out that it’s just the wind, you have generated a “false positive”—believing something is real when it isn’t. There’s no harm, though: You simply move away and become more alert and cautious. But if you assume that the rustle in the grass is just the wind, and it turns out to be a dangerous predator, you have generated a “false negative,” while the predator has gained a meal. Over the course of many such meals, those primates susceptible to false negatives will enter the fossil record before they can reproduce.

The problem is that assessing the difference between a false negative and false positive is hard—especially in a split-second, life-and-death context. So the default position is to assume that most patterns are real. We are the descendants of those who were most successful at reading augurs of danger into the patterns around us, even if that danger wasn’t always real.
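The logic of that default can be expressed as a simple expected-cost comparison (the numbers below are illustrative, not from the text): when a false negative is vastly more costly than a false positive, assuming the pattern is real is the cheaper bet even when a predator is quite unlikely.

```python
# Error-management sketch: compare the expected cost of the two
# possible default policies when hearing a rustle in the grass.
COST_FALSE_ALARM = 1    # fleeing the wind: a little wasted energy
COST_MISS = 1000        # ignoring a predator: possibly fatal

def expected_cost(assume_predator: bool, p_predator: float) -> float:
    """Expected cost of a default policy, given the chance it's a predator."""
    if assume_predator:
        # You pay the false-alarm cost whenever it was really just the wind.
        return (1 - p_predator) * COST_FALSE_ALARM
    # You pay the miss cost whenever it really was a predator.
    return p_predator * COST_MISS

# Even at a mere 1 percent chance of a predator, fleeing is ~10x cheaper:
print(expected_cost(True, 0.01))   # 0.99
print(expected_cost(False, 0.01))  # 10.0
```

With these (made-up) costs, assuming the predator is real pays off whenever the probability of a predator exceeds 1/1001—that is, at any nonzero suspicion.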

This is what I mean by constructive conspiracism: If it turns out there is no danger, no harm is done and little energy is expended in indulging these momentary spasms of paranoia. If it turns out that there is danger, on the other hand, being constructively paranoid pays off.

But there’s more to it than that—because when we dwell on the dangers around us, we don’t just tend to think of falling branches from dead trees and stray beasts. We tend to focus on the constellation of threats as signifying some systematic program aimed at doing us harm. This is a manifestation of what I call “agenticity”—our tendency to infuse patterns (especially patterns of threat or harm) with meaning, intention and agency. And so we imagine that disconnected misfortunes are commonly directed by intentional agents, sometimes operating invisibly. Souls, spirits, ghosts, gods, demons, angels, aliens, governments, religious officials and big corporations all have played this role in conspiracist lore (and, in the case of the latter three entries, real life, too, it must be conceded). Taken together, patternicity and agenticity form the cognitive basis of conspiratorial cognition.

While some readers may accuse me of connecting a lot of dots here, I should say that there is some empirical evidence for the broad claims on offer. In a 2018 paper, for example, psychologists Josh Hart and Molly Graether found that people who were more likely to believe in conspiracy theories “were also more likely to say that nonhuman objects—triangle shapes moving around on a computer screen—were acting intentionally, as though they were capable of having thoughts and goals they were trying to accomplish. In other words, they inferred meaning and motive where others did not.” That’s patternicity and agenticity at work.

Examples of how these mental reflexes project themselves onto grand narratives abound. And in all cases, they involve the historical equivalent of grass rustling—a princess who died in a Paris car crash; a man opening up an umbrella on a sunny day in Dallas; a BBC newscaster announcing the fall of a World Trade Center building before it actually fell. Conspiracy theorists have dedicated their lives to searching the long grass for the still-hidden creatures that supposedly engineered these tragedies. They’ll never find them because they don’t exist. But given all the very real predators that have tried to devour us over the eons, you can’t blame them for looking.

Michael Shermer is the Publisher of Skeptic magazine and a Presidential Fellow at Chapman University. His new course from Audible Original and The Teaching Company’s Great Courses is Conspiracies & Conspiracy Theories: What We Should and Shouldn’t Believe – and Why.

Featured image: “Two Lions On The Prowl in The Jungle,” by Henri Rousseau.


