You experience the following on a Monday: The alarm clock does not wake you up and you are now late for work. The cat has peed on the couch. The coffeemaker is making strange noises and doesn't seem to be working. The kids are fighting with each other. It's raining. And on top of everything else, the car won't start. What do you conclude? One or two of these minor irritants would seem insignificant and unmemorable. Once the list grows, however, it begins to take the shape of a plot—a plot of unseen forces perhaps conspiring in a meaningful way against you.

Our brains are pattern-detection machines that connect the dots, making it possible to uncover meaningful relationships among the barrage of sensory input we face. Without such meaning-making, we would be unable to make the predictions on which survival and reproduction depend. The natural and interpersonal world around us would be too chaotic. In the above example, if I draw conspiratorial conclusions (i.e., seeing a pattern where none really exists), I am making what statisticians call a Type I error, also known as a false positive.

What if you experience the following upon entering your house alone at night: The front door is left open. Household objects are strewn everywhere (you left the house neat and clean just a few hours ago). Your computer is missing. There are faint but unrecognizable smells. You can hear someone talking. In all likelihood, we would not only draw conclusions about what has occurred but would also have a palpable, physiological response. Nature ensures that we are prone to seeing patterns rather than missing them. The Type II error, seeing no pattern where a pattern exists, turns out to be more dangerous. Far better—from a Darwinian perspective—to erroneously interpret danger where none is present than to miss important cues that put our survival on the line. There is cognitive efficiency built into this equation: quick reactions depend on a cost-benefit ratio that favors safety and survival.
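The cost-benefit logic above can be made concrete with a toy calculation. The sketch below (a purely illustrative Python example; all the probabilities and costs are invented for the sake of the arithmetic, not empirical values) shows that when a miss is vastly more expensive than a false alarm, a "jumpy" policy that reacts to nearly every pattern carries a far lower expected cost than a skeptical one:

```python
# Toy expected-cost model of 'react or don't react' to an ambiguous pattern.
# Illustrates why selection might favor Type I errors (false alarms)
# over Type II errors (misses). All numbers are assumptions for illustration.

def expected_cost(p_threat, p_react, cost_false_alarm, cost_miss):
    """Expected cost per encounter for a given reaction policy."""
    false_alarm = (1 - p_threat) * p_react * cost_false_alarm   # Type I error
    miss = p_threat * (1 - p_react) * cost_miss                 # Type II error
    return false_alarm + miss

# A rustle in the grass is rarely a predator, but missing one is fatal.
p_threat = 0.05
cost_false_alarm = 1      # wasted energy fleeing from the wind
cost_miss = 1000          # failing to flee a real predator

jumpy = expected_cost(p_threat, p_react=0.99,
                      cost_false_alarm=cost_false_alarm, cost_miss=cost_miss)
calm = expected_cost(p_threat, p_react=0.10,
                     cost_false_alarm=cost_false_alarm, cost_miss=cost_miss)

print(f"jumpy policy expected cost: {jumpy:.2f}")
print(f"calm policy expected cost:  {calm:.2f}")
```

Under these made-up numbers the jumpy policy costs roughly thirty times less per encounter than the calm one, which is the asymmetry the Darwinian argument turns on.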

So, when our pattern-recognition systems misfire, they tend to err on the side of caution, detecting patterns even where none exist. The experience of seeing patterns or connections in random or meaningless data was given the name apophenia by the German neurologist Klaus Conrad. He originally described the phenomenon in the context of psychosis, though it is now viewed as a ubiquitous feature of human nature. Science historian Michael Shermer has called the same phenomenon patternicity. Shermer has pointed out that our brains do not include a “baloney-detection network” that would allow us to distinguish between true and false patterns.

Examples of apophenia, or patternicity, are everywhere. Many people perceive faces in seemingly random places—in clouds, in patterns of dirt left on cars, or on the moon. We take such patterns a step further by ascribing meaning to them. People have seen images of Jesus and Mary inside a halved orange, or the face of Jesus on a piece of toast. Sometimes such objects are then worshipped or granted a sacred status. Some forms of apophenia have to do with sequences of behavior—such as the gambler’s fallacy or other misperceptions of probability (illustrated most simply by sequential coin tosses, where one might erroneously believe that after five tosses of heads, the probability of getting tails is somehow higher than 50%). Apophenia also surfaces in the more complex patterns of our interpersonal world. Conspiracy theories, such as the belief that the Twin Towers were destroyed on 9/11 in a controlled demolition perpetrated by the government, are confabulations based on misperceived patterns. Such fallacious reasoning also has potentially adverse social consequences. For example, despite a paucity of evidence showing a causal connection, many parents do not vaccinate their children because they believe such vaccinations cause autism.
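For readers who want to see the coin-toss claim checked rather than asserted, the following short simulation (a Python sketch, not part of the original argument) flips a fair coin a million times and looks at what happens immediately after every run of five heads:

```python
import random

# Simulate the gambler's fallacy scenario: after five heads in a row,
# is tails any more likely on the next flip? (It shouldn't be.)
random.seed(0)  # fixed seed so the run is reproducible

flips = [random.random() < 0.5 for _ in range(1_000_000)]  # True = heads

# Collect the outcome of every flip that follows five consecutive heads.
next_after_streak = []
for i in range(5, len(flips)):
    if all(flips[i - 5:i]):               # the previous five were all heads
        next_after_streak.append(flips[i])

heads_rate = sum(next_after_streak) / len(next_after_streak)
tails_rate = 1 - heads_rate
print(f"streaks of five heads observed: {len(next_after_streak)}")
print(f"P(tails | five heads in a row) is approximately {tails_rate:.3f}")
```

The observed tails rate after a five-head streak hovers right around 0.5, exactly as if no streak had occurred; the coin has no memory, only our pattern-hungry brains do.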

While it’s tempting to view apophenia as merely a defect in our cognitive processing capacities (i.e., something that must be overcome or defeated), it might be useful for us to view this tendency as being an ironic, even amusing aspect of our nature. We are fooled by optical illusions—apophenia of the visual cortex—but we don’t take such cognitive errors personally. Magic shows are often enjoyable precisely because we know that we are being tricked. If we embraced our vulnerability to cognitive errors, we would not be so easily caught off guard.

What does psychoanalysis (and psychotherapy more generally) add to the conversation about apophenia? One immediately thinks of free association—a clinical tool that specifically focuses on meaning generated from word associations. Rather than merely viewing apophenia as a kind of unfortunate side effect of our cognitive architecture, psychoanalysis pushes us to look for meaning where it seems least obvious. In this way, patternicity is the point, not the problem. Good novelists understand this, of course, and depend on the suspense and anticipation that seemingly unrelated associations create in readers. In an excellent essay on the subject, the writer Christopher Moore said, “A mild case of apophenia is a novelist’s secret weapon that brings readers and literary success. We spend our working days seeing spontaneous connections between unconnected events, people, and lives, and weaving meaning into those connections.”

In psychotherapy, we co-construct meaning with our clients in an active process of making sense of interpersonal noise and randomness. The journey of psychotherapy can take on a narrative quality and often depends—like good storytelling—on a recognizable, coherent plot. Of course, our desire for patternicity may underlie larger questions and rituals of meaning-making. Our relentless detection of patterns is part of our larger search for meaning. Our greatest challenge may be learning to bear incoherence.