A typical experiment in functional magnetic resonance imaging goes like this: A subject is slid into a claustrophobia-inducing tube, the core of a machine the size of a delivery truck. The person is told to lie perfectly still and perform some task — look at a screen, say, or make a decision. Noisy gradient coils whir and clank. The contraption analyzes the magnetic properties of blood to determine the amount of oxygen present, operating on the assumption that more-active brain cells require more-oxygenated blood. It can't tell what you're thinking, but it can tell where you're thinking it.

Functional MRI has been used to study all sorts of sexy psychological properties. You've probably seen the headlines: "Scientists Discover Love in the Brain!" and "This Is Your Brain on God!" Such claims are often accompanied by a pretty silhouette of a skull, highlighted with splotches of primary color. It's like staring at a portrait of the soul. It's also false. In reality, huge swaths of the cortex are involved in every aspect of cognition. The mind is a knot of interconnections, so interpreting the scan depends on leaving lots of stuff out, sifting through noise for the signal. We make sense of the data by deleting what we don't understand.

What's disappointing here isn't just that these early fMRI studies are overhyped or miss important facts. It's that this mistake is all too familiar. Time and time again, an experimental gadget gets introduced — it doesn't matter if it's a supercollider or a gene chip or an fMRI machine — and we're told it will allow us to glimpse the underlying logic of everything. But the tool always disappoints, doesn't it? We soon realize that those pretty pictures are incomplete and that we can't reduce our complex subject to a few colorful spots. So here's a pitch: Scientists should learn to expect this cycle — to anticipate that the universe is always more networked and complicated than reductionist approaches can reveal.

Look at genetics: When the Human Genome Project was launched in the early 1990s, it was sold as a means of finally making sense of our DNA by documenting the slight differences that encode our individuality. But that didn't happen. Instead, the project has mostly demonstrated that we are more than a text, and that our base pairs rarely explain anything in isolation. It has forced researchers to focus on the much broader study of how our genes interact with the environment.

This same story plays out over and over — only the nouns change. Once upon a time, physicists thought they had the universe mostly solved, thanks to their fancy telescopes and elegant Newtonian equations. But then came a century of complications, from the theory of relativity to the uncertainty principle; string theorists, in their attempts to reconcile ever-widening theoretical gaps, started talking about 11 dimensions. Dark matter remains a total mystery. We used to assume that it was enough to understand atoms — the bits that compose the cosmos — but it's now clear that these particles can't be deciphered in a vacuum.

Not surprisingly, this is exactly what neuroscientists are coming to grips with. In the mid-'90s, Marcus Raichle started wondering about all the mental activity exhibited by subjects between tasks, when they appeared to be doing nothing at all. Although Raichle's colleagues discouraged him from trying to make sense of all this noisy activity — "They told me I was wasting my time," he says — his team's work led to the discovery of what he calls the default network, which has since been linked to a wide range of phenomena, from daydreaming to autism. However, it can't be accurately described with the distinct spots of a typical fMRI image. There's too much to see: It's a network of colorful complexity. Thanks to the work of Raichle and others, neuroscience now has a mandate to forgo the measurement of local spikes in blood flow in favor of teasing apart the vast electrical loom of the cortex. God and love are nowhere to be found — and most of the time we have no idea what we're looking at. But that confusion is a good sign. The brain isn't simple; our pictures of the brain shouldn't be, either.

Karl Popper, the great philosopher of science, once divided the world into two categories: clocks and clouds. Clocks are neat, orderly systems that can be solved through reduction; clouds are an epistemic mess, "highly irregular, disorderly, and more or less unpredictable." The mistake of modern science is to pretend that everything is a clock, which is why we get seduced again and again by the false promises of brain scanners and gene sequencers. We want to believe we will understand nature if we find the exact right tool to cut its joints. But that approach is doomed to failure. We live in a universe not of clocks but of clouds.

Contributing editor Jonah Lehrer (jonahlehrer@me.com) wrote about the neuroscience of failure in issue 18.01.