Staring down a packed room at the Hyatt Regency Hotel in downtown San Francisco this March, Randy Gallistel gripped a wooden podium, cleared his throat, and presented the neuroscientists sprawled before him with a conundrum. “If the brain computed the way people think it computes,” he said, “it would boil in a minute.” Processing all that information would overheat our CPUs.

Humans have been trying to understand the mind for millennia, and metaphors from technology, like cortical CPUs, are one of the ways we do it. Maybe it’s comforting to frame a mystery in the familiar. In ancient Greece, the brain was a hydraulic system, pumping the humors; in the 18th century, philosophers drew inspiration from the mechanical clock. Early 20th-century neuroscientists described neurons as electric wires or phone lines, passing signals like Morse code. And now, of course, the favored metaphor is the computer, with its hardware and software standing in for the biological brain and the processes of the mind.

In this technology-ridden world, it’s easy to assume that the seat of human intelligence is similar to our increasingly smart devices. But the reliance on the computer as a metaphor for the brain might be getting in the way of advancing brain research.

As Gallistel continued his presentation to the Cognitive Neuroscience Society, he described the problem with the computer metaphor. If memory works the way most neuroscientists think it does, by altering the strength of connections between neurons, then storing all that information would be far too energy-intensive, especially if memories are stored as Shannon information: high-fidelity signals encoded in binary. Our engines would overheat.
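For readers who want the definition behind that claim, here is what “Shannon information” measures. These are textbook formulas, not figures from Gallistel’s talk, and the energy bound is Landauer’s general thermodynamic floor, not an estimate of what synapses actually spend:

```latex
% Shannon information: a message drawn from N equally likely
% alternatives carries
H = \log_2 N \ \text{bits},
% and for a source with outcome probabilities $p_i$,
H(X) = -\sum_i p_i \log_2 p_i .
% Landauer's principle puts a thermodynamic floor under each bit:
% erasing one bit dissipates at least
E \ge k_B T \ln 2 \approx 3 \times 10^{-21}\ \text{J at } T \approx 310\ \text{K}.
% Real neural signaling costs many orders of magnitude more than this
% floor, which is the kind of gap Gallistel's energy argument turns on.
```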

Instead of throwing out the metaphor, though, scientists like Gallistel have massaged their theories, trying to align the brain’s biological reality with the demands of computation. Rather than question the assumption that the brain’s information is Shannon-like, Gallistel, a wiry emeritus professor at Rutgers, devised an alternative hypothesis: Shannon information could be stored as molecules inside the neurons themselves. Chemical bits, he argued, are cheaper than synapses. Problem solved.
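The storage arithmetic behind “chemical bits” is simple to state, even though the specific molecule is an open question that the talk did not pin down. As a generic illustration, not Gallistel’s worked example:

```latex
% A molecular element with k reliably distinguishable states stores
\log_2 k \ \text{bits},
% so a polymer of n such elements stores $n \log_2 k$ bits. For
% instance, a nucleic-acid-like chain with four bases holds 2 bits
% per residue, which is why intracellular molecules look attractive
% as a compact alternative to synapse-based storage.
```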

This patchwork method is standard procedure in science: researchers fill in the holes in their theories as problems and evidence present themselves. But adherence to the computer metaphor might be getting out of hand, leading to all sorts of shenanigans, especially in the tech world.

“I think the brain-as-a-computer metaphor has led us astray a little bit,” says Floris de Lange, a cognitive neuroscientist at the Donders Institute in the Netherlands. “It makes people think that you can completely separate software from hardware,” de Lange says. That assumption leads some scientists—mind-body dualists—to argue that we won’t learn much by studying the physical brain.

Recently, neuroscientists set out to demonstrate that current techniques for studying the brain wouldn’t help much with understanding how the mind works. They took a crack at analyzing some hardware, a microprocessor running Donkey Kong, in hopes of elucidating the software, using only techniques like connectomics and electrophysiology. They couldn’t find much other than the circuit’s off switch. Analyzing hardware won’t give you insights into the software, QED.
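The study’s core move is easy to caricature in code. Here is a toy sketch of the lesioning logic, with an invented five-gate circuit standing in for the chip’s thousands of transistors; none of these names or numbers come from the study itself:

```python
# Toy "lesion study": knock out each gate in a tiny circuit and check
# whether the output still behaves, mimicking how the study disabled
# individual transistors to see whether Donkey Kong still booted.
# (Invented example, not the authors' code or their actual chip.)
from itertools import product

def full_adder(a, b, carry_in, disabled=None):
    """One-bit full adder built from named gates; the gate named in
    `disabled` outputs 0, simulating a lesion."""
    def gate(name, value):
        return 0 if name == disabled else value

    s1 = gate("xor1", a ^ b)
    c1 = gate("and1", a & b)
    s2 = gate("xor2", s1 ^ carry_in)
    c2 = gate("and2", s1 & carry_in)
    carry_out = gate("or1", c1 | c2)
    return s2, carry_out

GATES = ["xor1", "and1", "xor2", "and2", "or1"]

# "Lesion" each gate and count how many of the 8 input patterns break.
for target in GATES:
    broken = sum(
        full_adder(a, b, c) != full_adder(a, b, c, disabled=target)
        for a, b, c in product([0, 1], repeat=3)
    )
    print(f"lesioning {target}: {broken}/8 input patterns change")
```

The loop reveals which gates the output depends on, but nothing about what any gate means, which is roughly the bind the study’s authors found themselves in.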

But the Donkey Kong study was framed the wrong way. It assumes that what is true for a computer chip is true for a brain. The mind and the brain, though, are far more profoundly entangled than a computer chip and its software. Just look at the physical traces of our memories, which over time are encoded in the brain in spidery networks of neurons: software building new hardware, in a way. While working at MIT, Tomás Ryan used a method to visualize that entanglement, tagging neurons that are active during memory formation with fluorescent proteins. Using this tool, Ryan watched memory take physical hold in the brain over time.