Editor's note: This excerpt of a chapter from Louder Than Words: The New Science of How the Mind Makes Meaning by Benjamin K. Bergen (Basic Books, 2012) explains how our brain’s capacity both to perceive a pig and to imagine what the animal is like, even one that flies, points to an essential cognitive skill that sets humans apart from all other species.

Excerpted from Louder Than Words: The New Science of How the Mind Makes Meaning by Benjamin K. Bergen. Available from Basic Books, a member of The Perseus Books Group. Copyright © 2012.

Starting as early as the 1970s, some cognitive psychologists, philosophers, and linguists began to wonder whether meaning wasn’t something totally different from a language of thought [Call it Mentalese, which translates words into actual concepts: a polar bear or speed limit, for instance]. They suggested that—instead of abstract symbols—meaning might really be something much more closely intertwined with our real experiences in the world, with the bodies that we have. As a self-conscious movement started to take form, it took on a name, embodiment, which started to stand for the idea that meaning might be something that isn’t distilled away from our bodily experiences but is instead tightly bound to them. For you, the word dog might have a deep and rich meaning that involves the ways you physically interact with dogs—how they look and smell and feel. But the meaning of polar bear will be totally different, because you likely don’t have those same experiences of direct interaction.

If meaning is based on our experiences in our particular bodies in the particular situations we’ve dragged them through, then meaning could be quite personal. This in turn would make it variable across people and across cultures. As embodiment developed into a truly interdisciplinary enterprise, it found footholds by the end of the twentieth century in linguistics, especially in the work of U.C. Berkeley linguist George Lakoff and others; in philosophy, especially in work by University of Oregon philosopher Mark Johnson, among others; and in cognitive psychology, where U.C. Berkeley psychologist Eleanor Rosch’s early work led the way.

The embodiment idea was appealing. But at the same time, it was missing something. Specifically, a mechanism. Mentalese is a specific claim about the machinery people might use for meaning. Embodiment was more of an idea, a principle. It might have been right in a general sense, but it was hard to tell because it didn’t necessarily translate into specific claims about exactly how meaning works in real people in real time. So it idled, and it didn’t supplant the language of thought hypothesis [Mentalese] as the leading idea in the cognitive science of meaning.

And then someone had an idea.

It’s not clear who had it first, but in the mid-1990s at least three groups converged upon the same thought. One was a cognitive psychologist, Larry Barsalou, and his students at Emory University, in Georgia. The second was a group of neuroscientists in Parma, Italy. And the third was a group of cognitive scientists at the International Computer Science Institute in Berkeley, where I happened to be working as a graduate student. There was clearly something in the water, a zeitgeist. The idea was the embodied simulation hypothesis, a proposal that would make the idea of embodiment concrete enough to compete with Mentalese. Put simply:

Maybe we understand language by simulating in our minds what it would be like to experience the things that the language describes.

Let’s unpack this idea a little bit—what it means to simulate something in your mind. We actually simulate all the time. You do it when you imagine your parents’ faces, or fixate in your mind’s eye on that misplayed poker hand. You’re simulating when you imagine sounds in your head without any sound waves hitting your ears, whether it’s the bass line of the White Stripes’ Seven Nation Army or the sound of screeching tires. And you can probably conjure up simulations of what strawberries taste like when covered with whipped cream or what fresh lavender smells like. You can also simulate actions. Think about the direction you turn the doorknob of your front door. You probably visually simulate what your hand would look like, but if you’re like most people, you do more than this. You are able to virtually feel what it’s like to move your hand in the appropriate way—to grasp the handle (with enough force to cause the friction required for it to move with your hand) and rotate your hand (clockwise, perhaps?) at the wrist. Or if you’re a skier, you can imagine not only what it looks like to go down a run, but also what it feels like to shift your weight back and forth as you link turns.

Now, in all these examples, you’re consciously and intentionally conjuring up simulations. That’s called mental imagery. The idea of simulation is something that goes much deeper. Simulation is an iceberg. By consciously reflecting, as you just have been doing, you can see the tip—the intentional, conscious imagery. But many of the same brain processes are engaged, invisibly and unbeknownst to you, beneath the surface during much of your waking and sleeping life. Simulation is the creation of mental experiences of perception and action in the absence of their external manifestation. That is, it’s having the experience of seeing without the sights actually being there or having the experience of performing an action without actually moving.

When we’re consciously aware of them, these simulation experiences feel qualitatively like actual perception; colors appear as they appear when directly perceived, and actions feel like they feel when we perform them. The theory proposes that embodied simulation makes use of the same parts of the brain that are dedicated to directly interacting with the world. When we simulate seeing, we use the parts of the brain that allow us to see the world; when we simulate performing actions, the parts of the brain that direct physical action light up. The idea is that simulation creates echoes in our brains of previous experiences, attenuated resonances of brain patterns that were active during previous perceptual and motor experiences. We use our brains to simulate percepts and actions without actually perceiving or acting.

Outside of the study of language, people use simulation when they perform lots of different tasks, from remembering facts to listing properties of objects to choreographing a dance. These behaviors make use of embodied simulation for good reason. It’s easier to remember where we left our keys when we imagine the last place we saw them. It’s easier to determine what side of the car the gas tank is on by imagining filling it up. It’s easier to create a new series of movements by first imagining performing them ourselves. Using embodied simulation for rehearsal even helps people improve at repetitive tasks, like shooting free throws and bowling strikes. People are simulating constantly.

In this context, the embodied simulation hypothesis doesn’t seem like too much of a leap. It hypothesizes that language is like these other cognitive functions in that it, too, depends on embodied simulation. While we listen to or read sentences, we simulate seeing the scenes and performing the actions that are described. We do so using our motor and perceptual systems, and possibly other brain systems, like those dedicated to emotion. For example, consider what you might have simulated when you read the following sentence ...:

When hunting on land, the polar bear will often stalk its prey almost like a cat would, scooting along its belly to get right up close, and then pounce, claws first, jaws agape.

To understand what this means, according to the embodied simulation hypothesis, you actually activate the vision system in your brain to create a virtual visual experience of what a hunting polar bear would look like. You could use your auditory system to virtually hear what it would be like for a polar bear to slide along ice and snow. And you might even use your brain’s motor system, which controls action, to simulate what it would feel like to scoot, pounce, extend your arms, and drop your jaw. The idea is that you make meaning by creating experiences for yourself that—if you’re successful—reflect the experiences that the speaker, or in this case the writer, intended to describe. Meaning, according to the embodied simulation hypothesis, isn’t just abstract mental symbols; it’s a creative process, in which people construct virtual experiences—embodied simulations—in their mind’s eye.

If this is right, then meaning is something totally different from [a given] definitional model ... If meaning is based on experience with the world—the specific actions and percepts an individual has had—then it may vary from individual to individual and from culture to culture. And meaning will also be deeply personal—what polar bear or dog means to me might be totally different from what it means to you. Moreover, if we use our brain systems for perception and action to understand, then the processes of meaning are dynamic and constructive. It’s not about activating the right symbol; it’s about dynamically constructing the right mental experience of the scene.

Furthermore, if we indeed make meaning through simulating sights, sounds, and actions, that would mean that our capacity for meaning is built upon other systems, ones evolved more directly for perception and action. And that in turn would mean that our species-specific ability for language is built up from systems that we actually share in large part with other species.

Of course, we use these perception and action systems in new ways. We know this because other animals don’t share our facility with simulation ... The capacity for open-ended simulation is something much more human than ursine, not just in language, but pervasively throughout what we do with our minds. You can simulate what you would look like if you covered your nose with your hand, just as easily as you can simulate what you’d look like if you had two heads or if you had a pogo stick in place of your right leg. If simulation is what makes our capacity for language special, then figuring out how we use it will tell us a lot about what makes us unique as humans, about what kind of animal we are, and how we came to be this way.

One of the important innovations of the embodied simulation hypothesis—and one way in which it differs from the language of thought hypothesis [Mentalese]—is that it claims that meaning is something that you construct in your mind, based on your own experiences. If meaning is really generated in your mind, then you should be able to make sense of language about not only things that exist in the real world, like polar bears, but also things that don’t actually exist, like, say, flying pigs. So how we understand language about nonexistent things can actually tell us a lot about how meaning works.

Let’s consider the case of the words flying pigs. I’d wager that flying pigs actually means a lot to you, even without thinking too hard about it. Over the years, I’ve asked a lot of people what flying pigs means to them, informally. (One of the luxuries of being a university professor is that people tend to be totally unsurprised when you ask questions like How many wings does a flying pig have?) According to my totally unscientific survey, conducted primarily with the population of individuals with time on their hands and a beverage in their glass, when most people hear or read the words flying pigs, they think of an animal that looks for all intents and purposes like a pig but has wings. The writer John Steinbeck imagined such a winged pig and named it Pigasus. He even used it as his personal stamp. What do you know about your own personal Pigasus? It probably has two wings (not three or seven or twelve) that are shaped very much like bird wings. Without having to reflect on it, you also know where they appear on Pigasus’ body—they’re attached symmetrically to the shoulder blades. And although it has wings like a bird, most people think that Pigasus also displays a number of pig features; it has a snout, not a beak, and it has hooves, rather than talons.

There are a couple of things to draw from this example. First, flying pigs seems to mean something to everyone. And that’s important because there’s no such thing as an actual flying pig in the world. In fact, part of the meaning of flying pigs is precisely that flying pigs don’t exist. What all of this means, not to be too cute about it, is that the Mentalese theory that meaning is about the relation of definitions to real things in the world will only work when pigs fly.

Second, if you’re like most people, what you did when you understood flying pigs probably felt a lot like mental imagery. You might ask yourself: Did you experience visual images of a flying pig in your mind? Were they vivid? Were they replete with detail? Of course, consciously experiencing visual imagery is just one way to use simulation—you can also simulate without having conscious access to images. But where there’s imagined smoke, there may be simulated fire. If you’re like most people, when you simulate a flying pig, you probably see the snout and the wings in your mind’s eye. You may see details like color or texture; you might even see the pig in motion through the air. The words flying pigs are not unique in evoking consciously accessible visual detail. The same is true for lots of language, whether the things it describes are impossible like flying pigs or totally mundane like buying figs or somewhere in between, like the polar bear’s nose.

Third, and I don’t expect that this occurred to you because it only became clear to me through my extensive research—flying pigs doesn’t actually evoke something of the genus Pigasus for everyone. For some people, flying pigs don’t use wings to propel themselves, but instead conscript superpowers. If your flying pig is of this variety—let’s call it Superswine—then it probably wears a cape. Maybe a brightly colored spandex unitard, too, with some symbol on the chest, like a stylized curly pig tail or, better yet, a slice of fried bacon. And what’s more, when it flies, Superswine’s posture and motion are different from those of winged flying pigs. Whereas winged flying pigs hold their legs beneath their body, tucked up to their bellies or hanging below them, Superswine tend to stretch their front legs out in front of themselves, à la Superman.

I’ll be the first to admit that the respective features of Pigasus and Superswine are not of great scientific value or vital public interest in and of themselves. But they do tell us something about how people understand the meanings of words. People simulate in response to language, but their simulations appear to vary substantially. You might be the type of person to automatically envision Superswine, or you might have a strong preference for the more common Pigasus. We observe individual variation like this not only for flying pigs, but equally for any bit of language. Your first image of a barking dog might be a big, ferocious Doberman, or it might be a tiny, yappy Chihuahua. When you read torture devices, you might think of the Iron Maiden or you might think of a new Stairmaster at your gym. Variation in the things people think words refer to is important because it means that people use their idiosyncratic mental resources to construct meaning. We all have different experiences, expectations, and interests, so we paint the meanings we create for the language we hear in our own idiosyncratic color.

And finally, flying pigs teaches us that when you engage your visual system to understand language, you do so creatively and constructively. You can take previously experienced percepts (such as what pigs look like) and actions (such as flying) and form new combinations out of them. What flying pigs means depends on merging together independent experiences, because you have probably never experienced anything in the real world that corresponds to flying pigs (unless you spent a lot of time at Pink Floyd concerts in the 1970s). That makes flying pigs an extreme case, but even when language refers to a corresponding real-world entity—even in mundane cases—you still have to build up a simulation creatively.

Consider the totally boring expression yellow trucker hat. Now, surely there exist yellow trucker hats in the world. You have probably seen one, whether or not you were so moved by the experience as to remember it. But unless you have a specific stored representation of a particular yellow trucker hat, the mental images that you evoke to interpret this string of ordinary words have to be fabricated on the spot. And to do this, you combine your mental representation of trucker hat with the relevant visual effects of the word yellow. When words are combined—whether or not the things they refer to exist in the real world—language users make mental marriages of their corresponding mental representations.

The next step is to put the idea of embodied simulation under a microscope and really put it to the test. But how? The currency of science is replicable observations that confirm or disconfirm predictions, but, as I noted earlier, meaning doesn’t lend itself willingly to this kind of approach because it’s quite hard to observe. So, what to do? Facing this quandary as you are, you’re in pretty much the same place where the field of cognitive science was in about the year 2000. There was this exciting, potentially groundbreaking idea about simulation and meaning, and yet we had no idea how to test it.

And that’s when the ground shifted. Right around the same time, a handful of trailblazing scientists started to develop experimental tools to investigate the embodied simulation hypothesis empirically. They flashed pictures in front of people’s faces, they made them grab onto exotically shaped handles, they slid them into fMRI scanners, and they used high-speed cameras to track their eyes. Some of these approaches failed completely. But the ones that worked rocketed meaning onto the front page of cognitive science. And they provided us with instruments that now allow us to scrutinize humans in the act of making meaning.