The Thinker, Auguste Rodin’s bronze sculpture, has become a visual cliché, a common representation of deep thought—a figure gazing down, chin on hand, completely alone. This is utterly misleading, according to the authors of The Knowledge Illusion, which carries the subtitle Why We Never Think Alone. Steven Sloman, a professor at Brown University, and Philip Fernbach, a professor at the University of Colorado Boulder’s Leeds School of Business, argue that our intelligence depends on the people and things that surround us, to a degree we rarely recognize. Knowledge, they say, is a community effort. Sloman answered questions from Mind Matters editor Gareth Cook.

Steven Sloman. Credit: Thad Russell

Scientific American: You argue that we don’t know as much as we think we do. Can you explain this?

Sloman: People overestimate how well they understand how things work. Direct evidence for this comes from the psychological laboratory. The great Yale psychologist Frank Keil and his students first demonstrated the illusion of explanatory depth, what we call the knowledge illusion. He asked people how well they understand how everyday objects (zippers, toilets, ballpoint pens) work. On average, people felt they had a reasonable understanding (at the middle of a 7-point scale). Then Keil asked them to explain how they work. People failed miserably. For the most part, people just can’t articulate the mechanisms that drive even the simplest things. So when he again asked them to rate their understanding, their ratings were lower. By their own admission, the act of attempting to explain had pierced their illusion of understanding. We have replicated this basic finding many times, not only with everyday objects but also with political policies. Psychologist Matthew Fisher has shown that people overestimate their ability to construct logical justifications for their beliefs.

More indirect evidence comes from the simple fact that people are surprisingly ignorant. [About] 50 percent of Americans don’t know that antibiotics kill bacteria, not viruses; only a minority can name even a single Supreme Court justice. Rebecca Lawson has shown that people can’t draw a bicycle, even with substantial help. Yet people are surprised when they discover what they can’t do.

It’s amazing that people have trouble justifying their beliefs logically. How can that be?

Human reasoning takes a couple of forms. Most of the conclusions we come to are the products of intuition. Intuitive processes can be identified because we have no introspective access to how they work; we are only conscious of their output. For instance, intuitive processes deliver stored conclusions from memory. We can’t introspect to see how memory retrieves information; it just serves it up to consciousness.

To illustrate, most of us believe that there was a great revolution in France in the late 18th century. How do we justify that belief? Most of us aren’t historians; we just dredge up the fact from memory. We can’t really justify it except by appealing to our own memories, and we can’t even say much about how we retrieved the memory. It just comes to mind. Intuitive processes are capable of more than just memory, though. They are also capable of pretty sophisticated pattern recognition. If you ask me to reconstruct what I know about the French Revolution, I can tell a story. The story will be pretty superficial and miss a lot of facts—really important facts—but it’ll be largely coherent because my intuitive system is sophisticated enough to have some sense of how the world works. For example, I don’t remember the name of the king, but I can tell you he was captured before his head was chopped off because you can’t chop someone’s head off unless you’ve captured the person. And, truth be told, I’m just guessing his head was chopped off because my memory tells me that lots of people had their heads chopped off in that period. So intuition is pretty powerful—it can tell a really good story. But it’s very limited in its factual basis. The late cognitive scientist Thomas Landauer calculated that humans can retain only about 1 gigabyte of information, just a fraction of what a modern flash drive can hold.

Beyond intuition, we can also reason by deliberating, thinking things through carefully. But we don’t do that very much, and we’re not very good at it as individuals. We need a lot of help. We often use things in the world to help us, like whiteboards and computers. But more than anything, we use other people. Most thinking involves collaborating with other people. That’s why scientists have lab meetings, why doctors consult with specialists, and why it’s important to have someone to talk to when you’re confused or upset. Individuals can’t justify their beliefs, but groups are great at justifying things (though not necessarily justifications that would pass muster with a philosopher). A little social support can generate a lot of confidence.

Philip Fernbach. Credit: Emily Sacco

Why is it important for people to know this?

The main reason is mere curiosity. We are told to “know thyself,” and what could be a more important thing to know than what we are capable of mentally? Moreover, knowing that we’re more ignorant than we think should make us more humble and give us greater respect and gratitude for others and the knowledge they bring to the table. This is important in all our human relationships, whether at home, at work or elsewhere. And it’s also important for living with others in a just and peaceful society.

Tell me more about this idea that what we know is “social.”

People fail to distinguish the knowledge that’s in their own heads from knowledge elsewhere (in their bodies, in the world and—especially—in others’ heads). And we fail because whether or not knowledge is in our heads usually doesn’t matter. What matters is that we have access to the knowledge. In other words, the knowledge we use resides in the community. We participate in a community of knowledge. Thinking isn’t done by individuals; it is done by communities. This is true at macro levels: fundamental values and beliefs that define our social, political and spiritual identities are determined by our cultural communities. It is also true at the micro level: We are natural collaborators, cognitive team players. We think in tandem with others using our unique ability to share intentionality.

Individuals are rarely well described as rational processors of information. Rather, we usually just channel our communities.

What do you mean that we are “cognitive team players”?

The deliberative mind is designed to work with other people. When we’re crossing the street, we have to think about what oncoming drivers are thinking, and we often make eye contact with them to confirm that we’re on the same page. This kind of meshing of cognitive gears is even more pronounced when we’re engaging in any group activity: playing sports, sitting around the dinner table telling jokes, fixing our car or trying to crack the genetic code. We think together. We feed one another’s intuitions, we complete one another’s thoughts, we hold knowledge that others can make use of. There’s a division of cognitive labor.

Some cognitive anthropologists have made a strong argument that human beings are the only animals—indeed, the only cognitive systems—capable of this kind of collaboration. Humans are capable of sharing intentionality. We don’t merely pursue collective goals as many animals can do (for instance, some species hunt in packs), but we pursue a common goal together. When, for example, a child and parent are building a sand castle together, they are literally sharing thoughts: they are pursuing a common end result and doing so with knowledge that they hold in common, about the weather, the tides (if they’re on the beach), the availability of tools, et cetera. If one runs into a problem, the other might help. This requires that they understand that they share a goal. There are clever experiments showing that this kind of sharing of mental states is common and easy for even young children but beyond the capabilities of our closest genetic relative, chimpanzees.

How do you think about the way technology fits into this?

Technology fits in many ways. First, it exacerbates the knowledge illusion because it is a powerful source of information. Studies by psychologist Adrian Ward and others have shown that we feel smarter around Google. Of course, we should, as long as we have access to it. It is a source of information like no other. It is perhaps the most significant member of our community of knowledge. But its role is different from that of other humans because it lacks the critical ability that humans possess: It doesn’t share our intentionality. It doesn’t read our minds to figure out what we’re looking for. It sometimes does a good imitation of a human because some clever programmer has figured out what a good response to a query would be. But the cleverness in that case is in the human programmer, not the machine. As a result, we have to be careful with technology. GPS software has sent many a driver the wrong way, sometimes causing them to drive into lakes. And there have been disasters caused by overreliance on technology because we sometimes treat our technology as if it were sharing our goals, as if it were human. So we have to remain vigilant.

But the scariest role of technology is what it has done to our social systems. It’s so easy today to live in a bubble of like-minded individuals on social media; indeed, it’s hard not to. And the biggest Web sites just tell us what we want to hear, stuff that agrees with the view we already have. As a result, we could be getting entirely different news than people on other sides of the political divide. We might be living in entirely different worlds with respect to the information we have about what’s going on in the world. This is a recipe for social tension. And we’re seeing precisely this kind of tension and the damage it can cause.