A simple, naive picture of how a human experiences the world might look like this:



This image is, of course, way too simple. We all know that we aren’t seeing all of reality. We have to filter out some information, because much of it is noise or redundant. So perhaps a slightly more realistic view is like this:



This drawing feels closer to how most people see things – at least, from my perspective. We all know we’re filtering data from the world, but most of what we’re filtering is just noise, right? If we were filtering out more than noise, this would be bad, right?



I don’t think this is accurate. There’s a lot of evidence that says no, we’re not doing that. We just like to think we are. I think a better representation is like this:

I say this because I can draw a picture like the “comfortable story” one above, believe it’s accurate, and then immediately pretend it’s not. So yes, right now I say “most of what we observe is heavily biased by our perceptions, so we should always be skeptical of what our minds present to us as a true, accurate, unbiased picture of the world.” And yet immediately after saying this, I’ll still fall victim to the fallacy of assuming my beliefs about reality are essentially correct. In other words, here’s the most realistic picture I can draw:



I think everyone is doing this all the time: ignoring and distorting reality to suit the way we think about it, and then swearing up and down that, oh no, we’re just telling it like it is. I’d like to think that by drawing the above picture, and saying “yes, see, it’s really like this,” that I’m somehow above it, and that my way of viewing reality is at least a little more accurate because it contains this pattern, but that’s the pattern showing up again!



The very moment you allow yourself to think that you’re seeing the world accurately and clearly, and those other people out there are just idiots, you’ve already lost the game. Yes, there are idiots in the world, and you’re one of them. I certainly am.



How could I not be? I’m a primate, evolved to live and hunt and socialize with small groups of people I’ve known my whole life. I spend my days working on a massive computational pipeline with other primates who are equally out of our element. None of us grew up together. I spend most of my day with people I like a lot, but whom I’ve only known for a few years. From a modern perspective, this is normal – if anything, I’m lucky to like my coworkers. But from a historical perspective, this is crazy.



The human condition is wildly absurd, but we’re used to it, so we go along with it every day without really considering how insane it all is. We are used to it because we’ve told ourselves a story about reality. The story lines up with our experience closely enough that we’re comfortable discarding the parts that don’t line up. That’s the basic feedback loop at play.



Stories generate predictions about what we’re going to experience. Reality determines what we actually experience. That leads to a vector diagram like this:



The predictions that we’re making aren’t broad, future-oriented predictions like “I bet the Giants win the World Series.” These predictions are much lower level, and much more mundane, such as “The vaguely dark object in the lower-left of my field of vision will stay in the same spot.” The general name for this theory is “Predictive Coding,” or “Predictive Processing.”



We do make higher level predictions, of course. Most people know this, but are unaware that the prediction-making happens all the way down, to very basic levels of perception. You aren’t looking at reality. You’re looking at a simulation that your brain constructs for you, based on what it expects reality to look like.
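The core loop of predictive coding can be sketched in a few lines: compare the prediction to the observation, and nudge the prediction toward reality by some fraction of the error. This is a deliberately minimal caricature, not any particular neuroscientific model; the function name, the scalar signal, and the learning rate are all illustrative assumptions.

```python
# A minimal caricature of the predictive-coding loop: the "model" is a
# single number, updated toward each observation by a fraction of the error.
# All names and values here are illustrative, not from any real model.

def predictive_step(prediction, observation, learning_rate=0.5):
    """Move a prediction toward an observation by a fraction of the error."""
    error = observation - prediction           # the prediction error ("surprise")
    return prediction + learning_rate * error  # nudge the model toward reality

prediction = 0.0
for observation in [1.0, 1.0, 1.0, 1.0]:
    prediction = predictive_step(prediction, observation)

# After a few repeated observations, the prediction has moved most of
# the way toward 1.0, and each new observation produces less surprise.
```

The point of the sketch is just the shape of the loop: perception, in this theory, is the running prediction, and the raw observation only matters to the extent that it disagrees with it.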



Usually, what ends up happening isn’t that far off from what we expect. The distance between what we expect to happen and what actually happens leads to the experience of cognitive dissonance. This is an unpleasant feeling.



Cognitive Dissonance is the force that can eventually lead people to discard a story that doesn’t work for them. If you believe you can fly, and you try to do so, and fail, that cognitive dissonance will be pretty hard to ignore.



On the other hand, if you think “People in the outgroup are all boorish idiots”, it isn’t hard to provoke most people into acting like jackasses, just by being one yourself. You might, on rare occasions, meet someone in the outgroup who isn’t a totally boorish jackass, despite your best attempts to provoke them into being one. This will lead you to experience some cognitive dissonance.

That cognitive dissonance is easily attributed to “those jerks over there”, and you can keep on believing that all good and decent people believe the same things you do. After all, it feels good to believe your beliefs are correct. That confidence – from feeling that your story is accurate – is the force that cognitive dissonance has to compete against:



At this point, I’m feeling pretty good about myself. I’ve taken an insanely complex phenomenon – what people believe, and why – and boiled it down into a simple set of vector diagrams. The idea here is that people will change their beliefs when the cognitive dissonance of reality telling them they are wrong overcomes the good feeling that comes from being right often enough.



Why does that feel good? Because it’s false confidence! This thing I’m telling you is also a model; it’s obviously too simple to be completely true, and yet it still has a strong appeal to me because I’m still a primate thinking with a brain made of meat.



Yup, I’m doing this again:



When I say that people are computers, some take this to mean that I think people are hyper-rational and perfectly attuned to observed reality. They think I’m saying something like “people are computers running excellent software that gives them accurate views on reality.” No, I definitely haven’t forgotten that we’re primates and that we’re prone to all kinds of biases. I simply think it makes sense to reason about these biases as if they were buggy software.



I think political problems then arise naturally from what you’d expect to happen if you had a bunch of buggy software running in a network of biological machines, all running firmware that evolved to help them defend their territory and pursue social status.



The worst aspect of all of this is that what you see isn’t what’s real – it’s the portion of reality that lines up with what you already believe to be true.

So it’s not even the case that information comes in, you become aware of it, and consciously reject it. You don’t even see the parts of reality that are “wrong” to your model. They just make you feel a little awkward or uncomfortable. That’s the only clue you get. A little nagging feeling that something isn’t right. When you get that feeling around someone who has different beliefs from you, it’s far, far easier to attribute that discomfort to the Other than it is to attribute it to the mismatch between what you experienced and what you expected to experience.



Show a hard-core conservative evidence of rising sea levels, mass coral-reef die-offs, or long term changes in sea ice. Show a hard-core anti-capitalist evidence of the declining global poverty rate, and the drop in infant mortality. Show a hard-core social liberal evidence that increased gender equality in societies increases the skewed gender ratios in fields like engineering. They’ll all say the same thing:



“That doesn’t look like anything to me.”



What these people have in common is a hardcore devotion to a story. This devotion keeps them blind to facts that contradict the story. Are these people using the story to help them navigate reality? Or are the stories they believe using those people to perpetuate themselves?



In the next post we’ll talk more about the interaction between stories and emotion, so we can build out a more complete model of the human computational architecture, and talk about the distributed system of human beings we call global culture.



Further Reading

How Emotions are Made: The Secret Life of the Brain, by psychologist and neuroscientist Lisa Feldman Barrett. This is the most accessible source I’ve seen on how dissonance between expectations and reality gives rise to emotional experiences. I don’t fully agree with the author that emotions arise entirely as a result of concept formation; I think there’s a ground truth at play. Still, the book does an excellent job of explaining how our brains are constantly making predictions about what we will experience.



Surfing Uncertainty: Prediction, Action, and the Embodied Mind, by Andy Clark. This book is a little less accessible, but covers more ground than Barrett’s book. This is probably the most popular book out there on Predictive Processing. It also deals less with emotion, and more with how the brain’s predictions lead to action.



The Structure of Scientific Revolutions is a classic here. Thomas Kuhn argues that changes in scientific thinking depend on a generation of old scientists dying. Revolutions don’t happen because new evidence is so compelling it changes everyone’s mind. They happen when a number of younger scientists, who haven’t spent years thinking in one paradigm, weigh the evidence and decide it supports a new theory. Then they wait decades for the older scientists – who never accept the new theory – to die off.



Anything by Venkatesh Rao is worth reading. He’s best known for The Gervais Principle, a multi-part essay on how corporations operate. The short version is that you can break a company down into three groups. Clueless people believe the story that the corporation tells itself – that it’s about connecting the world or making it a better place. Sociopaths make up these stories, knowing the company is really a place to play the game of wealth and power. Losers understand that the stories are false, but either don’t want to play the game, or don’t have what it takes. This essay on negotiation is probably my favorite piece by him, because it summarizes the importance of stories so succinctly.

