Drone pilots spend 12-hour shifts in a bubble of anonymous war. When their shift is over, they come home to their families and are forced to engage in the “normal” activities of the real world. This is in contrast to combat soldiers who live in a war zone and adjust their entire reality accordingly. Drone pilots are anonymous participants in a war that exists and doesn’t exist at the same time.

While most of us aren’t logging on to kill people, we are living similarly parallel lives. Dropping in and out of anonymity, engaging in interactions in an alternate universe. Interactions which, sometimes, even our closest loved ones are unaware of. Some of us make this switch hundreds of times a day.

But what about those of us who aren’t engaging? Most of us aren’t bullying or being bullied. What if we’re logging in just to watch?

For drone pilots, even watching a war anonymously from a distance has significant impacts. An NPR piece about reconnaissance drone pilots quotes military surgeon Lt. Col. Cameron Thurman on the emotional burden:

“You don’t need a fancy study to tell you that watching someone beheaded … or tortured to death, is gonna have an impact on you as a human being. Everybody understands that. What was not widely understood is the level of exposure that [pilots have] to that type of incident. We see it all.”

Even if we aren’t the ones being bullied or doing the bullying, we are all seeing it. Every day. Verbal abuse, violence on video, self-righteous shaming, condescension, belittlement, jealousy, posturing, and comparison. Our experience of the internet often feels private, but it is all happening on the world stage. Unlike road rage, which is usually confined to our little pod on four wheels, web rage is flung out into the universe, where the rest of us are forced to watch it all unfold from our own bubble. Processing it across a weird chasm of pixels and fiber optics. Anonymous observers in a world where the names are made up, but the problems are real. I’d say we’re only just beginning to understand the psychological impacts of this.

Technology Addiction

A lot has been written about our addiction to technology, especially through the lens of the habit-forming design of things like social media.

Psychologists break the formation of habits into three distinct components — a trigger, an action, and a reward. Something triggers (or reminds) you to take an action. You take the action. You get a reward. This habit cycle drives a surprising amount of our everyday behavior.

When we talk about the addictive nature of the web, we pay particular attention to the design of specific features within applications that deliver “hits of dopamine” (a neurotransmitter central to the brain’s reward system). These features include likes, hearts, shares, comments, and retweets, as well as feeds that constantly refresh, delivering little bits of new information at unpredictable intervals. Where this focus falls short is that it deals almost exclusively with the action and reward portions of the cycle. The action is checking your stats or refreshing your feed. The reward is new likes on your posts or new posts in your stream. But what about the trigger? What is initiating the cycle? You might say it’s notifications, but we are checking the web constantly with or without notifications. It is deeper than that.

Our desire for escape is the trigger that drives our incessant checking of the web.

The bubble of anonymity provides something fundamental for people. It provides escape. It pulls you out of whatever real-world situation you are in and lets you forget about your life for a moment. Have you ever been relieved to just get in the car and drive? Our desire for escape is the trigger that drives our incessant checking of the web. Every time we want to get away, our new action is logging in. Whether we’re escaping from boredom, an awkward social situation, or the responsibilities of life, our digital devices give us an ever-present “out.” A portal to temporary anonymity, albeit only perceived.

This ability to temporarily “disappear” not only represents the trigger in our cycle, it is also our reward. Our addiction is less about the mini dopamine hits we get from social validation metrics and more about the escape. The dopamine hit from likes and new posts is just the final icing on the cake, reminding us that escape is always the right choice.

In online culture, the “1 percent rule” is a framework for thinking about activity in online communities. It breaks users into three tiers based on activity: creators, commenters, and lurkers. The idea is that 1 percent of people are creators. They drive the creation of all the new content in the community. Nine percent are commenters who actively engage with creators’ content by liking, commenting, and so on. The other 90 percent are lurkers who watch from the background.

Whether these percentages are completely accurate doesn’t matter. What matters is the idea that the majority are not creating content or even actively engaging with content in online communities. This means that our addiction to these services cannot be driven solely by the dopamine hits created by social metrics. Most people are not using those features at all. It has to be deeper than that. We’re addicted to the escape. We’re addicted to our perceived anonymity.

Fake News, Filter Bubbles, and Echo Chambers

Our conversations are becoming more divisive, our views more polarized. The 2016 election in the U.S. brought this into sharp relief. For many, the blame for this divide lies with the algorithms that serve us content.

On more and more web platforms, including almost all major social media services, content is served by algorithms. Fundamentally, this means a computer calculates which posts you’re most likely to engage with and shows you those, while hiding posts it thinks you won’t like. The goal is to deliver the best content, personalized for you.

The problem is that these algorithms are backward-looking. They calculate based on what you’ve done in the past: “Because you read this, you might also like this.” In algorithm world, past behavior determines future behavior. This means that algorithmically driven services are less likely to show you information that opposes your existing views. You probably didn’t engage with it in the past, so why would you in the future? So, your feed becomes an echo chamber, where everything you see supports what you already believe.
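To make that backward-looking loop concrete, here is a minimal sketch of how such a ranking step can work. It is written in Python with entirely hypothetical names and data; no platform publishes its feed code, so this illustrates the principle rather than any service’s actual algorithm.

# A hypothetical, backward-looking feed ranker: candidate posts are scored
# purely by how often the user engaged with their topics in the past.
from collections import Counter

def build_interest_profile(past_engagements):
    # Count how often each topic appears in the user's engagement history.
    return Counter(topic for post in past_engagements for topic in post["topics"])

def score(post, profile):
    # A post's score is the sum of past engagement with its topics.
    return sum(profile[topic] for topic in post["topics"])

def rank_feed(candidates, past_engagements, limit=10):
    # Show the posts most similar to what the user already liked; hide the rest.
    profile = build_interest_profile(past_engagements)
    return sorted(candidates, key=lambda p: score(p, profile), reverse=True)[:limit]

# A user who has only ever engaged with posts about cats and crypto:
history = [{"topics": ["cats"]}, {"topics": ["cats", "crypto"]}, {"topics": ["crypto"]}]
candidates = [
    {"id": 1, "topics": ["cats"]},            # scores 2
    {"id": 2, "topics": ["cats", "crypto"]},  # scores 4
    {"id": 3, "topics": ["local politics"]},  # scores 0
]
print([p["id"] for p in rank_feed(candidates, history, limit=2)])  # -> [2, 1]

Notice that post 3 is never rejected on its merits. It simply has no history behind it, so it scores zero and never surfaces. Nothing in the loop ever introduces the unfamiliar, which is exactly the echo-chamber behavior described above.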

Algorithms feed one of our most primitive psychological needs. We are hardwired to seek out information that confirms our beliefs. This is known as confirmation bias.

From Psychology Today:

Confirmation bias occurs from the direct influence of desire on beliefs. When people would like a certain idea/concept to be true, they end up believing it to be true. They are motivated by wishful thinking. This error leads the individual to stop gathering information when the evidence gathered so far confirms the views (prejudices) one would like to be true.

We want our beliefs to be true. It can be hard, painful work to let go of a belief. This is why fake news is like jet fuel for content algorithms. It tells us exactly what we want to hear. If a service put opposing views in our face all the time, it could be emotionally painful. We might not come back to that service. From a business perspective, it makes sense to show us what we like.

The prevailing wisdom is that this constant reinforcing of our worldview kills open-mindedness, hardening our beliefs to a point where we are no longer able to find common ground with anyone who opposes them. As the repercussions of our online echo chambers become increasingly evident, there are calls to change the way we surface content in order to show more diverse perspectives. The idea is that a more diverse feed means a more open-minded worldview. The question is, would this work?

Fake news is like jet fuel for content algorithms. It tells us exactly what we want to hear.

In 2015, Facebook published a study suggesting that it is actually users who cause their own filter bubbles, not the Facebook algorithm. That we are the ones actively choosing to ignore or hide opposing views. At first blush, it’s easy to pass this off as a clear conflict of interest. Of course Facebook would say it’s us and not the algorithm. But it may not be so clear-cut.

We engage online in a bubble of psychological anonymity. Our reward is escape. If we are already hardwired to seek out information that supports our beliefs, and it is painful to be exposed to information that opposes them, of course we would do our own filtering.

The internet is a fire hose. It can be so overwhelming that sometimes we simply go numb. It is information overload, more than our brain can deal with. We’re here to escape, not to feel overwhelmed. So, we start turning off as much of the noise as possible. We reject anything that makes us feel uncomfortable.

Luckily for us, the internet is the perfect machine for supporting our existing beliefs. Communities of like-minded people are just a Google search away, no matter how niche our interests. Our bubble of anonymity frees our brain from any social pressures stopping us from indulging our innermost desires, no matter how subversive or extreme. On top of that, services have given us all the tools we need to sanitize our feeds. We can block, mute, flag, and unfollow. Combine all of it with an algorithm predisposed to reinforce our worldview and you have a perfect storm for polarization and radicalization.

Additionally, the way we process interactions online is different from the way we process them offline. A recent study found that Twitter users who were exposed to opposing views on the service actually became more rooted in their beliefs. This flies directly in the face of the prevailing wisdom about exposure to diverse views driving open-mindedness.

The internet is the perfect machine for supporting our existing beliefs.

While the study results may be true, the question is: Do they represent a natural human state? We operate online in a psychological bubble of anonymity. That bubble does not exist in the outside world. In the physical world, exposure to diverse views and experiences happens with real people. In those cases, our brain is operating in a completely different mode.

When we’re online, as far as our brain is concerned, we aren’t engaging with real people. Just as another driver catching you picking your nose pops the bubble of your car, coming into contact with opposing views online pops our bubble of anonymity. It is a real-world intrusion into our alternate universe by some faceless gray blob. The psychological response is different. It is much more fight or flight than listen and consider.