
A few years ago, Justine Sacco, then the senior director of corporate communications at the holding company InterActiveCorp, tweeted about the nuisances of air travel during a long, multi-leg journey from New York to South Africa. She started with sardonic observations—one about a smelly passenger at JFK Airport, another about London’s peculiar food and predictably inclement weather. Then came this one, shortly before her final flight: “Going to Africa. Hope I don’t get AIDS. Just kidding. I’m white!”

As she settled in to sleep, she had good reason to expect that the tweet would fade away into the hectic ether of Twitter. She had only 170 followers, after all. But no. While her phone was off, Sacco became the number one worldwide trending topic on Twitter, as tens of thousands of users across the globe filled her feed with their outrage. When she landed in Cape Town, she found herself receiving the full brunt of the online community’s capacity for public shaming. Her public persona destroyed, Sacco was fired from her job and saw much of her social circle—both online and offline—wither away. “I thought there was no way that anyone could possibly think it was literal,” she later told author Jon Ronson for his book, So You’ve Been Publicly Shamed.

This wasn’t the first time caprice was punished with viral outrage. Sacco’s tweet is just one of countless examples of provocative online behavior drawing a seemingly disproportionate social punishment. Why does this keep happening? Because the architecture of social media exploits our sense of right and wrong, reaping profit from the pleasure we feel in expressing righteous outrage. The algorithms that undergird the flow of information on social media are, like the sensationalist print media and incendiary talk radio that came before them, designed to maximize ad revenue by engaging consumers’ attention to the fullest extent possible. Or as novelist John Green puts it, “Twitter is not designed to make you happier or better informed. It’s designed to keep you on Twitter.”

Columbia Law professor Tim Wu, author of The Attention Merchants, calls this “attention harvesting.” And as a business model, it’s extremely lucrative. Many are aware on some level that those persistent, weirdly personal ads on our devices have a lot to do with how Twitter, Facebook, and Google make money. What some may not be aware of, though, is exactly how those platforms manage to hold our attention well enough to make their ads so profitable.

Scientists have been probing this question for several years, studying people’s activity online and revealing interesting trends in what makes content eye-catching and more likely to go viral. Emotional arousal is one key determinant. After analyzing 7,000 articles from the New York Times, Jonah Berger and Katherine Milkman of the University of Pennsylvania found that one of the main factors driving readers to share a story via email was how much it stirred them up. Billy Brady from NYU built on Berger and Milkman’s work by analyzing hundreds of thousands of tweets in an effort to understand the role of moral emotions—the feelings associated with our sense of right and wrong, like pride and outrage—in social networks. Brady and his colleagues found that tweets about political topics were much more likely to go viral if they contained words that are both morally and emotionally charged, like “evil,” “shame,” “fight,” “punish,” and “faith.” What’s more, Brady’s analysis revealed that viral political tweets propagated almost exclusively among people with similar ideological leanings, consistent with evidence that we spend much of our online lives inside echo chambers.
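The core of this kind of dictionary-based analysis can be sketched in a few lines. The snippet below is an illustrative simplification, not the study’s actual method: the word list contains only the five example words quoted above, and the tokenizer is deliberately crude.

```python
import re

# A tiny sample lexicon for illustration; Brady's study used a much
# larger dictionary of moral-emotional words.
MORAL_EMOTIONAL_WORDS = {"evil", "shame", "fight", "punish", "faith"}

def moral_emotional_count(tweet: str) -> int:
    """Count how many tokens in a tweet appear in the sample lexicon."""
    tokens = re.findall(r"[a-z']+", tweet.lower())
    return sum(1 for token in tokens if token in MORAL_EMOTIONAL_WORDS)

print(moral_emotional_count("We must fight this evil and punish them."))  # 3
print(moral_emotional_count("Lovely weather in London today."))           # 0
```

In the actual research, counts like these were fed into a regression model to estimate how each additional moral-emotional word changed a tweet’s odds of being retweeted.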

This constant flurry of moral-emotional content has turned much of Twitter—and, by the looks of it, other platforms—into what writer Samuel Ashworth described as “an endlessly self-renewing bonfire of outrage and confusion.” And given how profitable it has become, social media companies have little financial incentive to scale it back. “I think it’s really important to ask ourselves and to have a serious conversation about how we feel about our moral emotions being used to make a lot of money for tech companies,” Molly Crockett, a psychology professor at Yale, said recently on the popular psychology and philosophy podcast Very Bad Wizards.

In a recent article in Nature Human Behaviour, Crockett argued that the constant triggering of moral outrage—an ancient emotion that motivates the shaming and punishing of others—on social media not only makes money for tech companies, but also alters how we experience and express the emotion. Provocative content is ubiquitous, the tools to react to it have never been more accessible, and outrage expression is often positively reinforced by likes, retweets, and shares. All of this together, Crockett wrote, may cause people to undergo “outrage fatigue,” whereby the intensity of the outrage they feel gradually fades. Or, in light of research showing that venting anger can stoke more anger, it may do the opposite, Crockett pointed out, amplifying successive expressions of outrage. What’s more, Crockett suggested that social media may uncouple the expression and experience of moral outrage. “[J]ust as a habitual snacker eats without feeling hungry,” she wrote, “a habitual online shamer might express outrage without actually feeling outraged.” Studies on social media activity could illuminate, she concluded, “how new technologies might transform ancient social emotions from a force for collective good into a tool for collective self-destruction.”

Which raises the question: To what extent do social media companies have a moral obligation to improve the way we communicate with each other? Facebook C.E.O. Mark Zuckerberg is one of many tech executives who were once skeptical of their own power. He confidently dismissed the idea that Russian hackers influenced the 2016 presidential election as “pretty crazy.” He and many others have since changed how they view social media’s role in public discourse. “I think most people assume…that if democracy is going to be healthy, we need civil discourse to be healthy also,” Brady told me. “And so if there is data that continues to come in that shows that social media is amplifying [the expression of negative moral emotions], then I think they do have a moral obligation.”

Many social media companies now appear to agree with Brady and are making efforts to address some of the concerns that research like his raises. Twitter, for example, recently announced an open call for proposals on how to improve “conversational health” on its platform. And last October, Reddit rolled out a more robust policy for actively monitoring its discussion boards.

But some efforts to improve online communication have been flops, like Facebook’s short-lived feature that allowed users to flag fake news. And in view of the recent Cambridge Analytica scandal, which has exposed serious flaws in Facebook’s privacy policy, it’s unclear whether social media companies can even be trusted to monitor themselves, let alone global public discourse.

There are no easy solutions to any of these issues, perhaps because moral outrage online is a mixed bag. “Digital media may promote the expression of outrage by magnifying its triggers, reducing its personal costs and amplifying its personal benefits,” Crockett wrote. At the same time, she continued, digital media may reduce moral outrage’s benefits for society by “reducing the likelihood that norm-enforcing messages reach their targets” and possibly by imposing “new social costs by increasing polarization.” Until we find solutions, our moral emotions will remain subject to monetized technological forces that nobody fully understands. What an outrage.

Scott Koenig is a doctoral student in neuroscience at CUNY, where he studies psychopathy, emotion, and morality. This piece was adapted with permission from Koenig’s blog post “Twitter Triggers,” published on his website.
