The framing of the term “fake news” orients you toward asking whether a claim is true or false. This framing assumes that people share things online because they are chiefly concerned with accuracy—but people share because of deeper political allegiances and viewpoints.

In places like China, Uganda, Mexico, and the United States, memes can fuse new narratives in society and influence international media discourse in powerful ways. Activists utilize their power to draw attention and build communities around issues that might otherwise be ignored or censored.

But memes also play a key role in the spread of both misinformation and disinformation. And many of those narratives can themselves be highly destructive and manipulative.

A critical report published by Alice Marwick and Rebecca Lewis at the technology think tank Data & Society asserts that meme-making is part of a larger effort to manipulate the media into spreading alt-right and alt-right-affiliated views:

“Taking advantage of the opportunity the internet presents for collaboration, communication, and peer production, these groups target vulnerabilities in the news media ecosystem to increase the visibility of and audience for their messages.”

This involves a news pipeline: stories move from highly experimental social networks like 4chan and Reddit, on to right-wing blogs and news sites, until they finally reach mainstream media. “For [media] manipulators,” notes the report, “it doesn’t matter if the media is reporting on a story in order to debunk it or dismiss it; the important thing is getting it covered in the first place.”

As I read this report, I was struck by the similarity of the media-manipulator pipeline to the activist meme-maker pipeline (which I’ve explored in this book in many global contexts): Make a number of memes, relying on a high rate of remixes and mashups, testing, and iterating on ideas over time. When something seems to stick, use sympathetic blogs and other influencers to drive attention. Over time, the memes might be refined further, and a new narrative emerges that mainstream media covers.

What memes, among other social media phenomena, help fuel is the expansion of the Overton Window. This refers to the window of acceptable public discourse and was named after public policy analyst Joseph Overton. NYU professor Clay Shirky points out its implications in relation to the 2016 US presidential election:

The Overton Window was imagined as a limit on public opinion, but in politics, it’s the limit on what politicians will express in public. Politically acceptable discourse is limited by supply, not demand. The public is hungry for more than politicians are willing to discuss. This is especially important in the US, because our two-party system creates ideologically unstable parties by design.

By exposing people to each other, and each other’s ideas, memes are part of a broader phenomenon that expands the range of acceptable discourse, feeding a hungry public who wants to talk about issues that in previous eras might not have been discussed as openly. At times this can support progressive, antiauthoritarian actions. At times, this also means the expansion of false narratives and information.


What’s key to understanding the Window is that these strategies are adaptations to the communications environment, where algorithms on such platforms as Facebook and YouTube float up content that is optimized for attention and emotion, and where broadcast media tunes in to platforms like Twitter for breaking news. By serving up content and information most conducive to users’ ways of taking in information, many platforms reinforce divisions in society.

Cultural researcher and ethnographer Whitney Phillips has noted that people are more likely to share and engage with content online because they want to create and perform a political identity. Identity and community are at the heart of meme culture. Phillips has proposed that we think about fake news more as folkloric news—or “folk news”—to focus on motivations rather than accuracy. It’s not that content online can’t be true or false; it’s just that we’re missing the bigger picture when we think just about factual accuracy.

Nausicaa Renner at the Columbia Journalism Review noted that much fake news gets attention through memes. In one simple example, Renner pointed out that people shared articles on Breitbart less frequently than they viewed the images the site posted to accompany each article. Each image contained a simple array of statements and figures and pointed back to the article, and these images got far more traction than the articles themselves. As Renner noted, “While Breitbart is a partisan news site and not explicitly a generator of fake news, this type of content [the images and memes] sometimes hardly resembles news.”

What, then, is to be done about misinformation ecosystems and the memes that amplify them? It can be tempting to add legal frameworks to address the problem of fabricated news and misinformation. But the risk of censorship presents a difficult challenge, as the examples of China and other countries demonstrate: a government can define rumors to serve its own interests rather than to quash genuinely false information that is demonstrably harmful to the general public. Legal frameworks can’t address the deeper problem of belief systems and are difficult to enforce in the face of how rumors spread, often in a decentralized way and via private social media. In the worst case, legal frameworks can be designed to strengthen authoritarian governments and weaken democratic institutions like the free press.


The metaphor of the internet as an information superhighway has obscured this reality in many ways. Access to information is an important part of what makes the internet valuable for activists, journalists, researchers, and policy makers. But in so many ways, the internet serves more as an affirmation superhighway, a way to affirm political beliefs and identities.

Memes play a key role in this problem, as they are more frequently emotive in nature, giving people a place to express themselves and their values. Sometimes this can help marginalized perspectives find voice and access; other times, it serves to marginalize viewpoints even further. Attempts to circulate useful facts and figures can fail quickly, because networks are so easily isolated online, and the people who make up those networks hold different value systems that don’t always accept the presentation of evidence.

The “This Is Fine” meme appropriated by a protester at the Women’s March in 2017. (Photo: An Xiao Mina)

Before the internet, the goal of propaganda was simple: tell a single story, dominate the media environment, and repeat the information until people either believe it or at least acquiesce to it in public life. The central power of traditional propaganda is also its weakness: any attempt to break this single story is a threat to the system.

Controlling the media environment is a costly endeavor. Memes, as a powerful tool for generating new narratives, are well suited to challenging this form of propaganda. It is nearly impossible for a single story to take hold, because forms of narrative resistance are so easy to come by. Rather than resist the internet’s ability to create new narratives, today’s propaganda takes advantage of it by creating too much information.

Contemporary disinformation attempts to do at least two things:

(1) overwhelm people with a series of conflicting and confusing narratives such that they give up trying to make sense of it all, and

(2) string along those who do sincerely believe the story, and recruit them in amplifying the disinformation.

Harnessing the narrative-building capacity of the internet is an increasingly common strategy for regimes and agents of disinformation hoping to control the conversation. In addition to controlling it overtly—through methods like jailing or harassing “offenders”—states can now also wield memes to generate an alternative discussion and shift the narrative online.

Researching social media–based propaganda in Russia, journalist Adrian Chen found at least one paid agency, the Internet Research Agency, that generates memes, blog posts, and comments that either favor the government or challenge and mock posts that resist these narratives. The agency uses all the tools that activists have had at their disposal, such as Photoshop and video-editing software, to test and disseminate messages online, and its workers are paid for their efforts. As US Department of Justice special counsel Robert Mueller charged, some of these efforts sought to influence the outcome of the 2016 US presidential election.

A tweet posted by the Russian government–linked Internet Research Agency. (Twitter)

And here, ultimately, is one risk of the environment of misinformation and disinformation: on the one hand, any sincere form of activism can be deemed fake and therefore libelous or outright criminal by the powers that be. On the other hand, governments and anyone with a social and political message can embrace the power of memes and social media rather than resist it.

Regardless, the outcome appears predictable each time: decreased trust in media, institutions, and other people more generally, combined with a decreased usefulness of the internet in helping spread an activist message. The internet’s greatest power is also its greatest weakness, and the human brain’s capacity to make sense of competing stories is easily overwhelmed.

Challenging this new landscape of misinformation and disinformation won’t be easy, and there are precious few examples to give us reason for optimism. A 2017 report by Claire Wardle and internet researcher Hossein Derakhshan for the Council of Europe argued that combating misinformation and disinformation will require understanding how and why these types of media resonate:

We need to fight rumours and conspiracy with engaging and powerful narratives that leverage the same techniques as dis-information. [Some] effective strategies for dis-information include: provoking an emotional response, repetition, a strong visual aspect and a powerful narrative. If we remember the powerful, ritualistic aspects to information seeking and consumption, the importance of integrating these elements into our solutions is obvious.

And as it turns out, effective activism might require strategies that leap forth from the internet and enter the physical world, and that command attention and narrative in ways that cut across different forms of media and make themselves impossible to ignore, just as activism has always aimed to do. And maybe, just maybe, there’s a way to talk about how the narrative power of the world’s internet memes can be brought to bear on the increasingly complex challenges that face our world.

Excerpted from Memes to Movements: How the World’s Most Viral Media Is Changing Social Protest and Power by An Xiao Mina. Copyright 2019. Excerpted with permission by Beacon Press.