It’s natural to assess what sociocultural lessons we’ve learned from the previous decade, now that we’ve entered a new one — and whether they’re the kinds that might help us make the 2020s a better era. No honest attempt at such an assessment can be complete without grappling with the messy human dramas and the increasing trend toward polarized, incendiary conversations that emerged in the latter half of the 2010s. And that means contending with the unlikely, unpleasant, and far-reaching watershed movement that was Gamergate.

As it was happening, many members of the media were quick to dismiss it. Sparked by a single blog post published in August 2014, Gamergate was still very much alive and well when an editor asked me, as a reporter who had covered it since the very beginning, to write a “recap” of it near the end of that year. The editor wanted a piece that framed the entire event in the past tense, even though the hashtag was still going strong, the women it targeted were still being harassed, and supporters were planning offline actions to take place at upcoming geek conventions.

Soon after I recapped it, other publications wrote about Gamergate as if it were more or less over, too. One predicted that 2015 would be “the year everyone forgot about Gamergate,” noting that it “is still around as a twitter hashtag and a forum topic, but the relevance is waning from its peak in the fall. That’s going to keep happening.”

That did not keep happening. But the media’s insistence that it would provides a key insight into why Gamergate endured, and why it ultimately coalesced into the larger alt-right movement that helped fuel the election of President Trump.

If you never really understood what Gamergate was to begin with, here’s a brief refresher: In the fall of 2014, under the premise that they were angry at “unethical” games journalists — a lie that persists today — thousands of people in the games community began to systematically harass, heckle, threaten, and dox several outspoken feminist women in their midst, few of whom were journalists. The harassment occurred under the social media hashtag “Gamergate,” which is still a hotbed of debate and anti-feminist resentment today.

Harassment and misogyny had been problems in the gaming community long before this; the deep resentment and anger toward women that powered Gamergate had percolated for years on internet forums. Robert Evans, a journalist who specializes in extremist communities and the host of the Behind the Bastards podcast, described Gamergate to me as partly organic and partly born out of decades-long campaigns by white supremacists and extremists to recruit heavily from online forums. “Part of why Gamergate happened in the first place was because you had these people online preaching to these groups of disaffected young men,” he said. But what Gamergate had that those previous movements didn’t was an organized strategy, made public, cloaking itself as a political movement with a flimsy philosophical stance, its goals and targets amplified by the power of Twitter and a hashtag.

Again and again, throughout 2014 and afterward — and, really, well before that, as women in online subcultures withstood years of targeted harassment — many failed to understand and assess what Gamergate was. The media, the tech platforms, the niche internet communities these reactionaries came from (relatively obscure places like 4chan, 8chan, and Voat), the corporations they easily manipulated, and the general public, who seemed to take it in as nebulous online noise: no one properly identified Gamergate as a major turning point for the internet. The hate campaign, we would later learn, was the moment when the longstanding strategy of containing toxic communities by writing them off as just “trolls” began to crumble. Gamergate ultimately gave way to something deeper, more violent, and more uncontrollable.

It’s tempting to wonder if we could have stopped Gamergate before it happened, in the years before it coalesced into a systematized movement. Perhaps we could have quashed these kernels of hate with better forum moderation, more serious attention to the problem of misogynistic harassment, and less reliance on the longstanding twin internet wisdoms of prioritizing free speech and starving a troll until it leaves. In truth, by the time Gamergate had begun, it was probably already unstoppable — but our inability to learn any lessons from it is what allowed it to scale all the way to the White House.

Five years later, here’s a look at some of the lessons we still need to learn from Gamergate in order to keep its victims safe — and in order to keep the next decade from producing a movement that’s even worse.

1) Police have to learn how to keep the rest of us safe from internet mobs

At the time Gamergate began, the question of how and when law enforcement should step in to deal with online harassment was a burning one, as multiple women reported being threatened and doxxed out of their homes. (Among them was Zoë Quinn, the game developer who became Gamergate’s target zero after an ex-boyfriend wrote a blog post accusing her of entering an unethical romantic relationship with the reporter Nathan Grayson of the gaming news site Kotaku. To that blog post’s target audience of disgruntled gamers, the alleged infidelity rendered Quinn the poster child of hypocritical feminism and Kotaku the emblem of unethical journalism.)

Cyberstalking and “revenge porn” were also major issues that had been around for years but gained new prominence in 2014, as “Celebgate” saw celebrities like Jennifer Lawrence and Kim Kardashian joining the countless women who’ve had private photos circulated online without their consent.

Today, however, the justice system continues to be slow to understand the link between online harassment and real-life violence. Although the Violence Against Women Act made cyberstalking illegal in 2006, and although one in four young women report being stalked or sexually harassed online, women frequently have difficulty getting law enforcement to take online harassment seriously — especially the veiled kind that’s intimidating but not overtly violent or hateful. There are more on-the-books laws about online harassment now and more prosecutions, but police are often untrained and undereducated regarding what type of behavior constitutes harassment, how to legally counter such behavior, and what should be investigated. Frequently, people who report harassment are left unsatisfied with the response.

Recently, while I was researching a violent crime with an online component, one police officer told me that most officers in his department had never heard of Twitter, let alone other social media platforms and more niche websites. Evans told Vox that while he believes people in positions of power in government and law enforcement take internet threats much more seriously, the change has yet to fully trickle down.

“When I started receiving death threats earlier this year, after I was on a documentary about 8chan’s [politics] board,” he told Vox in an interview in 2019, “I went to the West Los Angeles police department with pictures of this bounty on my head and Photoshopped images of me with a bullet in my head. I had to try to explain what was going on to them, and they had never heard of the website, didn’t seem to really understand that an online threat was a serious thing, [...] and I spent most of my time with the police trying to explain Bitcoin to a bunch of 50-year-old Los Angeles police officers.”

I would have expected as much in 2014, when Gamergate was first in the news, because many social media platforms were still niche enough that you might not expect law enforcement to be familiar with them or their communities. Hearing the same thing five years later was eye-opening, and a stark reminder that we still have a long way to go to protect the general population, and women in particular, from violence online and off.

“I think they have very slowly, far too slowly, learned certain things that are valuable,” Evans said. “They do now take online threats of school and [mass] shootings much more seriously. So ... I am seeing things get better. But not nearly as quickly as it ought.”

In order to increase public safety this decade, it is imperative that police — and everyone else — become more familiar with the kinds of communities that engender toxic, militant systems of harassment, and the online and offline spaces where these communities exist. Increasingly, that means understanding social media’s dark corners, and the types of extremism they can foster.

2) Businesses have to learn when online outrage is manufactured

One of the strangest side effects of Gamergate was its effectiveness at convincing corporations to pull advertising from media outlets it targeted as part of its “ethics in journalism” campaign. Among the corporations that dropped advertising from various publications in response to Gamergate petitions were Adobe, Mercedes-Benz, and Intel, the last of which later said it had no idea what online politics it had waded into. Mercedes also later acknowledged its mistake and restored its advertising.

But despite wide discussion within the gaming community and the media about Gamergate’s manipulative tactics when it was drawing peak media attention, it seems that many corporations and other businesses failed to grasp a vital takeaway from these incidents: that it’s crucial to understand how, when, and why an online mob is expressing outrage before you decide how to respond to it. Gamergate should have taught businesses that online mobs can and do look for excuses to be outraged, as a pretext to harass and abuse their targets.

There’s a difference between organic outrage that arises because an employee actually does something outrageous, and invented outrage that serves as an excuse to harass someone whom a group has already decided to target for unrelated reasons — for instance, because an employee is a feminist. A responsible business would figure out which type of outrage it is facing before punishing a contractor or employee who was just doing their job.

Instead, companies have continued to fall for the manufactured outrage playbook Gamergate and its online heirs created, often blaming employees who are dealing with harassment rather than blaming the people doing the harassing. In 2016, Gamergate brutally harassed a Nintendo employee, targeting her on social media, unearthing her old writing in order to accuse her of pedophilia, and vilifying her to her employer. Instead of protecting the employee from the onslaught of misogynistic abuse, Nintendo responded by firing her. In 2018, game developer ArenaNet fired two of its employees after their responses to what they saw as Twitter harassment sparked major backlash from gamers, prompting the company president to blame the employees for reacting to the community with “hostility.” In both cases, the employers framed the employees’ outspoken response to prolonged and intense harassment as a liability.

Also in 2018, Marvel fired popular writer Chuck Wendig over reactionary outrage that was literally manufactured — most of it was generated by bots rather than people. And in reaction to what was perhaps the most effective manufactured outrage of all, Disney fired James Gunn from Guardians of the Galaxy 3 after a harassment campaign straight out of the Gamergate playbook, stripping past tweets of their context to generate contrived backlash. While Disney — nearly a year later — ultimately acknowledged it had made a mistake and rehired Gunn, it’s a familiar note in an exhausting, repetitive, and, crucially, easily avoidable cycle that companies have yet to learn how to sidestep.

This cycle is frustrating. And it stems from two failures: corporations’ failure to recognize when an internet mob’s outrage is manufactured, and the larger cultural failure to understand and deal with the way those mobs also spread violence, online and off.

3) Social media platforms didn’t learn how to shut down disingenuous conversations over ethics and free speech before they started to tear their cultures apart

The current debate around whether to privilege freedom of speech over the damage done by extremist rhetoric and other types of harmful speech arguably began with Gamergate.

“There’s the perception of not having bias, particularly in American media,” Evans told me. “This idea that you’re a veteran journalist if you don’t take a side, even if it’s an issue that really somebody ought to be taking a side on, like Nazis.” Dedication to free speech over the appearance of bias is especially important within tech culture, where a commitment to protecting free speech is both a banner and an excuse for large corporations to justify their approach to content moderation — or lack thereof.

During Gamergate, Evans said, the movement’s members found out that with “a little bit of plausible deniability,” they could trick the media and social media platforms into treating their harassment campaigns as legitimate debate. When Gamergate found its “It’s about ethics in journalism” mantra, it had a cloak under which to argue that all of its violent speech wasn’t about misogynistic abuse of women at all, but rather about a loftier philosophical purpose.

“There were mouthpieces of the movement, like [Milo] Yiannopoulos, who were happy to provide enough of a justification that suddenly [they could claim] it was not just the story of a harassing campaign,” he said. A 2015 Newsweek analysis of the Gamergate hashtag showed that its real purpose was abusive harassment: the hashtag was used to reply to targeted women in gaming far more often than to the journalists whose ethics were ostensibly up for discussion. But even though the movement’s real motives were widely known, the community structures of platforms like Twitter, Reddit, and YouTube, to say nothing of the anonymous forum 4chan, all fostered Gamergate.

Reddit’s free-speech-friendly moderation stance resulted in the platform tacitly supporting pro-Gamergate subforums like r/KotakuInAction, which became a major contributor to Reddit’s growing alt-right community. Twitter rolled out a litany of moderation tools in the wake of Gamergate, intended to let harassment targets perpetually block, mute, and police their own harassers — without ever making the site itself unwelcoming to those harassers. And YouTube and Facebook, with their algorithmic amplification of hateful and extreme content, made no effort to recognize the violence and misogyny behind pro-Gamergate content, or to police it accordingly.

It would be easy to chalk up the movement’s grievances over ethical journalism to confusion or self-deception. But the framing was an intentional strategy, one that gained traction on social media among those unaware of the harassment component. And Gamergaters learned they could scale this approach. “Gamergate showed them that they could make a difference in the real world, but it [also] showed them something else important,” Evans said. “They saw that not only could they get away with a harassment campaign like this ... they [saw that they] could do that shit all day long and nobody was going to do anything about it. They learned that it worked, and they kept doing it.”

So the Gamergate movement merged into the larger alt-right sphere of online extremist culture that emerged in the middle of the decade, spreading hate speech throughout social media and setting the stage for the alt-right to influence the 2016 election.

Google, YouTube, and Twitter did eventually begin taking steps to moderate the rampage of extremist content Gamergate ushered in; Facebook continues to lean on its free speech policies. Reddit finally quarantined (though did not ban) its biggest pro-Trump forum — which has heavy overlap with its biggest Gamergate forum — in 2019, but only after its members repeatedly violated rules and began threatening politicians with real-world violence. A recent Wired cover story profiled a Google culture in meltdown over ideological debates and the company’s indecision over how to handle them. Mired in its own culture war, YouTube has become a haven for reactionaries and the alt-right, and its 2019 ban on white supremacist and conspiracy content may do little to quell the growth of extremism on the platform. And Twitter and Facebook, each with its own set of problems, remain caught up in the national conversation over free speech.

All of these platforms are wrestling with problems that seem to have grown beyond their control; it’s arguable that if they had reacted more swiftly to slow the growth of the internet’s most toxic and misogynistic communities back when those communities, particularly Gamergate, were still nascent, they could have prevented headaches in the long run — and set an early standard for how to deal with ever-broadening issues of extremist content online.

As things stand, many of these platforms are still struggling with the most basic work of keeping toxic elements out of their communities, the foundational practice of good internet forum moderation. It’s past time for leaders in the tech industry to learn how to be good stewards of the communities they host.

4) Violence against women is a predictor of other kinds of violence. We need to acknowledge it.

2014 should have been the year the cultural conversation began to acknowledge how serious aggression toward women really is. It wasn’t.

One of the most frustrating things about watching Gamergate unfold is that the seeds of it had been planted long before. Targeted online harassment of women had been occurring for years across numerous communities, from the men who spent years hounding one woman for complaining about getting hit on at a professional conference to the vilification of actors for playing unlikable women.

In 2012, male backlash against feminist media critic Anita Sarkeesian over her attempt to expand her commentary on films into commentary on games was so intense it made international headlines — and her harassment involved doxxing, death threats, rape threats, and bomb threats, some so serious that she was driven out of her home for weeks. One planned Sarkeesian lecture at a college campus was canceled over a mass shooting threat. And there were other signs prior to Gamergate that online harassment of women and minorities could escalate to real-life violence — for instance, the 2014 Santa Barbara mass shooter’s misogynistic online manifesto and history of participation in deeply misogynistic online spaces.

All these events garnered widespread media coverage and attention — but still, in 2014, when this misogyny escalated into a systematic, organized, scalable, and sustained attack on women through the establishment of Gamergate as a movement, the media and many members of the public failed to recognize it as a watershed event. During Gamergate, as Evans put it, gamers attacked women like Sarkeesian and Zoë Quinn with horrific threats that escalated offline: They “threat[ened] to murder people, mail[ed] them letters written in blood, sent dead animals to their door.” But none of this harassment seemed to permeate mainstream discussions of Gamergate, which tended to center more on the personalities involved — from profiles describing Gamergate target Quinn as “troubled” to those describing its hero Milo Yiannopoulos as a “descendant of William S. Burroughs.”

And in the same way that none of those years of escalating online assaults against women prepared us for Gamergate, somehow, the formation of Gamergate itself didn’t prepare society for the cultural rise of the alt-right. The journalists who did anticipate that Gamergate could and would morph into something worse were, by 2015, drowned out by the general cultural idea that Gamergate had somehow “failed” — even though it was a movement inherently meant to scale and grow. Somehow, the idea that all of that sexism and anti-feminist anger could be recruited, harnessed, and channeled into a broader white supremacist movement failed to generate any real alarm, even well into 2016, when all the pieces were firmly in place.

In other words, even though all the signs were there in 2014 that a systematized online harassment campaign could lead to an escalation in real-world violence, most people failed to see what was happening. Gamergate ultimately made us all much more aware of the potential real-world impact of online extremism. Yet, years after Gamergate, despite increasing evidence suggesting a connection between online violence against women and real-world violence — including mass shootings — many corporations and social media platforms still struggle to identify and eradicate extreme forms of violence against women from online spaces.

For instance: In early 2019, Valve, the parent company of the online game platform Steam, allowed a game called Rape Day, in which the object of the “game” was to rape women, to stay up in its store for days before finally removing it. Despite all of its algorithmic tweaking, Twitter is still abysmal at identifying and taking action against rape and death threats on its website. The 2019 murder of 17-year-old Bianca Devins, a well-known Instagram user, carried a disturbing online component that involved her killer posting graphic online photos of her death. The photos rapidly went viral, including on Instagram and Twitter, which were both largely ineffective at curbing their spread.

This failure to act has serious consequences, because many of the perpetrators of real-world violence are radicalized online first. In 2018, the International Center for Research on Women identified online gender-based violence as “an emerging public health and human rights concern” and linked it to a growing number of mass shootings, noting, “Failing to detect and deter technology-facilitated GBV is a missed opportunity to prevent deadly consequences offline.” Other research has found that more than half of the US’s mass shootings involve the targeting of an intimate partner or ex-partner, and many of the most recent mass attacks involve a perpetrator who displayed or threatened violent behavior toward one woman or multiple women, either online or off. In the past year alone, multiple mass shootings have had an element of misogynistic or domestic violence targeted at women.

It remains difficult for many to accept the throughline from online abuse to real-world violence against women, much less the fact that violence against women, online and off, is a predictor of other kinds of real-world violence. The dots are there — we just have to connect them.

5) Politicians and the media must take online “ironic” racism and misogyny seriously

Gamergate masked its misogyny in a coating of ironic hyperbole and shrill yelling that had most journalists in 2014 writing off the whole incident as “satirical” and immature “trolling,” and very few correctly predicting that Gamergate’s trolling was the future of politics: the political wave that would essentially morph into the broader alt-right movement.

But the movement was serious. “It served as a rallying point for a lot of groups that wouldn’t necessarily have gotten along, like more traditional conservatives and outright neo-Nazis,” Evans said, “and gave them all a banner to rally under where the Nazis could pretend to not be Nazis. And the conservatives could give themselves plausible deniability and pretend they weren’t working with Nazis. It acted as a blanket for all of this stuff.”

Gamergate was all about disguising a sincere wish for violence and upheaval by dressing it up in hyperbole and irony in order to confuse outsiders and make it all seem less serious. As Evans noted to me, Gamergate was fueled in part by online extremists, who initially bonded with young men in gaming communities over what started as “ironic humor and jokes about the Holocaust, jokes about racial differences and whatnot. And over time, all those things became less joking.” This tactic was a deliberate strategy that formed the core of the alt-right playbook, and years after Gamergate, media outlets continued to fall for it.

Take, for example, the highly disturbing instigator Milo Yiannopoulos, who gained notoriety as a Gamergate commentator before he went to work at the alt-right blog site Breitbart in 2014. The media continued referring to Yiannopoulos as a “troll,” despite ample evidence suggesting his words and actions were associated with real bigoted or extremist beliefs, espoused by both him and his followers. Yiannopoulos was a prime example of a rabble-rouser who manipulated Gamergate toward his own ends. He benefited from the mayhem and chaos his rabble-rousing caused, whether he was making campus tour stops that inspired increases in hate speech as well as acts of serious violence, or just egging on the racist harassment of a public figure.

Yiannopoulos constantly inflamed his followers and their anger. The danger posed to marginalized members of the communities he visited was immediate and real. Yet even into 2018 he would explicitly encourage violence and then claim he was “just trolling.” Just as Evans noted, the merest suggestion that none of his extremist rhetoric was sincere allowed him to continue spreading it.

Understanding this concept is crucial to understanding why Gamergate was able to morph into the alt-right. Gamergate simultaneously masqueraded as legitimate concern about ethics that demanded audiences take it seriously, and as total trolling that demanded audiences dismiss it entirely. Both these claims served to obfuscate its real aim — misogyny, and, increasingly, racist white supremacy. By the time Yiannopoulos joined Breitbart, and Breitbart’s Steve Bannon joined the Trump campaign, the links between Gamergate and the national political machine should have been clear. “The de facto merger between Breitbart and the Trump campaign represents a landmark achievement for the ‘alt-right,’” Hillary Clinton said in a 2016 campaign speech. “A fringe element has effectively taken over the Republican Party.”

But three years after that, and five years after Gamergate, it seems that very few people have really learned how to tell when a troll is just trolling or when it’s about to commit real-world violence. “It’s hard to spot the terrorists among the trolls,” the Wall Street Journal acknowledged in 2019, in response to the Christchurch mass shooting. The Christchurch shooter had posted a manifesto online; full of hyperbolic alt-right internet memes, it was intended to both obfuscate and amplify the genuine white nationalist rhetoric at its center.

The public’s failure to understand and accept that the alt-right’s misogyny, racism, and violent rhetoric is serious goes hand in hand with its failure to understand and accept that such rhetoric is identical to that of President Trump. We now see ideologies similar to the Gamergaters’ coming from someone as powerful as Trump. He retweets and amplifies alt-right memes on his Twitter; his son openly affiliates with the alt-right; Trump defended and continues to present the 2017 “Unite the Right” rally in Charlottesville, Virginia, as though it wasn’t intentionally planned and organized as a white supremacist rally. (It was.)

As described by Vox’s Ezra Klein, Trump’s willingness to engage in incendiary racist rhetoric is similar to the tactics that have led many journalists to dismiss his followers as trolls: “He chooses his enemies based on who he thinks will rile up his base. He uses outrageous, offensive insults to get the media to take notice. And then he feeds off the energy unleashed by the confrontation.” In other words, he and his followers — many of whom, again, are members of the extreme online right-wing that got its momentum from Gamergate — are using the strategy Gamergate codified: deploying offensive behavior behind a guise of mock outrage, irony, trolling, and outright misrepresentation, in order to mask the sincere extremism behind the message.

Just as Yiannopoulos did before him, Trump speaks to his supporter base through wink-wink-nod-nod moments that lead them to respond in alarming ways, including with violence. But many members of the media, politicians, and members of the public still struggle to accept that Trump’s rhetoric is having violent consequences, despite ample evidence that it is.

That divide between reality and perception is part of the larger cultural epistemic crisis that has loomed over the US for the past five years — and arguably began with Gamergate. The movement’s insistence that it was about one thing (ethics in journalism) when it was about something else (harassing women) provided a case study for how extremists would proceed to drive ideological fissures through the foundations of democracy: by building a toxic campaign of hate beneath a veneer of denial.

Five years later, it seems, the rest of us are still struggling to learn from the consequences.