The shooting at Florida’s Marjory Stoneman Douglas High School on Valentine’s Day inspired an energetic group of young activists to weigh in on the national debate on guns, safety, and personal freedoms. But as they found their voice, conspiracy theories purporting that they were “crisis actors”—frauds pretending to be students—spiraled across social media and into the mainstream.

As documented elsewhere, the idea that “false flag” operations and actors are used by liberals to stage media stories for political purposes is a long-running narrative in far-right media outlets like InfoWars (and, perhaps worth noting, something Russian propaganda networks have been caught actually doing multiple times in Ukraine). In this case, it is an easy if cynical tactic to discredit the voices of victims and undermine the moral weight behind their message.

In the days that followed the shooting, social media companies scrambled to deal with complaints about the proliferation of the crisis actors conspiracy across their platforms—even as their own algorithms helped to promote that same content. Facebook, YouTube, and Google issued new rounds of statements about addressing the problematic content, with assurances that more AI tools and human moderators would be enlisted in the effort.

But many assumptions are being made about how this content was amplified and how it got past controls within the algorithmic star chambers. Russian bots, the NRA echo chamber, and so-called alt-right media personalities have all been fingered as the perpetrators.

And, as our research group, New Media Frontier—which collects and analyzes social media intelligence using a range of custom and commercial analytical tools—recently outlined in an analysis of the #releasethememo campaign, many factors contribute to the amplification of American far-right content, including foreign and domestic bots and intentional amplification networks. Whether through fully automated bot accounts or semi-automated cyborg accounts, automation is a vital part of accelerating the distribution of content on social media.
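As a rough illustration (and not a description of our group’s actual methodology), one simple signal analysts use to flag likely automation is an account’s lifetime posting rate. The sketch below derives that rate from two fields that mirror Twitter’s user object, statuses_count and created_at; the 100-tweets-per-day cutoff is a hypothetical threshold, and serious bot classification combines many more signals.

```python
from datetime import datetime, timezone

# Illustrative threshold only; real classifiers weigh many signals.
TWEETS_PER_DAY_CUTOFF = 100

def likely_automated(statuses_count: int, created_at: datetime) -> bool:
    """Flag accounts whose average lifetime posting rate is implausibly
    high for an unassisted human.

    `statuses_count` and `created_at` mirror fields in Twitter's user
    object; `created_at` must be timezone-aware.
    """
    age_days = max((datetime.now(timezone.utc) - created_at).days, 1)
    return statuses_count / age_days > TWEETS_PER_DAY_CUTOFF
```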

But in looking at the case of the Parkland, Florida, shooting and the crisis actors narrative it spawned, another important factor allowed the story to leap into mainstream consciousness: people outraged by the conspiracy helped to promote it, in some cases far more than the story’s supporters did. And algorithms, apparently lacking the “sentiment sensitivity” needed to read the context of a piece of content and assess whether it is being shared approvingly or critically, see all that noise the same.
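To make the idea concrete, here is a toy Python sketch of what minimal sentiment sensitivity could look like; it is not a description of any platform’s actual ranking system. It uses NLTK’s off-the-shelf VADER analyzer to judge whether the text accompanying a share reads as outraged or approving before adding the share to a story’s score.

```python
from nltk.sentiment.vader import SentimentIntensityAnalyzer
# One-time setup: nltk.download("vader_lexicon")

analyzer = SentimentIntensityAnalyzer()

def weighted_engagement(share_texts: list[str]) -> float:
    """Count shares, letting clearly negative commentary subtract from a
    story's trending score instead of adding to it.

    A naive engagement counter would just return len(share_texts),
    treating outrage-shares and endorsements identically.
    """
    score = 0.0
    for text in share_texts:
        compound = analyzer.polarity_scores(text)["compound"]  # -1..1
        if compound <= -0.05:  # conventional VADER negativity cutoff
            score -= 1.0       # outraged share: don't boost the story
        else:
            score += 1.0       # neutral/approving share: boost it
    return score
```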

This unintended amplification created by outrage-sharing may have helped put the conspiracy in front of more unsuspecting people. This analysis looks at how one story of the crisis actor conspiracy—the claim that David Hogg, a senior at Marjory Stoneman Douglas High School, was a fraud because he had been coached by his father—gained amplification from both its supporters and its opponents.

The story began as expected. At 5:30 pm EST on February 19, five days after the shooting, alt-right website Gateway Pundit posted a story claiming that student David Hogg was coached on his lines as part of an FBI plot to create false activism against President Trump. On Twitter, this story was initially amplified by right-leaning accounts, some of which are automated.

Of the 660 tweets and retweets of the “crisis actors” Gateway Pundit conspiracy story during the hour after it was posted, 200 (30 percent) came from accounts that had tweeted more than 45,000 times. Human, cyborg, or bot, these accounts are acting with purpose to amplify content (more on this in a moment). And this machinery of curation, duplication, and amplification both cultivates echo chambers that keep human users engaged and shapes how social media companies’ algorithms decide what is important, trending, and promoted to other users, helping to trigger a feedback loop that wins the “algorithmic popularity contest.”
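For the curious, the filtering step behind those numbers is simple to reproduce. Below is a minimal sketch assuming the first-hour tweets have already been collected (for example, via the Twitter API) as dictionaries shaped like API tweet objects, with the author’s lifetime tweet count at tweet["user"]["statuses_count"]; the collection and time-windowing are out of scope here.

```python
# Illustrative breakdown of tweets from high-volume accounts, assuming
# Twitter-API-shaped tweet dicts have already been gathered.
HIGH_VOLUME_CUTOFF = 45_000

def high_volume_share(tweets: list[dict]) -> tuple[int, float]:
    """Return (count, percent) of tweets posted by accounts with more
    than HIGH_VOLUME_CUTOFF lifetime tweets."""
    high = sum(
        1 for t in tweets if t["user"]["statuses_count"] > HIGH_VOLUME_CUTOFF
    )
    return high, 100.0 * high / len(tweets)

# For the Gateway Pundit story's first hour, given the figures reported
# above, high_volume_share(first_hour_tweets) would return (200, ~30.3).
```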