It didn’t take long for lies and misinformation to spread in the wake of Friday’s Texas school shooting that left 10 people dead. Within minutes, fake Facebook accounts impersonating the suspected shooter popped up, one featuring a doctored image of him wearing a “Hillary 2016” hat.

Some of the fakes were quickly flagged by users and deleted by the social network. But according to Chris Sampson, a disinformation analyst for a counterterrorism think tank, new fakes were spawned fast and filled out with false information. Some included images trying to link the 17-year-old suspect, Dimitrios Pagourtzis, to anti-fascist groups, while others used “Trump/Pence 2020” as his banner image.

The onslaught of fake and false information has become a regular feature in the aftermath of mass shootings and terrorist attacks in the U.S. and elsewhere. The perpetrators are typically trying to sow discord, score political points or simply make readers question the very concept of truth.

Sampson was watching the clock to see how long it would take for a fake account to be created after law enforcement officials released the suspect’s name: less than 20 minutes. After a second fake account was taken down, another popped up in only four minutes.

“It seemed this time like they were more ready for this,” he said. “Like someone just couldn’t wait to do it.”

Facebook officials told the Washington Post that the suspect’s real account was removed and they were working to shut down the impersonating accounts. The tech giant, which has come under fire for its response to disinformation and questions about users’ data privacy, said this week that it disabled more than 500 million fake accounts in the first three months of 2018.

According to Christopher Bouzy, whose site Bot Sentinel tracks more than 12,000 automated Twitter accounts often used to spread disinformation, four of the top 10 phrases tweeted by bot or troll accounts in the 24 hours after the attack were related to the Santa Fe shooting, which he called “significant activity,” the Post reports.

Hoaxes, conspiracy theories and fake news reports have spread like wildfire in our digital age, often blossoming on message boards like 4chan or platforms like Reddit before being picked up by far-right news sites.

Misinformation can also reach the mainstream, as happened in the wake of the shooting at Stoneman Douglas High School, when a video labeling a shooting survivor a “crisis actor” zoomed to the top of YouTube’s “Trending” list and eventually resulted in the student being forced to respond and YouTube apologizing.

Facebook has 10,000 human moderators monitoring the site, plans to hire many more in the coming year and uses artificial intelligence to remove certain types of banned or fake content. To battle disinformation, Mark Zuckerberg’s firm recently announced a partnership with the Atlantic Council, a think tank that has received money from a wide range of foreign corporations and governments, including Saudi Arabia and Turkey.

YouTube appears to have avoided some of the missteps it made after the Parkland rampage, but as of Sunday morning, seven videos claiming without evidence that the shooting was a “false flag” operation had been posted.