On Sunday, President Trump again infuriated progressives by retweeting a GIF from a user named Fuctupmind showing him hitting Hillary Clinton with a golf ball and knocking her over. He also retweeted a series of memes from an account called Team_Trump45, which had previously posted that Obama should get his “goddamn Muslim feet off President Trump’s desk”.

The Team_Trump45 account posts prolifically, but in a pattern commonly associated with bots or partially automated accounts. Of Team_Trump45’s 13,900 tweets from November 2015 to September 2017, 9,900 were links, 2,200 were photos and just 725 were text-based. The account’s posting history also shows that from November 2015 to January 2017 it never tweeted more than 550 times a month. Then, in April 2017, it posted 7,147 tweets.
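That kind of spike is easy to flag programmatically. The sketch below illustrates the idea with hypothetical monthly counts built from the figures above; the 550-tweet baseline comes from the article, while the threshold multiplier is an assumption for illustration, not a formal bot-detection standard.

```python
# Illustrative monthly tweet counts for an account like Team_Trump45.
# Only the 550-a-month ceiling and the 7,147 April 2017 figure come from
# the reporting; the other months are hypothetical filler.
monthly_counts = {
    "2016-11": 420,
    "2016-12": 510,
    "2017-01": 545,
    "2017-04": 7147,  # the spike reported in April 2017
}

def flag_spikes(counts, baseline_ceiling=550, multiplier=3):
    """Return months whose volume far exceeds the account's usual ceiling.

    The multiplier is an arbitrary illustrative threshold.
    """
    return [month for month, n in counts.items()
            if n > baseline_ceiling * multiplier]

print(flag_spikes(monthly_counts))  # ['2017-04']
```

Real bot-detection systems look at many more signals (posting cadence by hour, link ratios, account age), but a volume anomaly this large is the simplest red flag.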

This graph shows just how out of sync @Team_Trump45's posting activity in April 2017 was compared with its regular Twitter activity (Credit: CrowdTangle)

To get a better idea of just how unusual Team_Trump45’s Twitter activity was, take a look at the graph below, which compares its post count (in black) with that of the verified account of Donald Trump himself (in red):

All this suggests that the account that Trump retweeted early on Sunday morning was a bot, either fully or partially automated.


It wouldn’t be the first time that Trump has been fooled. On August 5, he retweeted the account @Protrump45, which was later found to be part of a marketing campaign to sell Trump merchandise. On August 21, he retweeted an account that had been set up three days earlier, and had gathered 6,000 followers by exclusively posting pro-Trump tweets. It’s not just Trump’s retweets that are problematic either: an analysis by TwitterAudit found that nearly half of all Trump’s followers were fake.

By retweeting these automated accounts, Trump has shown that Twitter bots can pass the “Turing Test”, the famous artificial intelligence test of whether a computer can pass for a human being in written conversation. More importantly, it shows the growing influence of bots on social media.

According to Twitter’s own estimates, 8.5 percent of its accounts are “highly automated.” Independent researchers place the number much higher, at nearly 15 percent (or nearly 50 million accounts). Facebook, meanwhile, won’t share its private data with outside researchers, but its regulatory filings say that 8.7 percent of users are “fake, invalid accounts” – which amounts to roughly 84 million profiles.
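A quick back-of-the-envelope check shows how the researchers’ percentage maps onto the account count. The roughly 330 million monthly active Twitter users assumed below is an outside estimate for 2017, not a figure from this article.

```python
# Rough arithmetic behind the independent researchers' estimate.
# ~330 million monthly active users is an outside 2017 estimate,
# not a number from this article.
twitter_users = 330_000_000
bot_share = 0.15  # the independent researchers' figure
bot_accounts = bot_share * twitter_users

print(f"{bot_accounts / 1e6:.1f} million")  # 49.5 million, i.e. "nearly 50 million"
```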

Together, these automated or partially automated accounts help spread fake news and misinformation across the internet with ease.

“If fake news spreads like a contagion, bots accelerate the proliferation,” a report from the New Democrat Network said. “Just as…airline travel can speed the spread of a communicable disease around the world.”


“Deceptive bots create the impression that there is a grassroots, positive, sustained human support for a certain candidate, cause, policy or idea,” the NDN continued. “In doing so, they pose a real danger to the political and social fabric.”

There are three broad types of bot: fully automated accounts, partially automated accounts, and trolls. Fully automated accounts are programmed to tweet or post on social media sites, often hiding behind creative bios and stolen pictures. Partially automated accounts, or “cyborgs”, mix in human-created posts to disguise the automated nature of most of their activity. Trolls, meanwhile, are fully human, and use social media accounts to disrupt online conversations.

Bots have always been part of the internet, often driving people towards fraudulent websites. But the hyper-partisan nature of modern politics means that both liberals and conservatives are now far more susceptible to bots that steer them towards fake news websites.

“This indicates a national security issue, if you can engineer software to exploit people,” said Professor David Carroll of the Parsons School of Design. “It’s a troubling development especially if you look at it from a global perspective, bots have been tailored to natural conflicts of different countries.”

Facebook has come under pressure in recent months because of its unwillingness to engage with the problem of bots and fake news. The company recently announced that it would tighten rules about who can profit from advertising as a way of combating fake news and clickbait. However, experts remain skeptical about how willing social media giants are to address the automated elephant in the room.

“The fundamental problem for [Facebook and Twitter] is that their IPO is built around user growth as valuation,” Carroll said. “Investors want to see growth and consumers want to clean out the bots.”