“Bots” — automated social media accounts that pose as real people — have a huge presence on platforms such as Twitter. They number in the millions; individual networks can comprise half a million linked accounts.

These bots can seriously distort debate, especially when they work together. They can be used to make a phrase or hashtag trend, as @DFRLab has illustrated here; they can be used to amplify or attack a message or article; they can be used to harass other users.

At the same time, many bots and botnets are relatively easy to spot by eyeball, without access to specialized software or commercial analytical tools. This article sets out a dozen of the clues we have found most useful in exposing fake accounts.

First principles

A Twitter bot is simply an account run by a piece of software, analogous to an airplane being flown on autopilot. Just as an autopilot can be switched on and off, an account can behave like a bot at some times and like a human user at others. The clues below should therefore be viewed as indicators of botlike behavior at a given time, rather than a black-or-white definition of whether an account “is” a bot.

A cryptic post from the @AutoShakespeare poetry bot.

Not all bots are malicious or political. Automated accounts can post, for example, poetry, photography or news, without creating any distorting effect.

What the bot sees… A post from @cloudvisionbot.

Our focus is therefore on bots which masquerade as humans and amplify political messaging.

In all cases, it is important to note that no single factor can be relied upon to identify botlike behavior; it is the combination of factors that matters. In our experience, the three most significant can be summed up as the “Three A’s”: activity, anonymity, and amplification.

1. Activity

The most obvious indicator that an account is automated is its activity. This can readily be calculated by looking at its profile page and dividing the number of posts by the number of days it has been active. To find the exact date of creation, hover the mouse over the “Joined …” entry.

Screenshot of @Sunneversets100, taken on August 28, and showing the exact creation date. Account archived on January 13, 2017, and again on August 28, 2017, showing the change in posts over that period.

The benchmark for suspicious activity varies. The Oxford Internet Institute’s Computational Propaganda team views an average of more than 50 posts a day as suspicious; this is a widely recognized and applied benchmark, but may be on the low side.

@DFRLab views 72 tweets per day (one every ten minutes for twelve hours at a stretch) as suspicious, and over 144 tweets per day as highly suspicious.

For example, the account @sunneversets100, an amplifier of pro-Kremlin messaging, was created on November 14, 2016. On August 28, 2017, it was 288 days old. In that period, it posted 203,197 tweets (again, the exact figure can be found by hovering the mouse over the “Tweets” entry).

This translates to an average of 705 posts per day, or almost one per minute for twelve hours at a stretch, every day for nine months. This is not a human pattern of behavior.
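The arithmetic behind this figure can be sketched in a few lines. The helper below is illustrative only (the function name and inclusive day-counting convention are our own, chosen to match the 288-day figure cited above):

```python
from datetime import date

def posts_per_day(total_posts, created, observed):
    """Average daily posting rate between account creation and observation.

    Counts both endpoint days, matching the article's figure of 288
    days for an account created November 14, 2016 and observed
    August 28, 2017.
    """
    days_active = (observed - created).days + 1
    return total_posts / days_active

# Figures for @Sunneversets100, taken from the example above
rate = posts_per_day(203_197, date(2016, 11, 14), date(2017, 8, 28))
print(int(rate))  # 705 — far above the 144/day "highly suspicious" threshold
```

The same calculation can be applied to any account: read the tweet count and creation date from the profile page, and compare the resulting rate against the 72- and 144-tweets-per-day benchmarks.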

2. Anonymity

A second key indicator is the degree of anonymity an account shows. In general, the less personal information it gives, the more likely it is to be a bot. @Sunneversets100, for example, has an image of the cathedral in Florence as its avatar picture, an incomplete population graph as its background, and an anonymous handle and screen name. The only unique feature is a link to a U.S.-based political action committee; this is nowhere near enough to provide an identification.

Another example is the account @BlackManTrump, a similarly hyperactive account, which posted 89,944 tweets between August 28, 2016 and December 19, 2016 (see archive here), an average of 789 posts per day.