Who’s following whom? (Image: RCO productions/Alamy)

THREE anonymous teams have let loose software that pretends to be human, and used it to manipulate a group of Twitter users.

Over a two-week period, the three “socialbots” were able to integrate themselves into the group, and gained close to 250 followers between them. They received more than 240 responses to the tweets they sent.

This sinister-sounding effort was in fact part of Socialbots 2011, a competition designed to test whether bots can be used to alter the structure of a social network.
Each team had a Twitter account controlled by a socialbot. Like regular human users, the bot could follow other Twitter users and send messages. Bots were rewarded for the number of followers they amassed and the number of responses their tweets generated.

The socialbots looked at tweets sent by members of a network of Twitter users who shared a particular interest, and then generated a suitable response. In one exchange a bot asks a human user which character they would like to bring back to life from their favourite book. When the human replies “Jesus” it responds: “Honestly? no fracking way. ahahahhaa.”

Interactions like this were realistic enough to attract attention from members of the targeted community, who started to follow the bots and respond to their messages. The best-performing bot was able to gain more than 100 followers and generated almost 200 responses.

When the experiment ended last month, a before-and-after comparison of connections within the target community showed that the bots were “able to heavily shape and distort the structure of the network”, according to its organiser, Tim Hwang, founder of the startup company Robot, Robot and Hwang, based in San Francisco. Some members of the community who had not previously been directly connected were now linked, for example. Hwang has not revealed the identities of the entrants, or of the members of the 500-person Twitter network that the bots infiltrated.

The success suggests that socialbots could manipulate social networks on a larger scale, for good or ill. “We could use these bots in the future to encourage social participation or support for humanitarian causes,” Hwang claims. He acknowledges there is a flip side, however: bots could just as easily be used to inhibit activism.

The military may already be onto the idea. Officials at US Central Command (Centcom), which oversees military activities in the Middle East and central Asia, issued a request last June for an “online persona management service”. The details of the request suggest that the military want to create and control 50 fictitious online identities that appear to be real people from Afghanistan and Iraq.

It is not clear, however, if any of the management of the fake identities would be delegated to software. A Centcom spokesperson told New Scientist that the contract supports “classified blogging activities on foreign language websites to enable Centcom to counter violent extremist and enemy propaganda outside the US”.

Hwang has ambitious plans for the next stage of the socialbot project: “We’re going to survey and identify two sites of 5000-person unconnected Twitter communities, and over a six-to-12-month period use waves of bots to thread and rivet those clusters together into a directly connected social bridge between those two formerly independent groups,” he wrote in a blog post on 3 March. “The bot-driven social ‘scaffolding’ will then be dropped away, completing the bridge, with swarms of bots being launched to maintain the superstructure as needed,” he added.

When this article was first posted, we gave an incorrect affiliation for Tim Hwang.