A study by UK academics examining how fake social media accounts were used to spread socially divisive messages in the wake of a spate of domestic terrorist attacks this year warns that the problem of hostile interference in public debate is greater than previously thought.

The researchers, who are from Cardiff University’s Crime and Security Research Institute, go on to assert that the weaponizing of social media to exacerbate societal division requires “a more sophisticated ‘post-event prevent’ stream to counter-terrorism policy”.

“Terrorist attacks are designed as forms of communicative violence that send a message to ‘terrorise, polarise and mobilise’ different segments of the public audience. These kinds of public impacts are increasingly shaped by social media communications, reflecting the speed and scale with which such platforms can make information ‘travel’,” they write.

“Importantly, what happens in the aftermath of such events has been relatively neglected by research and policy-development.”

The researchers say they collected a dataset of ~30 million datapoints from various social media platforms. But in their report they zero in on Twitter, flagging the systematic use of Russian-linked sock-puppet accounts that amplified the public impacts of four terrorist attacks that took place in the UK this year — by spreading ‘framing and blaming’ messaging around the attacks at Westminster Bridge, Manchester Arena, London Bridge and Finsbury Park.

They highlight eight accounts — out of at least 47 they say they identified as having been used to influence and interfere with public debate following the attacks — that were “especially active”, posting at least 427 tweets across the four attacks that were retweeted more than 153,000 times. They directly name only three of them: @TEN_GOP (a right-wing, anti-Islam account); @Crystal1Johnson (a pro-civil-rights account); and @SouthLoneStar (an anti-immigration account) — all of which have previously been shuttered by Twitter. (TechCrunch understands the full list of accounts the researchers identified as Russia-linked has not currently been shared with Twitter.)

Their analysis found that the controllers of the sock puppets succeeded in making information ‘travel’ by building false accounts around personal identities, clear ideological standpoints and highly opinionated views. They targeted their messaging at sympathetic ‘thought communities’ aligned with the views they were espousing, and also at celebrities and political figures with large follower bases, in order to “‘boost’ their ‘signal’” — “The purpose being to try and stir and amplify the emotions of these groups and those who follow them, who are already ideologically ‘primed’ for such messages to resonate.”

The researchers say they derived the identities of the 47 Russian accounts from several open source datasets — including releases from the US Congress investigations into the spread of disinformation around the 2016 US presidential election, and the Russian magazine РБК — although there’s no detailed explanation of their research methodology in their four-page policy brief. They claim to have also identified around 20 additional accounts which they say possess “similar ‘signature profiles’” to the known sock puppets — but which have not been publicly linked to the Russian troll farm, the Internet Research Agency, or similar Russian-linked units.

While they say a number of the accounts they linked to Russia were established “relatively recently”, others had been in existence for longer — the first appears to have been set up in 2011, with another cluster created in late 2014/early 2015.

The “quality of mimicry” deployed by those behind the false accounts makes them “sometimes very convincing and hard to differentiate from the ‘real’ thing”, they go on to assert, further noting: “This is an important aspect of the information dynamics overall, inasmuch as it is not just the spoof accounts pumping out divisive and ideologically freighted communications, they are also engaged in seeking to nudge the impacts and amplify the effects of more genuine messengers.”

‘Genuine messengers’ such as Nigel Farage — one of the UK politicians directly cited in the report as having had messages addressed to him by the fake accounts in the hope he would retweet and thereby amplify the divisive messaging. (Farage was leader of UKIP, one of the political parties that campaigned for Brexit and against immigration.)
Far-right groups have also used the same technique to spread their own anti-immigration messaging via the medium of President Trump’s tweets — in one recent instance earning the president a rebuke from the UK’s Prime Minister, Theresa May. Last month May also publicly accused Russia of using social media to “weaponize information” and spread socially divisive fake news, underscoring how the issue has shot to the top of the political agenda this year.

“The involvement of overseas agents in shaping the public impacts of terrorist attacks is more complex and troubling than the journalistic coverage of this story has implied,” the researchers write in their assessment of the topic.

They go on to claim there is evidence of “interventions” involving a greater volume of fake accounts than has been documented thus far, spanning four of the UK terror attacks that took place earlier this year; that these measures were targeted to influence opinions and actions simultaneously across multiple positions on the ideological spectrum; and that the activities were carried out not just by Russian units but also by European and North American right-wing groups.

They note, for example, having found “multiple examples” of spoof accounts trying to “propagate and project very different interpretations of the same events” that were “consistent with their particular assumed identities” — citing how a photo of a Muslim woman walking past the scene of the Westminster Bridge attack was appropriated by the fake accounts and used to push opposing views from either side of the political spectrum:

The use of these accounts as ‘sock puppets’ was perhaps one of the most intriguing aspects of the techniques of influence on display. This involved two of the spoof accounts commenting on the same elements of the terrorist attacks, during roughly the same points in time, adopting opposing standpoints. For example, there was an infamous image of a Muslim woman on Westminster Bridge walking past a victim being treated, apparently ignoring them. This became an internet meme propagated by multiple far-right groups and individuals, with about 7,000 variations of it according to our dataset. In response to which the far right aligned @Ten_GOP tweeted: “She is being judged for her own actions & lack of sympathy. Would you just walk by? Or offer help?” Whereas, @Crystal1Johnson’s narrative was: “so this is how a world with glasses of hate look like – poor woman, being judged only by her clothes.”

The study authors do caveat that, as independent researchers, it is difficult for them to guarantee ‘beyond reasonable doubt’ that the accounts they identified were Russian-linked fakes — not least because the accounts have since been deleted (and the study is based on analysis of the digital traces left by online interactions).