Fake outrage, not fake news, is the biggest threat to our democracy

A fringe protest that might once have mustered half a dozen people with leaflets now rallies an army of robot followers behind it.

It is almost election time once more in the UK, and creating a troll farm of fake social media accounts to try to influence the result has never been easier.

A Google search reveals the going rate for 1,000 new Twitter accounts to be $230 (£187).

The “attack” software to make them all sing from the same propaganda sheet will cost you $202 a year.


To bolster your credibility you will need fake followers – available at $50 per 1,000 – and bogus retweets, courtesy of other “bot” accounts, at $12 per 1,000. And away you go.
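Taken together, the prices quoted above imply a strikingly small budget for a disinformation campaign. A rough back-of-the-envelope tally (assuming the per-thousand rates simply scale linearly, and picking illustrative campaign sizes):

```python
# Rough cost tally for a small troll-farm campaign, using the
# per-unit prices quoted above (assumes rates scale linearly).
ACCOUNTS_PER_1000 = 230   # $ for 1,000 new Twitter accounts
ATTACK_SOFTWARE = 202     # $ per year for the "attack" software
FOLLOWERS_PER_1000 = 50   # $ for 1,000 fake followers
RETWEETS_PER_1000 = 12    # $ for 1,000 bogus retweets

def campaign_cost(accounts, followers, retweets):
    """Total first-year cost in dollars for a campaign of this size."""
    return (accounts / 1000 * ACCOUNTS_PER_1000
            + ATTACK_SOFTWARE
            + followers / 1000 * FOLLOWERS_PER_1000
            + retweets / 1000 * RETWEETS_PER_1000)

# 1,000 accounts, 10,000 followers, 50,000 retweets:
print(campaign_cost(1000, 10_000, 50_000))  # 230 + 202 + 500 + 600 = 1532.0
```

On these figures, a year-long operation with a thousand fake voices costs less than a second-hand car.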

Permanent state of anger

Similar packages are available for most other social media platforms. Is it any wonder that our national discourse is so contaminated and that so many seem to live in a permanent state of anger?

A fringe protest that once mustered half a dozen souls with leaflets can now rally an army of robot followers behind it. The lone green-ink letter writer is taken more seriously when he buys a cohort of fake voices to echo his poisonous attacks on an individual in the public eye.

The noise grows exponentially when the campaign is underwritten by a PR company on a hefty commission to damage a reputation, or is part of a state-sponsored attempt to create political instability.

The problem is not so much fake news as fake outrage. “Fake news” has become a politically motivated insult used by populist leaders to diminish the influence of their critics in mainstream media. The term was inspired by bogus news sites created in Macedonia to target Americans on Facebook and attract advertising revenue.

These are now relatively insignificant. The troll farm feeds on stories from genuine sources which are negative in tone and can be used to damage an opponent by whipping up a frenzy of faux indignation through fake accounts. The social media platforms have an expression for this: co-ordinated inauthentic behaviour.

Facebook used the term in March when it took down 137 related accounts targeting UK politics with hate speech designed to spread division. The accounts obsessed over immigration, free speech, racism and LGBT issues.

With an election looming, Instagram has rolled out a fact-checking service that allows users to flag suspect posts so that – theoretically – they can be independently verified.

The problem of “astroturfing”

But Ali Tehrani, a London-based tech entrepreneur working to tackle fake outrage online, says it is unrealistic to expect social platforms to address the problem themselves. “They can’t do it for every topic, every election, every person, every brand,” he says.

He points out that soon after Microsoft was founded in 1975, cyber security companies arrived to fight computer viruses. New software emerged to address email “phishing”.

Co-ordinated inauthentic behaviour attacks are just the latest threat, he says.

Tehrani has created the start-up Astroscreen (it detects “astroturfing”, the practice of creating fake digital grassroots movements) with a team of data scientists and disinformation analysts recruited from backgrounds including Nato and the cyber security industry. They highlight fake accounts by identifying user name patterns, and other methods. “It’s like a cat-and-mouse game,” he says.
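Astroscreen’s actual detectors are not public, but the username-pattern idea can be sketched in a few lines. As a purely illustrative example (the template, thresholds and sample handles below are all invented for this sketch), bulk-registered accounts often share a “word plus long digit suffix” naming template, so handles can be clustered by their alphabetic stem:

```python
import re
from collections import Counter

# Illustrative only: flag handles that share a "word + digit-suffix"
# template, a pattern common among bulk-registered bot accounts.
# (The regex, threshold and sample handles are invented for this sketch.)
TEMPLATE = re.compile(r"^([a-z]+)\d{4,}$")

def suspicious_clusters(handles, min_cluster=3):
    """Group handles by their alphabetic stem; return stems that recur
    often enough to suggest scripted, bulk registration."""
    stems = Counter()
    for handle in handles:
        match = TEMPLATE.match(handle.lower())
        if match:
            stems[match.group(1)] += 1
    return {stem: n for stem, n in stems.items() if n >= min_cluster}

handles = ["patriot19382", "patriot58210", "patriot77401",
           "jane_smith", "newsfan2017", "patriot90031"]
print(suspicious_clusters(handles))  # {'patriot': 4}
```

Real detectors combine many such signals – creation dates, posting cadence, shared content – which is why Tehrani describes it as a cat-and-mouse game: each heuristic can be evaded once it is known.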

The motive behind astroturfing is nothing new, he says, pointing out that tobacco companies employed PR firms to lobby against regulation by hand-writing thousands of letters to US politicians.

“Social networks allow astroturfing at scale. Years ago you had to write 10,000 letters and today you spend 20 minutes spinning out 10,000 fake accounts,” he says.

‘Very innocent’

Tehrani realised the scale of the problem while founding two other UK start-ups. Sherlike was a “very innocent” Facebook-based recommendation engine that preceded Cambridge Analytica’s abuse of the social platform’s personal data. “Back in 2012, I realised how much data was out there and how much you could infer from that,” he says.

His next venture, Contactable, a news analytics business, gave him insight into astroturfing. “I could see back in 2015 that if an article was biased or negative it would get shared more… I knew there were people out there who would amplify certain articles over others.”

He accepts that sometimes the anger on social networks is genuine, even when it begins with bots. “There is a tipping point and, once it hits mainstream, real people see it and believe it and join the cause; inauthentic outrage does become real outrage.”

Astroscreen’s promise is to identify fake social storms before they reach a “tipping point” of mainstream media coverage and panicked, needless apologies (as happened to the model Bella Hadid when she was accused of racism in a bot-driven campaign).

Celebrities, companies and politicians can start to insulate themselves from such attacks. The rest of us should keep in mind that the world is not so angry as social media would suggest.