Vulnerabilities in the social media platform allow it to be manipulated for state propaganda in the Gulf.

When the United Arab Emirates, Saudi Arabia, Bahrain and Egypt accused Qatar of aiding “terrorism” and being too close to Iran last year, they unleashed a media onslaught that made TV airwaves and the internet a battlefield to make their claims stick.

The June 2017 announcement by the four countries to sever ties with Qatar was the culmination of a longer media campaign that manipulated Twitter, one of the most popular social media platforms in the Middle East, according to research conducted at the University of Exeter and Princeton.

While much attention has been paid to the role of bad actors on social media in the 2016 US election and Brexit vote outcomes, this emerging field of research focuses on the Gulf crisis and the ways in which Twitter has been used to spread state propaganda and silence criticism.

A common tool used for this kind of manipulation is a Twitter bot, a type of software that can be programmed to interact with other accounts autonomously.

Bots can tweet, retweet, send direct messages, and conduct other activities to promote a person, product, or set of ideas.

One study by the University of Southern California and Indiana University suggests up to 50 million active Twitter accounts are bots, representing up to 15 percent of active users.

They are generally used by companies, news organisations, and charities to publish content, and foster “healthy” interactions among millions of people around the world, every day.

In the Middle East, however, bots have been used for more sinister purposes, which violate Twitter’s terms of service.

A large proportion of active Twitter users in the Gulf – and in Saudi Arabia, in particular – appears to be bots, according to research by Marc Jones, lecturer in history of the Gulf and Arabian Peninsula at the University of Exeter.

Jones’ research suggests half of active Twitter users in the kingdom may be bots, including large networks that tweet up to 100,000 times a day.

Jones told Al Jazeera that these bot networks can “create propaganda messages that can distort the reality of the discussions going on in the world” and that Arabic-language Twitter bots – or “online flies” as they’re sometimes known – have been instrumental in promoting government narratives.

Before the hack

“One of the bot networks revealed there were bot accounts set up in April 2017,” Jones told Al Jazeera, after analysing data he acquired from Twitter on thousands of anti-Qatar bots behind trending hashtags, such as “Qatar is the bankroller of terror”, which trended in Saudi Arabia four days before the hack of Qatar News Agency.

In addition to more obvious bot “red flags” (Twitter handles with randomly generated numbers and generic profile images), he identifies patterns of correlation between multiple accounts to reveal entire networks, and in some cases, individuals or organisations behind them.
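As a rough illustration only – not Jones’ actual methodology, whose detection approach is more sophisticated – the most obvious of these “red flags” can be expressed as simple heuristics. The thresholds and field names below are assumptions chosen for the sketch:

```python
import re
from dataclasses import dataclass


@dataclass
class Account:
    """Minimal stand-in for a Twitter account profile (illustrative only)."""
    handle: str
    has_default_avatar: bool
    tweets_per_day: float


def bot_red_flags(acct: Account) -> list[str]:
    """Return which simple bot heuristics an account trips.

    These thresholds are illustrative assumptions, not the
    researchers' actual criteria.
    """
    flags = []
    # Handles like "user82731945", ending in a long run of digits,
    # suggest auto-generated registration
    if re.search(r"\d{6,}$", acct.handle):
        flags.append("random-number handle")
    # Never-customised profile image is a weak but common signal
    if acct.has_default_avatar:
        flags.append("generic profile image")
    # Sustained, implausibly high posting volume suggests automation
    if acct.tweets_per_day > 500:
        flags.append("implausible tweet rate")
    return flags
```

Any one flag alone proves little; the research described here goes further by correlating activity patterns across many accounts to expose whole networks, which individual-account heuristics like these cannot do.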


Weeks before US President Donald Trump flew to Riyadh, thousands of Arabic-language accounts were activated in April 2017, and began spreading sectarian, anti-Iranian, and anti-Semitic rhetoric, while heaping praise on Trump, according to his findings.

The Twitter bots also began steering the discussion of “extremism” towards criticism of Qatar for being a “servant of Iran” (while, paradoxically, also being close to Israel), and for its alleged ties to the Muslim Brotherhood, Hamas and others.

By the time Trump issued his call to Arab and Muslim leaders in May 2017 to “drive out the extremists”, Arabic-language Twitter was already abuzz with bot-driven hashtags accusing Qatar of bankrolling “extremists”, and other accusations that would soon explode from the Twitterverse, to every major media outlet in the world.

The accusations have frequently appeared as trending topics, after being amplified by these networks.

For example, when Jones analysed accounts using the trending hashtag “We Demand the Closing of the Channel of Pigs” last July (a reference to demands to shutter Al Jazeera), he found that 70 percent of the conversation participants were bots.

“This message, following closely on the heels of Saudi Arabia’s official list of demands, was surely intended to create the impression that Riyadh’s official demands enjoyed wider regional support,” Alexei Abrahams, research fellow at Princeton University’s Woodrow Wilson School of Public and International Affairs, also told Al Jazeera.

“Professor Jones’ approach to identifying propaganda bots on Twitter is unique – not only in the Gulf but worldwide – and systematically outperforms the best existing alternatives,” Abrahams says.

The two are now working on a project to warn Gulf Twitter users of hashtag manipulation in real time.

Quartet rhetoric

In the early hours of May 24, an article appeared on Qatar News Agency’s website, with false comments attributed to Qatar’s emir, which “confirmed” the suspicions echoed by the bots, and provided a pretext for a much larger messaging-push by the quartet (Saudi Arabia, UAE, Bahrain and Egypt).

The comments attributed to Qatar’s emir included incendiary remarks, as well as more subtle “dog whistles” – coded language that resonates with particular groups of people, while the hidden messaging passes unnoticed by everyone else.

Almost point by point, the fabricated remarks echoed the rhetoric programmed into the bots, and were later amplified by satellite television channels.

The emir appeared to have criticised “some governments who caused terrorism by following an extreme version of Islam,” (an apparent reference to Saudi Arabia), while praising the Muslim Brotherhood, Hezbollah and Hamas, and calling Iran an “Islamic power” and a major force of stability in the region.

The “remarks” also made reference to tensions between Qatar and a “passive” Trump administration, while suggesting its time in office might be cut short due to “criminal investigations”.

The QNA’s YouTube page was also compromised, posting a doctored version of the emir’s appearance at a military ceremony, with the false information appearing in the ticker. The fake news was then tweeted out from other compromised QNA accounts, in a cyberattack that lasted for three hours.

After nearly two weeks of media messaging, the cyber army pushed a new hashtag to the top of Twitter’s trends – “Sever relations with Qatar” – days before the quartet imposed an air, land and sea embargo, accusing Doha of fomenting regional instability.

By the time the blockade began, the hashtag was the top trending tag worldwide, and had been used more than a million times, which became a news item in and of itself in outlets such as the BBC and Al Jazeera.

Without a whistle-blower, it’s difficult to ascertain who is behind this anti-Qatar network, but the rhetoric of the bots has consistently mirrored the official demands made on Qatar from officials in Abu Dhabi and Riyadh.

The bots have also amplified “unofficial” demands targeting Qatar’s status as host of the largest US base in the Middle East, and of the 2022 World Cup.

Targeting Tamim

Due to other vulnerabilities on the platform, sophisticated bot networks are able to change their input location to Qatar, and propel anti-government hashtags to the top of Qatar’s Twitter trends, according to both researchers.

In these cases, bot-driven trending hashtags may lead foreign observers to conclude that Qataris are genuinely calling for a change in leadership – another claim propagated by Saudi and UAE-funded media outlets during the crisis.

Networks can be set up fairly easily in countries where SIM cards can be purchased in bulk, and used to create multiple accounts.

Some accounts tweet for a while, then disappear, making them difficult to track down for enforcement action. Twitter says it challenges eight million accounts suspected of being bots every week.

Despite these efforts, Jones tells Al Jazeera that “Twitter’s verification process is hugely broken in the region, and because of that it’s been exploited by people who have the money and the resources to game it.”

During the Arab Spring, Twitter was seen as an essential tool for activists calling for democratic change, but Abrahams now says “enemies of democratisation have learned to repurpose this technology to discoordinate and de-mobilise the Arab public”.

“Qatar, by taking a sympathetic stance towards the uprisings, now finds itself a target of such manipulation,” he added.

Like everything else in the Gulf crisis, the parties have accused each other of operating bot armies.

A few weeks after Jones published an article in the Washington Post, “Hacking, bots and information wars in the Qatar spat,” Saudi Arabia’s Ministry of Culture and Information accused Qatar of running thousands of bot accounts to incite protests within the kingdom.

The ministry cited a study by Royal Court Adviser Saud al-Qahtani, who is himself accused of using bot networks to target political opponents and push government propaganda.

Last year, Qahtani promoted a hashtag called #القائمة_السوداء – or “the blacklist” – urging Twitter users to turn in fellow citizens who “sympathise” with Qatar online (displaying pro-Qatar “sympathy” was declared illegal in Saudi Arabia and the UAE shortly after the blockade began).

In a post to his million followers, Qahtani vowed to follow and prosecute anyone who conspires against the quartet, turning a hashtag into a referral service for the police.

Even Twitter CEO Jack Dorsey has acknowledged the “real-world negative consequences” of the social media platform, citing manipulation through bots and coordinated human activity as some of the ways people have taken advantage of the service.

A company spokesman told Al Jazeera that “unsolicited, repeated actions that negatively impact other people are a violation of our spam policies” and that Twitter is proactively going after large-scale spam behaviour.

Twitter declined to provide Al Jazeera with country-specific data about the extent of bot use in the Gulf, but assured the network it is proactively addressing this issue.