Editor’s Note: This piece was researched in collaboration with BuzzFeed.

Twitter has taken down a network of more than 9,000 bots that published inauthentic posts promoting the political interests of the United Arab Emirates and Saudi Arabia. This "astroturfing" network criticized Turkey's intervention in Libya, a shared interest of both governments, by targeting Turkish President Recep Tayyip Erdogan, as the DFRLab confirmed through an analysis of the network prior to its takedown. On at least two recent occasions, the network had also politicized the COVID-19 coronavirus pandemic.

The network of accounts was first reported to Twitter in December 2019 by the Stanford Internet Observatory. It was independently rediscovered in March 2020 by Indiana-based researcher Josh Russell while he was analyzing Twitter posting patterns regarding COVID-19. The accounts were shared with the DFRLab and BuzzFeed for analysis. The DFRLab confirmed the network's coordinated nature by reviewing account characteristics using its Twelve Ways to Spot a Bot methodology.

An enormous platform for public discussion, Twitter remains an ongoing target for amplification campaigns, including political influence operations. The DFRLab has documented astroturfing botnets ranging from promoting a Korean pop band to boosting political campaigns in India, and has reported extensively on coordinated operations on Twitter to support the UAE's interests in Libya and elsewhere. Bots on Twitter are typically wired for a particular purpose, such as boosting a campaign using a hashtag, increasing the chances that audiences will be exposed to it through Twitter's trending topics feature.

In this instance, Russell uncovered the network by searching for coronavirus-related hashtags. While coronavirus-related posts were not the network's primary focus, a close look at the accounts suggests they were used for broader political messaging, demonstrating how information operations can be repurposed for different ends.

Crowning the corona response

One example of a message amplified by the botnet was a solidarity video from an account called @UAE_v0ice. Though the connection between the botnet and @UAE_v0ice, if any, remains to be determined, Twitter took down the account for violating its policies before it removed the rest of the network. A copy of the video could still be found on an affiliated Twitter account, @uaevoiceurdu, which remained active at the time of publication.

The video showed an Arabic speaker expressing support for China during the coronavirus outbreak. The message was posted on February 17, when the outbreak was at its peak in Wuhan, the Chinese city where the virus was first documented. At that point in time, China had already reported more than 70,000 cases. The video message was intended to highlight the UAE government's cooperation with China. One month prior, Abu Dhabi Crown Prince Mohammed bin Zayed Al Nahyan tweeted that the UAE was closely monitoring developments in China and was "ready to provide all support to China." China and the UAE have a history of positive relations: bilateral trade between the two countries reached about $34.7 billion last year.

Message from the account that received amplification from the botnet. (Source: UAEVoiceUrdu/archive)

Analyzing the botnet

As previously noted, the DFRLab identified bot-like behavior across the pro-UAE network using its Twelve Ways to Spot a Bot methodology. For instance, all of the accounts had random alphanumeric handles, such as @xwdBSuZ3u5VuDdu, @7oPDa5YBSrJPqHS, and @EB94QQBpSTJ0sqW.

Alphanumeric screen names of Twitter accounts in the network. (Source: Josh Russell)
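Handles of this shape are easy to flag programmatically. As a rough sketch (not the DFRLab's actual tooling), the heuristic below checks whether a handle looks like one of Twitter's auto-suggested defaults: exactly 15 characters, all letters and digits, with a mix of cases and at least one digit. The function name and sample list are illustrative assumptions.

```python
import re

def looks_autogenerated(handle: str) -> bool:
    """Rough heuristic for Twitter's default-style random handles.

    Auto-suggested handles tend to be exactly 15 characters of
    mixed-case letters and digits, with no underscores or words.
    A match is an indicator of possible automation, not proof.
    """
    if len(handle) != 15:
        return False
    if not re.fullmatch(r"[A-Za-z0-9]+", handle):
        return False
    has_digit = any(c.isdigit() for c in handle)
    has_upper = any(c.isupper() for c in handle)
    has_lower = any(c.islower() for c in handle)
    return has_digit and has_upper and has_lower

# Handles quoted in this article, plus one human-curated handle for contrast
sample = ["xwdBSuZ3u5VuDdu", "7oPDa5YBSrJPqHS", "EB94QQBpSTJ0sqW", "uaevoiceurdu"]
flagged = [h for h in sample if looks_autogenerated(h)]
```

Applied to the handles above, the heuristic flags the three random-looking names and passes over @uaevoiceurdu, which is too short and contains no digits.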

In addition to the alphanumeric names, the accounts also featured so-called "egg" avatars, as the botnet creator did not take the time to individually upload profile pictures for the accounts. Egg avatars are a common indicator of bot-like activity.
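Indicators like these are typically combined rather than used alone. The sketch below, an illustrative heuristic and not the DFRLab's methodology, scores a Twitter v1.1-style user object on the two indicators described above: a random 15-character alphanumeric handle and a default "egg" avatar (the v1.1 user object exposes the latter as the `default_profile_image` field).

```python
import re

def bot_indicator_score(user: dict) -> int:
    """Count simple bot-like indicators for a v1.1-style user object.

    Checks only the two indicators discussed in the article; real
    bot assessments weigh many more signals (creation date, posting
    cadence, follower ratios, etc.).
    """
    score = 0
    # Indicator 1: 15-character mixed alphanumeric handle with a digit
    handle = user.get("screen_name", "")
    if re.fullmatch(r"(?=.*\d)(?=.*[A-Za-z])[A-Za-z0-9]{15}", handle):
        score += 1
    # Indicator 2: default "egg" avatar (no picture ever uploaded)
    if user.get("default_profile_image", False):
        score += 1
    return score

users = [
    {"screen_name": "xwdBSuZ3u5VuDdu", "default_profile_image": True},
    {"screen_name": "uaevoiceurdu", "default_profile_image": False},
]
scores = [bot_indicator_score(u) for u in users]
```

A high score across many accounts created in the same window is what turns individual oddities into evidence of a coordinated network.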