Editor’s Note: With the exception of the president of the United States, we all know that Russia and other powers have run amok in their attempts to influence U.S. elections and those of other democracies around the world. Learning the scope of the problem, however, has proved difficult. In a groundbreaking study, Arya Goel, Diego Martin and Jacob Shapiro, all of Princeton University, find that more than 20 countries have been targeted. Russia (no surprise) is by far the most active, but Iran, China and Saudi Arabia all are joining the fray. Both government and internet company efforts are making it harder to meddle, however, and the authors raise the hope that attempts to interfere might be marginalized further in the years to come.

Daniel Byman

***

President Donald Trump has repeatedly shown that he does not take the issue of Russian interference in elections seriously, most recently at the G-20 summit in Japan, where, pressed on the issue by reporters, he issued a "wink-wink" warning to Russian President Vladimir Putin.

This is no laughing matter, and such warnings are not working even when they are issued seriously. In August 2018, for example, U.S. National Security Adviser John Bolton warned Russia not to meddle in the upcoming midterms. Had the warning worked, the Russian government would not have tried to manipulate the election, and the United States would not have needed to protect the integrity of the midterms. Yet just a few months later, U.S. Cyber Command reportedly launched offensive cyber operations against Russian targets, and while it was apparently able to block Russian troll farms on Election Day, the need for those defensive actions suggests that repeated U.S. warnings did not dissuade Russia from attempting to interfere. Russia has continued its efforts since, using myriad social media platforms to spread disinformation in an attempt to sway elections and call into question the stability of democracies.

While much of the media coverage has focused on Russian interference in U.S. elections, this is not just an American problem. As our new report on online foreign influence efforts (FIEs) demonstrates, it is a global one. Since 2013, Russia has conducted at least 38 distinct influence campaigns targeting 19 different countries, and Russia isn't alone. We define an FIE as (a) a coordinated campaign by one state to affect one or more specific aspects of politics in another state, (b) conducted through media channels, including social media, and (c) involving content designed to appear indigenous to the target state. By that definition, Russia and other countries launched 53 distinct online FIEs between 2013 and the end of 2018, and several remain ongoing today. While many of the FIEs target elections, others focus on discrediting specific political actors, such as the campaign targeting the Syrian Civil Defense (SCD), which involved both Russian and Syrian actors.

Our big-picture analysis shows that Russia is by far the most active state conducting FIEs. About 72 percent of the campaigns were conducted solely by Russia, which had 29 distinct operations ongoing in 2017. Iran has also been increasing its activity steadily since 2016, targeting Israel, Saudi Arabia, the United Kingdom and the United States to date. The remainder of the FIEs are split evenly between China and Saudi Arabia. We also identified at least 40 purely domestic online influence campaigns, in which state actors targeted their own populations in ways designed to mask the government’s involvement.

Our data suggest that at least 24 different countries were targeted by the end of 2018. About 38 percent of the FIEs targeted the United States, with Great Britain and Germany accounting for about 9 percent and 6 percent, respectively. Other repeatedly targeted countries include Australia, France, the Netherlands and Ukraine (4 percent each); and the remaining 31 percent of FIEs were one-off operations targeting Austria, Belarus, Brazil, Canada, Finland, Israel, Italy, Lithuania, Poland, South Africa, Spain, Sweden, Taiwan and Yemen.

The findings challenge some common claims about FIEs. Only 15 percent of the FIEs included efforts to polarize people in targeted countries (defined as pushing on both sides of one or more political issues; for example, promoting both pro- and anti-gun control content). Other tactics were more common: About 65 percent of the efforts worked to defame specific individuals. In Austria, for example, Russia-linked trolls targeted former Chancellor Sebastian Kurz, accusing him of supporting immigration from Islamic countries and of being close to Hungarian-American financier George Soros. Similarly, in Germany, Russia-linked trolls who had previously backed President Trump attacked Chancellor Angela Merkel on Instagram, one of several strategies Russian actors have tried. And fully 55 percent of the FIEs tried to persuade their audiences to take a particular political position. In Spain, for example, Russia-linked trolls used online social media to promote voting yes in the 2017 Catalan independence referendum, likely in an attempt to destabilize Spain.

Twitter and Facebook were by far the most popular platforms for these kinds of coordinated inauthentic behaviors. Of the cases we identified, about 83 percent used Twitter as a tool, while about 50 percent used Facebook.

In some ways, this is good news. Both platforms have significant resources to crack down on FIEs, and each has made progress in combating disinformation campaigns by combining manual investigation with algorithmic tools. In the past two years, Twitter has removed thousands of accounts (representing millions of tweets), including a large corpus of activity related to the Internet Research Agency (IRA), the infamous Russian troll factory involved in the 2016 U.S. presidential election, whose leaders were indicted during Special Counsel Robert Mueller's investigation. Twitter has released data on those actions that has supported a wide range of research. Facebook has also taken action. In 2018, it removed more than 2,000 pages, groups and accounts identified as engaging in "coordinated inauthentic behavior," many linked to Iranian and Russian organizations, and hundreds more in early 2019.

Both Facebook and Twitter have taken other measures as well. Twitter launched a new tool letting users report accounts aimed at misleading voters and spreading disinformation during the spring 2019 Indian and European Union elections. Facebook employed a team of specialists to analyze media content to identify coordinated inauthentic activity (mostly from Russia) and reduce viewership of that content. And new evidence is consistent with the claim that those efforts paid off in the run-up to the 2018 U.S. midterm election. One new study based on data from a commercial database that tracks user engagement with social media content shows that user interactions with stories from 570 identified fake news sites dropped on Facebook from their peak in mid-2017 through the end of 2018. A more recent study using voluntarily installed browser tracking software shows that the share of Americans visiting at least one fake news site dropped substantially from October 2016 through October 2018.

Having learned lessons from the 2016 U.S. presidential election, governments in France, Germany, the Netherlands, Sweden and the United Kingdom have taken measures to protect their electoral processes. France, for example, successfully combined voter education, retaliatory threats, and proactive countering of propaganda and leaks in its 2017 election. Unfortunately, broader counterpropaganda efforts face significant challenges, in part because any effort to counter an FIE that favors one party can easily be framed as support for the other party. Facebook learned this lesson in the recent Indian election, when some articles suggested its removals affected Prime Minister Narendra Modi's Bharatiya Janata Party more, while others suggested more pro-Indian National Congress pages were taken down.

Still, the overall picture suggests that the combination of private-sector and government efforts is bearing fruit. The harder it is for accounts pushing inorganic activity to escape notice, the more expensive it will be for the Russians and others to accomplish their goals. And while influence operations can always move to new platforms, those platforms have smaller audiences and make it more expensive to reach a critical mass of voters. As an excellent new DFRLab report highlights, one recent Russian effort got little engagement due to "the obsessive secrecy with which the operation was conducted, using a separate burner account for every stage."

This is what victory in this fight will look like: not an end to FIEs, but their gradual marginalization by the combination of internet platform self-policing (sometimes in order to protect their businesses) and government actions that raise the expected costs for attackers. As others have argued eloquently and at length, the evidence suggests that a collective response that integrates actions by the government, the private sector and civil society groups will make it harder and harder for foreign nations to interfere and shape the politics of their adversaries.