Exploring the Information-Laundering Ecosystem: The Russian Case

August 31, 2017

This simple model aims to describe how interference tools can penetrate a target media ecosystem and use it as an echo chamber. While focusing primarily on the U.S. media environment (and the debate about Russian interference as a case study), it does not intend to quantify Russian efforts or to assess whether interference attempts during the 2016 presidential election were successful. Instead, it explores parameters that should inform such an assessment and helps determine whether institutional responses are desirable. This commentary is premised on the U.S. intelligence services’ assessment that Russia conducted influence operations during the latest U.S. presidential campaign—an accusation Russia denies.

Russia’s Interference and the U.S. Media Ecosystem

Russian officials, including the Russian chief of the general staff, General Valery Gerasimov, contend that information warfare is part of a continuum of conflict and begins in peacetime. The origins of modern “active measures”—actions aimed at shaping the adversary’s psychological and political environment in a direction favorable to Russia’s interests—trace back to the pre–Cold War era. Though Russia never stopped using active measures after the Cold War (despite widespread skepticism that they ever resulted in any success), the Kremlin has been adapting its tools to the emergence of the Internet and social media. It has also gained experience in shaping perceptions by progressively extending control over Russia’s own media ecosystem. Building on this experience, it has started to resort to more ambitious interference patterns, coordinating efforts across different environments (cyber, informational, military, political, and so on), as in Georgia and Ukraine. This evolution has raised new questions about the potential efficacy of such measures and the susceptibility of Western societies to outside political interference.

Interference may seek to achieve various goals at once, beyond its general aim to paralyze a nation’s decision making process. In the information domain, objectives can be classified in three broad categories: harming the political or economic infrastructure; undermining social cohesion; or destroying confidence in democratic institutions. Infiltrated information must penetrate, adapt to survive, and shape perceptions in the target society for as long as possible, while the attacker might pursue more concrete tactical gains (dismissal of a public figure or raising the profile of a specific issue). It is important to understand the different components of interference, as well as how these actors and elements resonate inside the U.S. media sphere to shape the news environment. The disinformation process is mostly self-perpetuating and needs only loose guidance from instigators after the initial public push. The persistence of the Tsarist-era secret-police forgery, “The Protocols of the Elders of Zion,” long after it was exposed as a fake is a good example. Consequently, there is every reason to believe that the effect of such campaigns can continue long after active efforts have ceased.

The U.S. media ecosystem features several spheres that partially overlap and constantly interact with each other:

The mainstream media: These media actors hold the view that the system as it exists is legitimate and acceptable, even if not perfect. In theory, news reporting and factual accuracy are at the core of their self-assigned mission, and they have few ties to conspiratorial media. Interestingly, in the United States this category of media features a great variety of opinions, but it also focuses heavily on breaking stories and 24-hours-a-day/7-days-a-week coverage. The U.S. mainstream media’s coverage of the Russian government is generally critical.



The conspiratorial media: These actors dismiss U.S. institutions as illegitimate or corrupted and systematically contradict the mainstream media’s assumptions. Their analysis is driven by paranoia and the belief that radical change is necessary, even though this media might claim to protect core U.S. values. Some of its actors, though, might be motivated by the lure of profit or a playful will to disrupt rather than genuine belief. The conspiratorial media seems to have gained visibility in both Russia and the United States over the past decade. Russia’s attempt to present itself as an alternative source of leadership in the United States may coincidentally converge with the conspiratorial sphere’s dismissal of U.S. institutions.



Disclosers: Not limited to WikiLeaks, these actors are usually deeply critical of the system. They care about accuracy, in the sense that they are not known for releasing fabricated information, but they focus on releasing secret and exclusive information (intense emotional content). Although they are few and create limited content, disclosure platforms and “whistleblower” outlets are major information hubs in the media ecosystem due to their ability to appeal simultaneously to the other two spheres. They feed mainstream media outlets by releasing previously unavailable pieces of information into the public domain as part of their purported mission of increasing transparency. Yet they also appeal to conspiracy believers because what they reveal, by virtue of its secret nature, can often be selectively chosen and twisted into narratives of societal dysfunction. They usually focus their investigations and criticism on the Western societies to which they belong.

The Information-Laundering Machinery

According to studies, the approach of Russia and its affiliates to information and political warfare includes a constant low-level effort to sow confusion in the target society. Forged documents and fake stories, such as the “Lisa case” in Germany, can be used to undermine confidence in institutions, fuel pessimism, or exacerbate thorny political or social problems. It is generally assumed that only physical disruption, not information warfare, can disrupt the normal course of Western societies. However, activities such as hacking voting machines or inducing power outages are risky. They could be considered an act of war and would likely produce a rally-around-the-flag effect against foreign aggression, counterproductive from the aggressor’s point of view. For this reason, an attacker might focus instead on actions that sow doubts about the target society’s resilience and undermine confidence in the authorities and the democratic system. In a 24/7 media-saturated environment, the goal might simply be to trigger a debate on the government’s alleged inability to protect a core democratic pillar and then stimulate the frenetic exploration of endless catastrophic scenarios. In this regard, the fact that the debate about Russia’s interference in the U.S. electoral campaign has clearly overwhelmed the U.S. media ecosystem since the fall of 2016 can be described as a success for the attacker.

In order to impose its narratives and shape perceptions, an interfering actor may proceed in two complementary ways. It may use sponsored groups to uncover, engineer, and release narratives that serve its interests, at times acquiring compromising data through clandestine tradecraft. These groups then advertise purported “proof” of mischief, either real or forged, with third parties such as WikiLeaks that will help them gain an audience. In this case, the reputation of the organization and the titillating nature of classified or private information are used to enhance the attractiveness of the leak. The dumping process itself might be more important than the acquisition of truly embarrassing content. In fact, some absurd stories (notably the recent infamous “Pizzagate,” unrelated to Russia) sometimes gain traction despite the insignificant character of the original material. Petty suspicions might suffice to reach a significant portion of the population, as conspiracy believers are prone to pick up narratives that flatter their rejection of public authorities. However, this would probably be insufficient to generate a systemic effect beyond the conspiratorial media environment. That sphere generally lacks the clout needed to shape the mainstream news environment, and using it as an exclusive vector to disseminate narratives might well taint the information transmitted through it.

In order to help advertise its sponsored content beyond fringe activists and increase its survivability, the interfering actor uses its own echo chamber. This media ecosystem, enhanced by affiliated leakers and a troll army on social media, is able to establish connections to independent U.S. media outlets and funnel narratives to them. Affiliated disclosers try to gain attention by breaking a story or passing it on to dump outlets. As the U.S. media sphere digests the story, the tale gains a life of its own, laundered of the initial obvious traces of interference. At that stage, the ghost media system gives the story a further round of attention, reinforcing it by quoting genuine national media sources about the initial story. Russian-sponsored channels like RT or Sputnik flood the public with intense negative messages associated with the initial story, amplifying or twisting facts in a way that challenges or dismisses the mainstream media’s description of events. Across social media, where psychological arousal drives virality, an army of trolls and bots promotes catastrophic interpretations of the events. In principle, none of the elements of the U.S. media sphere needs to be wittingly involved for the scheme to succeed. They only need to exist and play the role Russia has assigned to them as an echo chamber. This represents a systemic vulnerability to external manipulation.

Breaking the Disinformation-Laundering Cycle

Many recent studies on Russia’s interference in the U.S. election have focused on the ability (or inability) of public authorities to protect U.S. assets and neutralize Russia’s media ecosystem and cyber tools. Policy proposals often include deterrence by denial, namely efforts to improve critical infrastructure’s cyber and physical protection against intrusion. Yet in the information domain, the open and fast-evolving nature of the Western media environment makes it undesirable to systematically lock out Russia’s narratives, as doing so would have adverse effects on freedom of speech. Public authorities could also seek to constrain Western companies that are complacent toward fake news or that generate such news simply because it constitutes a good source of ad revenue. Moreover, Russia’s ghost media system could be hindered by cracking down on its social media affiliates—trolls and bots in particular. A focus on attribution would help identify, expose, and indict sponsored hackers. However, forcibly “unplugging” Russia’s media echo chamber from the Western system carries risks, as these galaxies are intermingled. Though this could give the mainstream media some room to breathe by clearing the news environment, it would alienate and further empower an indigenous conspiratorial sphere that is eager to serve as a relay for Russia’s narrative.

Another popular proposal, at a lower conflict level, is to fight disinformation with fact-checking efforts, if not counterpropaganda. The European Union, itself subject to active propaganda efforts (see Sweden), established in 2015 a task force whose role is to refute Russia’s narrative. An effort to identify nascent rumors as they emerge would certainly reduce their attractiveness by making them public and nonexclusive, disrupting the attacker’s agenda. Crowd-sourced reputational assessments could limit the online visibility of highly suspect information, if such steps are implemented carefully to avoid abuse by private platforms. Exposing the plot and debunking false assertions might also help curb the attractiveness of the attacker’s arguments. Though such actions may limit the pervasiveness of disinformation, their effects will be mixed at best. Indeed, disinformation primarily targets conspiracy-prone believers, many of whom will spontaneously empathize with the Kremlin’s narrative. There is a genuine risk that, even if carried out in a transparent manner by legitimate authorities, this debunking effort will backfire and effectively strengthen the relevance of Russia’s narrative, seemingly demonstrating to skeptics that the mainstream media sphere is subdued and that the news is rigged.

These necessary efforts should not mask a much-needed effort at self-examination by the mainstream media of its role as a “gatekeeper.” The U.S. media echo chamber is an indispensable element of the information-laundering machinery and unwittingly contributes to fulfilling the attacker’s objectives. Best-practice standards aimed at immunizing the national media scene against hysteria should therefore be contemplated even before coercive measures aimed at curbing the flow of disinformation. A prerequisite would be a systematic analysis of the 24/7 news environment to determine the respective contributions of Russia’s own measures and of the U.S. debate about Russia’s interference, which enhanced the perception that U.S. democracy has been undermined. Steps may include a recommendation to be increasingly prudent when reporting on developing or second-hand stories, and stricter standards for the online versions of media outlets. Such an introspective endeavor, which could also be applied to political parties, would limit Russia’s ability to contemplate interference as an effective policy, as well as reduce the reliance on sketchy attribution procedures and the need to adapt the entire response each time a different interference configuration surfaces.

Arguably, governmental measures could also contribute to improving the resilience of the target society by limiting the reach of its indigenous conspiratorial sphere. Trolls and bots would be much less effective if they could not find authentic individuals willing to act as spokespersons or relays. In this regard, limited parallels may be drawn with actions aimed at tackling online radicalization in Europe. One way to achieve this goal would be to raise children with critical minds, as Finland does, developing their analytical skills so they avoid being trapped by one-sided accounts of events, and to fight extreme content online in general. Yet in open societies where any private person can reach a significant public audience, and apart from this very long-term educational effort, there is limited prospect of the whole population endorsing a fully rational interpretation of events. Conspiracy theories, after all, have been an integral part of the U.S. political realm since the American Revolution. One can argue that, in a free-speech environment, the best public policies can hope to achieve is to contain the conspiratorial sphere’s ability to shape perceptions beyond its traditional audience.

In European countries, authorities sometimes release direct guidance on how to handle information, which media outlets often follow when they deem it justified. During the French election, for example, when a dubious data trove known as the “Macron leaks” was dumped into the public domain in the final days of the campaign, most media outlets followed the electoral commission’s recommendation to respect the end of the official campaign period (and not report on the alleged leaks) so as not to disturb the voting process. By contrast, in the United States such measures would certainly be perceived as an infringement on the First Amendment. However, awareness-raising efforts from the authorities might help mainstream media outlets evaluate how to report on interference without further inflating the disinformation bubble. In particular, they should be careful not to help Russia gain momentum by overestimating its ability—or even its willingness—to systematically shape perceptions. Whenever confronted with media panic over Russia, authorities could try to reestablish a balance by releasing content that enhances societal resilience to interference, making it difficult for pessimistic narratives to shape the news environment (intense positive emotions are believed to diffuse more easily). By contrast, efforts aimed at discrediting disclosers in advance, as in the Macron leaks case, should be reserved for very specific situations, as they might inadvertently harm the credibility of the general media environment.

Finally, it is possible to contain Russia’s currently unimpeded ability to use disclosers as an echo chamber. Disclosers play a pivotal role in the online media environment and will not disappear simply because coercive measures are in place. Fighting dump outlets on the general assertion that they promote Russian interests would be dubious and probably ineffective. Their active consent, however, is necessary for the completion of the information-laundering cycle, and credibility is a major worry for them: suspicions of being compromised would reduce their access to the mainstream media that amplify their message. Significant resources should be dedicated to monitoring the supposed nonaffiliation of dump outlets and to challenging the relevance of dumped information whenever it seems to be part of a broader interference scheme. The disclosers themselves should then be challenged each time they demonstrate complacency toward, and act as an echo chamber for, dubious troves. Indeed, as major actors in the public sphere, they too should be held accountable and exposed whenever they—knowingly or not—publish forged content without any cross-checking.

Conclusion

The cycle of propaganda and disinformation laundering cannot be broken solely by coercing or squeezing out an adversary’s media outlets and their followers. In the case of Russia, the main challenge is not to forcibly cut the information flow it delivers, but rather to manage the debate its mere existence generates. If any “Russian active measures” were deemed successful, it would be in part because the mainstream media sphere has failed to contain the pernicious messages they convey and to channel them in a productive way toward enhancing information quality standards and democratic practices. It would also be because state institutions have failed to generate a convincing narrative upstream about the benefits of the existing social and political order, and thereby to gather a strong community of supporters willing to defend them. Building mutual confidence between institutions and the public, basic measures to debunk disinformation, and self-examination on the part of the mainstream media would help detoxify the debate on active measures that obsesses the entire media sphere.

Boris Toucas is a visiting fellow with the Europe Program at the Center for Strategic and International Studies in Washington, D.C.

Commentary is produced by the Center for Strategic and International Studies (CSIS), a private, tax-exempt institution focusing on international public policy issues. Its research is nonpartisan and nonproprietary. CSIS does not take specific policy positions. Accordingly, all views, positions, and conclusions expressed in this publication should be understood to be solely those of the author(s).

© 2017 by the Center for Strategic and International Studies. All rights reserved.

Photo Credit: Chip Somodevilla / Staff