A growing body of research has studied how autocratic regimes interfere with internet communication to contain challenges to their rule. In this review article, we survey the literature and identify the most important directions and challenges for future research. We structure our review along the different network layers, each of which offers particular avenues for governmental influence and control. While current research has made much progress in understanding individual digital tactics, we argue that there is still a need for theoretical development and empirical progress. First, we need a more comprehensive understanding of how particular tactics fit into an overall digital strategy, but also of how they interact with traditional, “offline” means of autocratic politics, such as cooptation or repression. Second, we discuss a number of challenges that empirical research needs to address, such as the effectiveness of digital tactics, the problem of attribution, and the tool dependence of existing research.

Introduction

In most autocratic regimes, governmental interference in digital infrastructure and communication is commonplace. Governments control where and when modern information and communication technology (ICT) is introduced in the first place, who gets access to it, and what information is communicated. This influence occurs for political motives—to ban opposition activists from mobilizing their followers online, to contain the spread of information that is critical of the regime, or to spy on the population to identify potential dissenters. Examples include Hosni Mubarak’s complete internet shutdown in January 2011 (Dainotti et al., 2014) and the censoring of online content deemed unacceptable by the Chinese government (King et al., 2013). In this review article, we take stock of the literature on autocratic interference in internet communication, but we also identify gaps and propose pathways for future research. The fact that dictatorships interfere in communication is not surprising, nor is it a new subject of study in political science. In fact, some of the classic work on authoritarian rule has emphasized the importance for autocrats of controlling the flow of public and private information (Friedrich & Brzezinski, 1965). In the digital age, this has become a greater challenge, but at the same time a tremendous opportunity for autocrats. Technological progress has vastly expanded the complexity, reach, and bandwidth of communications, requiring higher levels of technical sophistication for governmental interference. At the same time, however, digital communication technology opens up new possibilities for (fully or partly) automated interference: censorship software can automatically detect and block unwanted content, and network traffic can be scanned to single out users transmitting suspicious information. Our review focuses on the different network layers that can be used for interference.
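The automated detection mentioned above can be illustrated with a deliberately simplified sketch. The blocklist and matching logic below are hypothetical stand-ins for the far more sophisticated keyword lists, machine-learned classifiers, and human review that real censorship systems combine; the sketch only conveys why such filtering is cheap to automate at scale.

```python
# Illustrative sketch of automated keyword-based content filtering.
# The blocked terms and messages are hypothetical examples; real
# systems are far more sophisticated (classifiers, human review).
import re

BLOCKLIST = ["protest", "rally", "strike"]  # hypothetical sensitive terms
PATTERN = re.compile(
    "|".join(re.escape(term) for term in BLOCKLIST), re.IGNORECASE
)

def should_block(message: str) -> bool:
    """Return True if the message matches any blocked term."""
    return PATTERN.search(message) is not None

# Filtering a stream of messages reduces to a cheap pattern match,
# which is what makes large-scale automated interference feasible.
messages = ["Join the rally at noon", "Lovely weather today"]
blocked = [m for m in messages if should_block(m)]
```

Note that such pattern matching also produces false positives and is easily evaded by paraphrasing, which is one reason governments layer multiple tactics on top of each other.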
In using a (simplified) technology-centered structure for our discussion, we do not mean to suggest that these are the only means of authoritarian influence over the internet. Most regimes rely also on political and legal measures to regulate the provision of telecommunication services and the actors involved. However, once internet services are available to large segments of the population—which is now the case in the vast majority of countries worldwide—internet control usually means tampering with the network infrastructure, the data traffic, and the content being transmitted. After our review of the literature, we conclude our essay with a discussion of the theoretical and empirical challenges that future research in this field should address.

Theoretical Shortcomings

Recent scholarship has made much progress in helping us understand how autocrats interfere with online communication. Nonetheless, there are a number of shortcomings and gaps in the literature that should be addressed in future research. In this section, we discuss the need for more theoretical work on governmental interference, from the perspective of comparative autocracy research. We focus in particular on the interplay of different tactics of authoritarian control. Autocrats rarely use a single tactic to ensure political influence; similar to social movements and opposition groups (Horowitz et al., 2018), they rely on a portfolio of different tactics that together constitute a strategy for political survival (Tilly, 2010). Hence, to fully understand why an autocratic government employs particular tactics but not others, we need to adopt a broader perspective and examine autocratic repertoires or “toolkits”—different autocratic tactics in combination with each other. This stands in sharp contrast to existing work on governmental interference in digital communication, which has usually examined particular tactics such as shutdowns, censorship, or propaganda independently of the others. We believe that this research agenda should be advanced in two ways, which we describe in more detail below.

The Autocrat’s Digital Toolkit

As our literature review above has shown, most research in political science on digital interference remains confined to a single tactic, rather than examining it in combination with others. We have yet to gain a better understanding of the overall “digital strategy” autocratic governments adopt to fend off challenges to their rule. As regards this strategy, there are two sets of questions that research should address. First, we need to understand the overall purpose of the digital strategy. Autocrats may keep all interference secret, or they may censor openly to signal their strength.
To what extent does this depend on whether they interfere preemptively to deter contention, or rather as a reaction to visible contention? Large-scale shutdowns may be signs of crisis when used as a last-resort response to mobilized masses, whereas covert censorship may be motivated by precaution, to avoid mobilization in the first place. Governments may also follow a more differentiated strategy and combine overt and covert tools. This seems to be the practice of the Chinese government, where a message from a fictitious “internet police” is displayed when a user accesses content that has been removed for political reasons (King et al., 2013). Other operations remain much less visible to the public, such as the censorship and surveillance functions built into many chat applications in China (Deibert, 2015, p. 67). Second, we need to understand the relation between governments’ digital strategy and the targets of interference. To what extent is interference directed at specific users or groups rather than at the population as a whole? An obvious example of the latter is internet shutdowns (Dainotti et al., 2014), while Chinese online censorship targets specific users and content (King et al., 2013). Also, how does the combination of tools change when different groups are targeted? In general, there are reasons to assume that more developed countries resort to more differentiated types of interference (Guriev & Treisman, 2019) and that some tactics work for ordinary citizens, but not for elites (Roberts, 2018). We have to keep potential targets in mind when we assess autocrats’ tools of choice and differences among governments in their overall strategies.

Digital Tools and Conventional Tactics

Research has long argued that autocratic regimes select from a large repertoire of approaches to ensure political survival (Davenport, 2007; Gerschewski, 2013), for example by co-opting elites, increasing legitimacy, or violently repressing dissent.
With the advent of digital tactics, a regime’s repertoire has expanded tremendously. How do these modern digital tools relate to established, conventional strategies of autocratic survival? In general, we can distinguish between three scenarios. First, digital tools can serve as a replacement for conventional tools. For example, if a regime can effectively contain mass mobilization by censoring and blocking online channels, this reduces the need for violent repression of protest. This is what we call substitution. Second, digital interference can be used in addition to traditional means of control, for example when a government restricts freedom of the press but at the same time censors online channels. This is an instance of reinforcement of conventional tactics of control. Third, conventional and digital tactics may complement each other. This is the case when digital interference interacts with conventional strategies, for instance, when governments shut down the internet “just-in-time” to disrupt opposition forces’ coordination and increase violent repression on the ground (Gohdes, 2015). Xu (2020) shows how digital interference helps the government refine its conventional tools of repression and cooptation. Weidmann and Rød (2019) study the effect of internet technology on mobilization for protest and analyze how conventional tactics (violent repression of protest) interact with online mobilization. While these are first steps, future research needs to tackle these questions head-on.

Challenges for Empirical Analysis

In addition to the need to theoretically situate autocrats’ digital tactics within their entire portfolio, there are several challenges facing empirical research on digital interference in autocracies.

Effectiveness of Digital Tactics

A key assumption in almost any analysis of autocratic interference is that it serves a political purpose, for example by deterring political challenges and helping autocrats stay in power. Yet there are few systematic tests of whether particular tactics are actually effective in achieving these ends. So far, research on the short-term impacts of “just-in-time” interference has produced rather inconclusive results. We know that large-scale shutdowns can facilitate offline repression of opposition groups during violent conflict (Gohdes, 2015). At the same time, however, interference can also backfire (Huang, 2018). Hobbs and Roberts (2018) find that China’s blocking of Instagram motivated users to bypass other blocks as well and spurred political interest and critical online discourse. Finally, Pan and Siegel (2020) suggest that repression of dissenters can simply be ineffective even if it does not backfire. An even larger gap exists when it comes to understanding the long-term impact of authoritarian interference. While some theoretical work suggests that the internet may impede non-democratic rule (Edmond, 2013), empirical research provides tentative support for cyber-pessimists who claim that the internet plays into the hands of autocrats (Rød & Weidmann, 2015). However, our understanding of the reasons for this is limited. Do efforts of information framing and manipulation, for instance, actually lead to increased perceptions of regime legitimacy among the public—and if so, does this in turn bolster authoritarian rule?
Autocrats actively disseminate information in their favor, and while we know that propaganda may inhibit collective action (Huang, 2018), we do not know whether the recipients of these digital messages actually believe the information. Similarly, evidence suggests that increased internet coverage reduces mobilization (Weidmann & Rød, 2019), but again, we do not know whether this is because of governments’ intentional use of digital tactics. Moreover, particular digital tactics employed by the government may even backfire and undermine autocrats’ rule in the long run. When autocrats exclude certain groups from the internet, they establish new “digital divides” in the population, which in turn could increase grievances and motivations to mobilize. Even if digital tools do not backfire, they might simply be ineffective. Roberts (2018) notes that friction, a form of “porous” censorship that requires users to spend more time or money to access information, does not hold everyone back from seeking censored content. In other instances, interference can be countered and blocks can be circumvented by knowledgeable users, despite governmental efforts to prevent this (Deibert et al., 2012).

The Attribution Problem

The problem of attribution affects much research on digital interference: in most cases, it is difficult to identify the actors who actually intervene in online communication. While most instances of interference likely happen in secret, for those we are able to observe, we often cannot say with certainty that the government is actually responsible. Some forms of interference might be carried out not by or in the name of the government, but by private actors. Even though the nature of interference might suggest a governmental act—for instance, when an opposition website is defaced or taken down by a denial-of-service (DoS) attack—we cannot be sure that some loyal individual is not responsible for it (Villeneuve & Crete-Nishihata, 2012).
The attribution problem is exacerbated when governments hire private companies and actors to interfere with the internet. These practices help autocrats shift responsibility for censorship and surveillance (Deibert et al., 2012), which makes it even more difficult to analyze the nature and extent of government control. Similarly challenging are technical issues that make it difficult to judge whether an outage is the intentional result of an attack. If we misattribute a particular action, we risk over- or underestimating the extent to which autocrats are willing and able to control online communication. Addressing the attribution problem is difficult; in rare instances, we may be able to work with network forensics experts who can trace certain activities back to their origins.

Tool-Dependent Research

An increasing fraction of research is “tool-dependent,” which means that it relies on very specific digital platforms (such as Twitter or Facebook) and, in addition, on the specific functions these platforms offer. This creates the danger of producing research and results that apply exclusively to one tool or functionality and cannot easily be generalized. The challenge is to identify functionality that exists across platforms (e.g., posting “messages,” or “sharing” information), such that we can at least theorize about (but possibly also study) its impact independently of a particular platform. A second limitation imposed by tool dependency is that research—by abiding by the platforms’ terms of service—has to adapt to their possibly changing policies, which can severely hinder ongoing research. A recent example is Facebook’s closure of its Pages API, which impedes the legal and technical means of content extraction for research (Freelon, 2018).
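One way to operationalize the platform-independent functionality suggested above is to model shared actions behind an abstract interface, so that analysis code is written against concepts (“posting,” “sharing”) rather than against any one platform’s API. The sketch below is purely illustrative; the class and method names are hypothetical and do not correspond to any real platform API.

```python
# Sketch of a platform-agnostic abstraction for functionality shared
# across social media platforms ("posting" and "sharing"). All names
# here are hypothetical illustrations, not a real API.
from abc import ABC, abstractmethod

class SocialPlatform(ABC):
    """Abstract interface capturing functionality common to platforms."""

    @abstractmethod
    def post(self, user: str, text: str) -> None: ...

    @abstractmethod
    def share(self, user: str, post_id: int) -> None: ...

class InMemoryPlatform(SocialPlatform):
    """Toy in-memory implementation used to illustrate the abstraction."""

    def __init__(self) -> None:
        self.posts = []   # list of (user, text) tuples
        self.shares = []  # list of (user, post_id) tuples

    def post(self, user, text):
        self.posts.append((user, text))

    def share(self, user, post_id):
        self.shares.append((user, post_id))

def count_posts(platform: InMemoryPlatform) -> int:
    # Analysis written against the shared interface generalizes to any
    # platform implementing it, mitigating tool dependence.
    return len(platform.posts)
```

The design point is that findings derived through such an abstraction are, at least conceptually, statements about the shared functionality rather than about a single platform's implementation of it.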

Conclusion

Recent research has significantly increased our knowledge of the political role of digital communication in autocracies. Not surprisingly, autocrats make systematic use of digital tools and interfere with online communication to contain challenges to their rule. In this review, we have given an overview of the literature by referring to the key layers of the internet—the infrastructure, the network, and the application layer. We have also discussed theoretical gaps and empirical challenges in research on the internet’s political role in non-democratic countries. Regarding the former, we encourage research that looks at governments’ overall strategy, both in terms of the different digital tactics that governments have at their disposal and in terms of how these interact with conventional ones such as violent repression. For example, in the digital age, governments may resort to overt violent repression less frequently, because they can better anticipate and prevent potential dissent. Our article also discussed a number of empirical challenges that arise in the study of internet communication and autocratic rule: the need to analyze, rather than assume, the effectiveness of digital tactics; the difficulty of observing the perpetrators of digital interference; and the tool dependence of existing research. Overall, theoretical and empirical progress depend on each other; for example, being able to better observe a particular digital tool can help us theorize its relationship to other, conceptually distinct forms of interference. Similarly, we need to advance (and possibly revise) theories of conventional repression by considering the numerous ways in which dictators and their agents influence internet communication.
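The layered structure used to organize this review can be summarized as a simple mapping from network layers to the interference tactics discussed. The assignment below is an illustrative simplification for exposition, not an exhaustive taxonomy, and the tactic labels are shorthand for the phenomena covered in the text.

```python
# Illustrative mapping of the layers that structure this review onto
# examples of the interference tactics discussed in the text. This is
# a simplification for exposition, not an exhaustive taxonomy.
INTERFERENCE_BY_LAYER = {
    "infrastructure": ["internet shutdowns", "restricting access provision"],
    "network": ["traffic filtering and blocking", "surveillance of data flows"],
    "application": ["content censorship", "propaganda", "chat-app monitoring"],
}

def tactics_at(layer: str) -> list:
    """Return example tactics associated with a given layer."""
    return INTERFERENCE_BY_LAYER.get(layer, [])
```

A mapping like this also makes explicit that a single political goal (e.g., preventing mobilization) can be pursued at several layers at once, which is precisely the "toolkit" perspective advocated above.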

Declaration of Conflicting Interests

The authors declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.

Funding

The authors disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: The authors gratefully acknowledge funding from the German National Science Foundation (DFG) under Research Grant 402127652.

ORCID iD

Nils B. Weidmann https://orcid.org/0000-0002-4791-4913

Notes

1. This idea is at the core of the Open Systems Interconnection (OSI) model, the most important conceptual model for computer networks.
2. While technically the latter examples refer to network interference, we include them with the work on access provision and control.