Political and technological developments have often spurred responses from international humanitarian law (IHL). We already have a good sense of the major questions on the agenda in the coming years. Two are especially noteworthy: first, how should IHL apply to cyberwarfare? Second, how should autonomous weapons systems (AWS) be regulated—including whether new law is needed in either domain? These two issues are more closely related than is commonly appreciated, and lawyers and policymakers should acknowledge the connection. Advocates for a ban on the development and deployment of AWS, especially, will have to address the cyber theater in addition to traditional battlegrounds. Meanwhile, all who are invested in the regulation of cyberwarfare through IHL will have to account for the potentially central role of AWS in that battlespace.

Attention to the application of IHL to cyberwarfare is relatively recent. The most prominent effort to translate the existing body of law to this domain is the Tallinn Manual, which is unequivocal in its conclusion that IHL can and does apply to cyberwarfare. Reaching answers on the details of that application will be an ongoing process, in which the Manual will play a significant role, but few would disagree with its underlying premise.

IHL will also govern the development and potential deployment of AWS used in the context of armed conflict. The most controversial questions concern whether to ban development and deployment of AWS entirely, and whether (or in what circumstances) IHL-compliant AWS might be theoretically possible.

Arguments exist for the potential compatibility of AWS with the law-of-war framework. Yet as Charli Carpenter has chronicled, “[t]he prospect of gradually outsourcing kill decisions has made a growing number of robotics experts, ethicists, and, now, a transnational network of human security campaigners and governments uneasy.” The campaign to ban “killer robots” has made clear its rejection of any place for AWS: “fully autonomous weapons would not meet the requirements of the laws of war.”

Advocates for a ban on autonomous weapons systems have thus far trained much of their attention on kinetic warfare on land, at sea, and in the air. They will have to extend greater attention to AWS in the cyber context.

The Department of Defense defines AWS as systems “that, once activated, can select and engage targets without further intervention by a human operator.” Cyber weapons of this kind would include malware that autonomously identifies and exploits vulnerabilities in enemy networks.

As with AWS in kinetic warfare, autonomous cyber weapons could significantly increase the effectiveness of various kinds of cyber operations. Alessandro Guarino has argued that autonomous systems “would be considered a force enhancer and would make offensive operations even more attractive.” Militaries evaluating autonomous systems in their assessments of necessary future capabilities will see particular value in their application to cyber. There will be powerful pressure to optimize offensive cyber operations, and the paths of development for offensive malware could increasingly involve autonomous agents. Consider, for instance, a Washington Post report on the NSA’s proposed use of a system, “code-named TURBINE, that is capable of managing ‘potentially millions of implants’”—i.e., sophisticated malware—“for intelligence gathering ‘and active attack.’” Though the details would matter for classifying such a system as autonomous, as opposed to “semi-autonomous” or merely automated, it is easy to envision capabilities in the medium term for which no other description is possible.

Of course, those familiar with the debate over AWS in kinetic warfare have already heard arguments about potential upsides for efficacy. Yet the nature of the cyber battleground, and especially cyber defense, will provide strong incentives to employ autonomous offensive cyber systems.

The cyber theater consists in whole or in part of computerized systems, where the speed of movement is not constrained by the physical limitations of feet, engines, and rockets, and where the scope and scale of combat may proceed beyond the ability of human observers to comprehend in real time. As Dorothy Denning argues, “[a]t the speed of cyber, placing humans in the loop at every step is neither practical nor desirable.” As a result, in direct analogy to kinetic defenses such as anti-missile systems,

[m]ost anti-malware and intrusion prevention systems have both manual and automated components. Humans determine what goes into the signature database, and they install and configure the security software. The processes of signature distribution, malicious code and packet detection, and initial response are automated, but humans may be involved in determining the final response.

Effective cyber defenses, in short, will have to rely upon automated routines. Further, to date, the technological development and practical adoption of autonomous weapons systems for defense have progressed further than those of autonomous offensive systems.

The cyber theater will thus involve warfare where the likely targets of offensive operations will be protected by automated routines, and likely by semi-autonomous or autonomous systems. The practical challenges of overcoming those defenses will only increase the desirability of autonomous systems for offense. Further, one can envision that at a certain point no other option will be practically available.

The nature of the battlefield in question and the character of cyber defense will dictate the capabilities necessary for effective offensive operations, and the trend will be toward autonomous systems. This marks the central challenge for envisioning an AWS ban taking hold in the cyber theater. Advocates for a ban on AWS do not naively believe that war will go away; they argue that a particular tool should not be deployed, much as kinetic-warfare bans on chemical weapons or blinding lasers target particular tools rather than war itself. Yet in offensive cyberwarfare, AWS may have to be deployed, because they will be integral to effective action in an environment populated by automated defenses and unfolding at speeds beyond human capacities.

Kinetic warfare has proceeded without certain weapons that states have agreed (by law, or in practice) not to employ, and states may well be able to make do without autonomous weapons platforms deployed against enemy targets on land, at sea, and in the air. In the cyber theater, by contrast, a ban on autonomous offensive weapons systems could create an unstable structural imbalance, significantly advantaging defenders at the expense of attackers. Accepting that imbalance may thus ask far more of cyber belligerents than a ban on AWS in kinetic warfare does.

For this reason, a ban on AWS in cyberwarfare would do more than preclude a specific category of weapons: it would target cyberwarfare as a phenomenon. To import a ban on autonomous weapons systems into cyberwarfare is arguably tantamount to advocating a ban on cyberwarfare altogether.

No one expects a twenty-first century without cyberwarfare, however, and this presents an unavoidable problem. If offensive AWS are to become a feature of war in cyberspace—short of states forgoing the ability to conduct such operations effectively—then AWS ban advocates will have to fashion a discrete exception for cyber. Yet what reason would there be for a ban in kinetic space and no ban in cyberspace? And how could that justification not undercut the case for the former?

After all, if one accepts that the “robots” are on their way in cyber, it will be no answer to say that such robots will not be of the “killer” variety. The Tallinn Manual and all other authorities envision cyber as a growing theater in military operations, and anticipate attacks through cyber that cause physical damage in the real world. Offensive military options in cyberwarfare will seek to kill people or at least break things, like their kinetic counterparts.

What is more, if cyberwarfare will make up a greater proportion of military activity, and may increasingly incorporate AWS, this could produce a serious spillover effect for attempts to ban or strictly regulate AWS in traditional kinetic warfare. Advocates for AWS restrictions may begin to hear arguments directly challenging why autonomous offensive systems may be used in cyber with lethal effects, yet remain prohibited in kinetic warfare. What distinction justifies killing with autonomous malware but precludes killing with an autonomous drone?

Cyberwarfare with offensive AWS will not only pose a challenge for advocates of a ban on the development and deployment of AWS, however: it will also make more pressing demands upon IHL. Most current arguments against a ban on AWS depend upon the eventual possibility of IHL-compliant weapons systems. Yet if there are reasons to think a ban will not take hold in cyber (thanks to the relationship between offensive and defensive AWS described above), then IHL-compliant cyber capabilities must become a reality in the medium term, which places real strain on the international legal system. IHL’s ability to regulate cyberwarfare effectively in general—and not just the question of whether to ban AWS—may thus depend on the technological ability to achieve a form of compliance that meets the realities of conflict.

Preserving traditional norms will remain a paramount goal of IHL, but it will have to account for the realities of new battlespaces. The cyber theater will involve warfare where development and deployment of offensive AWS may well be unavoidable. And while commentators of all dispositions will need to bear in mind this relationship, advocates for a ban on AWS—and perhaps even for strict constraints on AWS—will need to be especially attentive, lest they fight a battle for ground they cannot hold.

The views expressed above are those of the author in his personal capacity.