On April 13, China’s delegation to the United Nations Group of Governmental Experts on lethal autonomous weapons systems announced the “desire to negotiate and conclude” a new protocol for the Convention on Certain Conventional Weapons “to ban the use of fully autonomous lethal weapons systems.” According to the aptly named Campaign to Stop Killer Robots, the delegation “stressed that [the ban] is limited to use only.” The same day, the Chinese air force released details on an upcoming challenge intended to evaluate advances in fully autonomous swarms of drones, which will also explore new concepts for future intelligent-swarm combat.

The juxtaposition of these announcements illustrates that China’s apparent diplomatic commitment to limit the use of “fully autonomous lethal weapons systems” is unlikely to stop Beijing from building its own.

Although momentum towards a ban on “killer robots” may seem promising—with a total of twenty-six countries now supporting such a measure—diplomacy and discussion about autonomous weapons may still struggle to keep up with technological advancement. Moreover, great-power militaries like the U.S. and U.K. believe a ban would be premature. Even as multiple militaries are developing or have already attained autonomous weapon systems, the U.N. group has yet to reach a consensus on what even constitutes a lethal autonomous weapons system, “fully autonomous” or otherwise. And despite emerging consensus on the importance of human control of these systems—however they might be defined—the U.S., Russia, Israel, France and the United Kingdom have explicitly rejected proposals for a ban.

Countries recognize that artificial intelligence is a strategic technology that could be critical to future military power, so it is hardly surprising that major militaries may hesitate to constrain development, particularly at a time when rivals and potential adversaries are actively seeking an advantage. Why might China, unlike the U.S. and Russia, have chosen to publicly support a ban? Clearly, the Chinese military is equally focused on the importance of artificial intelligence in national defense, anticipating the emergence of a new “Revolution in Military Affairs” that may transform the character of conflict. As my report “Battlefield Singularity: Artificial Intelligence, Military Revolution, and China’s Future Military Power” described, the Chinese military is actively pursuing a range of applications, from swarm intelligence to cognitive electronic warfare and AI-enabled support to command decision-making.

While China’s engagement in the U.N. group should be welcomed, its objectives and underlying motivations merit further analysis. China’s involvement is consistent with the country’s stated commitment under its 2017 artificial intelligence development plan, which calls for China to “strengthen the study of major international common problems” and “deepen international cooperation on AI laws and regulations.” In historical perspective, China’s integration into international security institutions reveals at least partial success, as post-Mao China has proven willing in some cases to undertake “self-constraining commitments to arms control and disarmament treaties,” as Iain Johnston’s research has demonstrated.

However, China’s recent engagement with cyber issues reflects a mixed record, including the aggressive advancement of “cyber sovereignty,” which reflects Beijing’s security priorities. In 2017, China’s reported rejection of the final report of the U.N. group on information security contributed to the collapse of that process. Meanwhile, Beijing’s repeated denunciations of U.S. “cyber hegemonism” (sic)—and calls for cooperation and a “community of shared future” in cyberspace—have not constrained its own development of offensive cyber capabilities through the military’s new “Strategic Support Force.” Will China seek to leverage this latest Group of Governmental Experts process to condemn U.S. efforts without restraining its own development of new capabilities?

China’s two position papers for the group indicate an interesting evolution in its diplomatic posture on autonomous weapon systems, which remains characterized by a degree of strategic ambiguity and apparent preference for optionality. The first paper, from the December 2016 session, declared, “China supports the development of a legally binding protocol on issues related to the use of LAWS, similar to the Protocol on Blinding Laser Weapons, to fill the legal gap.”

However, the latest April 2018 position paper—released just a few days before its delegation called for a ban—did not include support for such an agreement. It merely highlighted the importance of “full consideration of the applicability of general legal norms” to lethal autonomous weapons. Notably, this latest position paper characterizes autonomous weapon systems very narrowly, with many exclusions. China argues that lethal autonomous weapons are characterized by:

lethality;

autonomy, “which means absence of human intervention and control during the entire process of executing a task”;

“impossibility for termination” such that “once started there is no way to terminate the device”;

“indiscriminate effect,” in that it will “execute the task of killing and maiming regardless of conditions, scenarios and targets”; and

“evolution,” “through interaction with the environment the device can learn autonomously, expand its functions and capabilities in a way exceeding human expectations” (emphasis added throughout).

Banning weapons systems with those characteristics could be largely symbolic, while implicitly legitimizing the development of semi-autonomous or even fully autonomous systems that do not possess such qualities. By such a standard, a weapons system that operates with a high degree of autonomy but involves even limited human involvement, with the capability for distinction between legitimate and illegitimate targets, would not technically be a LAWS, nor would a system with a failsafe to allow for shutdown in case of malfunction. Interestingly, this particular definition is much more stringent than the Chinese military’s own definition of the concept of “artificial intelligence weapon.” According to the dictionary of People’s Liberation Army Military Terminology, an artificially intelligent weapon is “a weapon that utilizes AI to automatically [] pursue, distinguish, and destroy enemy targets; often composed of information collection and management systems, knowledge base systems, assistance to decision systems, mission implementation systems, etc.,” such as military robotics. This definition, however, dates back to 2011, and the Chinese military’s thinking has likely evolved as technology has advanced. It is important, therefore, to consider that there may be daylight between China’s diplomatic efforts on autonomous weapons and the military’s approach.

The Chinese military does not have a legal culture analogous or directly comparable to that of the U.S. military. It’s also important to recognize that Beijing’s military has traditionally approached issues of international law in terms of legal warfare, seeking to exploit rather than be constrained by legal frameworks. The military’s notion of legal warfare focuses on what it calls seizing “legal principle superiority” or delegitimizing an adversary with “restriction through law.”

In line with this approach, China might be strategically ambiguous about the international legal considerations to allow itself greater flexibility to develop lethal autonomous weapons capabilities while maintaining rhetorical commitment to the position of those seeking a ban—as it does in its latest position paper. (The paper does articulate concern for the capability of LAWS in “effectively distinguishing between soldiers and civilians,” calling on “all countries to exercise precaution, and to refrain, in particular, from any indiscriminate use against civilians.”) It is worth considering whether China’s objective may be to exert pressure on the U.S. and other militaries whose democratic societies are more sensitive to public opinion on these issues.

Despite the likely asymmetries in its approach to law, it seems unlikely that the military would unleash fully autonomous “killer robots” on the battlefield. Beyond the fact that the AI technology remains too nascent and brittle for such an approach to be advantageous, the military will likely concentrate on the security and controllability of its weapons systems. The core of China’s military command culture prioritizes centralized, consolidated control. In the absence of a culture of trust, the military is hesitant to tolerate even the uncertainty associated with giving humans higher degrees of autonomy, let alone machines. Even if the military someday trusts artificial intelligence more than humans, it may still face issues of control, given the potential unpredictability of these complex technologies. (As the armed wing of the Chinese Communist Party, the military is required to “remain a staunch force for upholding the CCP’s ruling position” and preserve social stability. A chatbot in China was taken offline after its answer to the question “Do you love the Communist Party?” was simply “No.”)

China’s position paper highlights human-machine interaction as “conducive to the prevention of indiscriminate killing and maiming … caused by breakaway from human control.” The military appears to have fewer viscerally negative reactions against the notion of having a human “on” rather than “in” the loop (i.e., in a role that is not directly in control but rather supervisory), but assured controllability is likely to remain a priority.

As the U.N.’s autonomous-weapons group continues its work, China’s evolving approach to these issues—including whether Beijing will aim for rhetorical dominance on the issue—will remain an important bellwether of how a great power that aspires to possess a world-class military may approach the legal and ethical concerns inherent in the advent of artificial intelligence. While continued engagement with Beijing will remain critical, it is also important to recognize that the Chinese military will almost certainly continue to pursue military applications of artificial intelligence (likely with limited transparency). Unsurprisingly, China’s position paper emphasizes the importance of artificial intelligence to development and argues that “there should not be any pre-set premises or prejudged outcome which may impede the development of AI technology.” At the same time, the boundaries between military and civilian applications of AI technology are blurred—especially by China’s national strategy of “civil-military fusion.” China’s emergence as an artificial intelligence powerhouse may enable its diplomatic leadership on these issues, for better and for worse, while enhancing its future military power.