The next president will have a range of issues on their plate, from growing tensions with China and Russia to the ongoing war against ISIS. But perhaps the most consequential decision they will make for human history is what to do about autonomous weapons systems (AWS), aka "killer robots." The new president will have no choice but to decide, not just because the technology is rapidly advancing, but because of a ticking time bomb buried in US policy on the issue.

In 2012, the Obama administration issued Department of Defense Directive 3000.09, which sets policy for how the Pentagon handles the questions raised by this new technology. But the directive carries a five-year expiration date, meaning that Hillary Clinton or Donald Trump will have to decide what US policy on killer robots will be within the first year of their term.

It sounds like science fiction to think that a president will have to wrestle with the idea of robots outside direct human control taking on combat roles, but that is where the technology is headed. Just as driverless cars are gaining use and acceptance, the same is playing out in the realm of war. The US military already has more than 10,000 unmanned aerial systems (aka "drones") in its inventory and another 12,000 unmanned ground vehicles. Early versions, like the Predator, were almost completely remote-controlled, but each new generation has gained in intelligence and autonomy. While we are not in the world of the Terminator, robots have already shown they can carry out complex tasks on their own, such as taking off from and landing on aircraft carriers and tracking targets, whether a human or a submarine. Many air defense and cybersecurity roles are almost completely automated already.

WIRED Opinion: Heather M. Roff is a research scientist at Arizona State University, a senior research fellow at the University of Oxford, and a fellow at New America. Peter W. Singer is a strategist at New America and the author of Ghost Fleet.

Indeed, by our count the US military is working on at least 21 different projects to increase the autonomous capacities of weapons systems in war. And just last week, the Pentagon's Defense Science Board released a major new study on what it thinks should be the future of robotics, concluding that "autonomy will deliver substantial operational value across an increasingly diverse array of DoD missions, but the DoD must move more rapidly to realize this value."

Policy's Global Reach

This revolution is global, though; American efforts are paralleled by research on, and deployment of, increasingly capable systems in nations around the world. Just this summer, China's military leaders talked up plans to put artificial intelligence into their new cruise missiles; Russia's Foundation for Advanced Studies (that country's version of DARPA) displayed work on humanoid "Iron Man" robots; and even Iraq showed off a new armed ground robot, a crude remote-operated mini-tank called "Alrobot" (Arabic for "robot").

This emerging reality has prompted debate—in places ranging from the Pentagon to the United Nations—on the need for policies, regulations, or, as some even argue, preemptive bans on AWS. Yet to date only the US and its close partner the UK have created actual policies regarding these weapons, and limited ones at that. Both countries allow research on AWS to move forward but aim to limit deployment absent appropriate human judgment or meaningful human control. The US policy also has an out clause, allowing the nation to build and use the technology if senior leadership deems it necessary.

But this loose policy expires in 2017. It will have to be renewed, amended, or allowed to lapse. Even making no decision would be momentous, as it would signal full entry into an unregulated world of autonomous weapons.

The upcoming deadline offers the next president a historic opportunity to shape the future of robotics and war. They should seize it, using the forced deadline to establish a clearer US policy going forward, which, in turn, would strengthen US global influence on this issue.

Fix Vague Definitions

If the US is going to renew the policy, two key areas need amendment. First, the present policy does not sufficiently specify which actions—such as employing lethal force—are permissible for which weapons systems. It allows "semi-autonomous" weapons to use lethal force but seeks to keep "autonomous weapons" from doing so.

But the definitions are murky: The present policy defines an autonomous weapon system as one that "once activated, can select and engage targets without further intervention by a human operator." By comparison, the allowable semi-autonomous weapons systems "…only engage individual targets or specific target groups that have been selected by a human operator." However, these semi-autonomous weapons can still employ "autonomy for engagement related functions," so long as "human control is retained over the decision to select individual targets and target groups for engagement."

The result: ambiguity about what precisely differentiates a semi-autonomous robotic weapon from an autonomous one. Under the current definitions, both would be activated and deployed by a human, and both would still require a human to identify the permissible kinds of targets they could shoot at. The difference lies in how, or perhaps when, the autonomous one actually "selects" or "detects" its targets, and that isn't at all clear. Even here, the directive states only that it is crucial for "commanders and operators to exercise appropriate levels of human judgment over the use of force." "Appropriate" and "judgment" are loaded terms, whose meanings people might reasonably debate in all sorts of contexts. In short, whether or not something is going to be banned, our definitions of it need to be far clearer and more accessible.

Which leads to a second area of concern: the advancement of artificial intelligence. Learning systems are the future of computing and autonomy. They'll be useful for everything from navigation to target recognition for weapons systems. By learning how to deal with new kinds of targets not previously selected or identified by their human creators, they'll be able to keep up with the decoys, deception, and ruses an adversary is likely to try. But such new capabilities raise a question the old policy doesn't answer. While the US presently puts a premium on verification, validation, testing, and evaluation to determine the likely behaviors of a system, learning systems by their very definition might not act in perfectly predictable ways. To put it another way, traditional testing, verification, and validation procedures may not work when the technology is literally designed to learn, and thus to change.
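
To make that verification problem concrete, consider this minimal, purely illustrative sketch; the toy "model" and its test are our invention, not any actual DoD procedure. The point is that a certification run describes only the frozen weights it was run against. A single online update in the field, here a crude nudge toward a misleading example an adversary supplied, silently invalidates the earlier result.

```python
# Illustrative only: why test-and-certify breaks down for systems that
# keep learning. The "model" is a single weight on a threshold detector.
def certify(model_weight: float, test_inputs, expected) -> bool:
    """Offline validation: check the frozen model against known cases."""
    return all(
        (x * model_weight > 0.5) == want
        for x, want in zip(test_inputs, expected)
    )

weight = 1.0
test_inputs = [0.9, 0.2]
expected = [True, False]  # 0.9 should trip the detector; 0.2 should not

assert certify(weight, test_inputs, expected)       # passes at deployment

# One online learning step in the field changes the system's behavior,
# so the certification that was valid at deployment is now stale.
weight -= 0.8
assert not certify(weight, test_inputs, expected)   # same tests now fail
```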

Part of the solution may be to develop a more explicit policy on online versus offline learning. Take, for instance, DARPA's Target Recognition and Adaptation in Contested Environments (TRACE) project. TRACE uses a deep neural net to classify the synthetic aperture radar data a targeting system might gather (to a human, these images look like grainy black-and-white 3D pictures). TRACE is supposed to deliver a lower false-alarm rate than either humans or existing machines can achieve, identifying only "real" targets, while consuming little power for computation and search.
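
As a rough illustration of the kind of classifier that description implies, here is a minimal sketch; the architecture, layer sizes, and threshold are assumptions made for illustration and reflect nothing of DARPA's actual design. The knob at the end is the relevant part: raising the decision threshold trades missed detections for a lower false-alarm rate.

```python
# Hypothetical sketch: a small convolutional classifier over synthetic
# aperture radar (SAR) image chips. Everything here is illustrative.
import torch
import torch.nn as nn

class SARTargetClassifier(nn.Module):
    def __init__(self, num_classes: int = 2):  # e.g. "clutter" vs. "target"
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4),
        )
        self.head = nn.Linear(32 * 4 * 4, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(x).flatten(1))

model = SARTargetClassifier()
chip = torch.randn(1, 1, 64, 64)        # one single-channel SAR image chip
probs = torch.softmax(model(chip), dim=1)

# A higher threshold lowers the false-alarm rate at the cost of missed
# detections, the trade-off the TRACE description alludes to.
DETECTION_THRESHOLD = 0.95
is_target = probs[0, 1].item() > DETECTION_THRESHOLD
```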

The "contested environments" part of TRACE's name points to something else important. The TRACE system is designed to be used in places where an enemy might jam or degrade communications, trying to defeat manned or remotely operated vehicles—the sort of scenario Pentagon planners worry about in a war with a China or Russia. TRACE points to a valuable answer, an AI that can still operate when the lines of communication with its human creators is cut, adapting to adversary countermeasures, learning new targets, and even be co-located on a guided or loitering munitions. But here's the rub: Where should the learning that drives all of this happen? In the lab (off-line) or in the field (online)?

This difference matters greatly. If learning happens offline, the system is frozen and cannot continue to learn once deployed. We might be able to test it to a sufficient degree through modeling and simulation (though with difficulty, as we can't actually test "all possible" states of the system), but it wouldn't be as useful as a system that could keep learning in the field.

Yet if one decided to deploy a system that could continually learn, and thus be less likely to be deceived, there is another problem: There is no way to know whether it might learn something we did not intend. We would only find out after the fact—after it did something we did not want it to do. Thus the new commander-in-chief should give some guidance on the appropriateness of creating or fielding these kinds of systems. One approach might be to require a mix of what are known as "negative" and "positive" controls on learning autonomous systems, akin to how nuclear weapons are designed so they can be used only as planned but also carry built-in vetoes that let human controllers stop their actions.
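
What might such controls look like in software? Here is a minimal, hypothetical sketch, with every class and method name invented for illustration: a positive control that requires affirmative human authorization before any engagement, and a negative control, a veto, that halts both action and further learning.

```python
# Illustrative only: "positive" and "negative" controls wrapped around an
# online-learning system. Not drawn from any actual DoD design.
from dataclasses import dataclass, field

@dataclass
class ControlledLearner:
    authorized: bool = False      # positive control: off until granted
    vetoed: bool = False          # negative control: human kill switch
    weights: list = field(default_factory=lambda: [0.0])

    def grant_authorization(self):   # a human approves each engagement
        self.authorized = True

    def veto(self):                  # a human can recall it at any time
        self.vetoed = True

    def learn(self, gradient: float):
        # Online updates stop the moment a veto is issued, so the system
        # cannot keep learning something unintended after being recalled.
        if not self.vetoed:
            self.weights[0] -= 0.01 * gradient

    def engage(self) -> bool:
        # Both controls must be satisfied: authorization given, no veto.
        return self.authorized and not self.vetoed

system = ControlledLearner()
system.learn(gradient=2.0)       # allowed: fielded and not vetoed
assert not system.engage()       # blocked: no positive authorization yet
system.grant_authorization()
assert system.engage()           # allowed: authorized, no veto
system.veto()
assert not system.engage()       # blocked: the negative control wins
```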

It's crucial that the US clarify its own policies on killer robots not just because of this deadline, but because there is a limited window for the next president to take the lead in shaping the coming global debate. The US will gain valuable leverage if it can be the first nation with a truly robust policy on armed robotics, but it would be bad for global peace and stability if it were alone in doing so.

The Need for Consensus

This track of external engagement on robotics should take place at three levels.

First, the US should try to build consensus among its partners and allies about what shared policies in this area ought to be. Just as the Western nations within NATO had to establish collective approaches to the emerging domain of cyber conflict, they now should do so for AI and robotics. This is valuable not just for each individual nation and the broader alliance, but also as a key building block for the bigger global debate. The second track is to work with regional multilateral groups aiming at the same goal, using them to steer the small and middle powers now starting to dabble in this space toward taking the safety of robotic weapons more seriously. And the third is to use these building blocks to drive the development of policy, behavior, and law at the global level, which is the best means of influencing states like Russia and China, with whom the US has the least leverage.

The first venue for that will be the upcoming review conference of the United Nations Convention on Certain Conventional Weapons (CCW). The CCW is the primary international body for regulating weapons and has already debated "killer robots" three times in the past three years. Given that the review conference takes place in December, a few weeks after the presidential election, the defense and arms control advisers of whichever team wins will need to track it during the transition.

Running through this strategy must be the understanding that, while self-defense is any state's right, protected as such under the United Nations Charter, how states pursue their defense—particularly with weapons that operate autonomously—is precarious territory. It surfaces all sorts of concerns, from arms races and human rights abuses to new risks of unintentional or accidental uses of force. The focus must therefore be on what the norm of international behavior should be. If nations are indeed going to push forward with this technology, they should agree on clear policies for its development, use, and regulation. There also need to be established control mechanisms (both positive and negative), weapons reviews, and even confidence-building measures.

This effort would advance further with a focus not just on the technology itself, but also on the places and targets against which it is designed to be used. This would be similar to what has played out in discussions of cyberwar. While it's clear that we can't stop all cyberattacks, we can build norms that put certain types and targets off limits. The same may be true of robotic weapons: Nations might not agree on an outright ban, but they could agree on key areas or times where use would simply be too problematic. Such no-go areas might include releasing the technology before a conflict has started; uses more prone to abuse or accident (such as targeting individual humans rather than large platforms like tanks or submarines); situations where the risk of escalation is high, such as anything touching nuclear weapons or their command and control; or uses that threaten the shared global commons, like space (any weapons use there would create debris that threatens all satellites).

This all seems daunting, but the US may be able to take a page from how past generations wrestled with new technologies. About a century ago, the nations of the world couldn't agree on an outright ban of naval mines, but they found a way to reach accord on certain types and uses of them. Akin to our worries now with robotics, they shared a concern over free-floating mines that no one controlled, since such mines put everyone's shipping at risk, even that of the mines' own maker. So, in what became known as "Hague VIII," the 1907 Convention Relative to the Laying of Automatic Submarine Contact Mines banned free-floating mines that did not self-destruct within an hour, a rule that still holds today. That kind of foresight is needed on the killer robot issue as well.

The next president can't avoid the challenge of robotics and war, but he or she may be able to turn it into an opportunity. A decision about US policy toward this new technology will have to be made. Will it be a thoughtful, strategic one, or will the nation just operate on autopilot?