As custom government malware becomes an increasingly common international weapon with real-world effects—breaking a centrifuge, shutting down a power grid, scrambling control systems—do we need legal limits on the automated decision-making of worms and rootkits? Do we, that is, need to keep a human in charge of their spread, or of when they attack? According to the US government, no, we do not.

A recently issued Department of Defense directive signed by Deputy Secretary of Defense Ashton Carter sets military policy for the design and use of autonomous weapons systems in combat. The directive is intended to minimize "unintended engagements"—weapons systems attacking targets other than enemy forces, or weapon systems causing collateral damage. But the directive specifically exempts autonomous cyber weapons.

Most weapon systems, the policy states, "shall be designed to allow commanders and operators to exercise appropriate levels of human judgment over the use of force," regardless of whether the system is using lethal "kinetic" weapons or some form of non-lethal force. If bullets, rockets, or missiles are to be fired, tear gas is to be launched, or systems are to be jammed, a human needs to make the final decision on when they are used and at whom they are aimed.

But the policy explicitly exempts "autonomous or semi-autonomous cyberspace systems for cyberspace operations." And development efforts for those sorts of systems are now being pursued much more openly. For instance, on the same day the new directive was issued, the Defense Advanced Research Projects Agency (DARPA) solicited bids for "Plan X," an effort to create a "foundational cyberwarfare" capability that would allow the DOD to better monitor, exploit, and attack an enemy's network and computer systems. (A synopsis of DARPA's Plan X project is available as a PDF document.)

As part of the effort, DARPA is examining commercial tools that security experts already use for penetration testing and hardening, including Metasploit and Immunity's Canvas, as well as a "mission runtime environment" that can run automated sets of attacks. And the focus is decidedly on automation—keeping "manual" human oversight and judgment in the loop is simply too slow when events unfold in microseconds, and it doesn't scale. Here's how the Plan X overview puts it:

The current manual approach has defined the way cyber operations are conceived and would be conducted—as asynchronous actions. Manual processes provide no capacity for real-time assessment and adjustment to adapt to changing battlespace conditions. The current paradigm is a simple progression of plan, execute, plan, execute, plan, execute... however, if the process can be technologically optimized and the time-intensive requirements minimized, commanders will be able to leverage cyber capabilities in a more flexible manner, consistent with kinetic capabilities, to achieve real-time, synchronous effects in the cyber battlespace.

With systems like Plan X, planners will instead deploy attack libraries from a "play book," "similar to a football play book that contains specific plays developed for specific scenarios," according to the proposal. Those plays may contain "checkpoints" during mission execution where the code pauses for human input or direction, and the code will be built with checks on what it can do without human direction. But most of the time, the code will make its own decisions while running in the wild. Though that's likely to cause spillover and other unintended problems, the government is willing to live with them.
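The checkpoint idea described in the proposal can be sketched in a few lines. The names here (`Step`, `run_play`, the gating callback) are illustrative assumptions, not anything from DARPA's actual design—the point is just to show a mission that runs autonomously except at steps explicitly flagged for human direction.

```python
# Hypothetical sketch of a "play book" mission with human-approval checkpoints.
# Step and run_play are invented names for illustration, not Plan X APIs.

from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Step:
    name: str
    action: Callable[[], str]   # automated action, runs without oversight
    checkpoint: bool = False    # pause here for human direction?

def run_play(steps: List[Step], approve: Callable[[str], bool]) -> List[str]:
    """Execute steps in order; at a checkpoint, ask a human before proceeding."""
    log = []
    for step in steps:
        if step.checkpoint and not approve(step.name):
            log.append(f"{step.name}: aborted at checkpoint")
            break
        log.append(f"{step.name}: {step.action()}")
    return log

# Example play: reconnaissance runs autonomously; the attack step is gated.
play = [
    Step("map-network", lambda: "done"),
    Step("launch-exploit", lambda: "done", checkpoint=True),
]
print(run_play(play, approve=lambda name: False))
```

Everything not marked as a checkpoint executes at machine speed with no human in the loop—which is exactly the trade-off the directive's exemption permits.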

Network and software attacks can potentially have the same sorts of effects as non-lethal, or even lethal, weapons, and their impact can spread far beyond the intended target. Yet the DOD is handling them under separate rules of engagement. Remember how Stuxnet managed to spread beyond Iranian nuclear research facilities? Such scenarios will likely become more common—soon.