Marines test new equipment such as the Multi Utility Tactical Transport (MUTT) in a simulated combat environment at Marine Corps Base Camp Pendleton, California, on July 8, 2016. Photo: 15th Marine Expeditionary Unit

Last month, the U.S. Army put out a call to private companies for ideas about how to improve its planned semi-autonomous, AI-driven targeting system for tanks. In its request, the Army asked for help enabling the Advanced Targeting and Lethality Automated System (ATLAS) to “acquire, identify, and engage targets at least 3X faster than the current manual process.” But that language apparently scared some people who are worried about the rise of AI-powered killing machines. And with good reason.


In response, the U.S. Army added a disclaimer to the call for white papers in a move first spotted by news website Defense One. Without modifying any of the original wording, the Army simply added a note explaining that Defense Department policy hasn’t changed. Fully autonomous American killing machines still aren’t allowed to go around murdering people willy-nilly. There are rules—or policies, at least. And their robots will follow those policies.

Yes, the Defense Department is still building murderous robots. But those murderous robots must adhere to the department’s “ethical standards.”


The added language:

All development and use of autonomous and semi-autonomous functions in weapon systems, including manned and unmanned platforms, remain subject to the guidelines in the Department of Defense (DoD) Directive 3000.09, which was updated in 2017. Nothing in this notice should be understood to represent a change in DoD policy towards autonomy in weapon systems. All uses of machine learning and artificial intelligence in this program will be evaluated to ensure that they are consistent with DoD legal and ethical standards.

Why does any of this matter? Department of Defense Directive 3000.09 requires that humans be able to “exercise appropriate levels of human judgment over the use of force,” meaning that the U.S. won’t toss a fully autonomous robot onto a battlefield and allow it to decide independently whether to kill someone. This safeguard is sometimes called keeping a human “in the loop”—a person, not a machine, makes the final decision about whether to kill.

The United States has been using robotic planes as offensive weapons in war since at least World War II. But for some reason, Americans of the 21st century are much more concerned about robots on the ground than they are about robots in the air. Perhaps we all got scarred by watching movies like Terminator 2: Judgment Day—a movie that was far more realistic than we probably imagined at the time, considering that DARPA was actually trying to build something like Skynet during the 1980s.


The U.S. military used drones in the Vietnam War, in Iraq during the first Gulf War, in Afghanistan, in Iraq during the second Iraq War, in Syria in the fight against ISIS, and in numerous other countries. Drone strikes in Somalia have skyrocketed under President Donald Trump. But those robots are somehow less scary to most Americans here in the year 2019.

The Department of Defense is going to keep pushing the technology behind targeting systems like ATLAS to make its weapons more agile, more intelligent, and ultimately more lethal. But don’t worry, it’s going to keep doing all of that according to a policy that’s been written down. Sleep well.


[Defense One]