Humans in combat do not have unrestrained autonomy, nor are they excused from accountability if their actions fall outside prescribed moral, legal, and ethical norms. Human combatants, and their use of the systems they man, direct, or operate remotely, are constrained by approved rules of engagement, the laws of war, military regulations, command guidance, moral considerations, and the legal constraints of necessity and proportionality. Yet to be effective and accomplish their military mission, human combatants are granted a necessary, but not unlimited, degree of autonomy. This degree of autonomy is variable, increased or decreased based on numerous factors, including the specifics of the assigned mission, the capabilities of the subordinate, assessed risks, the area of operations, and the degree to which non-combatants are present. An appropriate degree of autonomy in a dynamic and rapidly changing environment allows subordinates to “exercise disciplined initiative within the commander’s intent.”[8] Operating in these types of environments is a core competency of US military leaders and gives them, and their units, a comparative advantage over more rigidly structured and less autonomous military forces.

To be maximally effective, warbots also require a degree of autonomy. The importance of this autonomy increases in proportion to the level of technology available relative to enemy systems. Denying a warbotic combat vehicle a competitive degree of autonomy will place it at a distinct disadvantage when fighting faster-thinking machines that do not require human approval before executing firing solutions. However, the environments in which warbots will be used, their relative utility, and therefore the required degree of autonomy will vary dramatically with the situation. An offensive across the deserts and mountains of Iran would look very different from a prolonged battle in a mega-city like Manila, and the differences would lie not only in the types of forces encountered, but also in terrain, electromagnetic density, the number and type of civilians present, and proximity to command nodes, to name a few. Consequently, warbots will operate in a manner similar to humans in that they will operate under variable autonomy settings. These could take the form of adjustable rules of engagement requiring higher degrees of certainty about a target before firing without human intervention, or instructions similar to air defense weapons control orders: “weapons hold,” which allows firing only when fired upon; “weapons tight,” which allows firing when a target has been positively identified as enemy; and “weapons free,” which allows firing at anything not identified as friendly. Variable autonomy enables warbots to maximize their capabilities while allowing their responses to be tailored as required to keep the probability of collateral damage low.
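To make variable autonomy concrete, the sketch below (in Python, purely illustrative; the status names follow the air defense orders described above, but the class names, parameters, and thresholds are assumptions rather than any fielded system’s interface) shows how a commander-adjustable weapons control status and certainty requirement might gate whether a warbot may fire without human approval:

from enum import Enum
from dataclasses import dataclass

class WeaponsControlStatus(Enum):
    """Air-defense-style weapons control orders described above."""
    HOLD = "hold"    # fire only when fired upon
    TIGHT = "tight"  # fire only at targets positively identified as enemy
    FREE = "free"    # fire at anything not identified as friendly

@dataclass
class EngagementPolicy:
    """Commander-set, adjustable constraints on autonomous engagement."""
    status: WeaponsControlStatus
    min_hostile_confidence: float  # required certainty of hostile identification (0.0-1.0)

def may_engage_autonomously(policy: EngagementPolicy,
                            hostile_confidence: float,
                            friendly_confidence: float,
                            being_fired_upon: bool) -> bool:
    """Return True if firing without human approval is permitted under the policy."""
    if policy.status is WeaponsControlStatus.HOLD:
        return being_fired_upon
    if policy.status is WeaponsControlStatus.TIGHT:
        return hostile_confidence >= policy.min_hostile_confidence
    # WEAPONS FREE: anything not identified as friendly may be engaged
    return friendly_confidence < 0.5  # illustrative threshold, not doctrine

# Example: a commander tightens the policy for operations near civilians.
urban_policy = EngagementPolicy(WeaponsControlStatus.TIGHT, min_hostile_confidence=0.95)
print(may_engage_autonomously(urban_policy,
                              hostile_confidence=0.90,
                              friendly_confidence=0.05,
                              being_fired_upon=False))  # False: below the required certainty

The point of the sketch is not the code itself but the design choice it represents: the thresholds and statuses are set by the commander and can be tightened or loosened as the situation changes, just as rules of engagement are for human subordinates.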

Who Decides?

The question of who makes these autonomy decisions is an interesting one because of the nature of military decision making itself. At what level does a decision need to be approved by a higher command? Too much centralization can produce organizational paralysis in a crisis while also limiting the ability of subordinates to exploit fleeting opportunities. Too little, and a force turns into a widely divergent organization that lacks a central nervous system and behaves like a disorganized mob. The US military takes a balanced approach to this challenge with centralized planning and decentralized execution, all within the commander’s intent. Leaders up and down the chain of command engage in dialogue to ensure decisions are made at the appropriate level. Just as it would be unwise to authorize a lieutenant to employ nuclear weapons, it would be ineffective to require the president to approve every pull of a rifleman’s trigger. The military has rules of engagement and mechanisms in place to ensure that decisions, to include degrees of autonomy, are made and understood at different levels. This points back to the central role of the commander as the final, and accountable, decision maker. Commanders must make the unenviable decisions regarding dangerous courses of action, collateral damage, degrees of acceptable risk, amounts of autonomy, and the myriad other necessary life-and-death choices. This has been the case with humans and technology so far, and it will remain true with warbots. Commanders will continue to decide which weapon systems to employ, how much autonomy to give their human and robotic subordinates, and what restrictions to place on their forces to ensure larger strategic objectives are not compromised by tactical missions.

Accountability

The robotic nightmare scenario might look like this: A future commander is informed that during a combat operation, multiple safeguards failed and, in the confusion of battle, a recently fielded, AI-enabled attack helicopter mistakenly targeted a location with numerous civilians. Before the headquarters knew of the mistake, dozens were killed and wounded. The internet erupted as prominent news sources quickly reported on the carnage. The commander thought, “This is why we should never have given machines autonomy; I knew we should have kept a man in the loop...”

Yet replace “AI-enabled attack helicopter” with “AC-130U gunship” and we have a brief description of the October 3, 2015 airstrike on the Médecins Sans Frontières hospital in Kunduz.