Let’s begin with a basic fact: the capacities of autonomous weapons are designed by human programmers, whose intentions are written into the system’s software. Let us remember, though, that computation need not be restricted to a silicon substrate or to any other metaphysically privileged medium.

Imagine that, rather than having a programmer’s intentions written into a robot’s silicon substrate, we instead have a group of people physically carrying out the identical set of decisions and procedures. In such a case, the decision procedure realized by the group would be functionally identical to the one realized on silicon. We might then ask: would an army executing an institutional plan functionally identical to the algorithms of a killer robot be any less morally problematic? Would we want to ban that particular army arrangement as well? If so, how?

The main point here is to emphasize the serious problem policymakers would face in isolating exactly what should be banned. Such considerations also highlight that the deep moral issues raised by autonomous weapons are the very ones raised by conventional warfare.

When we start to look at killer robots this way, we see that they are the mechanical realizations of much larger sets of institutional intentions: the plans of designers and the decisions of the people who carry out those plans, individuals who accept certain jobs, develop certain software, and put that software to military use.

There is, of course, the problem of determining who within such a causal chain is morally responsible for potential harm, and to what degree. But assigning moral responsibility to individual members of a causal chain is a problem endemic to collective action of any kind, from the Challenger disaster to the BP oil spill to global poverty.

The current language in the killer robot debate suggests that these weapons can act without meaningful human control, that their creation and use are somehow distinct from other sorts of collective action, and that any resulting harm may be morally unattributable to those who create and use them. This is not the sort of moral detachment we should foster in our technology and military communities, especially in relation to what is perhaps the gravest and most consequential of all human activities: war.