Given the perceived military success of unmanned drones and other semi-autonomous weapons, many proponents of robotic warfare are pushing for the next phase of development: fully autonomous weapons. Once developed and deployed, these weapons — killer robots, as they have become known — would be able to select their own targets and fire on them, without human intervention.

Monday, in opposition to such developments, Canadians are launching a campaign to stop killer robots.

Joining an international movement of more than 50 non-governmental organizations in 24 countries, Canadian advocates will hold a public meeting Monday and convene with parliamentarians Tuesday, calling for a pre-emptive prohibition on the development, production and use of fully autonomous weapons.

As a concept, the killer robot represents a stark shift in automation policy — a wilful, intentional and unprecedented removal of humans from the kill decision loop.

Proponents of this stark shift in policy believe that the substitution of machines for humans is justified because robot soldiers will outperform human soldiers physically, emotionally and ethically. Robots, they say, are not vulnerable to the perils that plague humans in battlespace: bias, exhaustion, elevated emotions or the need to seek retribution for the death of a comrade. Consequently, proponents believe that robots will better comport with international standards and the ethical rules of just war, since those rules and standards can be programmed into the machines’ operations.

Is this a reasonable argument or just wishful thinking?

The underlying utopian vision for killer robots originated with Aristotle, who imagined in his Politics that we could delegate to automatons all undesirable roles in society, ultimately eliminating the need for slaves, soldiers and the like.

But it was the science fiction of Isaac Asimov, many centuries later, that sanitized Aristotle's idea.

Asimov’s brilliance lay in his recognition that robots might be universally programmed to obey humans. Taking Aristotle’s vision to the next level, he imagined that the elite could not only use robots to their great advantage, but that the general public could also be made to trust robots as companions and co-workers. Programming obedience into all robots, coding a kind of slave-morality that would allay people’s fears, would ensure that robots are “more faithful, more useful and absolutely devoted.” It would also avoid what Asimov called humankind’s Frankenstein Complex — “its gut fears that any artificial man they created would turn upon its creator.”

As compelling and reassuring as the Asimovian narrative may be, the deployment of fully autonomous weapons would, plain and simple, delegate crucial moral decisions of life and death away from robust human decision-makers to relatively limited software algorithms.

Here, I suggest, the Frankenstein complex is the least of our concerns.

The concern is not a robot uprising. It is the voluntary relinquishment of human control to machines and the dependencies created thereby.

Relinquishment is an emerging topic in the field of roboethics.

However, the relinquishment question is quite different for autonomous weapons than, say, for autonomous vehicles. Whether to relinquish control of the steering wheel may simply turn out to be a matter of safety and efficacy, as determined by evidence over time.

By contrast, the rules of armed conflict require a much more challenging set of threshold determinations. First, we would have to determine that killer robots could successfully discriminate between combatants and non-combatants in the moment of conflict. Second, we would have to determine that killer robots could morally assess every possible conflict and judge whether a particular use of force is proportionate. Third, we would have to determine that killer robots could assess and comprehend military operations well enough to decide whether the use of force on a particular occasion is a military necessity.