Song sits at the wide table flanked by five young engineers, most of whom were educated at Ivy League colleges in America before returning to work in the lucrative South Korean weapons industry. “The next step for us is to get to a place where our software can discern whether a target is friend, foe, civilian or military,” he explains. “Right now humans must identify whether or not a target is an adversary.” Park and the other engineers claim that they are close to eliminating the need for this human intervention. The Super aEgis II is already adept at finding potential targets within a given area. (An operator can even specify a virtual perimeter, so that only moving objects within that area are picked out by the gun.) Park says that, thanks to its numerous cameras, the gun’s software can then discern whether or not a potential target is wearing explosives under their shirt. “Within a decade I think we will be able to computationally identify the type of enemy based on their uniform,” he says.
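DoDAAM has not published how its targeting software works, so the following is only a rough illustration of the behaviour Park describes: moving objects inside an operator-drawn virtual perimeter are surfaced as candidate targets, while classification as friend, foe or civilian is left to a later (currently human) step. Everything in this Python sketch, from the names to the simple point-in-polygon test, is an assumption made for illustration, not the company’s code.

```python
from dataclasses import dataclass
from enum import Enum


class Classification(Enum):
    FRIEND = "friend"
    FOE = "foe"
    CIVILIAN = "civilian"
    UNKNOWN = "unknown"


@dataclass
class Track:
    x: float          # metres east of the turret (hypothetical coordinate frame)
    y: float          # metres north of the turret
    moving: bool      # flagged by a motion detector
    label: Classification = Classification.UNKNOWN   # classification happens later


def inside_perimeter(track: Track, perimeter: list[tuple[float, float]]) -> bool:
    """Ray-casting point-in-polygon test against the operator-drawn virtual perimeter."""
    inside = False
    n = len(perimeter)
    for i in range(n):
        x1, y1 = perimeter[i]
        x2, y2 = perimeter[(i + 1) % n]
        if (y1 > track.y) != (y2 > track.y):
            x_cross = x1 + (track.y - y1) * (x2 - x1) / (y2 - y1)
            if track.x < x_cross:
                inside = not inside
    return inside


def candidate_targets(tracks: list[Track], perimeter: list[tuple[float, float]]) -> list[Track]:
    """Only moving tracks inside the perimeter are offered to the (human) operator."""
    return [t for t in tracks if t.moving and inside_perimeter(t, perimeter)]


# Example with made-up numbers: a 50 m square zone around a checkpoint.
zone = [(0.0, 0.0), (50.0, 0.0), (50.0, 50.0), (0.0, 50.0)]
tracks = [Track(x=25.0, y=10.0, moving=True), Track(x=80.0, y=10.0, moving=True)]
print(candidate_targets(tracks, zone))   # only the first track falls inside the zone
```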

Once a weapon is able to tell friend from foe, and to fire upon the latter automatically, it’s a short step to full automation. And as soon as a weapon can decide who to kill, and when, Robocop-esque science fiction becomes fact. The German philosopher Thomas Metzinger has argued that the prospect of increasing the amount of suffering in the world is so morally awful that we should cease building artificially intelligent robots immediately. But the financial rewards for the companies that build these machines are such that Metzinger’s plea is already obsolete. The robots are not coming; they are already here. The question now is: what do we teach them?

Complex rules

Philippa Foot’s trolley dilemma, first posited in 1967, is familiar to any ethics student. She suggested the following scenario: a runaway train car is approaching a fork in the tracks. If it continues undiverted, it will strike and kill a work crew of five; if it is steered down the other track, it will kill a lone worker. What do you, the operator, do? This kind of ethical quandary will soon have to be answered not by humans but by our machines. A self-driving car may have to decide whether to crash into the car in front, potentially injuring its occupants, or to swerve off the road, placing its own passengers in danger. (The development of Google’s cars was partly motivated by the designer Sebastian Thrun’s experience of losing someone close to him in a car crash, which reportedly led to his belief that there is a moral imperative to build self-driving cars to save lives.)
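No manufacturer has published a decision rule for such cases, and reducing harm to a single number is itself a contested ethical choice. The toy sketch below simply restates Foot’s scenario as a choice that minimises expected casualties, a purely utilitarian reading; every name and figure in it is an illustrative assumption.

```python
from dataclasses import dataclass


@dataclass
class Outcome:
    action: str
    expected_casualties: float  # a crude proxy for harm, used only to make the dilemma concrete


def choose_action(outcomes: list[Outcome]) -> Outcome:
    """Pick the action with the lowest expected harm: a bare utilitarian reading of the dilemma."""
    return min(outcomes, key=lambda o: o.expected_casualties)


# Foot's scenario restated: staying the course kills five, diverting kills one.
trolley = [
    Outcome("continue on the current track", expected_casualties=5.0),
    Outcome("divert onto the side track", expected_casualties=1.0),
]
print(choose_action(trolley).action)  # -> "divert onto the side track"
```

The point of the sketch is not that such a rule is right, but that any machine forced into the dilemma must embody some rule, chosen by its designers in advance.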