The US Department of Defense, working with top computer scientists, philosophers, and roboticists from a number of US universities, has finally begun a project that will tackle the tricky topic of moral and ethical robots. This multidisciplinary project will first try to pin down exactly what human morality is, and then try to devise computer algorithms that will imbue autonomous robots with moral competence — the ability to choose right from wrong. As we move steadily towards a military force that is populated by autonomous robots — mules, foot soldiers, drones — it is becoming increasingly important that we give these machines — these artificial intelligences — the ability to make the right decision. Yes, the US DoD is trying to get out in front of Skynet before it takes over the world. How very sensible.

This project is being carried out by researchers from Tufts, Brown, and the Rensselaer Polytechnic Institute (RPI), with funding from the Office of Naval Research (ONR). ONR, like DARPA, is a wing of the Department of Defense that mainly deals with military R&D. While we’re not yet at the point where military robots like BigDog have to decide which injured soldier to carry off the battlefield, or where UAVs can launch Hellfire missiles at terrorists without human intervention, it’s very easy to imagine a future where autonomous robots are given responsibility for making those kinds of moral and ethical decisions in real time. In short, it’s high time that we looked at the feasibility of infusing robots (or more accurately artificial intelligence) with circuits and subroutines that can analyze a situation and pick the right thing to do — just like a human.

As you can probably imagine, this is an incredibly difficult task. Scientifically speaking, we still don’t know what morality in humans actually is — and so creating a digital morality in software is essentially impossible. To begin with, then, the project will combine theoretical (philosophical) and empirical (experimental) research to try to isolate essential elements of human morality. These findings will then be extrapolated into a formal moral framework, which in turn can be implemented in software (probably some kind of deep neural network).

Assuming we get that far and can actually work out how humans decide right from wrong, the researchers will then take an advanced robot — something like Atlas or BigDog — and imbue its software with moral competence. One of the researchers involved in the project, Selmer Bringsjord at RPI, envisions these robots using a two-stage approach for picking right from wrong. First the AI would perform a “lightning-quick ethical check” — simple stuff like “should I stop and help this wounded soldier?” Depending on the situation, the robot would then decide if deeper moral reasoning is required — for example, should the robot help the wounded soldier, or should it continue with its primary mission of delivering vital ammo and supplies to the front line where other soldiers are at risk?
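To make the two-stage idea concrete, here’s a minimal sketch of what such a decision pipeline might look like. To be clear, this is pure illustration: the situation fields, the rules, and the crude lives-weighed-against-lives deliberation are my own assumptions, not anything published by Bringsjord or the ONR project.

```python
from dataclasses import dataclass

@dataclass
class Situation:
    wounded_soldier_nearby: bool
    mission_is_time_critical: bool
    lives_at_risk_on_mission: int

def quick_ethical_check(s: Situation) -> str:
    """Stage 1: the 'lightning-quick ethical check' against simple rules."""
    if not s.wounded_soldier_nearby:
        return "continue_mission"  # no ethical conflict detected
    if not s.mission_is_time_critical:
        return "help_soldier"      # helping costs the mission nothing
    return "deliberate"            # genuine conflict: escalate to stage 2

def deliberate(s: Situation) -> str:
    """Stage 2: deeper moral reasoning — here, a crude utilitarian weighing."""
    lives_saved_by_helping = 1     # illustrative assumption
    if s.lives_at_risk_on_mission > lives_saved_by_helping:
        return "continue_mission"
    return "help_soldier"

def decide(s: Situation) -> str:
    action = quick_ethical_check(s)
    return deliberate(s) if action == "deliberate" else action
```

So a robot carrying vital ammo past a single wounded soldier would, under this toy rule set, press on — `decide(Situation(True, True, 5))` returns `"continue_mission"` — while the same robot with no urgent mission would stop to help. The real research problem, of course, is everything this sketch hand-waves: where the rules come from, and how to weigh lives at all.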

Eventually, of course, this moralistic AI framework will also have to deal with tricky topics like murder. Is it OK for a robot soldier to shoot at the enemy? What if the enemy is a child? Should an autonomous UAV blow up a bunch of terrorists? What if it’s only 90% sure that they’re terrorists, with a 10% chance that they’re just innocent villagers? What would a human UAV pilot do in such a case — and will robots only have to match the moral and ethical competence of humans, or will they be held to a higher standard?
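That 90%/10% scenario is, at bottom, a decision under uncertainty — and the naive way to encode it is a confidence threshold. The sketch below is only meant to show why this framing is uncomfortable; the 95% cutoff is an arbitrary number I picked, not a figure from the research or from any real rules of engagement.

```python
def authorize_strike(p_hostile: float, threshold: float = 0.95) -> bool:
    """Authorize only when confidence in hostile ID clears a policy bar.

    The 0.95 default is purely illustrative — in reality this number
    would be a human policy decision, not a hard-coded constant.
    """
    return p_hostile >= threshold
```

Under this rule, `authorize_strike(0.90)` returns `False`: 90% certainty isn’t enough. But notice that the comparison itself is trivial — the entire moral weight sits in choosing the threshold, which is exactly the part no algorithm can answer for us.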

At this point, it seems all but certain that the US DoD will eventually break Asimov’s Three Laws of Robotics — the first of which is “A robot may not injure a human being or, through inaction, allow a human being to come to harm.” This isn’t necessarily a bad thing, but it will open Pandora’s box. On the one hand, it’s probably a good idea to replace human soldiers with robots — but on the other, if the US can field an entirely robotic army, war as a diplomatic tool suddenly becomes a lot more palatable. The commencement of this ONR project means that we will very soon have to decide whether it’s okay for a robot to take the life of a human — and honestly, I don’t think anyone has the answer.