Relatedly, we already hear criticisms that the use of technology in war or peacekeeping missions isn't helping to win the hearts and minds of local populations. For instance, sending robot patrols into Baghdad to keep the peace would send the wrong message about our willingness to connect with the residents; we will still need human diplomacy for that. In war, this could backfire against us, as our enemies mark us as dishonorable and cowardly for being unwilling to engage them man to man. That perception makes them more resolute in fighting us, fuels their propaganda and recruitment efforts, and leads to a new crop of determined terrorists.

Also, robots might not be taken seriously by the humans interacting with them. We tend to show machines less respect than we show other people, abusing them more often; think of beating up printers and computers that annoy us. So we could be impatient with robots, as well as distrustful of them, which reduces their effectiveness.

Without defenses, robots could be easy targets for capture, yet they may contain critical technologies and classified data that we don't want to fall into the wrong hands. Robotic self-destruct measures could go off at the wrong time and place, injuring people and creating an international crisis. So do we give robots defensive capabilities, such as evasive maneuvers, or perhaps nonlethal weapons like repellent spray, Taser guns, or rubber bullets? Any of these "nonlethal" measures could turn deadly too. In running away, a robot could mow down a small child or an enemy combatant, which would escalate a crisis. And we see news reports all too often about unintended deaths caused by Tasers and other supposedly nonlethal weapons.

International humanitarian law (IHL)

What if we designed robots with lethal defenses or offensive capabilities? We already do that with some robots, such as the Predator, the Reaper, the CIWS, and others. And there we run into familiar concerns that robots might not comply with international humanitarian law, that is, the laws of war. For instance, critics have argued that we shouldn't allow robots to make their own attack decisions (as some do now), because they lack the technical ability to distinguish combatants from noncombatants, that is, to satisfy the principle of distinction, which appears in the Geneva Conventions and the underlying just-war tradition. This principle requires that we never target noncombatants. But a robot already has a hard time distinguishing a terrorist pointing a gun at it from, say, a girl pointing an ice cream cone at it. These days, even humans have a hard time with this principle, since a terrorist might look exactly like an Afghan shepherd with an AK-47 who's just protecting his flock of goats.

Another worry is that the use of lethal robots represents a disproportionate use of force relative to the military objective. This speaks to collateral damage, the unintended deaths of nearby innocent civilians caused by, say, a Hellfire missile launched from a Reaper UAV. What's an acceptable ratio of innocents killed for every bad guy killed: 2:1, 10:1, 50:1? That number has never been nailed down and continues to be a source of criticism. It's conceivable that a target could be of such high value that even a collateral-damage ratio of 1,000:1, or greater, would be acceptable to us.