When people talk about robots and ethics, they always seem to bring up Isaac Asimov’s “Three Laws of Robotics.” But there are three major problems with these laws and their use in our real world.

The Laws

Asimov’s laws originally consisted of three rules for machines:

Law One – “A robot may not injure a human being or, through inaction, allow a human being to come to harm.”

Law Two – “A robot must obey orders given to it by human beings except where such orders would conflict with the First Law.”

Law Three – “A robot must protect its own existence, as long as such protection does not conflict with the First or Second Law.”

Asimov later added the “Zeroth Law,” above all the others – “A robot may not harm humanity, or, by inaction, allow humanity to come to harm.”

The Debunk

The first problem is that the laws are fiction! They are a plot device that Asimov made up to help drive his stories. Indeed, his tales almost always revolved around robots that follow these great-sounding, logical ethical codes yet still go astray, and the unintended consequences that result. An advertisement for the 2004 movie adaptation of Asimov’s famous book I, Robot (starring the Fresh Prince and Tom Brady’s baby mama) put it best: “Rules were made to be broken.”

For example, in one of Asimov’s stories, robots are made to follow the laws, but they are given a particular definition of “human.” Prefiguring what now goes on in real-world ethnic-cleansing campaigns, the robots recognize only people of a certain group as “human.” They follow the laws, but still carry out genocide.

The second problem is that no technology can yet replicate Asimov’s laws inside a machine. As Rodney Brooks of the company iRobot (named after the Asimov book, and the people who brought you the PackBot military robot and the Roomba robot vacuum cleaner) puts it, “People ask me about whether our robots follow Asimov’s laws. There is a simple reason [they don’t]: I can’t build Asimov’s laws in them.”

Roboticist Daniel Wilson was a bit more florid. “Asimov’s rules are neat, but they are also bullshit. For example, they are in English. How the heck do you program that?”
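To see Wilson’s point concretely, here is a deliberately toy sketch (in Python, with every name invented for illustration) of what a literal attempt to program Law One runs into: the programmer is forced to hard-code contestable definitions of “human” and “harm,” which is exactly the loophole Asimov’s genocide story exploits.

```python
# Hypothetical, deliberately naive sketch of Asimov's First Law.
# All names here are invented for illustration; nothing like this
# exists in any real robotics stack.

def is_human(entity):
    # The crux: "human" must be defined by the designer, and any
    # definition can be gamed or abused.
    return entity.get("species") == "human"

def first_law_permits(action, affected):
    """Toy check of a proposed action against Law One."""
    for entity in affected:
        # "A robot may not injure a human being..."
        if is_human(entity) and action.get("causes_harm"):
            return False
    return True

# The loophole from Asimov's story: redefine "human" and the
# harmful action passes the check.
strike = {"causes_harm": True}
villager = {"species": "human"}
print(first_law_permits(strike, [villager]))    # False: blocked

redefined = {"species": "not-recognized"}
print(first_law_permits(strike, [redefined]))   # True: permitted
```

Note that this sketch does not even attempt the second clause of Law One (harm “through inaction”), which would require the machine to predict all foreseeable consequences of doing nothing, an even harder problem.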

The third and most important reason that Asimov’s laws have not been applied yet is how robots are actually being used in the real world. You don’t arm a Reaper drone with a Hellfire missile or put a machine gun on a MAARS (Modular Advanced Armed Robotic System) to keep humans from coming to harm. Causing harm is the very point!

The same goes for building a robot that takes orders from any human. Do I really want Osama bin Laden to be able to order my robot about? And finally, the fact that robots can be sent out on dangerous missions to be “killed” is often the very rationale for using them. To give them a sense of “existence” and a survival instinct would undercut that rationale, and would also open up scenarios from another science-fiction series, the Terminator movies. The point here is that much of the funding for robotics research comes from the military, which is paying for robots that follow the very opposite of Asimov’s laws. It explicitly wants robots that can kill, won’t take orders from just any human, and don’t care about their own existence.

A Question of Ethics

The bigger issue, though, when it comes to robots and ethics is not whether we can use something like Asimov’s laws to make machines that are moral (which may be an inherent contradiction, given that morality wraps together both intent and action, not mere programming).

Rather, we need to start wrestling with the ethics of the people behind the machines. Where is the code of ethics in the robotics field for what gets built and what doesn’t? To what would a young roboticist turn? Who gets to use these sophisticated systems and who doesn’t? Is a Predator drone a technology that should be limited to the military? Well, too late: the Department of Homeland Security is already flying six Predator drones on border-security missions. Likewise, many local police departments are exploring the purchase of their own drones to park over high-crime neighborhoods. I may think that makes sense, until the drone is watching my neighborhood. And what about me? Is it within my Second Amendment rights to have a robot that bears arms?

These all sound like the sort of questions that would only be posed at science-fiction conventions. But that is my point. When we talk about robots now, we are no longer talking about “mere science fiction,” as one Pentagon analyst described these technologies. They are very much a part of our real world.