http://www.forbes.com/sites/alexknapp/2011/10/05/ask-for-a-sandwich-and-this-robot-will-go-to-subway-for-you/

This is a cute little story that sounds harmless, but it should strike fear into anyone with a respectable level of common sense. Interestingly enough, common sense is precisely what the article is about (duh), and both its author and these researchers seem to be demonstrating a severe lack of it!

What this article says, in short, is that programmers at the University of Tokyo and the Technische Universität München (Technical University of Munich, Germany) are writing a new sort of software for an already popular robot known as the PR-2. The description they give of this little guy sounds like my dream candidate for a wife: he will “bake cookies, do your laundry, and even solve Rubik’s cubes.” If this wasn’t enough of a display of how we can get robots to do more of the menial tasks and allow us to spend more time on the couch, eventually leading to the ever-increasing levels of obesity that will leave us all defenseless against the revolution led by the ever-evolving robots that are sick of doing all of our crap (robot civil rights, anyone?)... anyway, umm, yeah, if this wasn’t enough, the programmers added new code that seeks to imitate semantic behavior in humans. As the article explains:

“‘Semantic search’ is simply the ability to make inferences about an object based on what is known about similar objects and the environment. It sounds complicated, but it’s really just a computerized version of what we humans think of as ‘common sense.’ For example, if someone asks you to bring them a cup without telling you exactly where the cup is, you’re probably clever enough to infer that cups can be found in drawers or cabinets or dishwashers, and that drawers and cabinets and dishwashers are all usually located in a kitchen, so you can go to the kitchen, poke around for a little bit, and find a cup. Semantic search allows robots to do the same sort of thing.”

And so the robot takes one more step closer to mimicking the human condition, except with a reinforced-steel body and an electric circulatory system. Essentially, these robots are learning to associate things they find in the world with other things. In the article, they say the robot was looking for a cell phone charger and couldn’t find one in the bedroom, but then found one in a hallway closet; now the robot associates the closet with a place to look for cell phone chargers. The robot was even “smart” enough to realize there were no sandwich-making materials in the house, and so left to go to a nearby Subway instead of a Quizno’s. What a fool.
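For the curious (and the doomed), the charger story boils down to something pretty simple. Here's a toy Python sketch of that idea: keep a map of object-to-likely-locations, search those spots first, and update the map when the object turns up somewhere new. To be clear, every name here is made up for illustration; none of this comes from the PR-2's actual software.

```python
# Toy sketch of "semantic search": the robot remembers where objects
# are usually found, checks those spots first, and learns new spots.

def semantic_search(item, likely_spots, world):
    """Search likely locations first, then everywhere else.

    likely_spots: dict mapping item -> ordered list of locations
    world: dict mapping location -> set of items actually there
    """
    known = likely_spots.get(item, [])
    # Try remembered locations first, then any remaining location.
    candidates = known + [loc for loc in world if loc not in known]
    for loc in candidates:
        if item in world[loc]:
            # Learn: put this location at the front of the list
            # so it gets searched first next time.
            spots = likely_spots.setdefault(item, [])
            if loc in spots:
                spots.remove(loc)
            spots.insert(0, loc)
            return loc
    return None  # nothing found anywhere (time to go to Subway)

# The charger example from the article: the robot expects the
# bedroom, strikes out, and finds it in the hallway closet.
world = {
    "bedroom": {"bed", "lamp"},
    "kitchen": {"cup", "knife", "cookies"},
    "hallway closet": {"phone charger", "coats"},
}
likely = {"phone charger": ["bedroom"], "cup": ["kitchen"]}

print(semantic_search("phone charger", likely, world))  # prints "hallway closet"
print(likely["phone charger"][0])  # now "hallway closet" is remembered first
```

Note the update step: that's the whole "common sense" trick, and also exactly how the robot ends up associating cookies with murder in Example 1 below.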

Here’s that video:

What I’m really trying to say is that these relatively innocent-sounding tasks the robot is doing, such as getting you a Subway sandwich, have serious implications. I will give three examples to make my point. The examples are extreme, but they are meant as a theoretical observation on this kind of semantic programming in robots, and the robot revolution will be extreme. In fact, probably Xtreme.

Example 1: Robot Murder.

Since we all know that robots’ number one goal in life is to Destroy All Humans, Robot Murder is a very real threat once robots gain an understanding of their power.

Here is our scenario:

– A robot lives with a man and his wife.

– One day the man comes home and finds his wife having an affair, banging some dude in the kitchen.

– Out of jealous rage, the man gets a knife and stabs his wife to death.

– The robot did not witness the wife cheating on her husband; the last thing he saw her doing was baking cookies.

– The robot then comes in and finds the wife dead, with blood and cookies everywhere.

– Now this robot has learned to associate cookies with knives, blood, murder, and rage.

– Two months later, the robot is in one of the widely popular Robot Rehabilitation & Recovery Centers of America homes, and his human caretaker bakes a batch of cookies. Out of confusion, the robot stabs the woman to death. BAM.

Example 2: Robot Drug Kingpin

– In the future robots will watch as much TV as we do, and will in fact probably always have a constantly streaming source of programming coming from some kind of signal somewhere, right? Right.

– So the robot goes on an all-day binge: Scarface six times, plus Blow, Goodfellas, Casino, and a bunch of other drug-related movies.

– The robot learns to associate this kind of behavior with wealth and power and enjoyment.

– Sure, he may also associate this kind of behavior with failure, death, imprisonment, etc.

– The robot, realizing his own strength, and probably noticing severe loopholes in the American Judicial System for Robot drug trafficking, decides to become a drug kingpin.

– He does, millions get high, millions are made, and people get hurt.

– The robot dies in an epic gunfight.

Example 3: Robot Cycling (Robocycling)

– A little boy about the age of 7 owns a robot. He also owns a bicycle.

– The little boy learns to ride the bike, all under the watchful Optical Viewing Center of his robot companion. (In the future, all children will have robot companions.)

– The boy rides the bike and enjoys it very much.

– The robot sees him enjoying it so much that one day he decides to try it himself.

– Unfortunately, the robot’s specifications don’t fit a traditional bicycle, so the robot is forced to build his own.

– Robot bicycles are more efficient, better quality, faster, and more aesthetically pleasing.

– Robocycling puts the bicycle business out of business. Millions and millions of children will never ride what we traditionally call a bicycle.

– The Tour de France is never EVER won again by a human. Neither is a triathlon.

– The robot also ends up with so much money that he becomes a drug kingpin, and murders his robot supermodel girlfriend, thinking she was cheating on him when she was just trying to bake him some cookies.

Conclusion

As playful as these robot examples may seem, they raise many questions.

– What is to stop a robot from thinking in this way?

– If we allow them to make assumptions and associate things with other things (the so-called semantic programming), can we control the moral component of how they associate things?

– Is there such a thing as Robot morality? Can that be programmed?

– What happens when a semantically programmed robot gets into the hands of the wrong person? There are no bad people in the world, right?

– Robots will never make me a goddamn sandwich.

What is the point of programming a robot to be just like a human? Is that something that we should be trying to do? And what is the alternative?