
I needed to include this, distinct from crime, because calling it crime doesn't do it justice and isn't really what I mean at all. Cartoon supervillainy is really the best way to evoke the right feeling of where this is going to go wrong. And when I say wrong, I mean terribly, horribly wrong. Bad things are going to happen, in your lifetime, because of robots.

Humans want to be liked and respected and loved. Maybe sometimes feared, depending on your own personal quirks. We want to be important, and we have a deep-seated tendency to anthropomorphize everything. Every pet owner will tell you, unequivocally, that one of the best things about having a pet is unconditional love. But can an animal even feel love the way you understand it? No doubt that animal does feel something about you -- if you're a good pet owner, it associates you with security and sustenance -- but can you ever know if it's love? And what about those people who name their Roombas and apparently rearrange their furniture or pre-clean the house to make it easier on the robot vacuum cleaner? If that's still too refined an example, go look at a cloud until you see a face. Humans want to see that in everything, something real and relatable, even though we're often just projecting. The same thing happens with robots.


A professor at MIT faced the unsettling reality of what our minds can do with a robot the day she caught herself wanting a robot -- one programmed to do nothing more than turn toward the sound of a voice -- to pay attention to her and not to someone else in the lab. It wasn't really paying attention to anyone, and she knew that when she thought about it. The problem was that because it could seem interested in her, she expected it to actually be interested in her. To return her emotions the way we expect that Roomba or a budgie to. But it won't, because it can't.