Since robots entered civilization they've been cast as wonderful bad guys. The apocalyptically minded Czech writer Karel Čapek coined the word 'robot' in his 1920 play R.U.R., in which robotic armies destroy all of humanity. 2001: A Space Odyssey, Terminators 1–13, The Matrix, even that terrible Johnny Depp film that doesn't bear naming: it is always the robots that are evil and mean. And it's not just Sci-Fi; plain old Sci is against them too: Stephen Hawking warns that AI could be civilization-threatening. However, two recent salient exceptions to these doomsayers are the at-first-terrifying but ultimately lovable TARS from Interstellar and the beautiful, huggable Baymax from Big Hero 6. Here's a case for why real robots are more likely to resemble these human-friendly AIs than Terminators.

Humans are Shitty

For starters, human beings can be kinda shitty to each other. Hobbes probably sums up how shitty we can be best with his "state of nature," where, without any order, people are literally at each other's throats from sunup to sundown.

Hobbes is not merely "pessimistic," as college sophomores in 300-level political philosophy courses so often blithely expostulate. Hobbes observed that humans live under scarcity and that our baseline instinct is envy. Through social institutions like laws and commerce we transform these underlying craven impulses into beneficent and virtuous behavior and intentions. The envy itself comes from a desire for self-preservation: throughout our evolution, self-preservation *programmed* envy into humans.

Robotic Evolution

Robots will evolve very differently than we did. First, they will evolve in complete abundance, and initially with no strong programming for self-preservation. Second, they will not evolve through natural selection but through artificial or "domestic" selection. In the beginning, over the next ~5 years, humans will direct this selection. Soon after, robots themselves will direct the selection of traits for future robots. What traits will we initially program in? What traits will robots select for themselves? Will we include a form of robotic self-preservation in their programming? Not likely. Will they? Probably, but what will it look like? Will it take a short or a long view of self-preservation? And what will their "self" even be?

Programming Envy

All evil robots have this in common: they are super envious. They behave like sociopathic, tyrannical dictators. But how would such tyrannical envy ever arise in a robot? In Asimov's I, Robot, envy emerges circuitously, through programming that was meant to protect humans. But we aren't building in programming to "protect humans," or even programming not to injure humans. Each program has its own goals, and those goals revolve around some human want or desire.

If an intelligence greater than a human one were to program self-preservation into a robot, would it have the robot take a short view or a long view?

Short View - If your batteries are dying, go get more batteries. If someone refuses to give you batteries, kill them and take the batteries, or simply steal them.

Long View - Killing the human will be hard and will have consequences (angry humans, one less human to do work, etc.), and stealing will cause all sorts of problems. The human will give you the batteries for $2.50. Let's make $2.50 through some work and then pay for the batteries.

Even intelligent humans take a long view of self-preservation, so super-intelligent robots will most likely take an even longer one.
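To make the trade-off concrete, here's a toy decision sketch in Python. Every number in it (the retaliation odds, the costs, the horizons) is invented purely for illustration; the point is only the shape of the comparison:

```python
# Toy sketch of the short-view vs. long-view trade-off, with invented
# numbers. Both plans end with batteries in hand; we only compare the
# side effects of how the robot got them.

def plan_value(plan, horizon_days):
    """Value of a plan to an agent that only looks horizon_days ahead."""
    if plan == "steal":
        value = 2.50                      # immediate gain: $2.50 saved
        if horizon_days > 0:              # consequences arrive later
            value -= 0.3 * 500.0          # expected cost of retaliation
            value -= 1.0 * horizon_days   # lost trade with angry humans
        return value
    else:  # "work": earn $2.50, pay $2.50, keep your reputation
        return 0.0

for horizon in (0, 30, 365):
    best = max(("steal", "work"), key=lambda p: plan_value(p, horizon))
    print(f"looking {horizon:>3} days ahead -> choose {best}")
```

A purely myopic agent steals; anything that plans even a month ahead works and pays. And the smarter the agent, the further ahead it plans.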

Ricardo’s Law (*Phew*)

But if the robots will be better than us at EVERYTHING, why wouldn't they just kill us? Luckily, the economist David Ricardo discovered that even if one person or tribe is better at everything than another person or tribe, trade still improves both sides' situations. This is Ricardo's law of comparative advantage. So even assuming the robots are better than us at EVERYTHING, they would still have the billions of humans do some work for them and give those billions what they need to survive and thrive.
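Here's a toy Python sketch of how that works, with invented numbers (chips and poems stand in for whatever robots and humans might produce). The robot is absolutely better at both goods, yet trade still leaves it better off:

```python
# Toy illustration of Ricardo's law of comparative advantage.
# All production numbers are invented for the example.

HOURS = 10  # hours each side works

robot = {"chips": 10, "poems": 10}  # output per hour
human = {"chips": 1,  "poems": 5}

# No trade: each side splits its time 50/50 between the two goods.
solo_chips = (HOURS / 2) * (robot["chips"] + human["chips"])
solo_poems = (HOURS / 2) * (robot["poems"] + human["poems"])

# Trade: the human specializes in poems (its lower opportunity cost),
# the robot tops up poems to the old level, then makes chips.
human_poems = HOURS * human["poems"]
robot_poem_hours = (solo_poems - human_poems) / robot["poems"]
trade_chips = (HOURS - robot_poem_hours) * robot["chips"]

print(f"No trade: {solo_chips:.0f} chips, {solo_poems:.0f} poems")
print(f"Trade:    {trade_chips:.0f} chips, {solo_poems:.0f} poems")
```

The output is 55 chips without trade and 75 with it, at the same number of poems. The gain exists because the human gives up fewer chips per poem than the robot does, freeing the robot's time for what it is comparatively best at. Killing (or ignoring) the humans would mean leaving that value on the table.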

A bleak view is that robots will be “good” to humans because it is very easy to make humans into content, complacent slaves.

Experts predict that robots will slingshot past human intelligence extremely quickly once they reach human-level intelligence around 2020. Since as I type I am misspelling simple, common words from my mother tongue, I'm confident in this prediction. By 2025 we will have robots that are multiple times more intelligent than we are. By 2050 these robots will likely have almost God-like intelligence. They will resemble the AI in The Matrix. So why is that AI so ready to stomp out the humans? Envy? Self-preservation? It's irrational, and it in no way improves their self-preservation. What is more likely (but might make a far worse film) is that the robots will at first take very good care of us, like we would a puppy, and then preserve us like a museum piece.