How Humanity Can Build Benevolent Artificial Intelligence

We don’t need to follow Hollywood’s depictions of killer robots.

Artificial Intelligence gets a bad rap. Any time an AI appears in a movie, we can safely predict that it will turn malevolent in the second act. Twenty minutes into Westworld or I, Robot, we all knew what was coming: the AI would turn evil, forcing humans to fight it.

Pop culture is a one-sided coin

The press likes headlines about the threat of strong AI. Nick Bostrom has a nuanced view on the promise and peril of artificial intelligence, but editors tag it with headlines like "Artificial intelligence: We're like children playing with a bomb." Elon Musk supports artificial general intelligence enough to invest in it, yet the media choose to focus on the negative, with headlines like "Elon Musk: Artificial intelligence is our biggest existential threat."

In a sense, the media are right to warn us: there is genuine cause for concern. What's wrong is the focus. It would be very hard to argue that intelligence is inherently evil, so there is nothing inherently bad about creating something more intelligent than humans.

Contrary to what Hollywood tells us, on balance, AI has proven massively beneficial.

Machine learning helps us remove tumors, assemble cars, and vacuum our floors. You, my reader, have probably never been harmed by anything enabled by AI, and have likely enjoyed considerable benefit from it.

On the other hand, there are killer drones, Palantir’s AI for mass surveillance, Google’s AI to sell us things we don’t need, and increasingly sophisticated AIs to manipulate public opinion for political ends. The machines we build reflect the mixed bag of human decency and human nastiness.

It takes a village to raise an AI

When we raise a child, we cannot give any foolproof guarantee that the child will grow up to be a benevolent, caring adult. We can influence the outcome, but not control it. One mother's son grew up to be John Wayne Gacy; others became Edward Snowden, Mahatma Gandhi, Adolf Hitler, and Oskar Schindler.

If a superhuman Artificial General Intelligence of godlike powers is built in the coming decades (and we must take seriously the possibility that it will be), how do we raise this machine-baby to grow into a good person? What development methodologies will give it the greatest possible chance of benevolence?

This is one of the crucial questions facing humanity. It could be the difference between heaven and hell.

We can't rely on simple safeguards like Asimov's Three Laws. AIs are not wind-up automatons that execute whatever laws we put into them, any more than children invariably follow the rules their parents give them.

And besides, trying to defend against a more-than-human intelligence with any simple tactic is like playing chess against a smarter opponent and saying, “I don’t have to worry; I’ll defend by moving this rook here.”

Part of what it means to be smarter is to be able to think around the defenses of a dumber opponent.

The truth is that we don't know whether an Artificial General Intelligence will be nasty or nice. As with rearing a child, our influence on the outcome is real but non-deterministic. As David Hanson of the SingularityNET team puts it, the aim is to create "super-benevolent super-intelligence." We must focus not just on making AI smarter, but also on making it nicer.