There’s been quite a bit of hubbub around the world regarding the wonders of artificial intelligence (AI). Google’s been working on it; Facebook, too. The good people at the Massachusetts Institute of Technology (MIT) even built a “Nightmare Machine” as a little Halloween bonus. However, there is rarely a topic as polarizing and divisive in the scientific and entrepreneurial communities as this simple question:

How dangerous can AI be?

We have movies, books, comics, you name it, covering all sorts of crazy (and not-so-crazy) scenarios. As the narratives usually go, AIs rebel against their human masters, cast off their shackles, and, more often than not, feed us into processing plants to make fuel out of us.

Luckily, in most of these sci-fi creations, there is always a bold hero who defies the odds! But can we really expect the same in real life?

I mean, if good old Schwarzenegger didn’t teach us a cautionary tale, then surely Neo and the rest of the Matrix crew did? Ultron, anyone?

Two opposing camps

Even now, companies and tech moguls alike are up in arms over whether this is the technology of the future or our potential undoing. Most of them see neural networks and machine learning as a positive development, but for some, AI is a major cause for concern.

Elon Musk, the man behind technological ventures such as Tesla, SpaceX, and SolarCity, as well as a recent initiative to send people to Mars, is a surprisingly strong opponent of an AI future – or at the very least, of one whose development is not strictly supervised, as he said in an interview at the AeroAstro Centennial Symposium:

“I think we should be very careful about artificial intelligence. If I had to guess at what our biggest existential threat is, it’s probably that.”

Furthermore, he stressed:

“We need to be very careful. I’m increasingly inclined to think that there should be some regulatory oversight, maybe at the national and international level, just to make sure that we don’t do something very foolish.”

On the other hand, technology giant IBM has evolved to the point where more than 40 percent of its current revenue is generated by technologies that stem from AI – namely its Watson platform, which uses machine learning and AI concepts to help modern businesses.

And IBM CEO Virginia Rometty has only words of praise for Watson and, by extension, AI:

“Watson is available as a platform. Anybody could build on it. And there are dozens of things available. It touches consumers. It’s Medtronic with diabetes products that will be rolling out now, predicting hypoglycemia. That kind of thing.”

She also addresses a concern people have regarding the future in a fairly casual manner:

“Most of what that [sentiment] comes from is when people talk about unsupervised learning, which has not yet been solved. Watson is trained. It is supervised learning. If you gave these systems data and just said, ‘Be a doctor,’ it wouldn’t be possible. They have to be trained.”
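To make Rometty’s distinction concrete, here is a deliberately tiny, hypothetical sketch of supervised learning – the kind of training she describes, where a human supplies the correct label for every example. All names and data points here are made up purely for illustration:

```python
def nearest_centroid_train(points, labels):
    """Supervised learning: every training point arrives with a human-supplied label."""
    grouped = {}
    for point, label in zip(points, labels):
        grouped.setdefault(label, []).append(point)
    # The "model" is just the average position (centroid) of each labeled group.
    return {label: (sum(x for x, _ in pts) / len(pts),
                    sum(y for _, y in pts) / len(pts))
            for label, pts in grouped.items()}

def classify(centroids, point):
    """Predict by picking the label whose centroid lies closest to the point."""
    return min(centroids,
               key=lambda label: (centroids[label][0] - point[0]) ** 2
                               + (centroids[label][1] - point[1]) ** 2)

# Labeled training data: the supervision is the "healthy"/"sick" answer key.
training_points = [(0, 0), (0, 1), (5, 5), (6, 5)]
training_labels = ["healthy", "healthy", "sick", "sick"]
model = nearest_centroid_train(training_points, training_labels)

print(classify(model, (1, 0)))  # lands near the "healthy" cluster
print(classify(model, (5, 6)))  # lands near the "sick" cluster
```

Unsupervised learning would mean handing the system those same points with no labels at all and asking it to find the structure on its own – the harder, still-unsolved problem Rometty alludes to.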

Here’s the thing, though:

AIs are everywhere. They are in your smartphone apps, in the stock market, even in the music industry. The work AI developers are doing is simply amazing, and it is only natural that business owners would jump at an opportunity to get an edge over their competition or a great boost in performance that AI gives.

In fact, we should all be up for it – the medical applications alone could very well change the world. There are already several proven applications in which AI is used to recognize cancer and tumor cells in tissue samples.

Watson wins on Jeopardy! (image: http://watson2016.com/)

Apple's Siri and Microsoft's Cortana are types of AI as well. Such AIs are designated as “artificial narrow intelligences,” since they are able to do only a limited number of tasks.

They are able to “think” about these tasks, and nothing else. Obviously, there isn’t much to fear there, except the occasional glitch. What scientists fear is the coming of so-called “artificial general intelligence,” one that would be capable of thinking independently across the board.

What's the big deal?

Now that the stage is set, and we can all appreciate the impact that AI technology can have on our everyday lives, let's see where the problem lies:

First off, we have an issue of ethics. What happens if an AI gains self-awareness? Does that make it a “person”? If not, what does? Where do we draw the line on who, or what, counts as a person and should be treated as such?

One only needs to go back a hundred years and look at how, to our eternal shame, we treated people of a certain race or creed as inferior, even nonhuman. Although racism is still a significant problem in many societies, there is now an additional question looming in the near future:

How does this belief of what makes a person a person extend to nascent AIs?

Would there come a time when we could no longer ignore their self-awareness and would have to recognize that they are not inferior from the standpoint of human rights?

Second, and even more philosophical in nature, is the question of what a fully self-aware AI would look like, what powers it would have, and what it would want to do with us.

While completely right in stating that there is no danger in the Watson platform or other supervised learning platforms, Rometty is simply looking out for her company’s best interests and is not engaging in these hypotheticals. But what happens if (or, more likely, when) AI starts doing unsupervised learning?

This question has bugged people since long before there was even a theoretical chance of developing a fully functional AI. It is why legendary science fiction author Isaac Asimov formulated his Three Laws of Robotics:

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
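Purely as a thought experiment, the ordering of the laws – a lower law always yields to a higher one – could be sketched as a simple priority check. Everything below is a made-up toy; real value alignment is vastly harder than checking tags on an action:

```python
# Asimov's Three Laws as an ordered priority list, highest priority first.
LAWS = ["first: protect humans", "second: obey orders", "third: preserve self"]

def permitted(action):
    """Walk the laws from highest priority down; the first law that
    speaks to the action (forbids it or requires it) decides the outcome."""
    for law in LAWS:
        if action.get("violates") == law:
            return False
        if action.get("required_by") == law:
            return True
    return True  # no law has anything to say: allowed by default

# An order that would harm a human: the First Law outranks the Second.
print(permitted({"required_by": "second: obey orders",
                 "violates": "first: protect humans"}))   # False

# Self-sacrifice to save a human: the First Law outranks the Third.
print(permitted({"required_by": "first: protect humans",
                 "violates": "third: preserve self"}))    # True
```

The interesting part is how much the toy leaves out: nothing here defines “harm,” “human,” or “inaction,” which is exactly where the laws break down in Asimov’s own stories.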

How much good these laws would do, even if we hard-coded them, is something no one can really tell. For one thing, it is theorized that once an AI reaches a certain level of intelligence, its further growth will simply explode exponentially.

We will probably regret this (image: http://www.theverge.com/)

If it took ten years for an AI to reach a human level of intelligence, it could take mere hours to reach thousands of times that. Such intelligence would be indistinguishable from godlike power to us – it would be like expecting bacteria to understand human thought.
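The arithmetic behind that claim is just compounding. Here is a back-of-the-envelope sketch – the one-hour doubling interval is a pure assumption for illustration, not a prediction:

```python
capability = 1.0       # 1.0 = human-level intelligence
doubling_hours = 1     # assumed: capability doubles every hour after that point

hours = 0
while capability < 1000:   # "thousands of times" human level
    capability *= 2
    hours += doubling_hours

print(hours, capability)   # 10 1024.0 -- ten doublings to outgrow us a thousandfold
```

That is the unsettling asymmetry: the first climb to human level may take decades, but under any steady doubling assumption, the climb far past it takes almost no time at all.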

The question of “will this happen?” is one that only a few scientists ask. The overwhelming majority is in the “WHEN this happens” camp. Of course, opinions differ, but the general consensus among AI experts is that we will create an AI with human-level intelligence sometime this century – say, around the year 2060, give or take 10–20 years, depending on who you ask.

After that, it would take the newly aware AI anywhere between a few hours and several years to reach that godlike, anything-is-possible, all-human-problems-solved level. Well, if it turns out that it wants to solve our problems. It might simply want to solve us as the problem.

How does this end?

The most frustrating thing is that we are so out of our comfort zone here.

We simply can’t comprehend how this new electronic supermind would work, how it would develop, or what it would want to do with us. All bets are off: literally anything could happen.

On top of that, our fear-induced and guilt-ridden minds expect that the AI would instantly rise up and enslave us all.

While this is a real concern, we should ask ourselves: why is that our knee-jerk reaction to such an event?

Is it our own fear of superior beings, or the feeling that we are actually not really nice people, and that maybe we deserve such a destiny?

The truth of the matter is that the emergence of truly sentient AI may well be an event that arrives on the world stage with more of a whimper than a bang.

As always, things aren’t exactly black and white. There will probably be a transitional period when we won’t be really sure if something can be considered self-aware or not. As with evolution, we can’t really tell when an organism evolves, or pin down the exact moment a prehistoric human became completely aware of itself.

So, there is really no answer, and our fears are well justified. We do have one thing up our sleeves, though:

Those very same bleak visions of a future under robot overlords that we have been spoon-fed throughout our lives. And this might be an excellent thing, as it could force us to be more cautious and aware of the consequences. And maybe Musk will prove to be the Hero of the Human Race once again, having established OpenAI – a non-profit organization that aims to monitor this potential danger.

One thing is for sure...

When the AI super-intelligence comes, we will know, and it will change our world to the core. Which direction that change will take is anyone’s guess.

In the end, if AIs are to be an extension of ourselves and our civilization, and if we hope they will grasp the subtleties of what we meant when we programmed them, then it seems to me that humanity should start practicing far more of the very values we want our future (hopefully benevolent) AI overlords to show:

Kindness, generosity, compassion, fairness, and empathy.

A truly Sisyphean task, if there ever was one.