Last week we covered the past and current state of artificial intelligence — what modern AI looks like, the differences between weak and strong AI, AGI, and some of the philosophical ideas about what constitutes consciousness. Weak AI is already all around us, in the form of software dedicated to performing specific tasks intelligently. Strong AI is the ultimate goal, and a true strong AI would resemble what most of us have grown familiar with through popular fiction.

Artificial General Intelligence (AGI) is a modern goal many AI researchers are currently devoting their careers to in an effort to bridge that gap. While AGI wouldn’t necessarily possess any kind of consciousness, it would be able to handle any data-related task put before it. Of course, as humans, it’s in our nature to try to forecast the future, and that’s what we’ll be talking about in this article. What are some of our best guesses about what we can expect from AI in the future (near and far)? What possible ethical and practical concerns are there if a conscious AI were to be created? In this speculative future, should an AI have rights, or should it be feared?

The Future of AI

Optimism among AI researchers about the future has fluctuated over the years, and the question is strongly debated even among contemporary experts. Trevor Sands (introduced in the previous article as an AI researcher for Lockheed Martin, who stresses that his statements reflect his own opinions, and not necessarily those of his employer) has a guarded opinion. He puts it this way:

Ever since AGI has existed as a concept, researchers (and optimists alike) have maintained that it’s ‘just around the corner’, a few decades away. Personally, I believe we will see AGI emerge within the next half-century, as hardware has caught up with theory, and more enterprises are seeing the potential in advances in AI. AGI is the natural conclusion of ongoing efforts in researching AI.

Even sentient AI might be possible in that timeframe, as Albert (another AI researcher who asked us to use a pseudonym for this article) says:

I hope to see it in my lifetime. I at least expect to see machine intelligence enough that people will strongly argue about whether or not they are ‘sentient’. What this actually means is a much harder question. If sentience means ‘self-aware’ then it doesn’t actually seem that hard to imagine an intelligent machine that could have a model of itself.

Both Sands and Albert believe that the current research into neural networks and deep learning is the right path, and will likely lead to the development of AGI in the not-too-distant future. In the past, research focused either on ambitious strong AI or on weak AI that was limited in scope. The middle ground of AGI, and specifically the work being done with neural networks, has been fruitful so far, and is likely to lead to even more advancement in the coming years. Large companies like Google certainly think this is the case.

Ramifications and Ethics of Strong AI

Whenever AI is discussed, two major issues always come up: how will it affect humanity, and how should we treat it? Works of fiction are always a good indicator of the thoughts and feelings of the general population, and examples of these questions abound in science fiction. Will a sufficiently advanced AI try to eliminate humanity, à la Skynet? Or will AI need to be afforded rights and protection to avoid atrocities like those envisioned in A.I. Artificial Intelligence?

In both of these scenarios, a common theme is that a technological singularity arises from the creation of true artificial intelligence. A technological singularity is a period of exponential technological advancement happening in a very short amount of time. The idea is that an AI would be capable of either improving itself or producing more advanced AIs. Because this would happen quickly, dramatic advancements could occur essentially overnight, resulting in an AI far more advanced than what humanity originally created. This might mean we'd end up with a superintelligent malevolent AI, or an AI that was conscious and deserving of rights.

Malevolent AI

What if this hypothetical superintelligent AI decided that it didn't like humanity? Or what if it was simply indifferent to us? Should we fear this possibility, and take precautions to prevent it? Or are these fears simply the result of unfounded paranoia?

Sands hypothesizes: “AGI will revolutionize humanity, its application determines if this is going to be a positive or negative impact; this is much in the same way that ‘splitting the atom’ is seen as a double-edged sword.” Of course, this is only in regard to AGI — not strong AI. What about the possibility of a sentient, conscious, strong AI?

It’s more likely that the threat won’t come from a malevolent AI, but rather an indifferent one. Albert poses the question of an AI given a seemingly simple task: “The story goes that you are the owner of a paper clip factory so you ask the AGI to maximize the production of paper clips. The AGI then uses its superior intelligence to work out a way to turn the entire planet into paper clips!”

While it’s an amusing thought experiment, Albert dismisses the idea: “You’re telling me that this AGI can understand human language, is super intelligent, but doesn’t quite get the subtleties of the request? Or that it wouldn’t be capable of asking for a clarification or guessing that turning all the humans into paperclips is a bad idea?”

Basically, if the AI were intelligent enough to understand and execute a scenario that would be harmful to humans, it should also be smart enough to know not to do it. Asimov’s Three Laws of Robotics could also play a role here, though it’s questionable whether those could be implemented in a way that the AI wasn’t capable of changing them. But what about the welfare of the AI itself?

AI Rights

On the opposite side of the argument is whether artificial intelligence is deserving of protection and rights. If a sentient and conscious AI were created, should we be allowed to simply turn it off? How should such an entity be treated? Animal rights are a controversial issue even now, and so far there is no agreement about whether any animals possess consciousness (or even sentience).

It follows that this same debate would also apply to artificially intelligent beings. Is it slavery to force the AI to work day and night for humanity’s benefit? Should we pay it for its services? What would an AI even do with that payment?

It’s unlikely we’ll have answers to these questions anytime soon, especially not answers that will satisfy everyone. “A convincing moral objection to AGI is: how do we guarantee that an artificial intelligence on par with a human has the same rights as a human? Given that this intelligent system is fundamentally different from a human, how do we define fundamental AI rights? Additionally, if we consider an artificial intelligence as an artificial lifeform, do we have the right to take its life (‘turn it off’)? Before we arrive at AGI, we should be seriously thinking about the ethics of AI,” says Sands.

These questions of ethics, and many others, are sure to be a continuing point of debate as AI research continues. By all accounts, we’re a long way away from them being relevant. But even now, conferences are being held to discuss these issues.

How You Can Get Involved

Artificial intelligence research and experimentation has traditionally been the domain of academics and researchers working in corporate labs. But in recent years, the rising popularity of free information and the open source movement has spread even to AI. If you’re interested in getting involved with the future of artificial intelligence, there are a number of ways you can do so.

If you’d like to do some experimenting with neural networks yourself, there is software available to do so. Google has an in-browser playground for tinkering with basic neural network techniques. Open source neural network libraries, like OpenNN and TensorFlow, are freely available. While these aren’t exactly easy to use, determined hobbyists can use them and expand upon them.
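The libraries mentioned above hide a lot of machinery, but the core idea they build on fits in a few dozen lines. As a rough sketch (using plain NumPy rather than OpenNN or TensorFlow, with made-up sizes and learning rate), here is a tiny two-layer network trained by backpropagation to learn the XOR function — a classic toy problem that a single neuron cannot solve:

```python
# A minimal two-layer neural network trained on XOR with plain NumPy.
# This is an illustrative sketch, not a substitute for a real library.
import numpy as np

rng = np.random.default_rng(0)

# XOR dataset: 4 samples, 2 inputs each, 1 target output.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Weights for a 2 -> 4 -> 1 network (hidden size chosen arbitrarily).
W1 = rng.normal(size=(2, 4))
b1 = np.zeros(4)
W2 = rng.normal(size=(4, 1))
b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(X):
    h = sigmoid(X @ W1 + b1)      # hidden layer activations
    return h, sigmoid(h @ W2 + b2)  # network output

def mse(pred):
    return float(np.mean((pred - y) ** 2))

initial_loss = mse(forward(X)[1])

lr = 1.0
for _ in range(5000):
    h, out = forward(X)

    # Backpropagate mean-squared-error gradients through both layers.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)

    # Gradient-descent updates.
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0)

final_loss = mse(forward(X)[1])
preds = (forward(X)[1] > 0.5).astype(int)
print(preds.ravel().tolist())  # ideally [0, 1, 1, 0], depending on initialization
```

The same structure — a forward pass, a loss, and gradient updates — underlies the much deeper networks used in the research discussed above; frameworks like TensorFlow automate the gradient computation and scale it to millions of parameters.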

The best way to get involved, however, is by doing what you can to further professional research. In the US, this means activism to promote the funding of scientific research. AI research, like all scientific research, is in a precarious position. For those who believe technological innovation is the future, the push for public funding of research is always a worthy endeavor.

Over the years, the general optimism surrounding the development of artificial intelligence has fluctuated. We’re at a high point right now, but it’s entirely possible that this might change. What’s undeniable is how the possibility of AI stirs the imagination of the public, as is evident in the science fiction and entertainment we consume. We may have strong AI in a couple of years, or it might take a couple of centuries. What’s certain is that we’re unlikely to ever give up on the pursuit.