Artificial intelligence (AI) systems have not only led to improvements in many areas of research such as medicine, autonomous driving and text processing but also steadily entered our daily lives, for example through voice assistants such as Siri. Although many of us have a latent feeling that AI is somehow threatening, we don’t often base this feeling on technical facts. In this post, I want to lay out, in layman’s terms, what AI’s risks are and why our future depends on understanding them.

Achievements of current AI systems

AI is everywhere, and often dressed in sheep's clothing. The Snapchat filter that adds dog ears to your selfie? That's AI, right there on your smartphone; it's called face detection. Whether in social media or medical imaging, most of us, whether we like it or not, have grown dependent on AI systems. The main drivers of recent progress in AI are decades of exponential growth in computing power, the availability of large data sets used to train learning systems, advances in the implementation of learning algorithms and increasing investment from industry.

Has it had a largely positive net effect? I would argue yes: so far, the advantages have clearly outweighed the disadvantages. We should be happy that a lot of smart people are doing amazing things with these systems. But even though the potential risks of AI might seem unclear and somehow far away, I will explain in the next sections why they are in fact imminent and require much more thought and resources than we currently allocate to them.

Superintelligence

The AI community makes an important distinction between narrow AI and general AI, the latter also called Artificial General Intelligence.

The AI systems we know today, such as the ones deployed in self-driving cars or in your smartphone, are all instances of narrow AI. The term "narrow" refers to the ability to perform only the specific task that the AI was designed for. A narrow AI for self-driving cars, for instance, can steer your car down the highway but fails at investing your money in stocks. But what would happen if systems became capable of all tasks that humans can perform? As a loose definition of general AI, we can say these are

AI systems that rival or surpass the human brain in complexity and speed, that can acquire, manipulate and reason with general knowledge, and that are usable in essentially any phase of industrial or military operations where a human intelligence would otherwise be needed.

Most experts believe that such a technology is at least a few decades away. Depending on how bullish one is about mid-term technological progress, that means it could arrive within a lifetime or two, which is a very small timeframe in the grand scheme of things.

There are strong reasons to believe that a system which reaches our level of cognitive ability will not stop improving there. Once we are able to improve intelligent systems, we can employ AI agents that themselves work on new AI technology. At some point, these agents will probably come up with an AI system that is superhuman, that is, at least as good as humans at all tasks and better at some.

Imagine having a few Albert Einsteins in silicon. Such an armada of intelligent agents would probably produce rapid progress in many fields, including medicine, physics and genetics. We should be happy, right?

The problem we have ignored so far is that a general AI will probably do its own moral reasoning. As Oxford philosopher Nick Bostrom argues, a superintelligence will likely even surpass us in moral thinking. Its moral motivations should therefore be encoded by its creators a priori, because a superintelligence might be so powerful that it cannot be stopped by humans. So as a society, we have to think about which values we want encoded in these systems and whether we are willing to accept them beyond a point of no return.

Automation and mass unemployment

One of the main points that politicians and media outlets mention about AI is its potential to make a large number of jobs redundant. Why is this a problem? One might argue that the destruction of jobs will necessarily lead to the creation of jobs in other areas, potentially higher up the value chain and requiring more abstract thinking that machines are not (yet) capable of. Even if we grant this argument, it still implies a period of societal transformation with large potential for political and economic instability, in which resentment among the population might grow and populist parties gain even more traction.

To quote former U.S. Treasury Secretary Larry Summers:

I expect that more than one-third of all men between 25 and 54 will be out of work at mid-century.

Some thinkers even go so far as to predict a time when most tasks will be solved by a general AI and we as humans can largely just enjoy the fruits. As I see it, this is largely a question of timelines. Before we have created a superhuman intelligence, we will probably have to do most of the work ourselves, though many jobs will already be replaced by very focused, specialized systems. Once we have created a superhuman system, most jobs will be taken over by AI, except those that truly require human-to-human connection.

In any case, like the industrial revolution, AI will reshape the relationship between capital and labor in the world economy. It is possible that a country's technological edge will matter more for international power in the future than its population size. Governments and organizations have to prepare for this next industrial revolution, and it won't be easy.

Interpretability of neural networks

One of the main drivers of current advances in AI is artificial neural networks. These are loosely inspired by what we know about how the human brain works. More specifically, they are a family of algorithms that can, with little manual engineering, learn to solve a variety of problems. Even though we are able to optimize them with some human intuition, it would be overconfident to say that we really understand how they come up with their predictions.

There is a whole area of research that deals with interpreting the decision boundaries of neural networks. The very existence of this field shows that they are largely a black box, one that 'magically' transforms input into output. In the end, we are mostly just happy that they work so well.

Simpler algorithms, such as linear regression or decision trees, can be interpreted and match our intuition of how decisions are made. If we bet the future of the field on neural networks and reinforcement learning algorithms, however, we must place great importance on ensuring the safety of these systems.
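To make the black-box point concrete, here is a minimal sketch in Python using numpy: a tiny two-layer network whose prediction is nothing more than a chain of matrix products. The network shape, random weights and input are made up purely for illustration; in a real trained network the weights come from optimization, but they are just as opaque to inspection.

```python
import numpy as np

# A tiny two-layer neural network: 3 inputs -> 4 hidden units -> 1 output.
# All of the network's "knowledge" lives in the weight matrices W1 and W2,
# whose individual entries carry no human-readable meaning.
rng = np.random.default_rng(0)
W1 = rng.normal(size=(3, 4))   # input-to-hidden weights
W2 = rng.normal(size=(4, 1))   # hidden-to-output weights

def predict(x):
    hidden = np.maximum(0.0, x @ W1)   # ReLU activation
    return hidden @ W2                 # linear output layer

x = np.array([1.0, -0.5, 2.0])
y = predict(x)
# Staring at W1 and W2 does not tell us *why* the network maps x to y:
# unlike the weights of a linear regression, they have no direct
# interpretation as "feature importance".
```

Contrast this with a single linear model, where each weight directly states how much one input feature contributes to the output; stacking layers and nonlinearities is exactly what destroys that easy readability.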

Technical accidents

Okay, let's assume we have created a benign AI that is aligned with our good human values. Nothing can go wrong, right? I wouldn't be so sure. In fact, alarming messages are coming from AI researchers themselves, who warn that current algorithms are prone to many risks arising from technical design issues.

As these authors from Google Brain and Stanford point out in their paper:

With the realistic possibility of machine learning-based systems controlling industrial processes, health-related systems, and other mission-critical technology, small-scale accidents seem like a very concrete threat [..].

Besides the issues specific to constructing learning agents, there are all the issues arising from insecure IT systems that we have seen increasingly over the last decade. Say there is a swarm of autonomous armed drones flying around for military use, and someone finds weaknesses in their algorithms. Could we end up with an army of drones that was benign before but is turned against its creators? Until we have "solved" IT security, we can't be confident in our ability to fully control intelligent systems.

Conclusion

We have touched upon several aspects of potentially dangerous AI systems and the question remains: What do we do about it? Should we be paralyzed by our fear and just stop developing AI systems altogether?

In my view, we shouldn't let ourselves be forced into panic. AI will probably have a hugely positive impact on us individually and on society at large. The transformation toward better health and quality of life is going to be significant.

Nevertheless, we should put time and money into ensuring that AI systems of the future will ultimately benefit us. The Machine Learning community, which I’m part of, has to work really hard to make sure our algorithms do what we want them to do and are aligned with human values.

If you’re interested in AI and Machine Learning, stay tuned! More content to come soon on my Medium blog. Thanks to all the people who helped with proof-reading my drafts.

Further Reading:

Nick Bostrom. 2014. Superintelligence: Paths, Dangers, Strategies (1st ed.). Oxford University Press, Oxford, UK.