I am delighted to address you in the city where I live and that I have come to adore over the years I’ve spent here. In spite of all the French overregulation and taxes, mind you.

Although the main focus of this summer university is to look back at the past 30 years, we cannot afford ourselves the luxury of overlooking the new realities. We may still be deeply involved in struggles arising from the past technological landscape, but it would be good if we could anticipate the upcoming challenges and be better equipped to answer them. Especially because, in the current over-regulated landscape, a groundbreaking general-purpose technology that is widely feared or resented may face resistance that previous such technologies (such as electricity or the Internet) were spared. But nor can we simply dismiss out of hand the idea that a promising technology can be badly misused or even go haywire by itself.

I will take on a major challenge today. Even though my formal education is in economics (I completed a PhD in economics here in Aix last year), I will speak on a topic in which I am not at all formally an expert. The topic is “Artificial Intelligence: Threats and Opportunities for Liberty.” So take everything I say today with a grain of salt.

In my spare time, I’ve long been interested in many topics in philosophy, and one of them has long been the nature of understanding, that is, the conceptual understanding of the world that we as humans seem to possess. I also recently took part in a seminar on AI organized for young people in the region by the local Rotary branch. There, I had the pleasure of hearing high-level experts on the topic explain to us the functioning and limitations of AI. I even had the opportunity to see what the code of a program implementing a neural network looks like and what that code does.

That’s why, when Pierre Garello told me that they were planning to have a session on AI here and needed a speaker, I was happy to answer the call. I hope that although I can’t replace a distinguished expert in the field, you won’t be too disappointed with my talk.

AI background

AI has been on a roll lately. Some of AI’s recent success stories include the victory over the top human player in Go, self-driving cars (at least in favorable conditions), speech recognition, vastly improved automatic translation, outperforming doctors at medical diagnosis, and so on. The key feature of these AI tools is that they are, for the first time, capable of “learning,” that is, of improving their performance with more training.

At the same time, there have already been some signs that AI could represent a threat in the future. I’m talking in particular about the Chinese social credit system and its potential totalitarian implications. Moreover, some of the most famous opinion leaders in the world today have been claiming that we should consider AI a serious if not an existential threat. This tweet by Elon Musk is representative of this gloomy attitude, although I don’t know whether he was stoned when he published it.

The three major fears about AI that I am going to address today are the following:

1. Skynet-like scenarios. You’ve probably all seen the Terminator movies. One of the key characters there is Skynet, originally a piece of defense software that became conscious, came to consider humans an existential threat, and proceeded to attack them massively. The idea here is that an AI may become so powerful that it attempts to eliminate or enslave its human creators.

2. AI being used by would-be or actual dictators to obtain or retain authoritarian powers. I like to call this scenario Deep Bismarck, combining deep learning with the famous Prussian Chancellor who transformed Prussia into the core of the massive German imperial state that caused the world so much harm.

3. Perhaps the most plausible of the AI fears: that mass adoption of AI will replace human labor so fast that job creation will not be able to keep pace, and mass unemployment will result.

Can AI really understand something? Addressing the Skynet scenario

I believe that there are two key questions about AI that need to be answered, and answering them will allow us to shed light on both the threats and the promise that AI entails. The first question is whether a computer armed with software capable of “learning,” that is, of getting better at what it is supposed to do with training, is in principle capable of human-like understanding.

The famous American philosopher John Searle, back in the 1980s, when modern deep learning was held back by the limitations of computational capacity and data storage, proposed a thought experiment intended to show that a computer cannot have a genuine conceptual grasp of reality, or what Searle called “semantics.” He asked us to imagine a human who knows no Chinese, locked in a room and cut off from the outside world, but able to communicate with the people outside by producing Chinese characters in response to the Chinese characters they show her. In the thought experiment, the human is equipped with instructions in her own language that tell her how to respond to the Chinese speakers outside. Suppose the latter are fooled by this scheme into believing that the person inside the room understands Chinese. Can we say that she actually does? Of course not, because all she does is manipulate symbols that are meaningless to her. And that is exactly what computers do: they handle complex patterns of zeros and ones. In Searle’s words, they have syntax but no semantics. As an aside, Searle later, much less famously, argued that computers don’t actually have syntax either, but that is beyond the scope of today’s talk.

Over the years, many AI enthusiasts and believers that the human mind is, fundamentally, a computer have tried to provide a solid response to Searle’s argument, but, in my view, they have not succeeded. Could computers capable of learning, which were not a thing at the time Searle wrote his famous paper, perhaps change the picture dramatically?

I believe the thought experiment can easily be modified to show that the conclusion still holds. Imagine that, in addition to the instructions on how to manipulate the characters, the subject has a separate set of instructions on how to modify her way of responding to the Chinese speakers outside based on the reactions she receives. Suppose this allows her to communicate with them even better. She still understands no Chinese whatsoever.
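The modified room can even be sketched in a few lines of code. This is a purely illustrative toy, not a model of any real AI system; the class, its rulebook, and the tokens are all hypothetical. The point is that the “learning” step is a mechanical rewrite of a rule, with no meaning involved anywhere:

```python
# Toy sketch of the "learning" Chinese Room: the responder maps symbol
# patterns to replies and updates its rulebook from outside feedback,
# yet at no point does any meaning enter the process.

class LearningRoom:
    def __init__(self):
        self.rulebook = {}  # pattern -> reply (all tokens opaque to the room)

    def respond(self, pattern):
        # Follow the rulebook; fall back to an arbitrary default token.
        return self.rulebook.get(pattern, "☐")

    def feedback(self, pattern, better_reply):
        # "Learning": mechanically overwrite a rule. No semantics involved.
        self.rulebook[pattern] = better_reply

room = LearningRoom()
first = room.respond("你好")       # no rule yet: the default token
room.feedback("你好", "你好！")    # speakers outside signal a better reply
second = room.respond("你好")      # now "fluent", still zero understanding
```

The responses improve with feedback exactly as in the modified thought experiment, while the room remains a symbol shuffler throughout.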

Realizing that even a computer capable of “learning” cannot genuinely understand anything allows us to put to bed the biggest fear about AI, namely the Skynet-like scenario. One can, in principle, imagine, say, a piece of defense software turning against its creators. But since the computers running it understand nothing, humans will find a way to defeat it.

Look at this GIF with a dog-like robot from Boston Dynamics slipping on a banana peel. It is an allegory of what I am talking about. A computer that lacks understanding can always be defeated, sometimes with a simple trick.

AI (DL) as statistics on steroids

So by now we have seen what AI is not and cannot be: it cannot become truly intelligent. The second of the two questions I mentioned a couple of minutes ago is what the tools on everyone’s mind today actually are.

To go straight to the crux of the matter, the current iteration of AI is merely statistics on steroids. A neural network is, at bottom, an extremely complex function, sometimes with millions of coefficients, which are adjusted as the model is trained to produce better and better results. It also has to be highly task-specific: there is, for instance, no neural network that is simultaneously capable of playing chess and recognizing speech. It usually has to be trained on a truly enormous number of examples, in many cases millions of them. And it has to have a very precise measure of success.
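The "function with adjustable coefficients" idea can be made concrete with a deliberately tiny sketch, assuming the simplest possible case: a single "neuron" with two coefficients, trained by gradient descent to fit a known linear rule. Real networks differ only in scale and in stacking many such functions:

```python
# Minimal sketch: a "neural network" is just a parameterized function
# whose coefficients are nudged, example by example, to shrink a
# precisely defined error measure.

def predict(w, b, x):
    return w * x + b  # the "network": one function, two coefficients

def train(examples, steps=2000, lr=0.01):
    w, b = 0.0, 0.0
    for _ in range(steps):
        for x, y in examples:
            err = predict(w, b, x) - y  # the precise measure of success
            w -= lr * err * x           # adjust each coefficient slightly
            b -= lr * err
    return w, b

# Training data drawn from the rule y = 2x + 1
examples = [(x, 2 * x + 1) for x in range(-5, 6)]
w, b = train(examples)
```

After training, `w` and `b` end up close to 2 and 1: the function has been statistically fitted to the data, which is all the "learning" amounts to.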

We can see here that the modified version of Searle’s argument seems correct at least with regard to this particular AI implementation. It is clear that a statistical tool, no matter how complex, cannot be capable of genuine understanding. If you still doubt it, consider a couple of examples. Researchers studying various neural network models have discovered many cases in which the models can be badly fooled even at the tasks they were trained to handle. More importantly, they can be fooled in ways that would never affect human performance at the same tasks. To give just one example, a neural network was trained to recognize images of peacocks. Researchers then created images that look nothing like peacocks but that the model kept recognizing as peacocks with high confidence. In a different case, researchers made a model stop recognizing pictures correctly by changing just a few pixels, something the human eye cannot even register. And there are reasons to believe that such adversarial examples are not freak accidents.
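Why can imperceptible pixel changes flip a model's verdict? A toy calculation shows the mechanism, under the simplifying assumption of a purely linear classifier with many small weights (real networks are far more complex, and the numbers here are invented for illustration). A change too small to notice on any single pixel, pointed in the right direction on every pixel at once, adds up to a large swing in the score:

```python
# Toy illustration (not a real vision model): a linear "image" scorer
# with many small weights can be swung by a perturbation too small
# for a human to notice on any individual pixel.

def score(weights, image):
    return sum(w * p for w, p in zip(weights, image))

n = 10_000                  # number of "pixels"
weights = [0.01] * n        # hypothetical trained weights, all tiny
image = [0.0] * n           # a blank image: its score is exactly 0

eps = 0.02                  # imperceptible per-pixel nudge
# Nudge every pixel in the direction that raises the score.
adversarial = [p + eps * (1 if w > 0 else -1)
               for p, w in zip(image, weights)]

clean_score = score(weights, image)        # 0.0
adv_score = score(weights, adversarial)    # 10_000 tiny pushes add up
```

Each pixel moves by only 0.02, yet the total score jumps by about 2.0, because ten thousand tiny, correlated pushes accumulate; a human eye, which does not sum pixels this way, registers nothing.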

The fragile, statistical nature of AI implies that the other two catastrophic scenarios involving AI are probably unfounded or heavily exaggerated. The Deep Bismarck scenario is unlikely for several reasons: there are probably not enough past examples to train such an AI; social realities change a great deal, sometimes dramatically, unlike images of peacocks; and it is difficult to devise a clear measure of success.

Nor is AI likely to lead to mass unemployment. This is not only because AI creates new jobs while making others obsolete, but also because, while it may be more efficient than human employees on average, it cannot be completely relied upon in most cases. A hypothetical AI-armed robot may memorize orders in a restaurant better than the average waiter. But if it just once fails to register that a client has a potentially lethal nut allergy and serves that client a dish containing nuts, the consequences for the restaurant would be catastrophic.

The promise of AI

But enough about risks. Does AI, in its current iteration, also present opportunities that libertarians can rejoice at? It certainly does. As we’ve seen, AI is a set of extremely sophisticated statistical tools that can be adapted to the task at hand.

While I think statistics is often misused these days in some fields, there are doubtless plenty of contexts in which reality contains repeating patterns too complex for the human mind to handle directly. Examples range from the interactions of complex molecules, to traffic outcomes involving tens of thousands of cars, to the exact internal structure of materials, to the ways a sophisticated robot may interact with its environment, to how the Earth’s climate responds to more CO2. Just as with other major transformative technologies, mass adoption of AI may bring about huge productivity gains.

To give one example, there is a strong case to be made that medicine is so expensive today in large part because it is so labor-intensive, and the labor it has to employ is highly qualified. Economists call this the Baumol effect. The problem is aggravated by the obstacles governments create to entry into the medical profession. If AI can replace most doctors, it has the potential to greatly reduce medical costs and significantly benefit a large part of the population. It can also reduce people’s anxiety about potentially facing large medical expenses or, in countries with socialized medicine, about having to endure long waits for treatment.

And reduced anxiety about the world will likely lead to less support for grand interventionist schemes, whether they come from the Left or from the Right. Which is already a win in my book. That’s all from me, thanks for your attention!