This article was contributed by Illya Nuzbrokh, a second-year student at the University of St Andrews reading Philosophy and Mathematics.

Norman is named after the lead character of the 1960 film Psycho. Norman is not a she or a he; it is an AI, or more precisely a machine learning algorithm. Both Norman’s creators and various news outlets call Norman a psychopathic AI. In this article I would like to examine what the researchers actually meant when they called it “the world’s first psychopath AI”, whether this anthropomorphization of code is warranted, and, hopefully, to paint an overall picture of where we currently stand with Artificial Intelligence.

Before looking into the details, it is possible to criticise the AI pursuit as a whole using Searle’s Chinese Room thought experiment, summed up by him in the following paragraph:

“Imagine a native English speaker who knows no Chinese locked in a room full of boxes of Chinese symbols (a data base) together with a book of instructions for manipulating the symbols (the program). Imagine that people outside the room send in other Chinese symbols which, unknown to the person in the room, are questions in Chinese (the input). And imagine that by following the instructions in the program the man in the room is able to pass out Chinese symbols which are correct answers to the questions (the output). The program enables the person in the room to pass the Turing Test for understanding Chinese but he does not understand a word of Chinese.” (Searle, 1999)

Of course, if one were to agree with Dennett’s functionalism, the room itself, including the book, the boxes and the native English speaker, has mental states and is functionally conscious. However, current Computer Science research is not aimed at achieving real consciousness. It cannot be, as there is not even an agreed definition of consciousness; hence, researchers would not know what to strive towards.

What, then, is the definition of AI as seen by Computer Science? Personally, I found the following definition rather broad yet careful:

“Artificial intelligence is an entity (or collective set of cooperative entities), able to receive inputs from the environment, interpret and learn from such inputs, and exhibit related and flexible behaviors and actions that help the entity achieve a particular goal or objective over a period of time.”

It is by no means the only one out there; however, it is one of the better attempts that I have managed to find. Notably, it does not mention minds or consciousness, and it drives home the point that an AI is something that can change its outputs based on its experience. Another interesting aspect of this definition is that it allows a very specialised piece of software, e.g. a thermostat that learns what temperature you want and when, to count as an AI. And it would be: currently, the conception of artificial intelligence is split into two areas, Narrow Artificial Intelligence (NAI) and Artificial General Intelligence (AGI). The former covers all the algorithms that are able to learn some very specialised, narrow task and perform it as well as or better than humans can. An AGI, on the other hand, would be a machine that could perform any intellectual task that a human being can; it is also referred to as strong or full AI.

Norman, our AI in question, is an example of an NAI. It was created on the basis of a technique called Deep Learning, which uses precoded nodes arranged into layers that communicate with each other when analysing data. For example, one level of Norman’s nodes could be responsible for identifying outlines, and the next would then use those outlines to examine the whole picture. This is where Deep Learning gets its second name, Hierarchical Learning: the nodes are arranged into a hierarchy. The technique aims to model the way that human brains interact with information, which is why the nodes are called neurons.
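
To make the layered idea concrete, here is a minimal sketch in Python of how such a hierarchy of “neurons” passes information upwards. It is an illustration of the general technique only, with invented layer sizes and random (untrained) weights, and not Norman’s actual architecture:

```python
import numpy as np

# A minimal sketch of the layered idea behind Deep Learning (illustrative
# only, not Norman's real network): each layer of "neurons" transforms the
# output of the layer below it, so later layers see higher-level features.

rng = np.random.default_rng(0)

def layer(inputs, weights, biases):
    """One layer of neurons: weighted sum followed by a non-linearity."""
    return np.maximum(0, inputs @ weights + biases)  # ReLU activation

# A fake 8x8 grayscale "image", flattened into a vector of 64 pixels.
image = rng.random(64)

# Layer 1: 16 neurons that, after training, might respond to simple outlines.
w1, b1 = rng.normal(size=(64, 16)), np.zeros(16)
# Layer 2: 4 neurons that combine those outlines into whole-picture features.
w2, b2 = rng.normal(size=(16, 4)), np.zeros(4)

edges = layer(image, w1, b1)     # lower level of the hierarchy
features = layer(edges, w2, b2)  # higher level, built on the layer below
print(features)
```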

By no means is Deep Learning the only way to tackle Machine Learning – the technique where, instead of coding an algorithm to carry out a specific task, one sets up a data-analysing algorithm which then, hopefully, “learns” the desired pattern. While some of the problems tackled by Machine Learning could, in theory, be approached with a task-specific algorithm, doing so would more often than not involve too much complexity and simply too much hand-written code to be feasible.
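
The contrast can be shown with a deliberately tiny example. Everything below is invented for illustration: the task is to classify a temperature reading as “hot” or “cold”, once with a hand-coded rule and once with a rule extracted from labelled examples:

```python
# Task-specific approach: the programmer hard-codes the rule.
def classify_hardcoded(temperature):
    return "hot" if temperature > 25 else "cold"

# Machine Learning approach: the rule (here just a threshold) is derived
# from labelled examples instead of being written by hand.
def learn_threshold(examples):
    hot = [t for t, label in examples if label == "hot"]
    cold = [t for t, label in examples if label == "cold"]
    return (min(hot) + max(cold)) / 2  # boundary midway between the classes

training_data = [(30, "hot"), (28, "hot"), (10, "cold"), (15, "cold")]
threshold = learn_threshold(training_data)

def classify_learned(temperature):
    return "hot" if temperature > threshold else "cold"

print(classify_learned(22))  # the behaviour came from the data, not the coder
```

A real Machine Learning system learns far more complicated patterns than a single threshold, but the division of labour is the same: the programmer writes the learning procedure, and the data supplies the rule.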

Now that I have laid out some basic facts, it is possible to look into what Norman actually does. It was coded to analyse a data set consisting of pictures paired with descriptions taken from an especially gore-filled subreddit. As Norman is basically a function that outputs a label when you feed it an image, it is now very likely to output a distressing label for any picture that is fed to it, simply because the vast majority of the labels it received as training samples were distressing. In Computer Science this is referred to as the Problem of Bias: if the data fed into a Machine Learning algorithm is biased, its outputs after training will be biased too.
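
An exaggerated sketch of this problem: a trivial “captioner” that ignores its input entirely and falls back on whatever it saw most often during training. The captions are invented stand-ins, and a real image-captioning model is vastly more sophisticated, but the direction of the effect is the same:

```python
from collections import Counter

# With training labels this skewed, the most frequent caption dominates
# regardless of what the input image actually shows.
training_labels = [
    "a man is murdered",
    "a man is murdered",
    "a violent accident",
    "a man is murdered",
    "a field of flowers",   # the rare benign label in a biased data set
]

def train(labels):
    """'Training' here just records how often each caption appeared."""
    return Counter(labels)

def caption(model, image):
    # The image is ignored entirely; the model can only echo its biased data.
    return model.most_common(1)[0][0]

model = train(training_labels)
print(caption(model, "rorschach_inkblot.png"))  # -> "a man is murdered"
```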

The MIT researchers responsible for creating Norman knowingly took a biased data set and fed it as a training set to an image-captioning algorithm. This is why I disagree with calling it a psychopath: if it were fed images of ice cream with labels consisting of different types of ice cream, it would output an ice cream type when fed a Rorschach inkblot. It would clearly be humorous to call Norman “the first ice cream aficionado AI” in that case, and I would argue it is just as informative to call it “the first psychopath AI” in this one. Why is it being called a psychopath, then? Because it grabs the public’s attention; it feeds into our very natural fear of machines acting without regard for humans. Historically, humans have tended to anthropomorphize everything they do not completely understand, with examples ranging from the weather deities of polytheistic systems to images of self-sufficient robots based on steam engines.

Hopefully, the conclusion of the above paragraph makes it clear why it is pointless to see human qualities in NAI. More than that, it raises the question of whether NAI should even be called artificial intelligence, as it is simply a data analysis tool, in much the same way that a steamship was a tool for getting from point A to point B.

This brings the conversation back to AGI – something that, by its very definition, is supposed to mirror a human to an extent. Can we at least see human qualities in a true AI? Will it have emotions, regard for human life, needs? The short answer is: we do not know. It would be just as fruitful to read Asimov’s Foundation, as all we can do is guess, and that is the realm of science fiction. The truth is, a Computer Scientist of sixty years ago would do just as well at creating an AGI as today’s, as the study of AGI has not progressed since then. It is possible that NAI would be used inside an AGI, but knowing that does not bring us closer to the answer. We know rather well how the visual centers in our brain interpret the data received by our retina, but we do not know what it is that allows us to be conscious human beings. For AGI the situation is even more dire, as it will not necessarily model our brain structure, and while we have a potential visual interpretation module for future AIs in the form of Machine Learning, future Computer Scientists might just as easily find this notion laughable.

So if we do not need to worry about psychopathic, Skynet-style software taking over the world just yet, what do we need to keep in mind in the foreseeable future? A lot. It is already possible to create a drone that can recognise humans. Even easier would be adding a gun to it and writing software that makes pulling the trigger a question of whether the NAI inside the drone recognises the human as an enemy. Another example is whether a self-driving car should kill its passengers in order to save a greater number of bystanders if an accident were to occur. Much like many other scientific advancements, NAI technology increases our capabilities, and those more often than not include capabilities for causing harm and violence.

While the above sounds scary, we already have drones that cause collateral damage while operated by humans. The self-driving cars which present us with ethical dilemmas will still ultimately lead to fewer accidents, simply as a matter of statistics. However, the point of NAI is that it can let humans see patterns in information that we previously could not spot. With the latest scandals centred around Big Data, Cambridge Analytica and the like, it is not very hard to imagine how the development of AI might become, or already is, inversely proportional to our ability to lead a private life.

To conclude, the anthropomorphic language used by the media when referring to NAI does not actually help us understand how it works. And that is a shame, because when it comes to machine learning algorithms, humanity’s achievements in the area are impressive or worrying, depending on one’s level of optimism. A true artificial intelligence, however, is still in the realm of science fiction, and just as steam engines were never going to allow us to build an android, computers might someday be dwarfed by a more cutting-edge technology that would allow us to build an AGI; or maybe we just need more sophisticated software and faster hardware. Who knows? And no, Norman the AI is not a psychopath.

Bibliography:

“19 A.I. Experts Reveal The Biggest Myths About Robots.” Business Insider. N.p., 2018. Web. 27 June 2018. <http://www.businessinsider.com/myths-misconceptions-about-artificial-intelligence-2015-9#yann-lecun-says-we-have-robot-emotions-all-wrong-2>.

Faggella, Daniel. “What Is Artificial Intelligence? An Informed Definition.” TechEmergence. N.p., 2018. Web. 27 June 2018. <https://www.techemergence.com/what-is-artificial-intelligence-an-informed-definition/>.

“Norman By MIT Media Lab.” Norman-ai.mit.edu. N.p., 2018. Web. 27 June 2018. <http://norman-ai.mit.edu/>.

“Philosophy Of Artificial Intelligence – Bibliography – PhilPapers.” Philpapers.org. N.p., 2018. Web. 27 June 2018. <https://philpapers.org/browse/philosophy-of-artificial-intelligence>.

Searle, John. “The Chinese Room.” The MIT Encyclopedia of the Cognitive Sciences. Eds. Robert A. Wilson and Frank Keil. Cambridge, Mass.: Massachusetts Institute of Technology, 1999. Print.