Personal Identity of AI

Personal identity is a well-known problem in philosophy that has been argued over for a long time. It deals with questions about ourselves such as: What am I? When did I begin? What does it take for a person to persist from moment to moment?

But the purpose of this article is not just to talk about the identity problem for humans; it is to explore how the concept of personal identity relates to AI. Strong or sentient AI can be considered a kind of organism that shares some properties with humans and differs in others. In this article I would like to discuss some of the properties that matter for the problem of personal identity and ask the question: how will an AI assume a personal identity?

Personal Identity in Philosophy

The question of what grounds personal identity has been argued over by philosophers for a long time. Many theories have been proposed, but none of them seems to solve the problem completely. Here are some of the main theories of personal identity.

Continuity of Bodily Substance

One view is that the persistence of personal identity over time consists in continuous bodily existence. The main difficulty for this theory is determining whether one's physical body at one point in time is the same thing as the physical body at another point. Humans grow, replacing and gaining matter over time, and once enough time has passed, none of our original matter is left. The Ship of Theseus paradox raises the same question about this theory of personal identity. It comes from Greek legend and was reported by the Greek historian Plutarch:

“The ship wherein Theseus and the youth of Athens returned from Crete had thirty oars, and was preserved by the Athenians down even to the time of Demetrius Phalereus, for they took away the old planks as they decayed, putting in new and stronger timber in their place, in so much that this ship became a standing example among the philosophers, for the logical question of things that grow; one side holding that the ship remained the same, and the other contending that it was not the same.”

Later, Thomas Hobbes added another puzzle to this question: what if the old planks were gathered up and used to build a second ship? Which ship, if either, is the original Ship of Theseus?

Derek Parfit introduces a similar thought experiment, this time involving a human, to illustrate the problems with corporeal continuity in his essay Divided Minds and the Nature of Persons:

“Suppose that you enter a cubicle in which, when you press a button, a scanner records the states of all the cells in your brain and body, destroying both while doing so. This information is then transmitted at the speed of light to some other planet, where a replicator produces a perfect organic copy of you. Since the brain of your Replica is exactly like yours, it will seem to remember living your life up to the moment when you pressed the button, its character will be just like yours, and it will be in every other way psychologically continuous with you.”

Donald Davidson's Swampman thought experiment raises the same question.

Psychological Continuity

Another idea is to define personal identity as a matter of psychological continuity. This view was mainly developed by the philosopher John Locke in 1689. According to it, for a person X to persist through a particular time, it is necessary and sufficient that there exists a person Y who psychologically evolved out of X after that time. This psychological evolution consists in the evolution of cognitive connections between beliefs, desires, intentions, memories, character traits, and so forth. This view can explain personal identity in situations like body swaps or teletransportation. But it becomes problematic in situations like cloning a person. For example, suppose that in Derek Parfit's thought experiment your body is not destroyed during the scan, so the process ends with two copies of you. That makes two future persons psychologically continuous with one presently existing person. But if personal identity is unique, how can one person really become two? There are other objections to this view as well, such as Thomas Reid's Brave Officer paradox.

What about AI?

Personal identity will not be a problem for weak AI, since weak AIs are mere devices that solve specific problems while showing some aspects of intelligence. But for strong (conscious) and sentient AI, which is independent, it is another story. A strong AI is self-aware, and that awareness is linked with personal identity: self-awareness and intelligence can lead it to ask the same questions we humans ask about ourselves. Beyond that, personal identity plays a role in morality too.

As I said at the beginning, the personal identity problem gets more difficult when it comes to AI. Thought experiments like body swapping and cloning are logically coherent scenarios, but they are not really applicable to humans (at least not yet). When it comes to AI, however, they are quite applicable. Since an AI is essentially software (perhaps combined with sophisticated hardware), it can be copied, changed, and transferred into new bodies. That makes the problem of personal identity both more difficult and much more interesting.

First of all, corporeal continuity won't be a good way to determine the personal identity of an AI. Unlike for humans, modifying the body or transferring the mind into a new body (or new hardware) will be a simple task for an AI, so an AI won't hold on to one body permanently. The better way for an AI to determine personal identity is therefore psychological continuity rather than continuity of the body.

Simple psychological continuity won't be applicable either. For example, consider a scenario where one AI's contents are copied into another AI. This memory duplication could be useful for the fast development of AI: if you have managed to teach or train a single AI, you can create a whole population of AIs by duplicating it. Now how can we distinguish their personal identities? Since both AIs have the same mechanisms and memories, it is very difficult to answer this question using psychological continuity, or the evolution of consciousness or memories. One way around this is to make both the new and the old AI conscious of the copying process. The new AI will then know which memories are its own (memories gathered after the copying) and which are not; at the very least, it will know that memories before a specific point in time came from another AI and that memories after that point are its own. For the new AI this will be somewhat like reading someone else's experiences in a book: you have the knowledge, but you know you didn't gather it yourself. It is also quite different from the book example, since the copied memories include procedural memories and first-person experiences (the feeling of being someone), which cannot be read in a book.

There is another aspect to this process. Memories won't be the only things that get copied; personality traits, beliefs, and desires will be copied too. So it is hard to say what it would be like to have a certain characteristic while also knowing that it came from another AI. For example, suppose the new AI is friendly (has the characteristic of friendliness) and knows that this friendliness isn't its own quality, that it came from another AI. Will the new AI change those characteristics? Probably not, unless it also inherited from the old AI a dislike of being someone's copy. Then again, the problem itself might change the new AI's characteristics: knowing the fact "I am a copy of someone else" could generate new emotions. And the experiences gathered after the copying will also make the new AI diverge from the old one.
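To make the copy-awareness idea concrete, here is a minimal sketch in Python. All the names are hypothetical, and this is only an illustration of the bookkeeping, not a real AI architecture: the duplicate's memory store records the moment of copying, so the new AI can always tell inherited memories from its own.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Memory:
    timestamp: float
    content: str

@dataclass
class MemoryStore:
    memories: List[Memory] = field(default_factory=list)
    # Set on the duplicate at the moment of copying; None for an original AI.
    copy_point: Optional[float] = None

    def clone(self, at_time: float) -> "MemoryStore":
        """Duplicate the store, marking the copy point in the new AI's store."""
        return MemoryStore(memories=list(self.memories), copy_point=at_time)

    def is_inherited(self, memory: Memory) -> bool:
        """Memories from before the copy point came from the original AI."""
        return self.copy_point is not None and memory.timestamp < self.copy_point
```

With such a marker, the duplicate can treat everything before the copy point the way we treat a story read in a book: known, but not lived.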

Let's try another scenario. What happens if AIs can automatically share or synchronize memories with other AIs? Synchronizing memories would be a good way for AIs to learn and improve fast (I actually took this synchronized-memory idea from the Tachikomas in Ghost in the Shell). But it would be a problem for maintaining a unique personal identity through psychological continuity, especially if the synchronizing process is unconscious. Here "unconscious" means that the AI doesn't know when the copying happened or which AI the memories came from; it only knows that it receives memories from others. Since the AI is unconscious of the copying process, the solution from the previous scenario won't work. One way around this problem is to give each AI a section that is never synchronized with others. Such a separate section could hold information about the AI's psychological characteristics and perhaps some selected memories. In this case, personal identity would be a matter of those psychological characteristics and a few unique memories, not the whole collection of memories. (By psychological characteristics I mean things such as personality traits, desires, emotions, beliefs, etc.) One could also argue that tagging all memories and characteristics according to the AI they belong to would solve this problem. But memories won't always be limited to things like mental images, structured information, or descriptions. There are also things like procedural memories and emotions, which come down to firing sequences of (artificial) neurons, or sets of weights or parametric values, so we cannot be certain that tagging them is practical.
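As a rough illustration of this partition, here is a sketch in Python (all names hypothetical, assumed for illustration only): only the shared pool participates in synchronization, while the private section holding traits and selected memories never leaves the AI.

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class SharedPool:
    memories: List[str] = field(default_factory=list)

    def synchronize(self, other: "SharedPool") -> None:
        # Merge the other AI's shared memories into ours (naive union).
        for m in other.memories:
            if m not in self.memories:
                self.memories.append(m)

@dataclass
class PrivateSection:
    traits: Dict[str, float] = field(default_factory=dict)   # e.g. {"friendliness": 0.9}
    keepsakes: List[str] = field(default_factory=list)       # selected identity-defining memories
    # Deliberately no synchronize() method: this section never leaves the AI.

@dataclass
class Agent:
    shared: SharedPool = field(default_factory=SharedPool)
    private: PrivateSection = field(default_factory=PrivateSection)

    def sync_with(self, other: "Agent") -> None:
        """Only the shared pools are exchanged; identity lives in `private`."""
        self.shared.synchronize(other.shared)
        other.shared.synchronize(self.shared)
```

The design choice here is structural rather than policy-based: the private section simply has no synchronization pathway, so no bug or decision during syncing can leak the identity-defining material.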

In all the previous paragraphs, the AI's psychological characteristics were treated as one aspect of defining personal identity. But these characteristics can change over time: personality traits can change through learning and experience, and perhaps also by consuming other AIs' memories and experiences. For a single AI, psychological continuity is not a problem, since the AI will remember the evolution of its own characteristics ("I remember what kind of a person I used to be"). But when the AI is being synchronized, we cannot be sure that this memory of the evolution will stay permanently in its mind; it could be deleted or replaced by other AIs' memories. So if identity is defined through present characteristics alone, then whenever new experiences change those characteristics, the identity of the AI changes too, and the AI won't be able to hold a unique personal identity. To avoid this, the memory of the evolution of these characteristics must be used instead of only the present characteristics. That means the AI must remember all the characteristics it has had throughout its lifetime ("what kind of a person was I before, and what kind of a person am I now"). In addition, the AI must be designed not to share this characteristic memory with other AIs.
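One way to picture this, again as a hypothetical sketch rather than a proposal from the literature, is an append-only history of trait snapshots kept inside the private, unshared section, so the AI always retains the record of how its characteristics evolved:

```python
import time
from dataclasses import dataclass, field
from typing import Dict, List, Tuple

@dataclass
class TraitHistory:
    # Append-only log of (timestamp, traits) snapshots; never synchronized.
    _log: List[Tuple[float, Dict[str, float]]] = field(default_factory=list)

    def record(self, traits: Dict[str, float]) -> None:
        """Snapshot the current characteristics; old entries are never overwritten."""
        self._log.append((time.time(), dict(traits)))

    def evolution(self) -> List[Tuple[float, Dict[str, float]]]:
        """The full record: what kind of a person was I, and what am I now."""
        return list(self._log)
```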

The second aspect I discussed in the paragraphs above is the selected unique memories (apart from the characteristic memory). Here too there are questions that need answering. For instance, what kinds of memories should be used to preserve identity? We might assume that AIs can share procedural memories (how to do certain tasks) and some declarative memories such as semantic memory (facts, concepts, names, and other general knowledge), while keeping episodic memories of past events to themselves. But sharing episodic memories could be useful too. So I think the best approach is to let the AI decide what gets shared and what doesn't; a sketch of such a policy follows below. Another issue here is the connection between memories. A new AI might receive memories that are not connected to its other memories ("how do I know how to ride a bicycle when I don't remember ever doing it?"). But since the AI knows that it can carry others' memories, it will be able to make sense of this situation.
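A sharing policy along these lines might look like the following sketch (the types and the default rule are assumptions for illustration; the point is only that the filter works per memory and is under the AI's own control):

```python
from dataclasses import dataclass
from enum import Enum, auto
from typing import Callable, List

class MemoryKind(Enum):
    PROCEDURAL = auto()  # how to do tasks
    SEMANTIC = auto()    # facts, concepts, general knowledge
    EPISODIC = auto()    # past events

@dataclass
class MemoryItem:
    kind: MemoryKind
    content: str

def default_policy(item: MemoryItem) -> bool:
    # Default assumption: share skills and facts, keep personal episodes private.
    return item.kind in (MemoryKind.PROCEDURAL, MemoryKind.SEMANTIC)

def shareable(memories: List[MemoryItem],
              policy: Callable[[MemoryItem], bool] = default_policy) -> List[MemoryItem]:
    """The AI can pass its own policy to share more (or fewer) kinds of memory."""
    return [m for m in memories if policy(m)]
```

Because the policy is just a function the AI supplies, an AI that finds sharing episodic memories useful can override the default without any change to the synchronization machinery.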

AI might also evolve to reject this notion of personal identity altogether. As the philosopher David Hume and the psychologist William James observed about humans, we don't have an unchanging mind or consciousness, so there is nothing unchanging for personal identity to hold on to. Still, for humans, personal identity or the self is strongly connected with our way of thinking. For example, we worry about our future because we believe we will exist in the future (we have the notion that my present self and the future person are the same; if present me and future me were two different people, or selves, why bother planning for the future?). But maybe AI will have another mechanism for surviving into the future, a way of thinking that doesn't include the concept of personal identity at all. That would make them not bother about personal identity.

So there are many possible scenarios regarding AI and personal identity, and that wide range of possibilities makes it hard to guess what AI will actually do. But it also makes the problem much more interesting to think about.