Consciousness in Humans and AI

Consciousness is a very hard term to define. It is also hard to convince others of your definition, since everyone has some sort of intuitive idea about it (Daniel Dennett talks about this in one of his TED talks: “it’s very hard to change people’s minds about something like consciousness and I finally figured out the reason for that… the reason for that is that everybody is an expert in consciousness…”). An extremely simple way of explaining it would be this: consciousness is the awareness of, and control over, external objects as well as one’s own mental content. It can also be defined as sentience: having selfhood, subjectivity, or the ability to experience or to feel.

One of the first questions about consciousness in philosophy is the mind-body problem. The first influential philosopher to address it was René Descartes (1596–1650). According to him, consciousness (or mind) is made of mental substance (res cogitans), one of the two substances the universe is made of (the other being physical substance, or res extensa). This is known as Cartesian dualism. Later, theories positing only one substance were also proposed (monisms). Three types of monism are physicalism (the view that mind is comprised of matter), idealism (the view that matter, or the physical world, is an illusion and that only mind exists), and neutral monism (the view that both mind and matter are made of a distinct essence that is itself identical to neither of them).

One of the earliest philosophers to try to define the actual term consciousness was John Locke (1690). He defined it as “the perception of what passes in a man’s own mind”. Another important view on consciousness comes from the philosopher and psychologist William James (1890). He proposed the idea of the stream of consciousness: that consciousness flows like a stream. He defined psychology as the description and explanation of states of consciousness, and he stated five characteristics of consciousness: personal subjectivity, constant change, continuity despite the change, intentionality, and selective attention.

One interesting question is what it would be like not to be conscious (the question itself is actually meaningless, but let’s go with it for now). I like to use an example for this. Say we have some kind of device attached to our body. It can take inputs and/or give outputs (for example, someone can press a key on the device and it will type or display a character, like a typewriter). But we are unaware of this (we don’t feel anything when someone presses the keys). So if we take the whole system, device and person, we can say that the typing is one of the system’s unconscious processes. Now imagine that all of our sensory organs are like that device (interconnected, doing some processing, and producing output). What would that be like? (Not having external awareness.) In this situation we would still take sensory inputs, and even process them and produce outputs, but we would not be aware of it. Taking inputs does not necessarily mean we are aware of the process. Of course, in the above example we would still have awareness of our own mental content, since we are humans. Not having even that would be like deep sleep (without dreams), or a coma (I think), or being struck in the head and rendered unconscious. But as I said, the question is not actually well formed: as Thomas Nagel (1974) put it, the ‘what it is like’ part itself is consciousness (Nagel’s definition is that an organism is conscious if there is something that it is like to be that organism). In other words, to feel is to be conscious.

According to some philosophers, consciousness is an illusion: merely a product of the information processing in the brain, or in other words a virtual self. On that theory, our decision making and other mental processes are always unconscious and happen as parallel processes, but when we introspect, or ask ourselves what we are thinking now, content emerges from the processes currently going on in the brain as an answer to the question. That would mean consciousness exists only when we look into our minds. This isn’t an unreasonable argument. But I think that even if consciousness is an illusion, it is still useful (and convenient) to have, since it gives us the notions of free will, emotions, beliefs, a sense of self, etc. It also matters that we can actually ask ourselves questions, that is, introspect.

According to the two-factor theory of emotion (the Schachter–Singer theory), emotions are based on two factors: physiological arousal and a cognitive label, the experience of the emotion. So an emotion is the result of being aware of a physiological effect, the conscious experience of that effect. According to this, then, emotions are a result of consciousness. In the James–Lange theory, another theory of emotion, the conscious experience is secondary to the physical effect. But the conscious experience is still there.

Beliefs, like emotions, need consciousness. Like all conscious experiences, a belief has a subject (the believer) and an object (the proposition believed). And as with emotions, we must be aware of the belief, although a belief is not a physical effect but a content of our own mind. So, by this reasoning, beliefs also need consciousness.

When we talk about AI (not strong AI), or autonomous agents, they are more like the device example above, except without the human inside (so they have no internal awareness). What do we need for consciousness to emerge from their information processing (assuming consciousness isn’t a non-physical entity)? Or, what is the difference between their information processes, which aren’t conscious, and ours, which are? Let’s look at this from another angle. What is the difference between our unconscious information processes (for example, if we accidentally touch a burning object we immediately pull away our hand without consciously deciding to; the thoughts about it come later) and our conscious processes (for example, I make a conscious decision to lift my hand and the hand goes up)? Since both our conscious and unconscious processes happen in the brain and the rest of the nervous system, the difference between them must lie in the architecture, speed, and complexity of the parts of the brain and nervous system that contribute to each. Coming back to AI, I think weak AI (autonomous agents) is more similar to our unconscious processes (not exactly the same, but what happens is of the same kind). So, as in the brain, for an AI’s information processing to be conscious it must have an appropriate architecture with the relevant complexity and speed. The speed-and-complexity reply to the Chinese Room argument by Paul and Patricia Churchland (their luminous room thought experiment) makes the same point.

Another problem about consciousness, which comes up especially in AI, is: how do we know whether someone is conscious? This is known as the problem of other minds (given that I can only observe the behavior of others, how can I know that others have minds?). Two answers have been given to this problem: type physicalism and philosophical behaviorism. Type physicalism suggests that a given brain state is responsible for a corresponding mental state, so if someone is in a given brain state then he is in the corresponding mental state. One problem with this approach is that we cannot be certain that the brain state causes the mental state (what if both the brain state and the mental state are caused by something else?). Another problem is whether a different type of brain state can give rise to the same mental state (especially relevant for AI). The other approach, philosophical behaviorism (or logical behaviorism), states that to have a certain mental state just is to behave in certain ways. So to know that someone is in a mental state, one only has to observe his or her behavior (since the mental state is the behavior). The problems with this approach are that someone can feign a mental state, and that it is unclear how to capture the qualitative nature of an experience.

Something I noticed when going through the feedback on my earlier articles is that people are somewhat uncomfortable talking about consciousness. I guess that is mostly because it is hard to define. Especially, people who work in or are interested in AI tend to believe that consciousness is an illusion and/or that AI doesn’t need consciousness because AIs are just tools (mostly because that makes their life easier, I guess, since you cannot build something you don’t fully understand). And people who don’t work in AI but are interested in philosophy just throw out the Chinese Room and close the case when asked about consciousness in AI. But for me consciousness is interesting precisely because it is hard to define. It is a kind of mystery (if nothing else, the fact that human evolution could create something like consciousness is amazing). And it is also a sort of unattainable goal (like the unattainable object of desire, or what Lacan called objet petit a). Maybe people will figure this out in the future, or maybe consciousness will turn out to be just a mirage. But I will surely enjoy the journey towards the answer.