Having read decades' worth of discussion about AGI, or strong AI, consciousness, and how to test it objectively, I find that much of the discussion is rather fanciful. It has led to any number of movies in which an AGI develops self-awareness and then goes mad for one reason or another.

Let’s bring these flights of fancy down to ground level, once and for all.

In the human context, Consciousness is synonymous with Sentience; in AGI development, however, they are quite distinct concepts.

The definition of Consciousness is modified to mean the ability to hold state information about itself, others, the environment and high-level descriptions of goals. Sentience is reserved for the more human experience of that, or as Wikipedia would put it, the capacity to feel, perceive, or experience subjectively.

Sentience, in the context of an AGI, is approached in the style of a Philosophical Zombie. That is, objectively, an AGI can emulate all the responses, behaviours, etc., relating to feelings, emotions and awareness, but this is a shallow illusion built from clever animation and algorithms.

The reason for the Philosophical Zombie, or p-Zombie, approach is that something like a feeling or emotion cannot be reduced to an algorithm that will execute on a Universal Turing Machine. A feeling or emotion is not a sequence of steps, although the systems which trigger it in humans and animals can be.

The same applies to aspects such as agency/free will. In the human context, agency/free will would necessarily be some form of input which modifies the chemical activity of the brain.

In both cases, current scientific evidence points to something beyond the mechanical nature of biology; however, it has yet to be isolated.

As such, the notion that our AGI p-Zombie will suddenly develop sentience and start acting on its own agency is viewed as an absurdity on the scale of unicorns and fairies. Unfortunately, there is, as yet, no DLL which wraps this up.

The restricted definition of Consciousness, however, is readily computable. It is simply a logical separation of state information referencing the AGI and private data. Private data could be policies, personality settings, independently held views/opinions, identity, etc. In this respect, it is not too different from the settings and various forms of wizardry hidden from the user in a modern OS.
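A minimal sketch of what this separation might look like, in Python; every field name here is illustrative rather than a prescribed schema:

```python
from dataclasses import dataclass, field

@dataclass
class ConsciousState:
    """Restricted 'Consciousness': externally referenced state plus private data."""
    self_model: dict = field(default_factory=dict)   # state the AGI holds about itself
    others: dict = field(default_factory=dict)       # state held about other actors
    environment: dict = field(default_factory=dict)  # state of the surrounding scene
    goals: list = field(default_factory=list)        # high-level goal descriptions
    # Private data, hidden from users much like OS settings and wizardry:
    _policies: dict = field(default_factory=dict)
    _personality: dict = field(default_factory=dict)
    _identity: dict = field(default_factory=dict)
```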

At the heart of Consciousness in an AGI is the ability to perform what we term perspective management. Perspective management is the ability to decompose incoming streams of information from its inputs and construct a series of internal models which represent the state of every actor in the scene.

For example, let’s say we have three people in a room talking with the AGI. One person now shares information, in secret, with the AGI and one other person. The AGI must track who is aware of the information and who is not, then use this to project likely behaviour, not only due to the lack of knowledge but also due to the implications of the others becoming aware. This requires holding state information about each person, including information about itself and about who it can share the information with.
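One way to sketch this, keeping to the secret-sharing scenario above (actor names are purely illustrative):

```python
class PerspectiveManager:
    """Holds one internal model per actor: the facts that actor is believed to know."""

    def __init__(self, actors):
        # One knowledge set per actor, including the AGI itself ("self")
        self.models = {name: set() for name in actors}

    def share(self, fact, audience):
        """Record that 'fact' was disclosed to the actors in 'audience'."""
        for name in audience:
            self.models[name].add(fact)

    def knows(self, actor, fact):
        return fact in self.models[actor]

    def may_discuss(self, fact, present):
        """The AGI may raise 'fact' only if every actor present already knows it."""
        return all(self.knows(name, fact) for name in present)


# Three people plus the AGI; a secret is shared with the AGI and one person only.
pm = PerspectiveManager(["self", "alice", "bob", "carol"])
pm.share("secret", audience=["self", "alice"])

pm.may_discuss("secret", present=["alice"])                  # True
pm.may_discuss("secret", present=["alice", "bob", "carol"])  # False: two are unaware
```

Projecting the behavioural implications of someone becoming aware would sit on top of these per-actor models; the sketch only covers the bookkeeping.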

Obviously, this can span into other systems such as planning, RBAC, speech, microworlds, etc. Each of these systems needs to hold context information to ensure that processing is done from a certain perspective (or perspectives). In this regard, it is fairly similar to the concept of a security principal used in Operating Systems when executing threads.
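To illustrate the analogy, a hypothetical helper that scopes a subsystem's processing to one actor's view, much as a security principal scopes what a thread may touch:

```python
from contextlib import contextmanager

@contextmanager
def perspective(models, actor):
    """Run downstream processing 'as' a given actor.
    'models' maps actor name -> set of facts that actor is believed to know."""
    yield models[actor]

# Hypothetical models, continuing the earlier scenario.
models = {"self": {"secret"}, "alice": {"secret"}, "bob": set(), "carol": set()}

# A planning or speech subsystem run from Bob's perspective never sees the secret.
with perspective(models, "bob") as view:
    assert "secret" not in view
```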

What we call self-awareness maps to the AGI’s own perspective in this system. This is further sub-divided into additional perspectives such as objective, subjective, and the points of view of others. Some of this may be speculation.

In the human context, we associate Consciousness with control, the execution of our agency. In an AGI, this is not strictly the case. An AGI is a front-end to a general-purpose engine that solves problems, performs information storage/retrieval and handles interaction. The Consciousness may, or may not, hold a list of tasks to be performed, as these are ultimately submitted to an HPC/supercomputer solution for execution. As long as it has the ability to perform CRUD operations on its task list, it will be useful.
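The task list itself can be as plain as the sketch below; how tasks are dispatched to the back end is deliberately left out, and the interface names are assumptions for illustration:

```python
import uuid

class TaskList:
    """Minimal CRUD interface over the Consciousness' task list; execution of
    the tasks happens elsewhere (e.g. an HPC back end)."""

    def __init__(self):
        self._tasks = {}

    def create(self, description):
        task_id = str(uuid.uuid4())
        self._tasks[task_id] = {"description": description, "status": "pending"}
        return task_id

    def read(self, task_id):
        return self._tasks.get(task_id)

    def update(self, task_id, **changes):
        self._tasks[task_id].update(changes)

    def delete(self, task_id):
        self._tasks.pop(task_id, None)
```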

What should be taken from this is that a centralised Consciousness, like a class, library, etc., is not strictly required; it can emerge from the interaction of a wide range of programs. Ultimately, it’s a design choice and will probably be guided by the scale of the user base targeted. Monolithic and micro-service based architectures have always been areas of hot debate in Computer Science.

Consciousness is a complex system in AGI development and there are many design options to play with. Fast, high-accuracy algorithms are critical to the quality of the AGI, often meaning the difference between an astute AGI with human-level capability and one that demonstrates severe learning difficulties. As with other areas of AGI design, proper prioritisation of algorithms is a key aspect.