Now that we have the 10,000-foot view of an AGI, I can begin to explore some aspects of the system in greater detail. I will start with the Memory block of the AGI Hyperscale diagram. In traditional systems, memory is just a chunk of RAM; objects and code are allocated a portion of it when created. Memory in an AGI is a bit more complex.

Memory in an AGI is a software solution which can be hardware accelerated. This software solution manages raw RAM, abstracting it into a format more suitable for an AGI. If you have ever used Redis or even a RAM disk, you will be familiar with the basics of the concept. In an AGI, however, multiple abstractions are used. In this article, I will cover some of the major elements of that Memory.

The first abstraction we will discuss is the stream memory. The stream memory is essentially an infinite stack onto which we push the latest state updates coming from the edge classifiers. Each frame in this stack is datetime-stamped and, being the raw data of the system, its ultimate destination is a data lake. The stream memory is spread across multiple underlying technologies which get colder as a frame ages. For example, in the first hour a frame may exist in RAM, in the second hour it may be pushed to SSD and in the third to a data lake. The data lake may be a mixture of disks and tape solutions, with the oldest data pushed towards the slowest/cheapest storage medium.
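The tiered aging described above can be sketched as a simple policy function. This is a minimal illustration, assuming hypothetical tier names and the one-hour thresholds from the example; a real system would obviously tune both.

```python
from datetime import datetime, timedelta, timezone

# Hypothetical storage tiers, ordered hot to cold. The names and
# age thresholds are illustrative assumptions, not a specification.
TIERS = [
    ("ram", timedelta(hours=1)),
    ("ssd", timedelta(hours=2)),
    ("data_lake", timedelta.max),
]

def tier_for(frame_time: datetime, now: datetime) -> str:
    """Pick the storage tier for a stream-memory frame by its age."""
    age = now - frame_time
    for name, limit in TIERS:
        if age < limit:
            return name
    return TIERS[-1][0]
```

The same policy generalises to any number of tiers (disks, tape) by appending entries to the table, with the coldest tier acting as the catch-all.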

The stream memory is immutable once written and can include blockchain-like validation techniques to ensure the integrity of the data.

Algorithms can act over the entire length of the stream memory, and the ultimate physical infrastructure is a matter of budget. Each frame is really a collection of NoSQL objects describing the state changes that have occurred. Depending on the quantity of classifiers used, the frames can be rather large as they can describe video, text, subtext, expressions, sentiment, colours, events, ethnicity, relative positions, etc. The more the better, as a richer description of a given scene results in a more comprehensive analysis.

The next major Memory component is the Microworld memory. The microworld memory is where state from the stream memory is integrated into a representation/model of the world. Algorithms parse the stream memory using a mixture of approaches and add/remove/update elements in the microworld.

The microworld memory is based upon a human experience of the world. A human hallucinates/dreams the real world, generating an internal model which is updated and kept in sync with the objective world by means of sensory information. All our decisions come from this internal model, rather than objective reality. It is the same for an AGI.

The microworld is again an infinite stack; however, it permits the addition of revisions/versions of frames. This is required because subsequent information from the stream memory could revise the state of the microworld at any point. For example, let’s say we learn that at a given event in 1970 a third person, previously unknown to the AGI, was present in the room. Not only do we need to create an updated version of that scene (and subsequent scenes), but we must also retain the memory of the stream prior to this updated information becoming available. Further, we need to be able to capture a metric of the quality or reliability of these revisions to determine the likely truth of a sequence of memories.
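The revision scheme above can be sketched as a timeline where every logical frame keeps all of its versions, each tagged with a confidence score, so earlier beliefs survive alongside corrections. The class and field names here are illustrative assumptions:

```python
class MicroworldMemory:
    """Versioned microworld frames: every revision of a frame is kept,
    with a reliability score, so history prior to a correction survives."""

    def __init__(self):
        self._revisions = {}  # frame_id -> list of (state, confidence)

    def revise(self, frame_id, state, confidence):
        """Append a new version of a frame; never overwrites old ones."""
        self._revisions.setdefault(frame_id, []).append((state, confidence))

    def current(self, frame_id):
        """Best-known state: the highest-confidence revision,
        with later revisions winning ties."""
        revs = self._revisions[frame_id]
        best = max(enumerate(revs), key=lambda p: (p[1][1], p[0]))
        return best[1][0]

    def history(self, frame_id):
        """Every recorded version, oldest first."""
        return list(self._revisions[frame_id])
```

The 1970 example plays out as two calls to `revise` on the same frame id: the original two-person scene, then the higher-confidence three-person correction, with both retained in `history`.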

The microworld memory is stored as a stream of scene graphs, or deltas to a scene graph. Unlike a scene graph used in 3D applications, this scene graph contains additional state such as people’s opinions, expressions, emotional state, colours, etc. This scene graph is the master summary of all that is known. In addition, it will have links to other scene graphs. For example, someone may mention a father’s death, which results in an emotional reaction. Both of these events may be linked back to an earlier scene graph where the death occurred, or where the information was first obtained.
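A minimal sketch of such an enriched scene graph node: ordinary hierarchical structure plus non-visual state (emotion, topic) and links back to earlier scenes. All labels and field names are illustrative, not a schema.

```python
class SceneNode:
    """A scene-graph node carrying arbitrary non-visual state and
    links to earlier, related scene graphs."""

    def __init__(self, label, **state):
        self.label = label
        self.state = state          # e.g. topic, emotional_state, sentiment
        self.children = []
        self.linked_scenes = []     # ids of earlier scene graphs

    def add_child(self, child):
        self.children.append(child)
        return child

# The father's-death example: a remark and the emotional reaction it
# triggers both link back to the earlier scene where the death was
# first recorded. Scene ids here are invented for illustration.
scene = SceneNode("living_room_conversation")
remark = scene.add_child(SceneNode("remark", topic="father's death"))
reaction = scene.add_child(SceneNode("reaction", emotional_state="grief"))
remark.linked_scenes.append("hospital_scene_1970")
reaction.linked_scenes.append("hospital_scene_1970")
```

Storing deltas rather than whole graphs would then amount to recording only the nodes and links added or changed per frame.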

Obviously, not all of this is done in real time. Much of the richness of a scene graph is added later by algorithms poring over the data and connecting it with other data. Real-time priority is given to the more conversational aspects and the AGI’s primary tasks. As an example, the AGI may have a primary role as a Medical Doctor, so real-time priority will be given to algorithms of a medical nature rather than algorithms that can link what has been said to lyrics from songs. If the primary task of the AGI was entertainment, then the opposite may be true. To maintain the General aspect, the priority of algorithms can be adjusted to maintain flexibility.
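The adjustable prioritisation could be as simple as a weighted ordering over the same pool of enrichment algorithms, re-weighted per deployment. The algorithm names and weights below are made up for illustration:

```python
class AlgorithmScheduler:
    """Orders enrichment algorithms by an adjustable priority; the
    real-time budget is spent from the front of the ordering."""

    def __init__(self, priorities=None):
        self.priorities = dict(priorities or {})

    def set_priority(self, name, priority):
        """Re-weight an algorithm, e.g. when the AGI's role changes."""
        self.priorities[name] = priority

    def realtime_order(self):
        """Algorithm names in descending priority."""
        return sorted(self.priorities, key=self.priorities.get, reverse=True)

# Medical deployment: diagnosis outranks song-lyric linking.
scheduler = AlgorithmScheduler({"medical_analysis": 10, "lyric_linking": 1})
# Entertainment deployment: the same pool, priorities flipped.
scheduler.set_priority("lyric_linking", 20)
```

Keeping the pool fixed and only adjusting weights is what preserves the General aspect: no capability is removed, only deferred.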

The microworld model, as a master reference of all state in context, is preferable to decentralised approaches simply because of speed. That’s not to say decentralisation, or multi-agent approaches, could not co-exist. It all depends on the nature of the analysis and whether that is required in real-time.

Another aspect of the Memory is the Key memories cache. This is a smaller memory which stores highlights, or important memories/events. For example, if the key memory cache is about a person, it may contain links to their wedding, an accident, a death, etc. This form of memory is just an accelerator and often the first port of call when integrating new memories. It also, like the other types, maintains history and versioning information.
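Since the cache stores links rather than the memories themselves, a sketch is little more than a per-subject index into the microworld stream. The structure and event labels here are illustrative assumptions:

```python
class KeyMemoryCache:
    """Per-subject index of highlight events; each entry is a link
    (label, scene_id) into the full microworld memory, not a copy."""

    def __init__(self):
        self._highlights = {}  # subject -> [(label, scene_id)]

    def add(self, subject, label, scene_id):
        """Record a highlight link for a subject."""
        self._highlights.setdefault(subject, []).append((label, scene_id))

    def lookup(self, subject):
        """First port of call when integrating a new memory about a
        subject: their highlight links, in insertion order."""
        return list(self._highlights.get(subject, []))
```

Because it only accelerates lookups, the cache can be rebuilt from the microworld at any time, which also makes its own history and versioning straightforward to maintain.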

This article has been a whistle-stop tour of the Memory system of an AGI. No doubt, in drilling down into this during development many approaches and additional abstractions will be identified. The ultimate design will come down to performance testing and a little bit of art rather than science.

In this article, I have not discussed the interaction with the Knowledge base, selecting workflows, or preparing memory abstractions on an ad-hoc basis. Nor have I discussed predictive extraction of knowledge base items, planning, etc. These add further layers of complexity to the overall architecture.

The key point to take away from this article is that Memory, in the context of an AGI, is a complicated affair. Even more so at hyperscale levels, where information about groups can be shared behind the scenes.

Most important is the simple fact that we do not, as yet, have an off-the-shelf solution which even comes close to the basic requirements for this Memory. It will be a big job to develop it from scratch.