Prediction in an AGI is a key performance enhancer and occurs at many levels throughout the system. In order to deliver a real-time experience, there is no avoiding the use of prediction algorithms. An AGI is not magic; it is ultimately built on hardware similar to that of many cloud services and so faces the same challenges.

I won’t get into hardware-level prediction here; instead, I’ll begin at the level of the execution platform. The execution platform used in an Artificial General Intelligence will have many features similar to BOINC and serverless functions. In general usage, we can have hundreds, thousands, or perhaps millions of apps spinning up and terminating, depending on the scale of the AGI solution.

Cold starts are one particular issue where predictive behaviour can assist. In general operation, it becomes possible to use big data analysis to predict which applications/functions to warm up, and in what quantity. Much like an energy provider, AGI as a Service will have its peak and off-peak times, as well as sub-trends in terms of application/function usage.
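As a rough sketch of this idea, the warm-up decision can be driven by nothing more than invocation counts bucketed by hour of day. Everything below (class name, bucketing scheme, the cap of four warm instances) is an illustrative assumption, not part of any real platform's API:

```python
from collections import Counter, defaultdict

class WarmupPredictor:
    """Hypothetical sketch: predict which functions to pre-warm, based on
    historical invocation counts bucketed by hour of day."""

    def __init__(self):
        # hour of day (0-23) -> Counter mapping function name -> invocations
        self.history = defaultdict(Counter)

    def record(self, hour, function_name):
        self.history[hour][function_name] += 1

    def warm_set(self, hour, top_k=3):
        """Return (function, warm_instance_count) pairs for this hour.

        The busiest function gets up to 4 warm instances; the rest are
        scaled proportionally (an arbitrary illustrative policy)."""
        counts = self.history[hour].most_common(top_k)
        if not counts:
            return []
        peak = counts[0][1]
        return [(name, max(1, round(4 * n / peak))) for name, n in counts]
```

A real deployment would replace the in-memory counters with the big data analysis described above, but the decision logic, rank by observed frequency per time bucket and pre-warm the top of the list, is the same.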

Sitting a layer above this are the prediction systems of the AGI itself. This is where the various applications/functions of the AGI attempt to predict what the user will do next and pre-cache programs and data for those tasks. As with the underlying execution engine, the AGI will learn from experience over time what the most likely execution paths are, and hence the resources required.

There are two main approaches to pre-caching. The first uses basic frequency counts, with a Hadoop solution maintaining graphs of Mindmaps/Workflows. The second approach is a propensity model, or series of propensity models, which performs a similar function.
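The frequency-count approach reduces to a transition graph: count how often one Workflow follows another, then pre-cache the most likely successors. The sketch below is an in-memory stand-in for the graph a Hadoop job would maintain; the class and Workflow names are purely illustrative:

```python
from collections import Counter, defaultdict

class TransitionPredictor:
    """First-order frequency model over Workflow transitions (an
    illustrative stand-in for a batch-maintained Mindmap/Workflow graph)."""

    def __init__(self):
        # current Workflow -> Counter of next Workflow -> observed count
        self.transitions = defaultdict(Counter)

    def observe(self, current, next_workflow):
        self.transitions[current][next_workflow] += 1

    def predict(self, current, k=2):
        """Most likely next Workflows, i.e. candidates to pre-cache."""
        return [wf for wf, _ in self.transitions[current].most_common(k)]
```

With enough observations, `predict` gives the execution paths worth warming up before the user asks for them.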

We can extend this predictive behaviour to the general output of the AGI itself. That is, by employing propensity models and/or statistical approaches, we can have the AGI predict both the past and future behaviour of any system. This type of approach has applications in nearly every area: from general planning to speculation about prior events, from the likely winner of an election to historical analysis. Combining this with more traditional reasoning approaches can reduce noise and thus spurious output.
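At its core, a propensity model assigns a probability to an event given a feature vector. A minimal sketch, assuming a logistic model with weights already learned elsewhere (the feature and weight values below are made up for illustration):

```python
import math

def propensity(features, weights, bias=0.0):
    """Logistic propensity score: the modelled probability that an event
    occurs, given a feature vector and pre-learned weights.

    This is only the scoring step; fitting the weights is assumed to
    happen offline."""
    z = bias + sum(w * x for w, x in zip(weights, features))
    return 1.0 / (1.0 + math.exp(-z))
```

Whether the event lies in the future (a plan succeeding) or the past (a prior event having occurred), the scoring step is identical; only the training data differs.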

An AGI with this level of general predictive capability will eventually have greater than human capacity for predicting outcomes over a broad range of timeframes given the correct datasets.

Attention in an AGI is unlike the concept of attention in humans. In general, unless running in a lean-and-mean construct, an AGI will process all classified items that enter its input queues. Attention, in this architecture, is a mixture of subsetting data and/or shifting its processing priority. This, ultimately, is governed by the Mindmaps and Workflows being executed in conjunction with the predictive systems.

Obviously, interruptions demand attention, but handling them is just a matter of classifying their nature and then, if necessary, branching execution off to the appropriate Mindmaps/Workflows.
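That classify-then-branch step amounts to a dispatch table keyed on the interrupt's classified type. A minimal sketch, with all handler names invented for illustration:

```python
def handle_interrupt(event, handlers, default=None):
    """Dispatch a classified interrupt to the Mindmap/Workflow handler
    registered for its type; fall back to `default` (or do nothing)
    when no handler is registered."""
    handler = handlers.get(event["type"], default)
    return handler(event) if handler else None
```

Unrecognised interrupts simply fall through, which matches the "if necessary" above: not every interruption warrants a branch.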

In a lean-and-mean scenario, only classified data of certain types are processed from the input stream. For example, while audio may be available, the AGI may elect to process only the video component in real time as it searches for something.
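The lean-and-mean subsetting can be expressed as a filter applied before any heavy processing runs. A sketch, assuming each input item carries a classified `"type"` field (an assumption about the stream format, not a given):

```python
def lean_filter(stream, wanted):
    """Yield only input items whose classified type is in `wanted`;
    all other modalities are dropped before any expensive
    processing is invoked."""
    for item in stream:
        if item["type"] in wanted:
            yield item
```

Because it is a generator, the unwanted modalities are never materialised downstream, which is where the energy saving comes from.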

Lean-and-mean operation, attention and predictive capability ultimately translate into energy-saving approaches, and energy use is one of the benchmarks by which an AGI will judge its various algorithms. In addition, on elastic platforms, the predictive and attention features allow the AGI to expand and contract dynamically to optimise billing.