After discussing the Causality Engine and general approaches to reasoning, I will now work through an example of how an AGI could answer a complex question. Along the way it will become apparent that, at each stage, several approaches may be viable, and they should certainly be explored for performance and scalability.

Let’s say we asked an AGI:

What would happen if I plugged a USB cable into a chicken?

First we must assess the sentence for meaning. Meaning in the context of an AGI is unlike a human’s experience of it: for an AGI, meaning is how a piece of data relates to other data and the course of action that relationship sets up.

We can employ an adaptation of conceptual dependency theory to determine meaning. If we focus on the phrase “what would happen”, we can map it to the concept of “consequence”. That mapping in turn determines the course of action, that is, to determine the result of this interaction. At this point we enable a workflow, or program, which solves interaction problems.
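The phrase-to-concept-to-workflow chain can be sketched as a pair of lookup tables. The phrase table, concept names, and workflow names (`interaction_solver` and so on) are illustrative assumptions, not a fixed design:

```python
# Conceptual-dependency-style mapping: a leading phrase selects a concept,
# and the concept selects the workflow (program) that handles it.
PHRASE_TO_CONCEPT = {
    "what would happen": "consequence",
    "what is": "definition",
    "how do i": "procedure",
}

CONCEPT_TO_WORKFLOW = {
    "consequence": "interaction_solver",
    "definition": "kb_lookup",
    "procedure": "planner",
}

def select_workflow(sentence):
    """Map a recognised phrase to a concept, then to the workflow it enables."""
    lowered = sentence.lower()
    for phrase, concept in PHRASE_TO_CONCEPT.items():
        if phrase in lowered:
            return CONCEPT_TO_WORKFLOW[concept]
    return None  # no phrase recognised; some other assessment path is needed

print(select_workflow("What would happen if I plugged a USB cable into a chicken?"))
# -> interaction_solver
```

In practice the phrase matching would be far richer than substring search, but the shape of the mechanism, meaning as a route to a course of action, is the same.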

Moving to the next section of the sentence, “if I plugged in” maps to the concept of “insert”, which is used as type information for the interaction solver, while the USB cable and the chicken become the solver’s X and Y unknowns. The solver’s first approach is an explicit search in the knowledge base which, assuming the question has never been asked before, results in a miss.
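The explicit first pass can be sketched as a keyed lookup on the (type, X, Y) triple. The function name, the result shape, and the single seeded entry are all hypothetical:

```python
# Hypothetical interaction-solver entry point: an explicit lookup keyed on
# (interaction_type, x, y). A novel question produces a miss.
known_interactions = {
    ("insert", "key", "lock"): "the lock turns",  # illustrative seed entry
}

def solve_interaction(interaction_type, x, y):
    result = known_interactions.get((interaction_type, x, y))
    if result is None:
        return {"status": "miss", "query": (interaction_type, x, y)}
    return {"status": "hit", "result": result}

print(solve_interaction("insert", "usb_cable", "chicken"))
# -> {'status': 'miss', 'query': ('insert', 'usb_cable', 'chicken')}
```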

From an HCI perspective, we can provide feedback by selecting a phrase from a set which captures the concept of “missed”. The final response may be “I have never thought about it” or “Hmmm…let me think.”. The purpose of this HCI feedback is to mask delays when they exceed certain thresholds: it buys time, much like a spinning egg timer in some applications.
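A minimal sketch of that masking behaviour, assuming a fixed delay threshold and a small phrase set (both values are made up for illustration):

```python
import random

# Latency masking: once the expected delay crosses a threshold, emit a
# stalling phrase drawn from the "missed" phrase set; below it, say nothing.
MISS_PHRASES = [
    "I have never thought about it.",
    "Hmmm... let me think.",
]

DELAY_THRESHOLD_S = 0.5  # illustrative threshold

def feedback_for_delay(expected_delay_s):
    """Return a stalling phrase only when the delay would be noticeable."""
    if expected_delay_s > DELAY_THRESHOLD_S:
        return random.choice(MISS_PHRASES)
    return None  # fast enough: no feedback needed
```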

We could also pass the inputs to a humour module, which looks at how closely related the two inputs to the interaction solver are. This could be as simple as a nearest-neighbour distance over learned clusters, with greater distances between the inputs being funnier. In the case of a chicken and a USB cable, the distance would be great and we could generate a giggle to go along with the response. The clusters themselves can be derived from big-data analysis of the Causality Engine.
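As a sketch, incongruity can be scored as distance in an embedding space. The 2-D coordinates below stand in for cluster positions that would come from the Causality Engine analysis; the numbers and the threshold are invented:

```python
import math

# Humour module sketch: score how unrelated two solver inputs are.
# Toy 2-D "embeddings"; real ones would be learned, not hand-written.
embeddings = {
    "usb_cable": (9.0, 1.0),   # electronics cluster
    "keyboard":  (8.5, 1.5),   # electronics cluster
    "chicken":   (1.0, 8.0),   # animals cluster
}

def incongruity(a, b):
    """Euclidean distance between two concepts; larger means funnier."""
    (ax, ay), (bx, by) = embeddings[a], embeddings[b]
    return math.hypot(ax - bx, ay - by)

def is_funny(a, b, threshold=5.0):
    return incongruity(a, b) > threshold

print(is_funny("usb_cable", "chicken"))   # -> True
print(is_funny("usb_cable", "keyboard"))  # -> False
```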

The illusion, of course, is that the AGI “gets” that this is a funny question; in reality it is making a best effort at classification.

After this miss, the interaction solver traverses the database seeking suitable places to insert something into a chicken, the best answers being its orifices, and the portion of a USB cable which can be inserted, the connector. It would then seek the typical dimensions of both. A quick comparison would reveal the USB connectors to be larger than the orifices. Another database lookup would return the impact of inserting a foreign object of larger dimensions into an orifice, yielding results such as pain, tearing, and so on.
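The fallback chain, compare typical dimensions, then look up the consequences of the resulting fit, can be sketched as below. The dimension figures and consequence entries are illustrative placeholders, not real knowledge-base data:

```python
# Fallback reasoning: dimension comparison followed by a consequence lookup.
typical_width_mm = {
    "usb_a_connector": 12.0,  # placeholder figure
    "chicken_orifice": 8.0,   # placeholder figure
}

consequences_of = {
    ("foreign_object", "larger_than_orifice"): ["pain", "tearing"],
    ("foreign_object", "smaller_than_orifice"): ["discomfort"],
}

def insertion_outcome(obj, target):
    fit = ("larger_than_orifice"
           if typical_width_mm[obj] > typical_width_mm[target]
           else "smaller_than_orifice")
    return consequences_of[("foreign_object", fit)]

print(insertion_outcome("usb_a_connector", "chicken_orifice"))
# -> ['pain', 'tearing']
```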

Depending on the completeness of the database, each of these results could be drilled down into further, revealing additional potential consequences such as psychological trauma, sepsis, severity of pain, etc.
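That drill-down is naturally recursive: each first-level consequence may itself have entries in the knowledge base, expanded until the table bottoms out or a depth limit is hit. The edge table below is an invented example:

```python
# Recursive drill-down over a consequence graph (edges are illustrative).
leads_to = {
    "tearing": ["sepsis"],
    "pain": ["psychological trauma"],
}

def drill_down(consequence, depth=2):
    """Collect downstream consequences up to a fixed depth."""
    if depth == 0:
        return []
    found = []
    for nxt in leads_to.get(consequence, []):
        found.append(nxt)
        found.extend(drill_down(nxt, depth - 1))
    return found

print(drill_down("tearing"))
# -> ['sepsis']
```

How deep to drill would itself be a tunable, richer databases make deeper expansion worthwhile, which is the completeness point above.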

The AGI could present the facts in full or, depending on its personality profile, present the results in funny terms. For example, it could use the severity of pain to set the scale of a negative experience, run a microworld simulation showing a USB cable being stuck up the bottom of a squealing chicken, and state “He didn’t like that”.

What we observe here is that common-sense reasoning requires a hyper-connected knowledge base with a certain base amount of information, which permits inference to reveal new knowledge. This is a type of problem for which deep learning is unsuitable, though it could play a role in certain select areas. For the most part, statistical methods, comparisons, and suitably structured data are all that is required to answer even the most complex questions.

The primary challenge, of course, is the construction of the knowledge base, and that’s just a question of getting on with it and doing the work.