I am a little sick and tired of stories hitting the press about neural networks learning to develop their own skills. While impressive to much of the uneducated world, to me this is nothing more than going around in circles, demonstrating the same capability repeatedly, only applied to different tasks. Its relation to the development of Artificial General Intelligence is cursory at best, wholly misleading at worst.

It would seem that the world needs a guide, a helping hand to set it on the correct path to developing a true Artificial General Intelligence. Today I am opening up some of the R&D work at Snasci to boost global development of AGI.

Snasci is an AGI under development with human-level reasoning capability. What you are about to read is some of the early research into the engine that powers this digital mind. Forget about neural networks, which are useful only in I/O classification scenarios, and return to the earliest work in the field during the 1960s.

Today, many systems focus on parsing unstructured data, which comes in many forms. From web pages to business documents, neural networks prove useful for the categorisation of text and other media. However, AGI needs to reason, and the simple classification of data does not cut it. Data must be structured before logical processing, or reasoning, can take place. This means that while classification is a first step, a knowledge base stored in a computable structure is required.

Let’s take a look at this first diagram from Snasci:

What you are looking at is the flattened representation of a series of relational entities as stored in a database. The top-level object is an Argument, consisting of Premises and the facts which make up those premises. Facts are stored as either Predicates or Propositions and can be combined to arrive at a Premise. A combination of Premises comprises an Argument.

What we observe from the above structure is that Arguments are ultimately reduced to a series of formulas suitable for computation. Further, automatic discovery of new Arguments can be as simple as a blind process of combinatorics, provided a useful set of facts exists.
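Snasci's actual schema is not published, but the Fact → Premise → Argument hierarchy and the blind combinatoric discovery described above might be sketched as follows. All names and the example facts are illustrative, not the real engine:

```python
from dataclasses import dataclass
from itertools import combinations

@dataclass(frozen=True)
class Fact:
    text: str
    kind: str  # "predicate" or "proposition"

@dataclass(frozen=True)
class Premise:
    facts: tuple  # the facts combined into this premise

@dataclass(frozen=True)
class Argument:
    premises: tuple  # the premises combined into this argument

def candidate_arguments(facts, size=2):
    """Blind combinatoric discovery: pair up facts into candidate
    premises, then pair up premises into candidate arguments."""
    premises = [Premise(tuple(c)) for c in combinations(facts, size)]
    return [Argument(tuple(c)) for c in combinations(premises, size)]

facts = [Fact("wasps sting", "proposition"),
         Fact("stings hurt humans", "proposition"),
         Fact("wasps fly", "proposition")]
args = candidate_arguments(facts)  # C(3,2)=3 premises -> C(3,2)=3 arguments
```

In practice the combinatorial blow-up would need pruning, e.g. only combining facts that share an entity.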

Snasci’s documentation goes on to explain how this is the basis of deductive reasoning:

This is then expanded upon to include Assumptions. Assumptions are a type of Argument with gaps in their logic; an X to be filled.

Assumptions are merely Arguments with statistically likely substitutes in place of facts, or with information which is unverified. Assumptions are again a form of deductive logic, but with certain leaps. An Assumption can contain any number of missing facts; however, the quality of the assumption degrades with each additional unknown.
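One way to model the claim that quality degrades with each additional unknown is a confidence product over an argument's slots. This is my own hedged sketch of that idea, not Snasci's scoring; the slot contents and confidence values are invented:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Slot:
    fact: Optional[str]      # a verified fact, or None if this is a gap
    substitute: str = ""     # statistically likely stand-in for a gap
    confidence: float = 1.0  # 1.0 for verified facts, < 1.0 for substitutes

def assumption_quality(slots):
    """Multiply in the confidence of every unverified slot, so each
    additional unknown degrades the overall quality."""
    quality = 1.0
    for s in slots:
        if s.fact is None:
            quality *= s.confidence
    return quality

slots = [Slot(fact="wasps sting"),
         Slot(fact=None, substitute="this insect is a wasp", confidence=0.7),
         Slot(fact=None, substitute="it is near its nest", confidence=0.5)]
quality = assumption_quality(slots)  # 0.7 * 0.5 = 0.35
```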

The following diagram demonstrates how to represent the relationship between entities in deductive reasoning.

As you can imagine, traversal can become an issue when this is expanded to include every possible relationship between entities. Graph databases can help a lot here, and traversal algorithms can be optimised for particular information extraction scenarios.

For example, ‘Sting’ in the above diagram could relate to every creature capable of stinging, which, in turn, could relate to every scenario in which this is or is not harmful to humans. As such, this is a question of optimal layout and categorisation.
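The traversal concern can be made concrete with a toy adjacency map around the ‘Sting’ entity. Plain breadth-first search stands in here for whatever optimised traversal a graph database would supply; the graph content is invented for illustration:

```python
from collections import deque

# Toy entity-relationship graph around the 'Sting' concept.
graph = {
    "Sting":    ["Wasp", "Bee", "Scorpion"],
    "Wasp":     ["Harmful to humans"],
    "Bee":      ["Harmful to humans"],
    "Scorpion": ["Harmful to humans"],
    "Harmful to humans": [],
}

def reachable(start, graph):
    """Breadth-first traversal collecting every entity related to
    `start`; a real system would prune and score edges instead of
    expanding everything."""
    seen, queue = {start}, deque([start])
    while queue:
        node = queue.popleft()
        for nxt in graph.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

related = reachable("Sting", graph)  # all 5 entities in this toy graph
```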

Inductive reasoning is a modification of deductive reasoning, the latter being viewed as the cleanest form of all reasoning. Inductive reasoning replaces premises with assumptions.

Inductive reasoning fills in the gaps where knowledge is lacking. The main goal is a functional conclusion rather than an accurate one. In Snasci, conclusions derived from inductive reasoning are continuously re-examined in light of new knowledge, in an attempt to either upgrade them to deductive conclusions or trash them entirely. Either way, this becomes new knowledge. Inductive reasoning is as computable as deductive reasoning and uses the same engine.
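The upgrade-or-trash loop described above could be sketched as a re-examination step over a conclusion's assumptions. The three-way verdict and the example knowledge are my assumptions, not Snasci's internals:

```python
def reexamine(assumptions, verify):
    """Re-check an inductive conclusion's assumptions against new
    knowledge. verify(a) returns True (now verified), False (refuted)
    or None (still unknown)."""
    results = [verify(a) for a in assumptions]
    if any(r is False for r in results):
        return "discard"     # an assumption was refuted: trash the conclusion
    if all(r is True for r in results):
        return "deductive"   # every gap verified: upgrade the conclusion
    return "inductive"       # still provisional

knowledge = {"wasps sting": True, "this insect is a wasp": True}
status = reexamine(["wasps sting", "this insect is a wasp"], knowledge.get)
# both assumptions now verified, so the conclusion upgrades to deductive
```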

The deductive engine is quite powerful. If it started with the Standard Model of physics, it could expand upon that until it had constructed the universe. The problem here is that Snasci’s definition of a human in the database would be expressed in facts spanning from the Standard Model, right through to molecules, anatomy, etc. This is a very verbose description. As such, the knowledge base rests upon axioms, and these must be periodically reviewed and the chain of dependencies updated.

Abductive reasoning attempts to explain the relationship between two observations. The absurd assertions often quoted in textbooks are the result of gaps in the causal relationships; effectively assumptions, even if not explicitly coded that way.

Snasci builds a chain of causality between relationships to complete a deductive or inductive conclusion. This is ranked based upon the certainty of the information employed.

For example, the problem of affirming the consequent:

“If Bill Gates owns Fort Knox, then he is rich.

Bill Gates is rich.

Therefore, Bill Gates owns Fort Knox.”

In the above reasoning attempt, we can see that the relationship is built upon a single word, ‘rich’. This is clearly an error, as the causal relationship runs the wrong way. Snasci would be looking for the deeds to Fort Knox.
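One hedged way to catch affirming the consequent is to make the direction of a causal rule explicit: inferring the antecedent from the consequent is only ever recorded as an assumption, never a deduction. The rule and evidence come from the textbook example; the mechanism is my illustration, not Snasci's:

```python
# Causal rules stored directionally as (antecedent, consequent) pairs.
rules = {("owns Fort Knox", "is rich")}

def infer(antecedent, consequent, evidence):
    """Classify an inference attempt against a directional rule."""
    if (antecedent, consequent) not in rules:
        return "no rule"
    if antecedent in evidence:
        return "deduction"    # modus ponens: the antecedent was observed
    if consequent in evidence:
        return "assumption"   # only the consequent observed: abductive guess
    return "unknown"

# "Bill Gates is rich, therefore he owns Fort Knox" is downgraded:
verdict = infer("owns Fort Knox", "is rich", {"is rich"})  # "assumption"
```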

As such, the knowledge base must include information on causal relationships and a history of causal relationships. As an example, ownership at one point in time may have been defined by the ability to use force, and in another period by documentation such as deeds. This is quite a complex engine, with the ability to snapshot causal relationships and associate them with time periods, regions, etc.

Analogy is rather simple to perform. Each phrase, expression, image or other piece of media is mapped to the concepts it expresses:

This mapping can then be compared to the mappings of other media and re-expressed in those terms.
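Comparing concept mappings could be as simple as a set-overlap score. Jaccard similarity is my substitution here, not necessarily Snasci's measure, and the concept sets are invented:

```python
def concepts_overlap(a, b):
    """Jaccard similarity between the concept sets two pieces of
    media map onto: shared concepts over total concepts."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

phrase   = {"time", "loss", "inevitability"}
painting = {"time", "decay", "inevitability"}
score = concepts_overlap(phrase, painting)  # 2 shared of 4 total -> 0.5
```

A high score suggests the phrase could be re-expressed in the painting's terms, and vice versa.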

Given all the fake news and conspiracy theories available today, this is an interesting feature of Snasci. Under a feature known as the Correspondence of Truth, Snasci will hold multiple perspectives on its conclusions.

That is, Snasci may authoritatively hold that a given statement is untrue based upon its reasoning about the world, but it can demonstrate cultural sensitivity. For example, many Christians hold that Jesus walked on water, which according to the logic of Snasci would be a physical impossibility and thus viewed as a delusion. Rather than being adamant on that point, Snasci can draw upon a culturally sensitive set of conclusions.

As the knowledge is in a computable structure, decision making becomes a straightforward process of building arguments for and against a particular course of action. The engine merely scores sets of arguments to drive action.

In addition, all problems are defined in the context of a constraints engine. The constraints engine defines limitations, of various forms, and filters arguments based upon those criteria.
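Scoring argument sets under a constraints filter might look like the following minimal sketch. The option names, weights and the example constraint are all invented for illustration:

```python
def decide(options, constraints):
    """Filter options by hard constraints, then pick the course of
    action whose arguments-for outweigh its arguments-against most."""
    allowed = [o for o in options if all(c(o) for c in constraints)]
    def score(o):
        return sum(w for _, w in o["for"]) - sum(w for _, w in o["against"])
    return max(allowed, key=score)["name"] if allowed else None

options = [
    {"name": "act",  "for": [("solves problem", 3)], "against": [("costly", 1)]},
    {"name": "wait", "for": [("cheap", 1)],          "against": [("risk grows", 2)]},
    {"name": "harm", "for": [("fast", 5)],           "against": []},
]
constraints = [lambda o: o["name"] != "harm"]  # e.g. a safety limitation
choice = decide(options, constraints)  # "act": 3-1=2 beats "wait" at 1-2=-1
```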

Registering a problem to be decided upon is a matter of classifying the input against a knowledge base of problems. Similar to analogy, it is possible to pattern-match problems to discover closely related solutions.

Courses of action are defined as node-based workflows with re-usable components. Initially, many of these are manually authored to form a base set of workflows, which can then be reasoned into new solutions to unseen problems. The system should also be able to follow instructions by decomposing information, from a variety of sources, into its internal representation.

The process can be complicated by a switch from a pure mind to integration with external devices, such as robotic arms, robots, etc. The process becomes a blend of the knowledge base, real-world feedback, client offloading and device limitations. As such, generic workflows can be adapted to specific scenarios.

Reasoning gets more complex because, in most situations, we cannot be aware of all the facts. Much of the reasoning performed by Snasci is thus defeasible in nature, but it employs a wide variety of approaches and fact-checking to make it equivalent to deductive reasoning.

In practice, this means that every conclusion and course of action is really a best effort given the current state of knowledge about the world. The focus is then given to potential consequences of actions, or ripple effects, through internal simulation. This process should highlight dangers.

Corollaries arise from the computable nature of facts and arguments, within the constraints imposed by the causality engine.

Pattern matching can assist in automated learning by identifying propositions with similar or identical structures.
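Identifying propositions with identical structures can be sketched by abstracting a predicate-style proposition to its skeleton, with the predicate and its arguments replaced by placeholders. The notation is an assumption on my part, not Snasci's representation:

```python
import re

def shape(prop):
    """Structural skeleton of a predicate-style proposition such as
    'stings(wasp, human)': the predicate becomes 'P' and each argument
    a numbered variable, so only the shape remains."""
    m = re.match(r"\w+\((.*)\)", prop)
    if not m:
        return None  # not in predicate form
    arity = len([a for a in m.group(1).split(",") if a.strip()])
    return ("P",) + tuple("x%d" % i for i in range(arity))

# Two propositions match when their skeletons are identical:
match = shape("stings(wasp, human)") == shape("bites(dog, human)")  # True
```

Matching skeletons flags candidate analogues whose arguments can then be substituted during automated learning.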

Paraconsistent logic can be complex to implement. Contradictions arise in many arguments, but this shouldn’t lead to absurd conclusions.

Where arguments present a contradiction, a scoring mechanism should monitor the resultant chain of conclusions to ensure that they are not drifting too far from reality.

Finally, we touch upon Rhetoric. Rhetoric is really about presenting arguments in a manner most likely to persuade a person or group to take a specific action or to refrain from it.

This is very much like sentiment analysis and, in implementation, requires constant feedback to determine effectiveness. Trial and error in presentation is probably the most suitable way forward; however, studying the structure of speeches by effective communicators may reveal common patterns.

What I hope everyone reading this takes from the article is that neural networks, the ability to play Go, or even the ability to learn novel solutions to problems, is not the road to Artificial General Intelligence. Neural networks are a road to classification and the triggering of events; AGI merely needs the tools we have been using for decades: specific engines, clean data sources, fast algorithms and an optimal hardware architecture.

The vast majority of the work does not involve getting an AGI to be like a human; that part is relatively easy. It is achieving a low-latency structure in both the data and the hardware. This is an optimisation problem, which ultimately requires a working prototype to refine.

Let’s hope we don’t see any more stories about how neural nets and deep learning are revolutionising the industry. It’s just a lie.