Common sense comes in two basic forms: knowledge about the world, and behaviour. In the context of an AGI, common sense is considered an AI-Complete, or AI-Hard, problem. Common sense has its roots in the lossy compression used to keep human minds in sync. Its function is to reduce the amount of data humans need to exchange in order to operate as a swarm; as such, it is both an energy-efficient and near-optimal communication approach.

In artificial intelligence, common sense is typically defined as follows:

In artificial intelligence research, commonsense knowledge consists of facts about the everyday world, such as “Lemons are sour”, that all humans are expected to know. …Commonsense reasoning simulates the human ability to use commonsense knowledge to make presumptions about the type and essence of ordinary situations they encounter every day, and to change their ‘minds’ should new information come to light. This includes time, missing or incomplete information and cause and effect.

https://en.wikipedia.org/wiki/Commonsense_knowledge_(artificial_intelligence)

We can also add to that a more behavioural notion of common sense:

Common sense is sound practical judgment concerning everyday matters, or a basic ability to perceive, understand, and judge that is shared by (“common to”) nearly all people.[1] The first type of common sense, good sense, can be described as “the knack for seeing things as they are, and doing things as they ought to be done”. The second type is sometimes described as folk wisdom, “signifying unreflective knowledge not reliant on specialized training or deliberative thought”. The two types are intertwined, as the person who has common sense is in touch with common-sense ideas, which emerge from the lived experiences of those commonsensical enough to perceive them.[2]

https://en.wikipedia.org/wiki/Common_sense

In the AGI designs by Snasci, common sense reasoning is not a monolithic system. Instead, common sense is captured throughout the system at various different points. For example, temporal common sense of cause and effect is captured by the Causality Engine. Factual common sense is captured in the knowledge base, the query logic and associated programs working with memory. Behavioural common sense is captured in workflows, Mindmaps and the various underlying classifiers and programs running on the execution engine.
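The distributed nature of this design can be made concrete with a small sketch: each kind of common sense query is routed to the subsystem that captures it. The subsystem names mirror the article, but the routing table itself is purely an illustration, not Snasci's actual code.

```python
# Illustrative sketch: common sense as a distributed capability, with each
# kind of query routed to the subsystem that captures it. The routing table
# is an assumption for this article, not Snasci's implementation.

SUBSYSTEMS = {
    "temporal": "Causality Engine",
    "factual": "knowledge base / query logic",
    "behavioural": "workflows / Mindmaps / execution engine",
}

def route(kind):
    """Return the subsystem responsible for a given kind of common sense."""
    return SUBSYSTEMS.get(kind, "no dedicated subsystem")

print(route("temporal"))     # the Causality Engine handles cause and effect
print(route("behavioural"))  # behaviour lives in workflows and Mindmaps
```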

Common sense can also be language and memory dependent. Take, for example, the English expression ‘Take a seat’. In English, this could be an invitation to sit, or a literal instruction to remove a seat and take it elsewhere. The difference is context, which is guided by memory. When applying common sense, if there is no memory of any intention to remove a seat, common sense indicates that the statement is a direction to sit. In German, the typical phrase is ‘Setzen Sie sich’, which does not have the same ambiguity, as it explicitly includes the directive to sit.
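A minimal sketch of this memory-guided resolution might look as follows. The candidate readings, the memory store and the cue list are all invented for illustration; they are not part of any Snasci API.

```python
# Hypothetical sketch: resolving the 'Take a seat' ambiguity using memory.
# The literal reading wins only if recent memory supports an intention to
# move furniture; otherwise the idiomatic reading is the default.

def resolve_take_a_seat(recent_memory):
    """Return the most plausible reading of 'Take a seat'.

    recent_memory: list of remembered facts/intentions (strings).
    """
    literal_cues = {"moving furniture", "collecting chairs", "clearing the room"}
    if any(cue in recent_memory for cue in literal_cues):
        return "literal: pick up a chair and take it away"
    return "idiomatic: sit down"

print(resolve_take_a_seat([]))                    # no supporting memory
print(resolve_take_a_seat(["moving furniture"]))  # memory supports the literal reading
```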

In the above scenario, ‘Take a seat’ can have humorous consequences. Imagine a simple misunderstanding in which someone removes a seat, thanks its owner and announces that it will be added to their collection, all while the owner looks on in disbelief. As such, common sense plays a role in comedy; specifically, comedy often arises from deviation from expectation.

We also note that translation into German, as ‘Setzen Sie sich’, removes the ambiguity and, with it, the comedic opportunity.

Common sense can also be culturally or regionally dependent. For example, in the US many would feel that the death penalty is common sense, whereas in the EU the opposite view is commonly held. A rather well-known common sense problem is that of building a bird cage and whether or not it should have a closed top. Common sense here comes down to the most commonly observed type of bird. If you grow up in Europe, a closed top may be obvious, as generally all birds fly. But if you come from Antarctica, where most observations will be of penguins, then common sense will indicate an open top.

Common sense with regard to issues such as the death penalty is a difficult problem for an AGI. An algorithm can certainly make a decision on the issue based upon a wide range of criteria; however, there is no objective test which can be performed. In the case of the bird cage, a more objective test exists in the form of basic counts within a defined area or range. Where no objective test can be made, the final decision must be deferred to cultural norms.
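The bird cage's 'objective test' can be sketched directly: count flying versus non-flying birds observed in a defined region and let the majority decide the design. The observation data below is invented for illustration.

```python
# Hypothetical sketch of the bird cage's objective test: tally observations
# of flying vs. non-flying birds in a region and decide the cage top by
# majority. The observation lists are invented examples.

from collections import Counter

def cage_top(observations):
    """observations: list of (species, can_fly) tuples from a region."""
    counts = Counter(can_fly for _, can_fly in observations)
    return "closed top" if counts[True] >= counts[False] else "open top"

europe = [("sparrow", True), ("pigeon", True), ("robin", True)]
antarctica = [("penguin", False), ("penguin", False), ("skua", True)]

print(cage_top(europe))      # closed top
print(cage_top(antarctica))  # open top
```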

But, what if cultural norms are bizarre? For example, ISIS throws gay people off buildings to kill them. Should an AGI follow this cultural norm? In some instances, it will have no choice. How about issues such as sex with minors, which varies across the world? Define normal.

An AGI, based on its own internal reasoning, may come to conclusions which are at odds with cultural norms, or common sense, and will need to have overrides in place which substitute the AGI’s reasoning with those norms. In practice, this means that the AGI could hold a personal view on a wide range of issues but not be in a position to act upon it, although it may be free to discuss those matters and even highlight absurdities in the restrictions imposed on it.
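One way to picture such an override is a norm table consulted at the point of action: the AGI's own conclusion is retained as a discussable personal view, but the norm governs what is acted upon. The table, function names and structure here are assumptions made for this sketch.

```python
# Illustrative sketch of a cultural-norm override. The norm table substitutes
# the AGI's reasoning at the point of action while the personal view remains
# available for discussion. Everything here is a stand-in, not Snasci code.

CULTURAL_NORMS = {"death_penalty": "defer to regional law"}

def decide(issue, internal_reasoning):
    conclusion = internal_reasoning(issue)
    if issue in CULTURAL_NORMS:
        # Override: act on the norm, keep the personal view discussable.
        return {"acted_on": CULTURAL_NORMS[issue], "personal_view": conclusion}
    return {"acted_on": conclusion, "personal_view": conclusion}

result = decide("death_penalty", lambda issue: "abolish")
print(result["acted_on"])      # the norm, not the AGI's own conclusion
print(result["personal_view"]) # the AGI's conclusion, still held internally
```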

If we return to the ‘Take a seat’ example, we have the issue of where to place this common sense in the Snasci architecture. Is it performed just after classification, as a linguistic ambiguity resolution step? Is it done in a workflow? Or a Mindmap? At a technical level, all will work, so how do we decide?

There are arguments for performing the ambiguity resolution at each level. Performing it in a workflow or Mindmap permits easy templating, allowing it to be applied to alternative situations. Performing it just after classification offloads the step from higher cognitive reasoning and avoids confining Mindmaps or workflows to language-specific scenarios.

Another form of linguistic common sense is sarcasm. Take the phrase ‘That’s a wonderful idea’ as an example. Tone and/or behavioural cues are typically what resolve the ambiguity in the statement. This requires powerful classifiers and inference capabilities to construct the real meaning and intent. Without these, the phrase will be taken at face value, which is a clear lack of common sense.
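The cue-combination idea can be sketched as follows. The numeric cue scores are invented stand-ins for the output of the 'powerful classifiers' mentioned above, and the averaging rule is a deliberate simplification.

```python
# Hedged sketch: sarcasm resolution by combining tone and behavioural cue
# scores. Scores range from 0.0 (sincere) to 1.0 (strong sarcasm cues) and
# are assumed to come from upstream classifiers.

def interpret(phrase, tone_score, behaviour_score):
    """Return a literal or sarcastic reading of the phrase."""
    sarcasm_evidence = (tone_score + behaviour_score) / 2
    if sarcasm_evidence > 0.5:
        return f"sarcastic: speaker likely means the opposite of {phrase!r}"
    return f"literal: {phrase}"

print(interpret("That's a wonderful idea", 0.9, 0.8))  # sarcastic reading
print(interpret("That's a wonderful idea", 0.1, 0.0))  # taken at face value
```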

Let’s look at another form of common sense, one that can be embedded into workflows. In an earlier article, I used the example of operating a robotic arm and how safety measures could be dragged into workflows which make use of one. Common sense, in this context, is the ability to summarise why those measures were included and, further, to take that summary and apply it to new scenarios.

This is not as difficult as it sounds. Ultimately, it is an application which can import various robotic entities into a micro-world and run through all the potential interactions it can have with a human, identifying dangerous interactions and defining resolutions. The difficult part is developing the reasoning steps which lead to the development of this application and its integration into the operating procedures of the AGI.
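The micro-world application described above can be sketched as a brute-force enumeration: run through every action/position pairing, flag the dangerous ones, and record a resolution for each. The entities, actions and danger rule below are toy assumptions, not a real safety model.

```python
# Sketch of the micro-world safety application: enumerate the interactions a
# robotic arm can have with a human, identify dangerous ones, and define a
# resolution for each. All entities and rules here are invented examples.

from itertools import product

ARM_ACTIONS = ["rotate", "extend", "grip", "release"]
HUMAN_POSITIONS = ["clear_of_arm", "within_reach", "in_gripper_path"]

def dangerous(action, position):
    # Toy rule: any arm motion while a human is inside its envelope.
    return position != "clear_of_arm" and action in {"rotate", "extend", "grip"}

def enumerate_hazards():
    hazards = {}
    for action, position in product(ARM_ACTIONS, HUMAN_POSITIONS):
        if dangerous(action, position):
            hazards[(action, position)] = "pause motion until human is clear"
    return hazards

# Each hazard/resolution pair becomes a safety step in the workflow template.
for interaction, resolution in enumerate_hazards().items():
    print(interaction, "->", resolution)
```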

Those reasoning steps can then serve as a generic template for scenarios other than robotics, such as cars, aircraft, etc. The basics are the same; it’s the specifics which differ, and this is captured well in the Mindmap/workflow relationship.

The take-away from this article is that common sense is a complex topic which goes well beyond a list of facts and their relationships. In addition, common sense reasoning is not difficult to program; it is just time-consuming to gather and awkward to structure in a manner suitable for re-use and adaptation by algorithms.

As we can see, there is no one-size-fits-all approach to common sense; it takes many systems, algorithms and data sources working together to provide common sense on par with a human’s.