Now that we have a fairly good understanding of what an AGI is, we can see what iterative self-improvement would look like. Self-improvement comes in two basic forms: gathered knowledge and information extraction. Armed with this information, we can begin to explore some of the claims behind the concept of the Technological Singularity.

The most popular version of the Singularity is the ‘Intelligence Explosion’, where each cycle of self-improvement results in the faster emergence of an even greater intelligence. But how does this stack up given what we currently understand?

Intelligence is a difficult thing to pin down, as it can be relative and subjective. Is intelligence something that helps you survive? Or is it knowledge and reasoning for their own sake? For example, what are the implications of not knowing how to program every VCR ever created? Do you lose your super intelligence status?

I think most would agree that not knowing certain things does not impact intelligence; as such, intelligence is not exclusively a matter of what you know. In most cases, I think we can conclude that super intelligence is relative to the average human ability to reason. In this context, ability really means aspects such as speed, depth of consideration, the number of variables considered, the breadth of topics which can be considered, and so on.

When we sit back and look at this objectively, we note that for the vast majority of problems encountered in the real world, there would be little difference between a super intelligence and a human. For example, writing with a pen would not be improved upon much by a super intelligence. As such, we can conclude that a super intelligence only differentiates itself where a particular problem or goal permits.

As such, it is questionable whether this notion of a super intelligence conveys an advantage in many use cases. Further, given that a super intelligence is really a collection of systems, is that so different from human society at large and its notions of teamwork?

Given that a super intelligence only differentiates itself when the problem or goal permits, we can state that we’re not expecting any fundamental shift in the nature of reasoning itself, but rather just the ability to address problems or goals for which humans are too slow or physically unsuited. To me, that is not an increase in intelligence as such.

It could be argued that aspects such as increased depth of consideration, linking of disparate sources of information, increased co-ordination, and so on, amount to an increase in practical intelligence. But would it be to such an extent that it would be alien to us? I doubt it very much, and it is probably nothing we couldn’t accomplish ourselves given the time and resources.

When we consider the nature of logic and computation, this seems reasonable, as at a fundamental level none of this changes in a super intelligence.

As such, we can constrain the “Intelligence Explosion” to the improvement and expansion of information extraction, from both a tool and algorithm perspective, and a corresponding expansion of a knowledge base. This, obviously, has finite limits and limited practical applications.
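To make the “finite limits” point concrete, here is a minimal sketch in Python. It is purely illustrative: the size of the information pool, the starting extraction rate and the per-cycle improvement factor are all assumptions chosen to show the shape of the curve, not measurements of anything real. Each cycle of “self-improvement” makes information extraction faster, but it cannot enlarge the pool of information that exists to be extracted.

```python
# Illustrative sketch only: models iterative self-improvement as better
# information extraction against a finite pool of available information.
# All numbers below are assumptions, not estimates of real quantities.

TOTAL_INFORMATION = 1_000_000   # finite pool of extractable information (arbitrary units)
extraction_rate = 1_000         # information extracted per cycle; improves each cycle
improvement_factor = 1.5        # how much each cycle improves the extraction tooling
knowledge = 0                   # accumulated knowledge base

for cycle in range(1, 21):
    remaining = TOTAL_INFORMATION - knowledge
    # A cycle can only extract what is actually left to extract.
    extracted = min(extraction_rate, remaining)
    knowledge += extracted
    # "Self-improvement": better tools/algorithms raise the extraction rate,
    # but they cannot enlarge the pool of information itself.
    extraction_rate *= improvement_factor
    print(f"cycle {cycle:2d}: knowledge = {knowledge:>9,} ({knowledge / TOTAL_INFORMATION:.0%})")
    if knowledge >= TOTAL_INFORMATION:
        break
```

Even with an aggressive improvement factor, the run terminates once the knowledge base catches up with the available information: the “explosion” is front-loaded and then simply stops.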

If we now switch to von Neumann’s notion:

The first use of the concept of a “singularity” in the technological context is attributed to John von Neumann. Stanislaw Ulam reports a discussion with von Neumann “centered on the accelerating progress of technology and changes in the mode of human life, which gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue”.

https://en.wikipedia.org/wiki/Technological_singularity

I tend to prefer von Neumann’s definition of the Technological Singularity, as it’s a more realistic description of the effects of AGI in the real world. When assessing the Technological Singularity from the perspective of von Neumann’s definition, we can state that AGI will be leveraged to pursue two main goals:

1. Control
2. Wants and needs

The Technological Singularity can thus fall into three main categories:

1. AGI establishes control
2. AGI meets wants and needs
3. AGI establishes control and meets wants and needs for a select group

Under each of the above categories, there will come a threshold “beyond which human affairs, as we know them, could not continue”.

Control is typically a pursuit of the military, whereas meeting wants and needs is generally the objective of the commercial sector. That’s not to say there is no overlap, and this is what gives rise to the third category.

The ideal outcome would be a category 2 event; however, given that AGI is more advanced in military circles, we are rapidly approaching either a category 1 or category 3 event.

As such, we must make immediate efforts to arrest a category 1 or 3 event and accelerate all efforts towards a category 2 event.

A category 2 event could itself serve as a catalyst for a category 1 or 3 event, primarily through economic destabilisation caused by mass job losses, and this must be effectively managed through a process of change to the fabric of our societies.

This transition will be swift and has little margin for error.