
The line of thought in this question is interesting. It also addresses and challenges some trending creeds about artificial intelligence expressed in the media, making this an important topic to address on the AI Stack Exchange. To properly address the interesting detail in the question, it is best to answer block by block.

> Is time/space estimation of algorithms required for creating an AGI?

The term AGI (artificial general intelligence) is a symbol for a system of concepts that I've written about on two occasions in a Creative Commons works prior to learning about StackExchange. I can briefly summarize those works by listing the primary three beliefs that are taken as axioms in the system.

1. There is a capability called General Intelligence that humans possess.
2. Human intelligence would be limitless, because it is of the general type, were it not for biological limitations.
3. A computer system can simulate or replicate General Intelligence.

A corollary of these axioms is that a computer system can scale and be repaired, freeing the capacities of general intelligence to grow within new, less DNA-constrained bounds. Screenwriter Jack Paglen astutely investigates one of the scenarios enabled by this corollary in Transcendence, a 2014 film starring Johnny Depp. In it, the terminally ill protagonist Will Caster escapes death via computer upload and then spreads, much as in Stephen King's short story The Lawnmower Man (1975).

Jack Paglen, however, extends beyond the media-hyped Singularity concept based on the above creed. Will Caster downloads himself into both a body he built using nanotechnology and an airborne nano-technical life form. This is an epitome of the AGI system of concepts.

The biological limitations that are part of the details of the second axiom above are roughly correct:

- The ratio of cranial volume to the volume required by a neuron limits the number of neurons.
- Metabolic rates limit the number of pulses a neuron can handle and the speed at which pulses can travel along axons.
- Mortality limits both the duration of learning and the time available for conceptual organization of what has been learned.

All of this sounds quite logical until the root of the AGI system is held up to scientific scrutiny. The most central of the three primary axioms on which the entire system is based, Axiom 1, is a belief. Those who believe it often offer as evidence the array of human achievements. When asked to list them, the list always skews historical events in science, geopolitics, and culture toward the positive. All the gross and recurrent failures are excluded from the list.

The concepts that form the AGI system neglect three realities borne out by any scientific analysis of the full data set, without which an accurate picture of human intelligence is grossly incomplete.

- What eminent psychologist Carl Jung called "a wide array of neuroses" is evident in all expressions of human intelligence.
- The set of circumstances to which humans adapt through intellect is narrow and only marginally proven within the biosphere.[1]
- The set of circumstances to which humans adapt through intellect is minuscule and largely unsuccessful outside the biosphere.

Analyzing further, we find that the reason intelligence has no agreed-upon mathematical definition is that there is no sentence or paragraph defining intelligence that would stand up to even the weakest scientific scrutiny. The scientific rigor applied to the term intelligence is so dilute that anyone schooled in the scientific method would have to admit there is no real evidence that intelligence exists at all.

AGI is credo superbiae, Latin for "creed of pride." One of the primary characteristics of the human brain is its overestimation of itself. Anyone who has managed many projects, gathering estimates from experts, creating a Gantt chart, and then adjusting dates repeatedly because the estimates are always optimistic and never conservative, has seen ample data supporting this fact.

> Given infinite resources/time one could create AGIs by writing code to simulate infinite worlds.
>
> By doing that, in some of the worlds, AGIs would be created. Detecting them would be another issue.

Detecting intelligence in simulations of infinite worlds would be an issue for three reasons, one of which the question author indicated below.

Research resources and time are not infinite.

Although Turing's Imitation Game could be offered as a test for intelligence, given the scrutiny above, it would not be a test for general intelligence: one would need more than infinite resources and time to run through the infinite number of test scenarios that could arise in an infinite number of simulations of infinite worlds.
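The resource point can be made concrete with a toy sketch. The `scenarios` generator and `run_imitation_tests` function below are purely illustrative stand-ins, not any real testing protocol; they only show that a finite test budget exhausts before an unbounded scenario stream does.

```python
import itertools

def scenarios():
    """Unbounded stream standing in for the test scenarios that could
    arise across an infinite number of simulated worlds."""
    for i in itertools.count():
        yield f"scenario-{i}"

def run_imitation_tests(budget):
    """Exhaust a finite test budget against the unbounded stream.

    Any finite budget samples only a vanishing prefix of the stream,
    so such a test can falsify generality but never confirm it."""
    return sum(1 for _ in itertools.islice(scenarios(), budget))

print(run_imitation_tests(1000))  # prints 1000: the budget, never "all"
```

However large the budget, the count returned is the budget itself; the stream is never exhausted, which is the sense in which no finite run establishes generality.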

The creation of infinite worlds only constructs the environment in which an entity's intellectual capabilities can be exercised. Creating an environment does not automatically imply that self-improving life will originate within it, or that the sequence of improvements, if driven forward by some form of genetic evolution, would lead to the passing of any Imitation Game we could create that aligns with the human view of what intelligence is.

> Since we don't have infinite resources the most probable way to create an AGI is to write some bootstrapping code which would reduce the resources/time to reasonable values.

There is currently no bootstrapping strategy that expands resources infinitely in at least three independent spaces, which together require at least six dimensions of infinite range, based on the question's description:

- An infinite world in $\mathbb{R}^3$
- An infinite number of these infinite worlds in $\mathbb{I}^1$
- An infinite number of scenarios required to prove the generality of the intelligence in all possible scenarios in $\mathbb{I}^2$

> In that AGI code (that would make it reasonable to create with finite resources/time) is it required to have a part that deals with time/space estimation of algorithms? Or should that be outside of the code and be something the AGI discovers by itself after it starts running? ... by time/space I mean time/space complexity analysis for algorithms, see: Measures of resource usage and Analysis of algorithms

Let's assume that, through some remarkable computing technique $\mathbb{T}$, an infinite number of test scenarios can be created to test automatons that remarkably arise in an infinite number of infinite worlds. Under those conditions, it would not be necessary to have a part that deals with time/space estimation of algorithms. The entity, whatever its intellectual abilities or however those would be measured, would not need to check its bounds, because its environment is already rendered boundless by the remarkable technique $\mathbb{T}$.

> I think the way I formulated the question might lead people to think that the time/space estimation can only apply to some class of actions called algorithms. To clarify my mistake, I mean the estimation to apply to any action plan. Imagine you are an AGI and you have to make a choice between different sets of actions to pursue your goals. If you had two goals and one of them used less space and less time, then you would always pick it over the other. So time/space estimation is very useful, since intelligence is about efficiency.

The claim that intelligence is about efficiency may be an assumption, but it is nonetheless an interesting idea, since we know that adaptation in the physical world is primarily an energy acquisition and conservation process. That the DNA process of adaptation was augmented, within organism lifespans, by neural learning does not exempt neural learning from this primary driving force. One could argue rationally that intelligence is indicated when a human being states, "Why should I go through that trouble? What can I expect as a return on that investment of time, energy, and other resources, and why should I expect it?"

> There is at least one exception though. Imagine, in the example before, that the goal of the AGI is to pick the set of actions that leads to the most expensive time/space set of actions (or any non-minimal time/space cost); then, obviously, because of the goal constraint, you would pick the most time/space expensive set of actions. In most other cases, though, you would just pick the most time/space efficient algorithm.

One could argue rationally that pursuit of inefficiency is unintelligent.
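The plan-selection reasoning above can be sketched in code. This is a minimal illustration, not a component of any real AGI: the `Plan` type, its cost fields, and the plans themselves are hypothetical, and the complexity estimates are assumed to be given rather than derived, which is exactly the part the question leaves open.

```python
from dataclasses import dataclass

@dataclass
class Plan:
    """A candidate set of actions with estimated resource costs.

    The cost fields are assumed to come from some prior time/space
    complexity estimate; producing such estimates is the open problem."""
    name: str
    est_time: float   # estimated time cost (arbitrary units)
    est_space: float  # estimated space cost (arbitrary units)

def cost(plan: Plan) -> float:
    # Combine time and space into one scalar; any monotone
    # combination serves the argument being made here.
    return plan.est_time + plan.est_space

def choose(plans, maximize_cost=False):
    """Pick the cheapest plan, unless the goal itself demands the
    most expensive one (the exception noted in the quote above)."""
    return max(plans, key=cost) if maximize_cost else min(plans, key=cost)

plans = [Plan("walk", est_time=10.0, est_space=1.0),
         Plan("drive", est_time=2.0, est_space=3.0)]

print(choose(plans).name)                      # prints "drive" (cheapest)
print(choose(plans, maximize_cost=True).name)  # prints "walk" (inverted goal)
```

The inverted-goal branch is the degenerate case discussed above: when the goal constraint itself rewards waste, maximizing cost is the "correct" choice, which is why one could call the pursuit of inefficiency unintelligent rather than impossible.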

All of this latter thinking is solid, but the earlier approach to building the AGI appears to have four caveats.

- The absence of bootstrapping that runs six dimensions of infinite iteration on finite computing resources in finite time
- The lack of a scientifically valid definition of intelligence, or even a proof that intelligence as humans tend to think of it truly exists
- The lack of strong evidence that life will reliably arise in the created worlds
- The lack of strong evidence that life will reliably develop the equivalent of a nervous system, or any secondary system of evolution that operates within a living entity's operational duration

The problem is in the infinity.

If one acknowledges that working definitions of intelligence will only ever describe a relative set of one or more intelligence features[2], and that none of them can be infinite or unbounded, then the rest of this line of thinking is plausible.

Footnotes

[1] The adaptive capabilities of humans (of which the capabilities commonly deemed features of intelligence are only a part) have been tested for only 195,000 to 460,000 years, compared to the notable adaptability of bacteria over at least 4,200,000,000 years, and possibly all the way back to the formation of Earth. Vladimir Vernadski, who made the term Biosphere famous through the title of his book, questioned whether life might have come to Earth from elsewhere, pointing out that life is a geological phenomenon and that the elements of life are found throughout the galaxy. 195,000 years is the back-dating of the oldest clear archaeological evidence of modern humans; 460,000 years is the back-dating of the Irhoud skulls, which may have exhibited modern intelligence.

[2] I've suggested in other Q&A here that there must be at least 22 dimensions to intelligent behavior for genetic and statistical reasons.