hi all, earlier this year released a new theory of AGI on this blog. it got substantial/ gratifying hits and am still pursuing it. aligned with this work, did a massive survey of existing state-of-the-art AGI leads. my initial idea was to try to summarize/ survey the different approaches. still have that in mind but its an almost herculean task and too much to bite off at this moment. this blog gets respectable hits but the audience is very spread out and not vocal/ participatory, dampening some of my energy for that high-effort project currently (but also not ruling it out).

however, this is a massive step in that direction, just painstakingly collecting this large ~180 link set/ collection of top leads.

much of this was found via the MIT AGI slack channel. its like trying to keep up with a firehose, but its very lively and cutting edge, also with tons of noise. as an expression used in this blog on various occasions goes, not for the fainthearted!

in compiling this its striking to me how both/ simultaneously brilliant and obscure some of these approaches are. some seem to me to be very much getting at the heart of AGI (and realized they are closely aligned with my own) but like my own audience, there is a lot of scattering. so far there is fairly low coalescing/ coalescence of groups around common themes/ consensus. my feeling is this disconnection may fall dramatically in the coming years esp with widely known/ publicized breakthrough(s) that drive the currently somewhat meandering herd down much more specific directions. it will be challenging-to-difficult but not inconceivable; exactly that happened on a substantial scale with deep learning within the last few years.

while it may seem overwhelming/ insurmountable at times, in some ways the AGI problem purely reduces to an architecture/ coordination problem, aka engineering. and notice some groups are arriving at the same answer from different directions (mainly psychology, (neuro)biology, machine learning, statistics/ data science/ big data, education/ learning theory, robotics, game AI, etc), with different languages/ vocabularies/ terminologies/ paradigms that are showing some/ early signs of converging/ convergence.

with new technologies, its all about “traction + momentum”. within the next few years, am expecting some major strategy/ consensus to emerge that builds on deep learning and gives rise to a plausible path/ route to AGI. have already outlined it myself, and think my ideas are close to the “secret sauce”, but my influence is low. fully expect nearly the exact same ideas to gain major traction but when espoused by some other leading light/ monolithic authority in the field, either an organization or individual or some combination of the two. it will likely be in the form of some step from the following ideas toward the more specific/ “laser-focused” direction.

odds are if there is some major AGI theory circulating at the present time, its pointed to in these refs, the well-known and not-so-well-known. and boldly going out on a limb with a crystal ball, furthermore think odds are strong that a “correct/ viable” AGI theory is in the not-too-distant future/ intermediate horizon and that the seeds will be contained (“holographic like”) in at least some refs cited here, maybe even many.

my own ideas are very close to the refs [a2-a5] and esp [a2] was a bit eyepopping for me, it basically is research exactly along the lines of one of my open conjectures from a year before: conquering game AI with curiosity-driven algorithms. Oudeyer [a3] has been working steadily in the field for many years and has very similar ideas. to me curiosity is a key catchphrase and suspect it will not be long before other researchers realize that its inherently tightly coupled with novelty detection/ seeking.
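the curiosity ↔ novelty coupling can be sketched in a few lines of code: treat novelty as an intrinsic reward that shrinks as a state gets revisited, and have the agent greedily seek its most novel neighbor. a minimal toy sketch (not from any of the cited refs; the ring environment and the 1/(1+count) bonus are illustrative assumptions, the tabular stand-in for the prediction-error curiosity signals used in the deep RL versions):

```python
import random
from collections import defaultdict

def novelty_bonus(counts, state):
    # intrinsic reward: states seen less often are more "novel"
    return 1.0 / (1 + counts[state])

def curious_walk(n_steps=200, n_states=10, seed=0):
    """greedy novelty-seeking walk on a ring of states: at each step the
    agent moves to whichever neighbor it has visited least often."""
    rng = random.Random(seed)
    counts = defaultdict(int)
    visited = set()
    state = 0
    for _ in range(n_steps):
        counts[state] += 1
        visited.add(state)
        neighbors = [(state - 1) % n_states, (state + 1) % n_states]
        # prefer the more novel neighbor; break ties randomly
        state = max(neighbors, key=lambda s: (novelty_bonus(counts, s), rng.random()))
    return visited, counts

visited, counts = curious_walk()
print(len(visited))  # → 10, the novelty drive alone sweeps the whole ring
```

nothing here rewards the agent for anything except seeking what it hasnt seen, yet it ends up systematically exploring the entire state space, which is the basic intuition behind curiosity-driven game AI.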

so my own ideas are very much aligned with all the following and combine many/ most in a yet distinct/ unique/ novel way. it was energizing for me to realize that.

for me its that old philosophical conundrum/ near (western) zen question: if correct AGI theory is outlined in a blog on the internet somewhere but nobody hears it, does it make any noise?

another category [f] was thinking of putting in another blog, but decided to move into this one. there is so much consternation about AI safety/ bias/ ethics/ philosophy right now, have mixed feelings on all that, on one hand its great to see. in some ways it looks to me like this is more energy focused on the implications of AGI than AGI research itself, aka some major “jumping the gun” or “putting the cart before the horse” or “tail wagging the dog” going on. at times it seems there is much more angst, handwringing, teeth gnashing and hair pulling than actual engineering going on? AGI is just such a humanity-changing technology and centuries of ideas/ speculations on the subject are starting to come to a head lately. they tie in strongly at times with bigger issues/ social anxieties such as imbalances in our economic/ (hyper)capitalist systems. even legendary powerbroker Kissinger gets in on the act in [f38]! which reminds me of a recent striking quote by Putin:

Artificial intelligence is the future, not only for Russia, but for all humankind. It comes with colossal opportunities, but also threats that are difficult to predict. Whoever becomes the leader in this sphere will become the ruler of the world. —Putin

its notable to me that there are early glimmers of theories that are finally starting to marry complex biology and computer science/ engr wrt this problem. some of the pieces of the puzzle are being assembled.

another very notable shift was in what might be called “deep learning skepticism”.[c] this has always existed but the limitations of the method esp wrt AGI are starting to come into sharper focus. some of the leading deep learning researchers eg Hinton really got into deep learning ultimately pursuing the “big picture”/ AGI, whereas others, say LeCun (having suffered through at least one “AI winter”), are more content with its current more incremental directions and pushed back very hard recently against criticism as premature and bordering on dangerous (because it could spook/ scare investors etc as has happened in the past).

this is leading to a very lively, at times fiery debate (to say the least) and a mass pause/ reconsideration/ reevaluation/ even “soul searching” in the field. feel the skeptics/ iconoclasts serve a very valuable role, even while not proposing a constructive alternative, of pointing out in a sense that the emperor has no clothes. it seems there are now billions of dollars chasing AGI without a lot of focus, and yeah, one might say, “everyone needs to get their act together”. it helps when someone like Hinton himself is familiar with kuhnian paradigm shift theory and realizes/ announces that current methods fall short. this incredible quote by Hinton as cited by Marcus contains the wisdom of ages! another striking/ extraordinary quote by grandfather-of-the-field McCarthy showed up recently as discovered on reddit.[a15.3]

“Max Planck said, ‘Science progresses one funeral at a time.’ The future depends on some graduate student who is deeply suspicious of everything I have said.” —Hinton

“If it takes 200 years to achieve artificial intelligence and then finally there’s a textbook that explains how it’s done, the hardest part of that textbook to write will be the part that explains why people didn’t think of it 200 years ago.” —McCarthy