Artificial intelligence has taken a big leap forward: two roboticists, Hod Lipson of Cornell University and Juan Cristóbal Zagal of the University of Chile, Santiago, have created what they claim is the first robot to possess “metacognition”: a form of self-awareness that involves the ability to observe one’s own thought processes and alter one’s behavior accordingly.

The starfish-like robot (which has but four legs) accomplished this mind-like feat by virtue of having two brains, loosely analogous to the two hemispheres (left and right*) of the human brain. This dual architecture proved the key to the automaton’s adaptability in a dynamic, unpredictable environment.

The double bot brain was engineered so that one ‘controller’ (i.e., one brain) was “rewarded” for pursuing blue dots of light moving in random circular patterns and for avoiding moving red dots. The second brain, meanwhile, modeled how well the first brain achieved its goal.

But then, to determine whether the bot had adaptive self-awareness, the researchers reversed the rules of the first brain’s mission (red dots pursued, blue dots avoided). The second brain adapted to this change by filtering the sensory data so that red dots seemed blue and blue dots seemed red; the robot, in effect, reflected on its own “thoughts” about the world and, fairly rapidly, modified its behavior (via the second brain) to reflect the new reality.
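The reversal trick is easier to see in code. Below is a minimal, hypothetical sketch (toy Python of my own devising, not the researchers’ actual controllers): the first “brain” is hard-wired to chase blue and avoid red, while the second brain only watches the reward stream and, when performance collapses, compensates by relabeling the colors the first brain sees.

```python
def brain1_action(color):
    """First brain: hard-wired policy -- pursue 'blue', avoid 'red'."""
    return "pursue" if color == "blue" else "avoid"

def reward(action, color, pursue_color):
    """Environment: +1 for the correct response under the current rules."""
    correct = "pursue" if color == pursue_color else "avoid"
    return 1 if action == correct else -1

class Brain2:
    """Second brain: models how well brain 1 is doing. If recent rewards
    turn negative, it flips a sensory filter so red looks blue (and
    vice versa) to brain 1 -- without modifying brain 1 itself."""
    def __init__(self, window=5):
        self.window = window
        self.swap = False
        self.recent = []

    def filter(self, color):
        if self.swap:
            return "red" if color == "blue" else "blue"
        return color

    def observe(self, r):
        self.recent.append(r)
        if len(self.recent) == self.window:
            if sum(self.recent) < 0:
                self.swap = not self.swap   # re-map the senses
            self.recent = []

brain2 = Brain2()
rewards = []
for step in range(40):
    pursue_color = "blue" if step < 20 else "red"   # rules reversed at step 20
    color = "blue" if step % 2 == 0 else "red"      # alternating stimuli
    action = brain1_action(brain2.filter(color))
    r = reward(action, color, pursue_color)
    brain2.observe(r)
    rewards.append(r)
```

After the rule reversal at step 20, the reward briefly collapses; once brain 2 flips its filter, performance recovers even though brain 1’s policy never changed.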

This achievement represents a significant advance over earlier AI successes in which a robot was able to model its own body plan and movements in its computer brain, make “guesses” as to which of its randomly generated body-plan models best explained its actual movement, and then eliminate all the unsuccessful models, thus exhibiting an “analogue” form of natural selection (see Bongard, Zykov & Lipson, 2006).**
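That model-elimination loop can be illustrated with a toy example (hypothetical Python; the real work evolved full physical body models, which I’ve simplified here to one-line functions):

```python
# The robot's real (unknown-to-it) dynamics: motor command -> observed result.
true_body = lambda command: 2 * command + 1

# A population of candidate self-models (guesses at the body plan),
# here simple linear functions a*command + b.
candidates = {f"model_a{a}_b{b}": (lambda c, a=a, b=b: a * c + b)
              for a in range(4) for b in range(4)}

for command in (0, 1, 2):            # try a few motor commands
    observed = true_body(command)    # sense what actually happened
    # "Analogue natural selection": eliminate every model whose
    # prediction disagrees with the observation.
    candidates = {name: model for name, model in candidates.items()
                  if model(command) == observed}

# Only the model that matches the true body plan survives.
```

After three commands, the sixteen initial guesses have been winnowed to the single model that behaves like the robot’s actual body on every tested command.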

The team is already moving beyond this apparent metacognition stage and is attempting to enable a robot to develop what’s known as a ‘theory of mind’: the ability to “know” and predict what another person (or robot) is thinking. In an early experiment, the team had one robot observe another moving in a semi-erratic manner (a spiral pattern) toward a light source. After a short while, the observer bot could predict the other’s movement so well that it was able to “lay a trap” for it.
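At bottom, that trap-laying is trajectory extrapolation. Here is a hypothetical sketch in Python (the spiral parameters and the simple estimator are my own assumptions, not details of the experiment): the observer watches the spiraling target for a few steps, estimates its angular speed and radial growth, and places a “trap” at the extrapolated future position.

```python
import math

def target_pos(t, omega=0.3, r0=1.0, growth=0.05):
    """The observed robot: spirals outward over time (a toy stand-in
    for the 'semi-erratic' spiral motion toward the light source)."""
    r = r0 + growth * t
    return (r * math.cos(omega * t), r * math.sin(omega * t))

# The observer watches the first ten time steps...
history = [target_pos(t) for t in range(10)]
radii = [math.hypot(x, y) for x, y in history]
angles = [math.atan2(y, x) for x, y in history]

# ...and estimates per-step radial growth and angular velocity.
growth_est = (radii[-1] - radii[0]) / (len(history) - 1)
omega_est = (angles[-1] - angles[0]) / (len(history) - 1)

# "Lay a trap": extrapolate where the target will be at t = 20.
t_future = 20
r_pred = radii[0] + growth_est * t_future
trap = (r_pred * math.cos(omega_est * t_future),
        r_pred * math.sin(omega_est * t_future))
```

Note that nothing here requires modeling the target’s “mind”, only its kinematics, which is precisely the critic’s point in the next paragraph.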

Lipson believes this to be a form of “mind reading”. However, a critic might argue that this is more movement reading than mind reading, and that it remains to be proven that the observer bot has any understanding of the other’s “mind”. A behavior (such as the second bot trapping the first) might simulate some form of awareness of another’s thought process, but can we say for sure that this is what is really happening?

One thing that might lend credence to this claim would be if the observer bot had a language capacity that allowed it to express its awareness, or ‘theory of mind’. Nearly two decades ago, the pioneering cognitive biologists Maturana and Varela posited: “Language is the sine qua non of that experience called mind.”

And achieving such a “languaging” capacity is not out of the question; a few years ago, a team of European roboticists created a community of robots that not only learned language but soon learned to invent new words and to share them with the other robots in the community (see the work of Luc Steels, of the Vrije Universiteit Brussel and the SONY Computer Science Laboratory in Paris).

It is conceivable that a similarly equipped robot — also possessing the two-brain structure of Lipson’s robots — could observe itself thinking about thinking, and express this awareness through its own (meta) language. Hopefully, we will be able to understand what it is trying to express when and if it does.

In a recent SciAm article on this topic, Lipson stated:

“Our holy grail is to give machines the same kind of self-awareness capabilities that humans have.”

One other question remains, then: will the robot develop a more complex simulation/awareness of itself and the world as it learns and interacts with that world, as we do?

The four-legged robot also exhibited another curious behavior: when one of its legs was removed (so that it had to relearn to walk), it seemed to show signs of what is known as phantom limb syndrome, the sensation that one still has a limb though it is in fact missing (common in people who have lost limbs in war or accidents). In humans, this syndrome represents a form of mental aberration or neurosis (perhaps even a hallucination). A robot acting in this way, holding a false notion of itself, may give scientists and AI engineers a glimpse into robot mental illness.

A robot with a mental illness or neurosis? Yes, this seems entirely plausible, given the following three propositions:

1] Neurosis is accompanied by (and is perhaps a function of) acute self-awareness; the more self-aware one becomes, the more potentially neurotic.

2] Robots with advanced heuristics (enabled by multiple brains, self-simulators, and sensor inputs) will inevitably develop advanced self-awareness, and thus a greater potential for 1] above.

3] There is an ancient, magickal maxim: like begets like. The creator is in the created (in Biblical terms: “God made man in his own image.”).

Mayhaps the ‘Age of Spiritual Machines’ could become an ‘Age of Neurotic Machines’ (or Psychotic Machines, depending on your view of humans), too. So then, if this be the fate of I, Robot, let’s do our would-be droid drudges a favor and engineer a robo-shrink or, at least, a good self-help program…and a love for Beethoven.

A Question and an Invitation to my readers:

How much closer to the Singularity (see: Vinge, Good, Kurzweil, etc.), the hypothesized point of “runaway” robots (or technology), does this achievement bring us? I want to hear what you think!

* According to Paul MacLean’s triune brain model, the human brain is a composite of three brains: the neocortex, the limbic system (the old mammalian brain), and the R-complex (the reptilian brain), with the most recent of these, the neocortex, divided into two hemispheres (the split famously studied by Sperry et al.) and adjoined by an additional motor brain, the cerebellum.



** Resilient Robot Hobbles Along, Even if Injured

For a look at a fascinating advance in robotic self-creation/replication, check out the GOLEM project.

top image: courtesy of Victor Zykov, Cornell University

second photo: Humanrobo; CC BY-SA 3.0

third photo: KUKA Roboter GmbH.

bottom photo: public domain