A recent study by neuroscientists offers clues about how increasingly difficult tasks drive the evolution of more complex brains.

They created a Tetris-like video game in which programmed artificial adaptive agents (“animats”) have to “catch” moving blocks of different sizes before the blocks reach the bottom (in a version for humans, that might be done by pressing the left and right cursor keys).

To play the game, the animats have been endowed with a rudimentary neural system made up of eight nodes: two sensors, two motors, and four internal computers that coordinate sensation, movement and memory.

The computers run code that makes up the animats’ “DNA”: it determines the wiring between the parts of the “brain” and also allows for random mutations, some of which make the animats better block-catchers.
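The study’s actual genome encoding (a “Markov brain”) is more elaborate, but the basic idea of a genome that wires up a fixed set of nodes and mutates at random can be sketched in a few lines of Python. The names and parameters here are illustrative, not the study’s:

```python
import random

NODES = 8  # 2 sensors, 2 motors, 4 internal nodes, as described above

def random_genome(n_nodes=NODES, p=0.3, rng=random):
    """A toy 'DNA': an adjacency matrix saying which node feeds which."""
    return [[1 if rng.random() < p else 0 for _ in range(n_nodes)]
            for _ in range(n_nodes)]

def mutate(genome, rate=0.02, rng=random):
    """Flip a few wires at random -- the analogue of random mutation."""
    return [[bit ^ 1 if rng.random() < rate else bit for bit in row]
            for row in genome]
```

In this sketch the genome is nothing but the wiring diagram; in the study it also encodes the logic each internal node computes.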

The neuroscientists, at the University of Wisconsin-Madison and Michigan State University, watched as the animats played, generation after generation, learning to detect where the falling blocks would land.

Then the scientists selected the best players of each generation and allowed them to replicate.
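That select-and-replicate step amounts to truncation selection with mutation. A minimal, hypothetical sketch of the loop (not the study’s actual code):

```python
import random

def evolve(population, fitness, generations, mutate):
    """Keep the best players of each generation and let them
    replicate with mutation -- a toy version of the experiment."""
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        elite = population[: len(population) // 2]   # best half survive
        population = elite + [mutate(g) for g in elite]  # mutated offspring
    return population
```

Because the elite are carried over unchanged, the best fitness in the population can never decrease from one generation to the next.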

(If that sounds familiar, you may have watched emergent patterns such as gliders and spaceships in John Conway’s addictive cellular automaton, the “Game of Life.”)

Some animats played simpler versions of the game, while others repeatedly played increasingly complex versions. After 60,000 generations, all of the animats had evolved more complex wiring in their neural networks, but those that did well in the harder versions of the game had developed particularly intricate networks.

“This shows that by adapting to a more complex environment, the organism itself becomes more complex,’’ says Larissa Albantakis, the study’s lead author.

How the hand ax helped evolve intelligence

In a new book, How to Fly a Horse, entrepreneur Kevin Ashton proposes that the evolution of human intelligence started with the hand ax, which removed the need for big teeth and was superior to them. As he explains in this excerpt, “smaller teeth and weaker jaws begat big biological benefits: they left space in the skull for more brain cells, and changed the weight and balance of the head so that it became easier to stand erect. Hand axes changed our bodies, and also the course of human evolution. They are the reason we became brainy bipeds.” And able to handle increasing levels of information complexity.

More complexity in the environment required the animats to develop more neural functions. But because their brains were limited to eight nodes, the animats adapted to complexity by creating more integration among the nodes.

Neuroscientists have proposed this as a strategy for brain evolution.

“In principle, integration in the brain is not necessary if the brain could just keep growing indefinitely, but in reality, there is an energetic cost to big brains. Integrated neural networks are just more economic, because they can implement the same number of functions with fewer nodes,’’ Albantakis explains.

The Integrated Information Theory of consciousness



However, Albantakis says her study focused on how brains evolve in environments of different complexity, and on whether that evolution matches the predictions of Integrated Information Theory (IIT).

The animats were a simplified system for studying integration in the brain, albeit one that strains the ability of computers, given the need to analyze 60,000 generations of neural connections.

Albantakis is a postdoctoral researcher in the laboratory of the study’s senior author Giulio Tononi, professor of psychiatry in the UW School of Medicine and Public Health, who has proposed IIT as a comprehensive theory of consciousness.

According to the theory, consciousness reflects a system’s capacity for information integration (quantified by a measure of complexity called Phi). The theory accounts for many experimental facts about consciousness and the brain, has led to testable predictions, and permits inferences and extrapolations.
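Computing Phi exactly is expensive, and the toy sketch below is only a rough classical analogy, not the study’s measure: it compares how much the nodes’ states “say” together versus separately. This quantity is known as total correlation, and is far simpler than Phi:

```python
from collections import Counter
from math import log2

def entropy(samples):
    """Shannon entropy (in bits) of a list of observed states."""
    n = len(samples)
    return -sum((c / n) * log2(c / n) for c in Counter(samples).values())

def total_correlation(states):
    """Sum of per-node entropies minus whole-system entropy: how much
    more the nodes carry together than apart. A crude stand-in for
    integration -- NOT the Phi of Integrated Information Theory."""
    parts = sum(entropy([s[i] for s in states]) for i in range(len(states[0])))
    return parts - entropy([tuple(s) for s in states])
```

Two nodes whose states always agree score one bit of total correlation; two independent nodes score zero. Phi goes further by asking how the system’s causal structure changes under its minimum partition, which is why it is so costly to compute.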

The current study looks at how such systems with high Phi evolve. It found that over thousands of generations, the animats learn a larger number of concepts about the game and integrate more of the information, but that their learning and integration depend on being presented with a more complex environment, for example, a more difficult level of the game.

“This shows that a rich environment is a driving force towards developing both complexity and integration,” Albantakis says.

The open-access study was published in the online journal PLoS Computational Biology. Eminent neuroscientist Christof Koch of the Allen Institute for Brain Science was also an author.

The study was supported by the Templeton World Charities Foundation, the Paul G. Allen Family Foundation, the G. Harold and Leila Y. Mathers Charitable Foundation, and a federal DARPA grant on physical intelligence.

UWMedicine | The animats “catch” falling blocks in a game that resembles an old video game.

Abstract of Evolution of integrated causal structures in animats exposed to environments of increasing complexity

Natural selection favors the evolution of brains that can capture fitness-relevant features of the environment’s causal structure. We investigated the evolution of small, adaptive logic-gate networks (“animats”) in task environments where falling blocks of different sizes have to be caught or avoided in a ‘Tetris-like’ game. Solving these tasks requires the integration of sensor inputs and memory. Evolved networks were evaluated using measures of information integration, including the number of evolved concepts and the total amount of integrated conceptual information. The results show that, over the course of the animats’ adaptation, i) the number of concepts grows; ii) integrated conceptual information increases; iii) this increase depends on the complexity of the environment, especially on the requirement for sequential memory. These results suggest that the need to capture the causal structure of a rich environment, given limited sensors and internal mechanisms, is an important driving force for organisms to develop highly integrated networks (“brains”) with many concepts, leading to an increase in their internal complexity.