1. Introduction

The last 50 years of research in Artificial Intelligence have taught us many things, but perhaps the most obvious lesson is that designing complex cognitive systems is extremely hard. Notwithstanding the success of chess-playing algorithms and self-driving cars, designing a brain that rivals the performance of even the smallest vertebrate has proven elusive. While the computational algorithms being deployed today on the aforementioned problems (as well as on image classification via convolutional nets) are impressive, many researchers are convinced that none of these algorithms are cognitive in the sense that they display situational understanding. For example, the celebrated convolutional nets can easily be fooled [1] with trivial imagery, suggesting that they implement a sophisticated look-up table after all, with very little understanding.

The failure of the design approach has been acknowledged by several groups of researchers that have chosen an entirely different approach, namely to use the power of evolution to create machine intelligence. This field of “neuro-evolution” [2,3] is much less developed than the standard design approach, but it has made great strides in the last decade. It also has the advantage over the design approach that evolution is known to have produced human-level intelligence at least once. In the field of neuro-evolution, a Genetic Algorithm (GA) [4] is used to evolve a program that, when executed, builds a “neuro-controller”. This neuro-controller constitutes the brain of a simulated entity, which is called an agent or animat.

Each program is evaluated via the performance of the agent, and programs that gave rise to successful brains are then replicated and given proportionally more offspring programs than unsuccessful programs. Because mutations are introduced in the replication phase, new types of programs appear every generation, trying out variations of the programs, and therefore variations of the brains. This algorithm, closely modeled on the Darwinian process that has given rise to all the biocomplexity on the planet today, has proven to be a powerful tool that can create neuro-controllers for a diverse set of tasks.
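The generational loop just described can be sketched in a few lines. The following is a minimal, hypothetical illustration (genome encoding, population size, mutation rate, and the placeholder fitness function are all assumptions for demonstration, not the evaluation used in this work, where fitness would come from an agent's task performance):

```python
import random

random.seed(0)  # fixed seed so the sketch is reproducible

GENOME_LENGTH = 16
POP_SIZE = 50
MUTATION_RATE = 0.05

def random_genome():
    return [random.randint(0, 1) for _ in range(GENOME_LENGTH)]

def fitness(genome):
    # Placeholder evaluation: in neuro-evolution this would build the
    # neuro-controller from the program and score the agent's behavior.
    return sum(genome)

def mutate(genome):
    # Mutations are introduced during replication, flipping bits at random.
    return [(1 - g) if random.random() < MUTATION_RATE else g for g in genome]

def next_generation(population):
    scores = [fitness(g) for g in population]
    # Fitness-proportional (roulette-wheel) selection: successful programs
    # receive proportionally more offspring than unsuccessful ones.
    parents = random.choices(population,
                             weights=[s + 1e-9 for s in scores],
                             k=POP_SIZE)
    return [mutate(p) for p in parents]

population = [random_genome() for _ in range(POP_SIZE)]
for generation in range(100):
    population = next_generation(population)

best = max(population, key=fitness)
```

After a hundred generations of selection and mutation, the best genome in the population will typically score far above a random one on this toy objective.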

Using evolution to create brains is no panacea, though. The use of Genetic Algorithms to optimize performance (fitness) of behavior controllers is often hindered by the structure of complex fitness landscapes, which are typically rugged and contain multiple peaks. The GA, via its fitness maximization objective, will discover local peaks but may get stuck at sub-optimal peaks because crossing valleys is specifically not an objective. For a population to overcome the valleys in such a rugged landscape, programs must (at least temporarily) acquire deleterious mutations that are at odds with a simple reward system for optimization. This difficulty is typically overcome by increasing diversity in the population [5], by splitting the population into islands [6,7], by using alternative objectives such as in novelty search [8], or by changing (and thus optimizing) the fitness function itself. Of these solutions, “Diversity” and “Novelty” can be computationally intensive, while fitness function optimization is very specific to every problem, and thus not a general solution.

Here we propose an alternative approach to fitness optimization in the evolution of cognitive controllers, which takes advantage of the insight that functioning brains have a certain number of characteristics that reflect their network structure, as well as their information-processing capacity. If we were able to reward these features at the same time as rewarding performance on the given task, it might be possible to evade the valleys of the landscape and move along neutral ridges towards higher peaks. The idea of using multiple objectives in Genetic Algorithms is not at all new [9], and it has been used previously in neuro-evolution [10].
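One simple way to reward such features alongside task performance is to fold both into a single selection score. The sketch below is purely illustrative: both scoring functions are hypothetical stand-ins (the agent fields, the density measure, and the weight `alpha` are all assumptions for demonstration), not the measures used in this work.

```python
def task_fitness(agent):
    # Stand-in: fraction of task trials the agent solved.
    return agent["trials_solved"] / agent["trials_total"]

def network_density(agent):
    # Stand-in neuro-correlate: connection density of the controller,
    # a structural property that is agnostic of task performance.
    return agent["edges"] / max(1, agent["nodes"] * (agent["nodes"] - 1))

def selection_score(agent, alpha=0.8):
    # Convex combination of the two objectives: rewarding the correlate
    # alongside task fitness opens a second dimension for selection,
    # which may let lineages cross fitness valleys on neutral ridges.
    return alpha * task_fitness(agent) + (1 - alpha) * network_density(agent)

agent = {"trials_solved": 30, "trials_total": 40,
         "nodes": 8, "edges": 14}
score = selection_score(agent)
```

A weighted sum is only one way to combine objectives; Pareto-based multi-objective schemes [9] avoid choosing a weight at all, at the cost of a more involved selection procedure.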

We present evidence from simulations of evolving virtual agents that establishes that it is possible to improve the performance of a GA, increase the rate of adaptation, and improve the performance of the final evolved solution, all by incorporating neuro-correlates of the evolving agents’ brains into the fitness calculation. These neuro-correlates are metrics inspired by the interface between Network Science and Cognitive Science that attempt to measure “how well a brain is working”, independently of the achieved fitness. These measures typically do not assess an agent’s performance of a task because it is often difficult to relate task performance to cognitive ability. Ideally, these neuro-correlates quantify either how information is being processed, or how the nodes of the network are connected. It is important that these neuro-correlates are agnostic of performance, as otherwise their reward would not open a new dimension in optimization.