Artificial intelligence (AI) is a grand challenge for computer science. Lifetimes of effort and billions of dollars have powered its pursuit. Yet, today its most ambitious vision remains unmet: though progress continues, no human-competitive general digital intelligence is within our reach. However, such an elusive goal is exactly what we expect from a “grand challenge”—it’s something that will take astronomical effort over expansive time to achieve—and is likely worth the wait. There are other grand challenges, like curing cancer, achieving 100% renewable energy, or unifying physics. Some fields have entire sets of grand challenges, such as David Hilbert’s 23 unsolved problems in mathematics, which laid down the gauntlet for the entire 20th century. What’s unusual, though, is for there to be a problem whose solution could radically alter our civilization and our understanding of ourselves while being known only to the smallest sliver of researchers. Despite how strangely implausible that sounds, it is precisely the scenario today with the challenge of open-endedness. Almost no one has even heard of this problem, let alone cares about its solution, even though it is among the most fascinating and profound challenges that might actually someday be solved. With this article, we hope to help fix this surprising disconnect. We’ll explain just what this challenge is, its amazing implications if solved, and how to join the quest if we’ve inspired your interest.

To the challenge: the first life form, likely something resembling a single prokaryotic cell, emerged billions of years ago. Though we do not know the full story of its origin, we do know that over the next few billion years, somehow this humble cell radiated into the full diversity of life on Earth we appreciate today. Somewhere along that magnificent road, cells began to feed on sunlight, more complex eukaryotic cells arose, mating first occurred, and the first multicellular organisms began to proliferate (about a billion years ago). Then, about 500 million years later, in a spectacular surge of innovation called the Cambrian Explosion, almost all the major animal groups emerged. As these groups further diversified into the species we know today, plants unfurled in their near-infinite variety across the land. Eventually, land animals evolved from fish, and the planet blossomed with creatures above the sea. All the amphibians, the reptiles, the birds, and the mammals marched into existence, each “animal” actually an aggregate of millions to trillions of eukaryotic cells engaged in an intricate dance—each animal embedded also within a higher-level dance of ecology, of predators, prey, mating, and reproduction. Along the way, fantastic inventions arose, some beyond the capabilities of all of human engineering today—photosynthesis, the flight of birds, and human intelligence itself are but a few such triumphs. This epic story is known as evolution, but such an unassuming word hardly does the awesome achievement justice.


The problem is, “evolution” sounds merely like a plain process, or a physical force like gravity, but the proliferation of life on Earth shows how severely that understates its nature. Evolution better resembles a creative genius unleashed for countless millennia than it does a physical process. It is the greatest inventor of all time. Think about it computationally—evolution on Earth is like a single run of a single algorithm that invented all of nature. If we run a machine learning algorithm today, we’re happy if it gives us a single solution, or maybe a few, but then it’s over—the problem is either solved or not, and the program is finished. But evolution on Earth is amazingly different—it never seems to end. It’s true that “never” is a strong word, but several billion years of continual creativity comes as close to never as we’ll likely ever see. So yes, we call it evolution, but it’s also the never-ending algorithm, a process that tirelessly invents ever-greater complexity and novelty across incomprehensible spans of time.

In fact, there’s another term that captures this notion of a single process that invents astronomical complexity for near-eternity—we call it “open-ended.” The presence of open-endedness in nature is among the greatest mysteries of modern science, though it receives surprisingly little attention. Our textbooks describe evolution as if it is understood or solved, but in reality, while we can fill in a lot of details, its most astounding property—its open-endedness—remains, at best, simply taken for granted. If you look at it from the perspective of a computer scientist, you can easily see that the ultimate explanation for this property would be something deeply profound and powerful, if only we knew it. It is indeed a mystery because, so far, just as with AI, open-endedness has proven impossible to program. We might think we know the ingredients of evolution on Earth, and we might try to formalize them into an algorithm (often called an “evolutionary algorithm”), but, to date, no such algorithm suggests even a hint of the endless, prolific creative potential of natural evolution. While there is a small community of scientists within the research field called artificial life (or “alife,” for short) that has recognized and studied this puzzle for the last two or three decades, open-endedness remains obscure, and interest is confined to a tiny niche of the scientific world. It shouldn’t be that way.

See, something doesn’t make sense here, and when something doesn’t make sense, often there is a paradigm-shattering discovery awaiting somewhere in the shadows. The clue in this case is just how stunted regular evolutionary algorithms (EAs) are—how not open-ended—when compared to what we see in nature. Most EAs run for a short time, maybe a couple of days, and converge to a solution in the ideal case, or otherwise get stuck. Even the most sophisticated EAs, even ones focused on creative divergence, run out of steam pretty fast, eventually exhausting their search space of anything new.

At first, you might think it’s not so surprising that an EA is no match for nature. After all, EAs are just little programs run for a short while to solve a particular problem, just one of many alternatives for machine learning. But given the potential for open-endedness in evolution, EAs, in principle, could have become more than they are. Yet, for some reason, despite many brilliant minds pouring their entire lives into inventing ever-more-powerful EAs and entire conferences dedicated to evolutionary computation, and while EAs have indeed become more capable of solving specific challenging problems, the open-ended inventiveness of nature is still nowhere in sight. Modern EAs simply don’t keep inventing new things you never imagined in perpetuity.

Some machine learning researchers have further suggested that EAs are inferior optimizers and that alternative algorithms (such as deep learning) are simply better suited for optimization. But that kind of critique misses the real issue. Evolution serves as inspiration for algorithms in computer science not because it is a great optimizer for a particular problem, but because it invented all of nature. Nothing in machine learning comes close to that, and the fact that EAs, for the most part, don’t come close either should not be interpreted as a shortcoming of investigating EAs, but rather as a reason we should be investigating them a lot more (just as with AI)—imagine what we are missing out on that might be possible to achieve!

For a moment, imagine if we could actually program a genuine open-ended algorithm. The implications would be extraordinary. Are you interested in new schools of architecture, new car designs, new computer algorithms, new inventions in general? And how about generating them ceaselessly? And with increasing complexity? Endless new forms of music and art, video game worlds that unfold forever without ever getting dull, universes that emerge inside your computer wholly unique and unlike anywhere else—the power of nature is the power of creation, and it’s entirely encapsulated within the mystery of open-endedness. These dreams also sound like some of our aspirations in AI, but if they are part of AI, then they are precisely the area where AI remains unfocused (with so much energy pouring into finding solutions to particular problems). In fact, open-endedness could even be a path to AI, or the path to AI—after all, it was open-ended evolution in nature that designed our intellects the first time. But that’s only one of its many creations. So, open-endedness certainly overlaps with the quest for AI, but it’s a broader quest than only that, a shot at capturing the power of creation inside a machine. The incredible self-generation of nature on Earth is but one infinitesimal slice of what might be possible.

This vision is not a fairytale. Indeed, one of the most exciting aspects of open-endedness as a grand challenge is that it appears eminently achievable. For example, it’s plausible that implementing a generative system (i.e., one that produces a broad variety of artifacts) is easier in many cases than engineering by hand the generated artifacts (such as intelligence) themselves. Put another way, while the multi-trillion-connection brain is a product of natural evolution, it is possible that the process of evolution itself is simpler to describe or implement. So, maybe we can actually figure it out and identify its necessary conditions—maybe even sometime soon.

You might then wonder what the problem is in the first place—if it’s so simple, then why is it still unsolved? One reason is that for chance historical reasons, this particular grand challenge has simply failed to attract any attention, with the consequence that very few people (let alone brilliant minds) even know about it. It lacks the mindshare it deserves. But that’s not the whole explanation. The other piece is that, like many grand challenges, its core solution has proven far more slippery than it initially appeared.

It is now becoming clear that open-endedness, while perhaps simple, involves a kind of mind trick that would force us to reexamine all our assumptions about evolution. The whole story about selection, survival, fitness, competition, adaptation—it’s all very compelling and illuminating for analysis, but it’s a poor fit for synthesis: it doesn’t tell us how to actually write the process as a working open-ended algorithm. To pinpoint the reason we see open-endedness in nature (and hence become able to write an algorithm with analogous power) likely requires a radically different evolutionary narrative than we’re used to. And that’s one reason it’s so compelling—while its program might be simple, a radical new perspective is needed to crack the case. And importantly, evolution itself is only one of many possible realizations of open-endedness. For example, the human brain seems to exhibit its own open-ended creativity, so it’s likely that an open-ended process can emerge from many different substrates, including within deep learning.

To be clear, we are not suggesting a reenactment of nature in all its glory. What would be the point? Nature is already here. Rather, the potential here is to introduce a generic never-ending creative algorithm, in whatever domain you want. The hypothesis is that what we observe in nature is only one instance of a whole class of possible open-ended systems. Of course, some might argue that nature is, for some reason, its only possible realization, but there is no obvious explanation for why that would be, any more than we would conclude that birds are the only possible realization of flight. The components of the process—interaction among individuals (or what machine learning researchers might call “candidates”), selection, reproduction, etc.—appear fundamentally generic. General open-ended algorithms should be possible, which should excite a lot of smart people looking for a challenge with great potential reward. There’s room here yet for an Einstein or a Watson-Crick team to etch their names into scientific history.

A brief history of open-endedness

Because of the inspiration from natural evolution, so far the study of open-ended algorithms has focused primarily on EAs, though it is certainly conceivable that a non-evolutionary process (such as an individual neural network generating new ideas) could exhibit open-ended properties. Nevertheless, because of the historical evolutionary focus, researchers in this area often refer to it as “open-ended evolution.” While technically these are types of evolutionary algorithms, their form and context are often quite different from those seen in the rest of the field of evolutionary computation, so they should not be confused with the more conventional ideas connected to genetic algorithms or EAs. Open-ended evolution is a whole different ballgame.

For one thing, unlike in most of machine learning, when researchers build an open-ended algorithm, they usually aren’t aiming to solve any problem in particular. Rather, they hope to observe a kind of complexity explosion, where evolving artifacts diversify into ever-more intricate and inventive forms. For that reason, much of the early work in the field centered on artificial “worlds” (often called “alife worlds”) where theories could be tested on the kinds of dynamics that might lead to open-endedness. These virtual worlds are usually populated with “creatures” that perform behaviors like running around and eating other creatures.

Some classic examples of alife worlds include Tierra (by Thomas Ray), Avida (with a long history, but originally introduced by Charles Ofria, Christoph Adami, and Titus Brown), Polyworld (by Larry Yaeger), Geb (by Alastair Channon), Division Blocks (by Lee Spector, Jon Klein, and Mark Feinstein), and Evosphere (by Thomas Miconi and Alastair Channon). While these systems span the gamut from evolving abstract computer code to combat and competition between three-dimensional life-like creatures, they generally at least flirt with the hope that complexity and diversity might explode within them. Usually, the expectation is that the creatures in the artificial world might evolve increasingly complex strategies (and sometimes body plans) as they strive to outcompete each other.

In hindsight, the tendency of alife worlds to resemble miniature Earth-like ecosystem simulations may have inadvertently made the field seem more narrow than it really is. At first glance, these alife worlds look like they are trying only to replicate some tiny slice of ecological behavior on Earth through a relatively simple (compared to Earth) simulation. That could explain in part why the field did not attract as much interest as it deserves. But in the context of open-endedness, alife worlds are not really about studying a small ecological interaction. Rather, they are intended as lenses into the profound question of how to trigger an open-ended complexity explosion. And they do not imply that such explosions are exclusive only to Earth-like simulations either—recall that open-endedness is presumably a property of a wide breadth of possible systems. It’s just that early on in the field, the easy analogy between alife worlds and Earth (where open-endedness initially emerged) made them a convenient option for studying such phenomena. As we’ll see, open-endedness can indeed be studied outside alife worlds. That is, the more ambitious hope of the field is to create not a single open-ended world, but open-endedness on demand, a kind of ubiquitous creative platform that can apply to anything.

Aligned with looking beyond one kind of system, and aiming at greater scientific rigor, in 1992 Mark Bedau introduced a set of measurements called “activity statistics” meant to measure the degree of open-endedness exhibited by such systems. With this tool (which was even applied at one point to measuring cultural evolution), researchers were able to show that some alife worlds yielded open-ended dynamics, at least according to activity statistics. For example, Alastair Channon’s world Geb scored well and was said to “pass” the highest bar of the activity statistics test.
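To give a concrete flavor of what such a measure involves, here is a heavily simplified sketch of activity-statistics-style bookkeeping in Python. All names are illustrative, and the real measure also normalizes against a neutral “shadow” model, which is omitted here:

```python
from collections import defaultdict

def activity_statistics(snapshots, threshold=1):
    """A heavily simplified sketch of activity-statistics-style
    bookkeeping. `snapshots` is a sequence of sets: the components
    (e.g., genotypes) present in the population at each time step.
    A component accrues one unit of activity per step it persists."""
    activity = defaultdict(int)  # component -> cumulative activity
    diversity_series, total_activity_series = [], []
    for present in snapshots:
        for component in present:
            activity[component] += 1
        # Diversity: currently present components whose cumulative
        # activity exceeds the significance threshold.
        diversity_series.append(
            sum(1 for c in present if activity[c] > threshold))
        # Total cumulative activity across all components seen so far.
        total_activity_series.append(sum(activity.values()))
    return diversity_series, total_activity_series
```

Roughly speaking, a system whose diversity and total activity keep growing without bound looks more open-ended under this lens than one whose curves flatten out.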

But still, it wasn’t satisfying because, whatever the test might say, it was plain to anyone watching that all these systems were still a far cry from nature. Yes, you could see creatures learning to chase each other or grow faster, but beyond that, little of excitement was immediately apparent. This outcome led to some debate on what open-endedness actually means—do Bedau’s tests somehow miss the real issue? The bar should be high, after all. This debate parallels similar debates in other nascent fields, such as the debate in AI on the definition of intelligence. We have to be careful because we could end up in the quagmire of an endless semantic argument, which would add little value. Illustrating the difficulty of grappling with this issue head on, in 2015, Emily Dolson, Anya Vostinar, and Charles Ofria took the interesting perspective of trying to identify what open-ended evolution is not. Perhaps in that way, we can come closer to circumscribing its essential character. Other historical takes reflecting key factors in the interpretation of open-endedness include Russell Standish’s 2003 emphasis on the production of novelty and Carlo Maley’s 1999 emphasis on increasing complexity.

It’s also important to highlight that there are likely interesting degrees of open-endedness (an idea indeed embraced by Bedau’s tests). That is, open-endedness is not just a binary either/or proposition. While we are more likely to obtain the deepest possible insight by aiming high, as we will see, in practice, even systems and ideas that capture a limited aspect of open-endedness can still serve useful purposes and teach us important lessons on the road to full-scale never-ending innovation.

Part of the problem with measuring or defining open-endedness is that it is possible that its interpretation is, in part, subjective. That is, the appreciation of innovation (which is key to open-endedness) could be relative to a particular viewpoint or context. We are impressed by certain designs, ultimately, because we appreciate their functionality, but if conditions changed in a way that rendered a design less useful (or perhaps even just less satisfying emotionally) then we would be less impressed, even though the design is no different. For example, the ability of a bike to shift gears to go up hills looks like a bona fide innovation only if there are hills in the world and if we actually care about climbing to their peaks, so it is not clear that there can ever be an intrinsic measurement of innovation in the world without some deference to its context. While scientists often seem allergic to any flavor of subjectivity, there may be ways to embrace subjectivity in open-endedness without sacrificing scientific rigor. For example, we can still try to formalize subjective notions such as impressiveness (in work by Ken Stanley and Joel Lehman). The general issue of subjectivity in open-endedness is interesting but subtle, and Stanley has written on it elsewhere at length, but the main point is that measuring open-endedness is a tricky problem, and for us to make progress, we may ultimately need to retreat somewhat from grasping after a foolproof objective measure, just as AI has advanced despite the absence of any consensus measure of intelligence. Perhaps Bedau’s activity statistics will one day be viewed like the Turing Test of open-endedness—an initial inspiring shot at capturing the essence of the problem, but not the final say on all its intricate complexities and idiosyncrasies.

At the same time, some researchers approach the question of open-endedness by attacking aspects of it rather than the whole problem all at once. For example, in an influential Nature paper, Richard Lenski, Charles Ofria, Robert Pennock, and Christoph Adami studied how complex features can arise through unpredictable routes in evolution, illuminating how seemingly intractable complexity can evolve through random mutations. Others have highlighted different aspects of emergent complexity, such as Josh Bongard’s investigation of how morphological change (i.e., a creature’s body changes both over its own lifetime and across generations) can accelerate the evolution of robust behaviors. While such studies do not implement systems aimed at full open-endedness, they hint at factors that might contribute to them in the future.

The scope of research into open-endedness further broadened with the introduction of the novelty search algorithm (by Stanley and Lehman; see the introductory paper and website). In the context of open-endedness, novelty search is particularly notable for disentangling open-endedness from any particular “world” or problem domain—it introduces some of the flavor of open-endedness as a generic algorithm that can be applied to almost anything. To see this point, it’s helpful to contrast it with the more closed process of conventional EAs, which generally push evolution toward a particular desired outcome. In this conventional kind of algorithm, the opportunities for unhampered discovery are limited because selection pressure directly seeks to improve performance with respect to the problem’s objective (which could be to walk as fast as possible). For example, the algorithm might find a stepping stone that could lead to something interesting (such as a precursor to wings), but because that preliminary discovery does not directly increase performance, it is simply discarded. The idea in novelty search is to take precisely the opposite approach—instead of selecting for “improvement,” novelty search selects only for novelty. That is, if a candidate born within an evolutionary algorithm is novel compared to what’s been seen before in the search, then it’s given greater opportunities for further reproduction. In a sense, novelty search is open-ended because it tends to open up new paths of search rather than closing them off.

At first, it may seem that this approach is closely related to random search, and therefore of little use, but actually, it turns out to be much more interesting than that. The difference is that computing the novelty of a candidate in the search space requires real information (which random search would ignore) on how the current candidate differs behaviorally from previous discoveries. So, we might ask how a current robot’s gait is different from its predecessors’ gaits. And if it’s different enough, then it’s considered novel and selected for further evolution. The result is a rapid branching (i.e., divergence) and proliferation of new strategies (e.g., new robot gaits in a walking simulator). In fact, we discovered that this kind of divergent search algorithm actually leads to the evolution of walking from an initial population of biped robots unable to walk! Not only that, but the walking strategies evolved by novelty search were on average significantly superior to those from a conventional attempt to breed the best walkers. So, the open-ended novelty search style of exploration can actually find some pretty interesting and functional solutions, ones also likely to be diverse even in a single run.
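The selection logic just described can be sketched in a few lines of Python. This is a minimal illustrative version, not the authors’ implementation; the behavior characterization, distance measure, neighborhood size `k`, and archive threshold are all assumptions:

```python
import random

def novelty(behavior, others, k=3):
    """Novelty of a behavior = mean distance to its k nearest
    neighbors among all behaviors seen so far (archive + population)."""
    if not others:
        return float('inf')
    distances = sorted(
        sum((a - b) ** 2 for a, b in zip(behavior, other)) ** 0.5
        for other in others)
    nearest = distances[:k]
    return sum(nearest) / len(nearest)

def novelty_search(population, evaluate_behavior, mutate,
                   generations=20, k=3, archive_threshold=0.3):
    """Minimal novelty search: selection rewards behavioral novelty
    only, never objective fitness. `evaluate_behavior` maps an
    individual to its behavior characterization (e.g., a robot's
    final position in a maze)."""
    archive = []
    for _ in range(generations):
        behaviors = [evaluate_behavior(ind) for ind in population]
        scores = [novelty(b, archive + behaviors[:i] + behaviors[i + 1:], k)
                  for i, b in enumerate(behaviors)]
        # Record sufficiently novel behaviors so the search remembers
        # where it has already been.
        archive += [b for b, s in zip(behaviors, scores)
                    if s > archive_threshold]
        # The most novel half of the population reproduces.
        ranked = sorted(zip(scores, population), key=lambda p: p[0],
                        reverse=True)
        parents = [ind for _, ind in ranked[:len(population) // 2]]
        population = parents + [mutate(random.choice(parents))
                                for _ in parents]
    return archive
```

Run on even a toy two-dimensional behavior space, the archive steadily accumulates behaviors that spread away from wherever the search has already been—the divergent dynamic described above.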

After the introduction of novelty search, a new class of algorithms began to appear that aimed to combine a notion of novelty with a more objective sense of progress or quality. For example, you might want to search for all the possible ways a robot can walk, while also discovering the best possible version of each such variant. These algorithms (including novelty search with local competition, or NSLC, and the multi-dimensional archive of phenotypic elites, or MAP-Elites) came to be known as quality diversity (QD) algorithms. One high-impact QD project from Antoine Cully, Jeff Clune, Danesh Tarapore, and Jean-Baptiste Mouret based on MAP-Elites appeared on the cover of Nature—the idea was to evolve a broad repertoire of walking gaits all in one run, and then use that collection of gaits to adapt quickly if the robot becomes damaged. It’s a great example of how open-endedness and QD can shift from being merely theoretical ecosystem simulations to real-world practical contributions to robotics that benefit from the diversity yielded by open-ended search.
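To make the QD idea concrete, here is a minimal MAP-Elites-style sketch. It is illustrative only (real applications involve far richer encodings and behavior descriptors), but it captures the essential mechanism: a grid over behavior space where each cell retains the best solution found with that behavior:

```python
import random

def map_elites(init_genomes, evaluate, mutate, to_cell, iterations=1000):
    """Minimal MAP-Elites sketch. The archive maps each behavior-space
    cell to the highest-fitness solution found with that behavior.
    evaluate(genome) -> (behavior, fitness); to_cell(behavior) -> a
    hashable cell identifier. All names here are illustrative."""
    archive = {}  # cell id -> (fitness, genome)

    def try_insert(genome):
        behavior, fitness = evaluate(genome)
        cell = to_cell(behavior)
        # Keep the new genome only if its cell is empty or it beats
        # the current elite for that behavior.
        if cell not in archive or fitness > archive[cell][0]:
            archive[cell] = (fitness, genome)

    for g in init_genomes:
        try_insert(g)
    for _ in range(iterations):
        # Pick a random elite as a parent and try its mutated child.
        _, parent = random.choice(list(archive.values()))
        try_insert(mutate(parent))
    return archive
```

The result is not a single champion but a whole map of diverse, high-quality solutions—exactly the kind of repertoire the damage-recovery robot drew on.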

While it’s beginning to sound like the problem is already nearly solved, the truth is, these early methods only scratch the surface of an iceberg of mystery. While certainly they mark some progress in our understanding, they are still missing a key ingredient of open-endedness—they are destined to stop producing anything interesting within days of launching, and there’s nothing we can do about it. The reason is that their problem domains contain only a narrow set of possibilities. There are only so many interesting ways you can walk before it just starts being silly. Maze navigation was a popular early application of novelty search, but again, once you’ve exhausted all the places you can go in a maze, there isn’t much left of interest in the domain. It’s true that crazy, jittery-looking trajectories or strategies (of high entropy) can continue to be generated almost forever, but they are not deeply interesting in the most ambitious spirit of true open-endedness. They’re not like a whole new kind of animal (e.g., birds) suddenly appearing. In this way, the mystery is, while we have a good sense of how to push a search process to keep finding what is possible within a search space, we don’t really understand how to concurrently expand the space of possibilities itself. That is, the highest level of open-endedness requires not only generating new solutions, but also new problems to solve.

For example, consider giraffes and trees. Intriguingly, while trees were once themselves a viable new solution to the problem of survival and reproduction, as a side-effect, they also inadvertently created an opportunity for the completely different survival strategy of the giraffe to emerge in the future. In short, you can’t get giraffes if you don’t have trees. The solutions being generated by natural evolution simultaneously create opportunities for entirely new kinds of solutions (to new challenges) to exist in the future. Novelty search and QD only really address finding new solutions to existing challenges open-endedly, but they don’t inherently generate new kinds of challenges to be solved. The most powerful open-ended system would generate both.

This idea of the giraffes interacting with the trees is related to coevolution, which is what happens when different individuals in a population interact with each other while they are evolving—and is what happens in nature. Because it seems that such interactions are important in open-ended systems, work in the field of coevolution (within evolutionary computation) is relevant to open-endedness. Many researchers over the years have contributed to our understanding of coevolutionary systems from a computational point of view. Elena Popovici, Anthony Bucci, Paul Wiegand, and Edwin de Jong provide a comprehensive review of work in this area. While coevolution is certainly relevant to open-endedness, it is also an independent area of study. It explores game-theoretic aspects of competitions, the dynamics of “arms races” among coevolving populations, and what causes such arms races to escalate or stagnate. One interesting recent work from Jorge Gomes, Pedro Mariano, and Anders Lyhne Christensen combines coevolution and novelty search in one system. Past studies of coevolutionary algorithms are likely to receive renewed attention as open-ended algorithms themselves gain more interest, and may also draw interest from those working in deep learning, where some recent high-profile results have strongly benefited from coevolutionary-like dynamics. For example, generative adversarial networks (GANs) profit from an arms race between a generator network and a discriminator network for unsupervised learning, and self-play learning in board games and competitions between reinforcement-learning robots have also produced striking results.

Overall, the field of open-endedness is interdisciplinary, young, and relatively unexplored. Results so far have revealed hints of what’s possible from open-ended systems and uncovered thorny philosophical issues yet to be resolved. Much is left unknown. Indeed, the field’s culture around experimental procedures is still in flux, and efforts continue to define success or its essential conditions—an issue that we dive into next.

The necessary ingredients

We are accumulating general insight—for example, that full-blown open-endedness likely requires the interaction of coevolving entities and that it goes beyond novelty or QD. Such thoughts about what is core to open-endedness motivate attempts to formalize its necessary conditions. What is the minimum set of conditions that must be satisfied for a system to have the hope of exhibiting a high level of open-ended dynamics? The ideal conditions would be both necessary and sufficient, but even just identifying some of the necessary conditions is a useful start. The necessary conditions go beyond simply helping us implement a working model—they also, in effect, hint at the breadth of possible open-ended systems. That is, if the necessary conditions are general or abstract enough, then it is likely that many open-ended systems far different from nature are possible. In contrast, if they are highly specific, such as requiring a simulation of quantum physics, then the breadth of possible open-ended systems is much narrower.

Speculation on a set of necessary conditions goes as far back as 1969, when Conrad Waddington analyzed how individuals interact with environments in “typical” evolutionary systems (original paper is here and a good summary by Tim Taylor is here), focusing on mechanisms that lead to diversity. Much later (in 2004), Tim Taylor actually tried to put Waddington’s hypotheses to the test in a system called Cosmos, and commented on the difficulty of putting theoretical conditions into practice. He later proposed his own conditions (here and here). Lisa Soros, working together with Ken Stanley, recently proposed a set of conditions (with some similarities to previous such proposals) along with a system (an alife world called Chromaria) designed to test those conditions. While there’s yet no final word on the necessary conditions, it’s useful to present an example set to build intuition for how people think about this problem; here are the conditions proposed by Soros and Stanley back in 2014:

Condition 1: A rule should be enforced that individuals must meet some minimal criterion (MC) before they can reproduce, and that criterion must be nontrivial. Corollary: The initial seed (from which evolution begins) must itself meet the MC and thereby be nontrivial enough to satisfy Condition 1.

Condition 2: The evolution of new individuals should create novel opportunities for satisfying the MC.

Condition 3: Decisions about how and where individuals interact with the world should be made by the individuals themselves.

Condition 4: The potential size and complexity of the individuals’ phenotypes should be (in principle) unbounded.

Of course, it is possible that researchers will change their hypotheses over time, and already (for example) Stanley has expanded his view of Condition 3 (he now allows that it may be possible to bypass Condition 3 by permitting individuals to interact with all their counterparts in the world, as long as that is computationally tractable). However, the point here is not to establish dogma, which would be irresponsible and dangerous at this early stage, but rather to provide a taste of the kind of thinking that occurs at this level of abstraction.

Just this year, Jonathan Brant and Stanley introduced a new algorithm called minimal criterion coevolution (MCC) that implements the idea of novel individuals creating new opportunities for each other. Its aim is to facilitate open-endedness in a generic context independent of any particular world or domain (code here). The idea is to evolve two interacting populations whose members earn the right to reproduce by satisfying a minimal criterion with respect to the other population. The algorithm’s ability to continually generate novel and increasingly complex artifacts is demonstrated through mazes and maze solvers. A population of mazes coevolves with maze navigators controlled by neural networks—the mazes increase in complexity while the neural networks evolve to solve them. In effect, new mazes provide new opportunities for innovation to the maze solvers (and vice versa). One interesting aspect of this experiment and its subsequent (unpublished) follow-up is that, given sufficient computational resources, if we let it continue running for a billion years we might return to find mazes the size of the solar system, with robots controlled by neural networks able to solve them. Highlighting that this thought experiment is at the edge of our current understanding, the extent to which such giant mazes and their solutions would satisfy the most ambitious aims of open-endedness remains unclear.
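To make the two-population dynamic concrete, here is a deliberately toy sketch of the minimal-criterion idea—not the published MCC implementation. In this simplification, “mazes” are required turn sequences, “solvers” are candidate paths, and a path “solves” a maze if it begins with the maze’s sequence; the prefix-matching stand-in for navigation, the mutation scheme, and the population cap are all our own illustrative assumptions:

```python
import random

random.seed(1)

def solves(solver, maze):
    # A solver "solves" a maze if its path begins with the maze's turn
    # sequence -- a toy stand-in for real maze navigation.
    return solver[:len(maze)] == maze

def mutate(genome):
    g = list(genome)
    if random.random() < 0.5 and len(g) > 1:
        g[random.randrange(len(g))] ^= 1   # flip one step
    else:
        g.append(random.randint(0, 1))     # genomes may grow (Condition 4)
    return g

def step(mazes, solvers, cap=25):
    """One MCC-style generation: a child earns a slot only by meeting the
    minimal criterion with respect to the OTHER population."""
    for m in list(mazes):
        child = mutate(m)
        if any(solves(s, child) for s in solvers):   # maze MC: be solvable
            mazes.append(child)
    for s in list(solvers):
        child = mutate(s)
        if any(solves(child, m) for m in mazes):     # solver MC: solve a maze
            solvers.append(child)
    # Keep population sizes bounded (the real algorithm uses queues).
    del mazes[:-cap], solvers[:-cap]

# The seeds must themselves satisfy the MC (the corollary to Condition 1).
mazes, solvers = [[0, 1]], [[0, 1, 0]]
for _ in range(200):
    step(mazes, solvers)
print(max(len(m) for m in mazes))  # maze complexity ratchets upward
```

Even in this crude form, the characteristic MCC dynamic appears: mazes can only lengthen when some solver already traces the longer path, and solvers can only elaborate when some maze admits their path, so the two populations ladder each other upward without any objective function.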

However, the more interesting implication of this experiment is the potential for all kinds of unimagined pairings and elaborations in the same kind of MCC setup. For example, imagine robot bodies coevolving with brains to control their bodies. Open-endedness begins to look like a tool that might be applied to practical domains.

Even as attempts at open-endedness like MCC improve, creativity on the level of nature’s complexity remains far beyond our reach. Fundamental breakthroughs still await, perhaps in the near future, as the field matures and catches its first glimpses of deeper principles.

Open-endedness in practical domains

One potentially important application of open-ended systems is in creative design. Almost everything we build, from buildings, to cars, to medicines, to toys, to robots, to drinks we buy at the supermarket, results from some kind of design process. Designers, usually humans, have to consider both the function and the aesthetic form of their products. Sometimes similar products have very different applications, such as vehicles designed for luxurious travel versus those aimed at shipping. Open-ended systems offer the potential to generate endless alternatives in almost any conceivable design domain, just as natural evolution generated endless solutions to the problems of surviving and reproducing. From commercial companies to individual hobbyists (perhaps aided by improving three-dimensional printing technology), open-endedness can vastly expand the scope of conceivable options and generate unimagined new possibilities. This advance could be taken in all kinds of unpredictable (and sometimes fun) directions, like ever-novel and interesting recipes. It can potentially partner with humans, who could influence the search (which sometimes goes by the name interactive evolution), or it could be left to generate ideas by itself in perpetuity, where humans can view the latest creations at their leisure.

Great potential also exists in music and art. It may again require humans to participate somewhere in the process, but the progression of music and art over the centuries appears naturally open-ended—one school or genre leads to another in an endless succession. It’s true that much of our appreciation of this history is subjective, but there is nothing in principle to stop us from accelerating our exploration of subjective domains that we enjoy. There are already some hints that interactive evolution (with humans in the loop) can exhibit open-ended properties, such as in the Picbreeder online experiment from Ken Stanley’s lab that allows users to breed new images. There, you can see an ever-growing archive of images diverging into novel areas of image-space (such as insects, animals, and faces). But this demonstration only scratches the surface: personalized perfumes or new families of beverages or home architectures—and who knows what else—could also be co-created interactively. The ability to create such a system with open-ended properties with humans in the loop foreshadows many intriguing future possibilities.
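The human-in-the-loop breeding loop behind systems like Picbreeder can be sketched in the abstract. Everything domain-specific below is a placeholder: genomes are plain lists of numbers rather than the image-generating networks Picbreeder actually evolves, and a hard-coded `pick` function stands in for a human’s aesthetic choice:

```python
import random

random.seed(0)

def mutate(genome):
    # Genomes here are just lists of numbers standing in for the
    # parameters of an image-generating network.
    return [g + random.gauss(0, 0.2) for g in genome]

def interactive_evolution(parent, pick, generations=10, children=6):
    """Core loop of human-in-the-loop evolution: breed variants,
    let the user choose a favorite, repeat. `pick` stands in for the
    human's judgment."""
    lineage = [parent]
    for _ in range(generations):
        brood = [mutate(parent) for _ in range(children)]
        parent = pick(brood)          # the "aesthetic" selection step
        lineage.append(parent)
    return lineage

# A stand-in "user" who happens to favor larger first components.
favorite = lambda brood: max(brood, key=lambda g: g[0])
lineage = interactive_evolution([0.0, 0.0, 0.0], favorite)
print(len(lineage))
```

The point of the sketch is that no objective function ever appears: the trajectory of the lineage is determined entirely by a sequence of subjective choices, which is exactly what lets interactive systems wander into image-space regions (insects, animals, faces) no one set out to find.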

There is also an open-ended aesthetic inherent to mathematical and legal systems. For example, non-Euclidean geometry is a rich field of open-ended discovery, one ultimately resulting from exploring what happens when certain axioms from traditional Euclidean geometry are relaxed. Given some initial seed set of axioms from a curious mathematician, could an algorithmic open-ended system continually produce interesting and surprising proofs? The legal system also has semi-formal rules, and the cat-and-mouse game of lawmakers creating legislation for some purpose while others mine for clever loopholes resembles an open-ended ecology. Open-ended systems could aid politicians by automatically revealing the edge-cases they had not yet considered. What other formal or semi-formal systems contain interesting surprises discoverable through open-ended creativity?

Gaming is another promising application for open-endedness. Already open-world games and content generation are buzzwords in the industry, but a genuinely open-ended world could potentially provide a remarkable experience. Imagine the game world continually reinventing itself and increasing in complexity, almost like its own version of nature. Content also might evolve endlessly into new and unimagined forms. Games might become more than mere games, instead approaching the point where humans explore them almost like naturalists on an alien world—vast expanses of astonishing creations. One early hint at this potential is the Galactic Arms Race video game, which was also originally developed in Ken Stanley’s lab. In it, the game continually invents new particle weapons based on the behavior of players. While only scratching the surface of what could be possible, it helps to illustrate that games that invent novelty on their own are indeed possible even today. In the commercial gaming world, both No Man’s Sky and before that Spore exemplify a desire to fashion worlds without creative limits. While both are impressive, imagine how much more grand they could be with true open-endedness rather than only clever hand-coded parameter spaces.

Open-endedness in AI

The challenge of open-endedness is a natural cousin to the challenge of artificial general intelligence. If we accept that evolution on Earth is an open-ended process, then perhaps the deepest connection is that human-level intelligence on Earth is one of its many products. In other words, open-endedness could be a prerequisite to AI, or at least a promising path to it. The thought of open-endedness as a path to AI is interesting for how different it is from how most AI research is approached today, where optimization toward specific objective performance is ubiquitous—almost the perfect opposite of how an open-ended system generates its products.

The overlaps don’t end there. Interestingly, while open-endedness could be a force for discovering intelligence, it could also be an engine of intelligence itself. Human nature seems at least in part open-ended. We don’t only optimize our minds to perform tasks, but we invent new tasks and identify new problems to solve. We’re also playful and like to create simply to stimulate ourselves, even if no particular problem is solved, such as in art and music. Both as a society and as individuals across our lifetimes (though some more than others) we tend to generate a continually branching set of ideas and inventions with no unified direction or goal to the whole enterprise. Creativity is, furthermore, among our most humanizing traits. When we see arithmetic geniuses multiply surprisingly large numbers in their heads, we don’t first think how human their talents are, yet when they compose a beautiful song or invent a device that changes our lives, they suddenly exemplify the best of human striving.

In short, the open-ended component of our minds is a spark that still largely separates us from the way we think about machines, which suggests that open-endedness is a core component of what we mean by general intelligence. This point is important because so many in AI lean toward definitions of intelligence that involve “solving problems” or “learning to solve problems efficiently.” But open-endedness is left out of this equation. A “general” open-ended system is not a machine-like problem solver, but rather a creative master that wanders the space of the imagination. That is certainly within the pantheon of human intelligence.

The social organization that allows inventions to pile upon inventions over years and centuries is also worth contemplating in this context. Intelligence is clearly a factor in cultural progress; after all, culture is a product of brains. But a network of millions of brains seems to produce an effect far amplified beyond what any single brain can do, where ideas propagate throughout the network to become stepping stones for new ideas, which then propagate again, and so on. Culture is, in some ways, reminiscent of evolution. Of course, this resonance between cultural change and evolution has been discussed by many in the past, and here is not the place to do it justice, but it does highlight the relevant fact that when we understand open-endedness, we thereby also gain insight into culture, which is a product of multiple intelligences interacting—an understanding valuable to AI as well.

Reflecting on the intelligence of culture and its drive to innovate, it’s fascinating to note that open-endedness seems to contain some kernel of nearly-circular “meta-creation”: biological evolution produced human minds, which invented countless new open-ended processes like art and science; and science may ultimately distill open-endedness into an algorithm with the potential to produce an AI matching (or even surpassing) its human creator, which could itself spawn new forms of open-endedness. Given this speculative raw potential of open-endedness, there are clear connections to the study of AI safety, which is of growing interest following concerns about societal risks from strong AI. For example, one challenge in AI safety is called value alignment, where the goal is to ensure that the objective function for a powerful AI mathematically encompasses human values. Interestingly, if open-endedness is a viable path to AI, perhaps it is easier to understand what environmental factors would encourage evolution of such values than it is to mathematically define them—for example, to create an artificial world in which gentle cooperation naturally flourishes. After all, it was open-endedness that crafted human values to begin with.

Finally, some specific fields within AI are primed to thrive around the challenge of open-endedness. In particular, fields involving search within large, high-dimensional spaces, namely evolutionary computation and deep learning, stand to be profoundly impacted by advances in open-endedness. Evolutionary algorithms as metaphors for nature are the natural heirs to progress in this area, but deep learning also stands at the precipice of untapped opportunities as it begins to wade into its own capacity to capture some essence of the open-endedness of the human mind. Accurate gradient-following (in the spirit of deep learning) also might speed up open-ended processes when set up cleverly, and perhaps ultimately the hybrid of both evolution and deep learning can capture something like the open-endedness of culture, or the rapid evolution of brains in recent biological history.

Indeed, there are many rich interconnections between the challenges of open-endedness and AI, and many more are likely to be uncovered in the future.

Joining the quest

Though a relatively small tribe has been investigating open-endedness for many years, the lack of concerted effort or substantive funding through the present day means that the field remains basically in its infancy. We have only a preliminary understanding of the breadth of open-ended systems, the degrees of open-endedness that are possible, or the necessary conditions for triggering an open-ended complexity explosion. The potential weapons in our arsenal—evolutionary computation, neuroevolution, deep learning, alife worlds, novelty and divergence, coevolution and self-play, minimal criteria, and more—are powerful and compelling, but we have only a faint understanding of how they fit together to compose the big picture. In short, what better time than today to engage one of the most underappreciated grand challenges we know? The field is wide open, the greatest discoveries remain to be made, the potential applications and implications are vast, and interest is sure to accelerate from here.

So, what do you need to do to get involved? First, if you can program a computer, you probably have the necessary skills already. However, even if not, you might have expertise or insights to offer. Some knowledge of biology, math, or AI might help, but right now we don’t know with certainty which parts of which fields are essential. Consequently, all kinds of diverse backgrounds are potentially valuable—philosophy, art, physics, cultural studies, and beyond—open-endedness is intrinsically interdisciplinary. You just have to remember, in this field, an explanation is not enough—we need to build actual working demonstrations. That’s what makes the challenge so great. So, you have to be up for actually putting ideas to the test, and being sober and tough with assessment. We need to avoid falling into the trap of self-congratulation before we’ve really earned it. Observing something interesting happening, while notable and possibly an important step forward, is not the same as an explosion of complexity on the scale of nature (or culture). Realism and sobriety about what we’ve accomplished so far are what will allow us to accomplish something revolutionary in the future without falling into complacency.

If you feel inspired to join the quest, there are a few useful resources to note. First, most publications and meetings on open-endedness have so far been managed by The International Society for Artificial Life, so you may want to check them out. They hold the Conference on Artificial Life, which next takes place in Tokyo in July of 2018. In two recent years, workshops on open-endedness were held at these conferences, and some of the ideas and attendees from these events are documented, such as a report from the first OEE Workshop in 2015, and a summary website including videos of talks. Papers from the second workshop in 2016 are at this website. Some classic scholarly publications on open-ended evolution can be found at Google Scholar by searching for “open-ended evolution” or for “open-endedness.” Also relevant is the field of neuroevolution, where neural networks (the same structures as in deep learning) are evolved through evolutionary algorithms (EAs). Often experiments in open-endedness involve some form of neuroevolution. In a separate article from O’Reilly, Ken Stanley provides a gentle introduction to neuroevolution.

For actual source code to start from, alife-world platforms like Avida or Chromaria, and algorithms like novelty search or minimal criterion coevolution offer potential starting points. However, it is important to keep in mind that the field has yet to coalesce around any particular platforms, benchmarks, or algorithms. As a result, current resources could be superseded quickly in the future, and there is no consensus currently even on the leading approaches today. Open-endedness is still the wild west of science, with the uncertainty—and excitement—that entails.
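Of the algorithms just mentioned, novelty search is perhaps the simplest to sketch: instead of rewarding progress toward an objective, it rewards individuals for behaving unlike anything seen before, measured as sparseness in behavior space. The toy domain below (where a genome’s “behavior” is just the sum of its genes) and every parameter choice are our own illustrative assumptions, not any particular published implementation:

```python
import random

random.seed(0)

def behavior(genome):
    # In a real domain this would be, e.g., a robot's final position;
    # here a genome's behavior is simply the sum of its genes.
    return sum(genome)

def novelty(b, others, k=5):
    # Sparseness: mean distance to the k nearest behaviors seen so far.
    # (For brevity this includes b's own zero distance; real
    # implementations exclude the individual itself.)
    dists = sorted(abs(b - o) for o in others)
    return sum(dists[:k]) / k

def mutate(genome):
    return [g + random.gauss(0, 0.5) for g in genome]

def novelty_search(pop_size=20, generations=30):
    population = [[random.uniform(-1, 1) for _ in range(3)]
                  for _ in range(pop_size)]
    archive = []                       # record of behaviors already visited
    for _ in range(generations):
        behaviors = [behavior(g) for g in population]
        scored = sorted(population,
                        key=lambda g: novelty(behavior(g),
                                              behaviors + archive),
                        reverse=True)
        archive.extend(behavior(g) for g in scored[:2])  # keep the most novel
        parents = scored[:pop_size // 2]
        population = [mutate(random.choice(parents))
                      for _ in range(pop_size)]
    return archive

archive = novelty_search()
print(max(archive) - min(archive))  # the archive spreads across behavior space
```

Because selection pressure points away from wherever the population has already been, the archive keeps expanding outward rather than converging—a small-scale version of the divergent dynamic this article argues is central to open-endedness.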

At the time of writing this article, we don’t know of an active public forum for the general discussion of open-endedness. There is a forum on open-ended evolution on Reddit, but it has not been active for the past year. We could have pointed readers to that forum (maybe resurrecting it) and we considered that option seriously, but after careful thought and discussion we concluded that it is more prudent to start a new subreddit on open-endedness because we suspect that open-endedness is likely not only an evolutionary phenomenon, and we do not want to exclude all of the amazing ideas that might come from outside the evolutionary computation community (e.g., from deep learning, Bayesian approaches, biology, or neuroscience). At the same time, we have only the greatest respect for the pioneers in this field who are largely based in evolutionary methods (similarly to ourselves), and hope they too will join this reinvigorated conversation. In short, we think with the publication of this article, it’s at least worth trying to create a nexus where a broad interdisciplinary discussion from across the range of relevant expertise can coalesce, and we hope you will consider joining us.

Open-endedness defies the dominant paradigm in computer science and AI or machine learning today of “problems” and “solutions.” In those fields, you can choose a problem and showcase improving results with respect to some benchmark. Open-endedness isn’t like that. Open-endedness is for those yearning for an adventure without a clear destination—it is the road of creation itself, and the entire point is to generate what we presently cannot imagine. Welcome to the newest grand challenge.