From Complexity to Perplexity

Can science achieve a unified theory of complex systems?

Even at the Santa Fe Institute, some researchers have their doubts

by John Horgan, senior writer

Champagne and big ideas are bubbling at the Museum of Indian Arts and Culture in Santa Fe, N.M. The museum is hosting a dinner for the Santa Fe Institute, where complex people ponder complex things. Some of the institute's brightest luminaries are there, including Murray Gell-Mann, Nobel laureate and co-discoverer of quarks, with his permanently skeptical squint; artificial-life proselytizer Christopher G. Langton, clad in his uniform of jeans, clodhoppers, leather vest and silver bracelet; the ruddy-faced nonlinear economist W. Brian Arthur, who has recently been taking calls from the White House; and world-class intellectual riffer Stuart A. Kauffman, whose demeanor is at once cherubic and darkly brooding. Mingling with these scientific pioneers are various "friends of the institute," ranging from mega-philanthropist George Soros to the best-selling novelist Cormac McCarthy.

Before everyone tucks into the filet mignon, David Liddle, a computer entrepreneur who chairs the board of trustees, reviews the institute's accomplishments. "There is a lot to be proud of," he says. There certainly is, at least from a public-relations standpoint. The institute is not large: it supports only six full-time researchers in Santa Fe; 50 "external faculty" members work elsewhere. Nevertheless, in the decade since its founding, the institute has enjoyed much favorable attention from the press, including Scientific American, and has been celebrated in several popular books. It has become renowned as a leading center of complexity studies, a place where scientists impatient with the stodgy, reductionist science of the past are creating a "new, unified way of thinking about nature, human social behavior, life and the universe itself" (as one book jacket put it).
What Liddle does not say is that even some scientists associated with the institute are beginning to fret over the gap between such rhetoric and reality. Take Jack D. Cowan, a mathematical biologist from the University of Chicago who helped to found the institute and remains on its board. Cowan is no scientific prude; he has explored the neurochemical processes underlying the baroque visual patterns evoked by LSD. But some Santa Fe theorists exhibit too high a "mouth-to-brain ratio" for his taste. "There has been tremendous hype," he grumbles. Cowan finds some work at Santa Fe interesting and important, but he deplores the tendency of research there "to degenerate into computer hacking." Too many simulators also suffer from what Cowan calls the reminiscence syndrome. "They say, 'Look, isn't this reminiscent of a biological or physical phenomenon!' They jump in right away as if it's a decent model for the phenomenon, and usually of course it's just got some accidental features that make it look like something." The major discovery to emerge from the institute thus far, Cowan suggests, is that "it's very hard to do science on complex systems."

Some residents blame the media for the exaggerated claims associated with the institute. "Ninety percent of it came from journalists," Arthur asserts. Yet the economist cannot help but play the evangelist. "If Darwin had had a computer on his desk," he exclaims, "who knows what he could have discovered!" What indeed: Charles Darwin might have discovered a great deal about computers and very little about nature.

The grandest claim of Santa Fe'ers is that they may be able to construct a "unified theory" of complex systems. John H. Holland, a computer scientist with joint appointments at the University of Michigan and the Santa Fe Institute, spelled out this breathtakingly ambitious vision in a lecture two years ago: "Many of our most troubling long-range problems-trade balances, sustainability, AIDS, genetic defects, mental health, computer viruses-center on certain systems of extraordinary complexity. The systems that host these problems-economies, ecologies, immune systems, embryos, nervous systems, computer networks-appear to be as diverse as the problems. Despite appearances, however, the systems do share significant characteristics, so much so that we group them under a single classification at the Santa Fe Institute, calling them complex adaptive systems [CAS]. This is more than terminology. It signals our intuition that there are general principles that govern all CAS behavior, principles that point to ways of solving the attendant problems."

Holland, it should be said, is considered to be one of the more modest complexologists. Some workers now disavow the goal of a unified theory. "I don't even know what that would mean," says Melanie Mitchell, a former student of Holland's who is now at the SFI. "At some level you can say all complex systems are aspects of the same underlying principles, but I don't think that will be very useful." Stripped of this vision of unification, however, the Santa Fe Institute loses much of its luster. It becomes just another place where researchers are using computers and other tools to address problems in their respective fields. Aren't all scientists doing that?

Scientists familiar with the history of other would-be unified theories [see box on pages 108 and 109] are not sanguine about the prospects for their brethren at Santa Fe. One doubter is Herbert A. Simon of Carnegie Mellon University, a Nobel laureate in economics who has also contributed to artificial intelligence and sociobiology.
"Most of the people who talk about these great theories have been infected with mathematics," he says. "I think you'll see a bust on the notion of unification." Rolf Landauer of IBM, who has spent his career exploring the links between physics, computation and information, agrees. He accuses complexologists of seeking a "magic criterion" that will help them unravel all the messy intricacies of nature. "It doesn't exist," Landauer says.

The problems of complexity begin with the term itself. Complexologists have struggled to distinguish their field from a closely related pop-science movement, chaos. When all the fuss was over, chaos turned out to refer to a restricted set of phenomena that evolve in predictably unpredictable ways. Various attempts have been made to provide an equally precise definition of complexity. The most widely touted definition involves "the edge of chaos." The basic idea is that nothing novel can emerge from systems with high degrees of order and stability, such as crystals. On the other hand, completely chaotic systems, such as turbulent fluids or heated gases, are too formless. Truly complex things-amoebae, bond traders and the like-appear at the border between rigid order and randomness.

Most popular accounts credit the idea to Christopher Langton and his co-worker Norman H. Packard (who coined the phrase). In experiments with cellular automata, they concluded that a system's computational capacity-that is, its ability to store and process information-peaks in a narrow regime between highly periodic and chaotic behavior. But cellular-automaton investigations by two other SFI researchers, James P. Crutchfield and Mitchell, did not support the conclusions of Packard and Langton. Crutchfield and Mitchell also question whether "anything like a drive toward universal-computational capabilities is an important force in the evolution of biological organisms."
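The cellular automata at issue are simple enough to try oneself. The sketch below is a rough illustration of the ordered and chaotic regimes that the edge of chaos supposedly separates, not a reconstruction of Packard and Langton's actual computational-capacity measurements; the grid size, step limit and choice of rules (identified by their standard Wolfram rule numbers) are arbitrary.

```python
def ca_step(state, rule):
    """One step of an elementary cellular automaton on a ring of cells.
    `rule` is the standard Wolfram rule number (0-255): bit b of `rule`
    gives the new cell value for the neighborhood pattern with index b."""
    n = len(state)
    return tuple(
        (rule >> (state[(i - 1) % n] * 4 + state[i] * 2 + state[(i + 1) % n])) & 1
        for i in range(n)
    )

def settle_time(rule, n=31, steps=200):
    """Number of steps until the automaton first revisits a state (i.e.
    falls into a cycle), starting from a single live cell, or `steps`
    if no state repeats within that many steps."""
    state = tuple(1 if i == n // 2 else 0 for i in range(n))
    seen = {state}
    for t in range(1, steps + 1):
        state = ca_step(state, rule)
        if state in seen:
            return t
        seen.add(state)
    return steps

ordered = settle_time(250)  # rule 250 freezes into a fixed pattern quickly
chaotic = settle_time(30)   # rule 30 wanders through state space far longer
```

Rule 250 simply spreads its live cells until the ring fills and the pattern freezes, while rule 30, a standard example of a "chaotic" rule, keeps generating new states; the contrast is the qualitative distinction the edge-of-chaos picture relies on.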
Mitchell complains that in response to these criticisms, proponents of the edge of chaos keep changing their definition. "It's a moving target," she says. Other definitions of complexity have been proposed-at least 31, according to a list compiled several years ago by Seth Lloyd of the Massachusetts Institute of Technology, a physicist and Santa Fe adjunct. Most involve concepts such as entropy, randomness and information-which themselves have proved to be notoriously slippery terms. All definitions have drawbacks. For example, algorithmic information complexity, proposed by the IBM mathematician Gregory J. Chaitin, holds that the complexity of a system can be represented by the length of the shortest computer program that describes it. But according to this criterion, a text created by a team of typing monkeys is more complex-because it is more random-than Finnegans Wake.

The Poetry of Artificial Life

Such problems highlight the awkward fact that complexity exists, in some murky sense, in the eye of the beholder. At various times, researchers have debated whether complexity has become so meaningless that it should be abandoned, but they invariably conclude that the term has too much public-relations value. Complexologists often employ "interesting" as a synonym for "complex." But what government agency would supply funds for research on a "unified theory of interesting things"? (The Santa Fe Institute, incidentally, will receive about half its $5-million 1995 budget from the federal government and the rest from private benefactors.)

Complexologists may disagree on what they are studying, but most concur on how they should study it: with computers. This faith in computers is epitomized by artificial life, a subfield of complexity that has attracted much attention in its own right. Artificial life is the philosophical heir of artificial intelligence, which preceded it by several decades.
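The monkeys-versus-Joyce verdict of Chaitin's criterion is easy to demonstrate in miniature. The true shortest-program length is uncomputable, but off-the-shelf compression gives a rough, computable stand-in: the better a text compresses, the shorter a program that reproduces it. The texts below are made up for illustration (the "prose" is just a repeated Joycean fragment, not an excerpt of meaningful length).

```python
import random
import string
import zlib

def compressed_size(text: str) -> int:
    """Length in bytes of the zlib-compressed text: a crude,
    computable proxy for algorithmic information complexity."""
    return len(zlib.compress(text.encode("utf-8"), level=9))

random.seed(0)

ordered = "ab" * 500                                    # crystal-like repetition
prose   = ("riverrun, past Eve and Adam's, from swerve "
           "of shore to bend of bay, brings us " * 25)  # structured but varied
monkeys = "".join(random.choice(string.ascii_lowercase + " ")
                  for _ in range(1000))                 # typing monkeys

sizes = {name: compressed_size(s) for name, s in
         [("ordered", ordered), ("prose", prose), ("monkeys", monkeys)]}

# By this measure the random "monkey" text is the most complex of the three.
assert sizes["ordered"] < sizes["prose"] < sizes["monkeys"]
```

The ordering is exactly the criterion's awkward feature: randomness scores highest, which is why the measure fails to capture the intuitive sense in which Finnegans Wake is more complex than noise.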
Whereas artificial-intelligence researchers seek to understand the mind by mimicking it on a computer, proponents of artificial life hope to gain insights into a broad range of biological phenomena. And just as artificial intelligence has generated more portentous rhetoric than tangible results, so has artificial life. As Langton proclaimed in the inaugural issue of the journal Artificial Life last year, "Artificial life will teach us much about biology-much that we could not have learned by studying the natural products of biology alone-but artificial life will ultimately reach beyond biology, into a realm we do not yet have a name for, but which must include culture and our technology in an extended view of nature."

Langton has promulgated a view known as "strong a-life." If a programmer creates a world of "molecules" that-by following rules such as those of chemistry-spontaneously organize themselves into entities that eat, reproduce and evolve, Langton would consider those entities to be alive "even if it's in a computer." Inevitably, artificial life has begotten artificial societies. Joshua M. Epstein, a political scientist who shuttles between Santa Fe and the Brookings Institution in Washington, D.C., declares that computer simulations of warfare, trade and other social phenomena will "fundamentally change the way social science is done."

Artificial life-and the entire field of complexity-seems to be based on a seductive syllogism: There are simple sets of mathematical rules that when followed by a computer give rise to extremely complicated patterns. The world also contains many extremely complicated patterns. Conclusion: Simple rules underlie many extremely complicated phenomena in the world. With the help of powerful computers, scientists can root those rules out. This syllogism was refuted in a brilliant paper published in Science last year.
The authors, led by philosopher Naomi Oreskes of Dartmouth College, warn that "verification and validation of numerical models of natural systems is impossible." The only propositions that can be verified-that is, proved true-are those concerning "closed" systems, based on pure mathematics and logic. Natural systems are open: our knowledge of them is always partial, approximate, at best. "Like a novel, a model may be convincing-it may ring true if it is consistent with our experience of the natural world," Oreskes and her colleagues state. "But just as we may wonder how much the characters in a novel are drawn from real life and how much is artifice, we might ask the same of a model: How much is based on observation and measurement of accessible phenomena, how much is based on informed judgment, and how much is convenience?"

Numerical models work particularly well in astronomy and physics because objects and forces conform to their mathematical definitions so precisely. Mathematical theories are less compelling when applied to more complex phenomena, notably anything in the biological realm. As the evolutionary biologist Ernst Mayr of Harvard University has pointed out, each organism is unique; each also changes from moment to moment. That is why biology has resisted mathematicization. Langton, surprisingly, seems to accept the possibility that artificial life might not achieve the rigor of more old-fashioned research. Science, he suggests, may become less "linear" and more "poetic" in the future. "Poetry is a very nonlinear use of language, where the meaning is more than just the sum of the parts," Langton explains. "I just have the feeling that culturally there's going to be more of something like poetry in the future of science."

A Critique of Criticality

A-life may already have achieved this goal, according to the evolutionary biologist John Maynard Smith of the University of Sussex.
Maynard Smith, who pioneered the use of mathematics in biology, took an early interest in work at the Santa Fe Institute and has twice spent a week visiting there. But he has concluded that artificial life is "basically a fact-free science." During his last visit, he recalls, "the only time a fact was mentioned was when I mentioned it, and that was considered to be in rather bad taste."

Not all complexologists accept that their field is doomed to become soft. Certainly not Per Bak, a physicist at Brookhaven National Laboratory who is on the Santa Fe faculty. The owlish, pugnacious Bak bristles with opinions. He asserts, for example, that particle physics and condensed-matter physics have passed their peaks. Chaos, too, had pretty much run its course by 1985, two years before James Gleick's blockbuster Chaos was published. "That's how things go!" Bak exclaims. "Once something reaches the masses, it's already over!" (Complexity, of course, is the exception to Bak's rule.)

Bak and others have developed what some consider to be the leading candidate for a unified theory of complexity: self-organized criticality. Bak's paradigmatic system is a sandpile. As one adds sand to the top of the pile, it "organizes" itself by means of avalanches into what Bak calls a critical state. If one plots the size and frequency of the avalanches, the results conform to a power law: the frequency of avalanches of a given size falls off in proportion to a power of that size, so that small avalanches are common and large ones rare. Bak notes that many phenomena-including earthquakes, stock-market fluctuations, the extinction of species and even human brain waves-display this pattern. He concludes that "there must be a theory here." Such a theory could explain why small earthquakes are common and large ones uncommon, why species persist for millions of years and then vanish, why stock markets crash and why the human mind can respond so rapidly to incoming data. "We can't explain everything about everything, but something about everything," Bak says.
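Bak's sandpile can be simulated directly. The sketch below is a minimal version of the Bak-Tang-Wiesenfeld model on which self-organized criticality is based; the grid size, grain count and random seed are arbitrary choices for illustration, not parameters from Bak's work.

```python
import random
from collections import Counter

def sandpile_avalanches(n=20, grains=5000, seed=1):
    """Bak-Tang-Wiesenfeld sandpile on an n x n grid. Grains are dropped
    on random sites; any site holding 4 or more grains topples, sending
    one grain to each of its four neighbors (grains fall off the edge).
    Returns the size (number of topplings) of each drop's avalanche."""
    random.seed(seed)
    grid = [[0] * n for _ in range(n)]
    sizes = []
    for _ in range(grains):
        i, j = random.randrange(n), random.randrange(n)
        grid[i][j] += 1
        unstable = [(i, j)] if grid[i][j] >= 4 else []
        topplings = 0
        while unstable:
            x, y = unstable.pop()
            if grid[x][y] < 4:      # already relaxed by an earlier pop
                continue
            grid[x][y] -= 4
            topplings += 1
            for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                u, v = x + dx, y + dy
                if 0 <= u < n and 0 <= v < n:
                    grid[u][v] += 1
                    if grid[u][v] >= 4:
                        unstable.append((u, v))
        sizes.append(topplings)
    return sizes

sizes = sandpile_avalanches()
hist = Counter(sizes)  # avalanche size -> frequency
```

Tallying `hist` shows the signature Bak points to: most drops cause no toppling or only a few, while rare avalanches cascade across much of the grid. (Whether the tail is truly a power law on small grids is exactly the sort of question Nagel's experiments raised.)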
Work on complex systems, he adds, will bring about a "revolution" in such traditionally soft sciences as economics, psychology and evolutionary biology. "These things will be made into hard sciences in the next years in the same way that particle physics and solid-state physics were made hard sciences." In his best-seller Earth in the Balance, Vice President Al Gore said Bak's theory had helped him to understand not only the fragility of the environment but also "change in my own life."

But Sidney R. Nagel of the University of Chicago asserts that Bak's model does not even provide a very good description of a sandpile. He and other workers at Chicago found that their sandpile tended to oscillate between immobility and large-scale avalanches rather than displaying power-law behavior. Bak retorts that other sandpile experiments confirm his model. Nevertheless, the model may be so general and so statistical in nature that it cannot really illuminate even those systems it describes. After all, many phenomena can be described by a Gaussian or bell curve. But few scientists would claim that human intelligence scores and the apparent luminosity of galaxies must derive from common causes. "If a theory applies to everything, it may really apply to nothing," remarks the Santa Fe researcher Crutchfield. "You need not only statistics but also mechanisms" in a useful theory, he adds.

Another skeptic is Philip W. Anderson, a condensed-matter physicist and Nobel laureate at Princeton University who is on the SFI's board. In "More Is Different," an essay published in Science in 1972, Anderson contended that particle physics and indeed all reductionist approaches have only a limited ability to explain the world. Reality has a hierarchical structure, Anderson argued, with each level independent, to some degree, of the levels above and below.
"At each stage, entirely new laws, concepts and generalizations are necessary, requiring inspiration and creativity to just as great a degree as in the previous one," Anderson noted. "Psychology is not applied biology, nor is biology applied chemistry."

"More is different" became a rallying cry for chaos and complexity. Ironically, Anderson's principle suggests that these antireductionist efforts may never culminate in a unified theory of complex systems, one that illuminates everything from immune systems to economies. Anderson acknowledges as much. "I don't think there is a theory of everything," he comments. "I think there are basic principles that have very wide generality," such as quantum mechanics, statistical mechanics, thermodynamics and symmetry breaking. "But you mustn't give in to the temptation that when you have a good general principle at one level it's going to work at all levels." Anderson favors the view of nature described by the evolutionary biologist Stephen Jay Gould of Harvard, who emphasizes that life is shaped less by deterministic laws than by contingent, unpredictable circumstances. "I guess the prejudice I'm trying to express is a prejudice in favor of natural history," Anderson says.

Anderson's views flatly contradict those of Stuart Kauffman, one of the most ambitious of all the artificial lifers. Kauffman has spent decades trying to show-through elaborate computer simulations-that Darwinian theory alone cannot account for the origin or subsequent evolution of life. Kauffman says he shares the concern of his former teacher John Maynard Smith about the scientific content of some artificial-life research. "At some point," he explains, "artificial life drifts off into someplace where I cannot tell where the boundary is between talking about the world-I mean, everything out there-and really neat computer games and art forms and toys."
When he does computer simulations, Kauffman adds, he is "always trying to figure out how something in the world works, or almost always." Kauffman's simulations have led him to several conclusions. One is that when a system of simple chemicals reaches a certain level of complexity or interconnectedness (which Kauffman has linked both to the edge of chaos concept and to Bak's self-organized criticality), it undergoes a dramatic transition, or phase change. The molecules begin spontaneously combining to create larger molecules of increasing complexity and catalytic capability. Kauffman has argued that this process of "autocatalysis"-rather than the fortuitous formation of a molecule with the ability to replicate and evolve-led to life.

"Obscurantism and Mystification"

Kauffman has also proposed that arrays of interacting genes do not evolve randomly but converge toward a relatively small number of patterns, or "attractors," to use a term favored by chaos theorists. This ordering principle, which Kauffman calls "antichaos," may have played a larger role than did natural selection in guiding the evolution of life. More generally, Kauffman thinks his simulations may lead to the discovery of a "new fundamental force" that counteracts the universal drift toward disorder required by the second law of thermodynamics. In a book to be published later this year, At Home in the Universe, Kauffman asserts that both the origin of life on the earth and its subsequent evolution were not "vastly improbable" but in some fundamental sense inevitable; life, perhaps similar to ours, almost certainly exists elsewhere in the universe. Of course, scientists have engaged in interminable debates over this question. Many have taken Kauffman's point of view. Others, like the great French biologist Jacques Monod, have insisted that life is indeed "vastly improbable."
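The attractor idea behind "antichaos" can be sketched with the kind of random Boolean network Kauffman studies: each "gene" is on or off and is updated by a random rule reading a few other genes. The network below is a generic random one built for illustration (the sizes, seeds and rules are arbitrary assumptions, not one of Kauffman's published models); the point is only that a deterministic network with thousands of possible states funnels trajectories into far fewer attractor cycles.

```python
import random

def random_boolean_network(n=12, k=2, seed=3):
    """A Kauffman-style random Boolean network: each of n genes reads
    k randomly chosen genes through a random Boolean lookup table.
    Returns the network's deterministic update function."""
    rng = random.Random(seed)
    inputs = [tuple(rng.sample(range(n), k)) for _ in range(n)]
    tables = [tuple(rng.randint(0, 1) for _ in range(2 ** k)) for _ in range(n)]
    def step(state):
        return tuple(
            tables[i][sum(state[g] << b for b, g in enumerate(inputs[i]))]
            for i in range(n)
        )
    return step

n = 12
step = random_boolean_network(n=n)

# Follow many random initial states until a state repeats; the first
# repeated state lies on the attractor cycle the trajectory fell into.
attractors = set()
rng = random.Random(7)
for _ in range(200):
    state = tuple(rng.randint(0, 1) for _ in range(n))
    seen = set()
    while state not in seen:
        seen.add(state)
        state = step(state)
    cycle, s = [], state            # enumerate the cycle just reached
    while True:
        cycle.append(s)
        s = step(s)
        if s == state:
            break
    attractors.add(min(cycle))      # canonical label for the cycle
```

With 12 genes there are 4,096 possible states, yet the 200 sampled trajectories land on only a small set of attractors; that collapse of state space is the ordering Kauffman reads as a rival to natural selection.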
Given our lack of knowledge of life elsewhere, the issue is entirely a matter of opinion; all the computer simulations in the world cannot make it less so. Kauffman's colleague Murray Gell-Mann, moreover, denies that science needs a new force to account for the emergence of order and complexity. In his 1994 book, The Quark and the Jaguar, Gell-Mann sketches a rather conventional-and reductionist-view of nature. The probabilistic nature of quantum mechanics allows the universe to unfold in an infinite number of ways, some of which generate conditions conducive to the appearance of complex phenomena. As for the second law of thermodynamics, it permits the temporary growth of order in relatively isolated, energy-driven systems, such as the earth. "When you look at the world that way, it just falls into place!" Gell-Mann cries. "You're not tortured by these strange questions anymore!" He emphasizes that researchers have much to learn about complex systems; that is why he helped to found the Santa Fe Institute. "What I'm trying to oppose," he says, "is a certain tendency toward obscurantism and mystification."

Maybe complexologists, even if they cannot create a science for the next millennium, can limn the borders of the knowable. The Santa Fe Institute seemed to raise that possibility last year when it hosted a symposium on "the limits of scientific knowledge." For three days, a score of scientists, mathematicians and philosophers debated whether it might be possible for science to know what it cannot know. After all, many of the most profound achievements of 20th-century science-the theory of relativity, quantum mechanics, Gödel's theorem, chaos theory-prescribe the limits of knowledge. Some participants, particularly those associated with the institute, expressed the hope that as computers grow in power, so will science's ability to predict, control and understand nature. Others demurred. Roger N. Shepard, a psychologist at Stanford University, worried that even if we can capture nature's intricacies on computers, those models might themselves be so intricate that they elude human understanding. Francisco Antonio Doria, a Brazilian mathematician, smiled ruefully and murmured, "We go from complexity to perplexity." Everybody nodded.