The benefit of central command and control of complex systems is often obvious. Eyes and legs in communication with the brain allow us to walk in a straight line. But this type of human-made or natural system can at times suffer from acute vulnerabilities. The crippling of important areas of U.S. socioeconomic activity during the recent partial government shutdown gives a vivid example of what happens when a nation’s central control unit suddenly switches off. The dying of brain cells in neurodegenerative diseases, extinguishing the body’s controller, demonstrates the same weakness in biological systems.

Eschewing central control of a system has its advantages. The absence of a "brain" directing the action means the loss of individual parts has little effect on the behavior of the collective. A team led by physicist Neil Johnson of The George Washington University has developed a model of a decentralized system, inspired by fly larvae, that successfully mimics their movement. The study, published February 6 in Science Advances, shows the model performs best when its individual parts are less capable: simpler components make the overall system more effective. By contrast, centralized systems function better as their component parts undergo improvements. The researchers contend this insight has the potential to inform everything from autonomous vehicle design to the treatment of neurological disease to the structure of organizations. It may even have implications for understanding the course of evolution.

Decentralized control abounds in nature, appearing in bacteria, slime molds and ant colonies. Johnson took inspiration from the observation that, because of the simplicity of their neuronal circuitry, individual segments of Drosophila (fruit fly) larvae act in a semi-independent manner during movement. This represents an example of decentralized control within a single organism, as opposed to the "swarm intelligence" displayed by bees or other collective entities. Despite the lack of central coordination among their parts, the larvae invariably achieve the goal of moving toward a preferred temperature, a process known as thermotaxis.

Larvae propel themselves forward via waves of contractions. They make turns when segments expand on one side and contract on the other. Temperature-sensing neurons determine segment movements and the combined effect of these motions determines the turn angle. “Coordination in the larvae comes in a similar way to how a crowd coordinates moving to an exit,” Johnson says. “It’s not that everybody’s on the phone to each other, it’s just that, given the outside information, there’s this collective flocking behavior.”

The researchers created a mathematical model that reproduces the movement of a larva using independent components, or "agents," that store the outcomes of past movements in memory (recorded as 1 if a move brought the model into better alignment with the target direction, 0 if not). Each agent chooses its next action (turn left or right) based on this history by consulting a set of "strategies," each of which associates a particular sequence of past outcomes with a turn direction. The researchers allocated different subsets of all possible strategies to the different agents, corresponding to the semi-independent segments of the larvae, so that each behaved somewhat differently; at every step, each agent played whichever of its strategies had performed best so far. The team found that this model produced crawl trajectories remarkably like real data from larvae, persuading them they had captured something of the essence of the real system. "It's really cool how it matches up with drosophila," says mathematician David Wolpert of the Santa Fe Institute, who was not involved in the study. "It's a very clean study, and a good step forward in terms of understanding these issues."
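The agent setup described above can be sketched in a few lines. This is an illustrative reconstruction, not the authors' code: the memory length, number of strategies and segment count are invented for the example.

```python
import random

# Minimal sketch of the agent setup described in the text. Illustrative
# only: M, S and N_AGENTS are invented values, not the paper's.

M = 3          # memory: number of past outcome bits each agent sees
S = 2          # strategies held by each agent
N_AGENTS = 8   # semi-independent "segments"

def random_strategy():
    # A strategy maps every possible history (a tuple of M outcome bits)
    # to a turn direction: -1 = left, +1 = right.
    histories = [tuple((h >> i) & 1 for i in range(M)) for h in range(2 ** M)]
    return {h: random.choice((-1, 1)) for h in histories}

class Agent:
    def __init__(self):
        self.strategies = [random_strategy() for _ in range(S)]
        self.scores = [0] * S  # virtual track record of each strategy

    def act(self, history):
        # Play whichever strategy has the best track record so far.
        best = max(range(S), key=lambda k: self.scores[k])
        return self.strategies[best][history]

    def update(self, history, good_turn):
        # Reward every strategy that would have recommended the turn
        # direction that actually improved alignment with the target.
        for k, strategy in enumerate(self.strategies):
            if strategy[history] == good_turn:
                self.scores[k] += 1
```

A full step of such a model would poll every agent for a turn, sum the turns into a body bend, and feed the resulting outcome bit back into the shared history that all agents see.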

The key finding relates to variation in the size of the agents’ memories. With very little memory capacity, the model performed poorly—but its performance also grew progressively worse after memory exceeded a certain size.

A system ends up further from its target goal if its component parts become too smart, shown here as the memory "m" becoming too large. Credit: Pedro D. Manrique, University of Miami

The researchers explain this result using “crowd/anticrowd” theory, a mathematical account of how independent agents form groups that behave in concert. When memory capacity is small, large crowds of agents form, all pushing in the same direction. They first make a large turn then suddenly switch back in the other direction, producing exaggerated zigzagging motions. If the agents have too much memory, crowds get stuck on strategies determined by long-past outcomes, not taking enough account of recent information that indicates they have veered off course.

The sweet spot between these extremes produces moderate-size crowds using opposing strategies, similar to each half of a rowing team putting oars into the water on opposite sides of a boat. “As you increase memory, it’s the equivalent of overthinking,” Johnson says. “You’re having too much history, reinforcing biases from the past.” Similar effects are sometimes seen with a single agent working on a problem, Wolpert says. “When people predict the stock market [from a series of past values], they’re careful not to look at too many points in the past,” he says. “It’s clutter; it makes the learning problem harder.”
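The overthinking effect can be probed end to end with a toy run of such a model at different memory sizes m. Every number below (turn step, noise level, trial length) is an invented stand-in rather than a parameter from the paper, so the exact shape of the resulting error curve will vary; the sketch only shows how a memory sweep would be wired up.

```python
import math
import random

# Toy end-to-end run of a decentralized turning model, sweeping the
# memory size m. Illustrative only: all parameters are invented and the
# exact error curve depends on them.

def simulate(m, n_agents=8, s=2, steps=300, seed=0):
    rng = random.Random(seed)
    n_hist = 2 ** m
    # strategies[a][k][h]: turn (-1/+1) that agent a's k-th strategy
    # picks when the last m outcome bits, packed into integer h, occur.
    strategies = [[[rng.choice((-1, 1)) for _ in range(n_hist)]
                   for _ in range(s)] for _ in range(n_agents)]
    scores = [[0] * s for _ in range(n_agents)]
    history = 0
    heading, target = 0.0, 0.0
    prev_error = 0.0
    total_error = 0.0
    for _ in range(steps):
        # Each segment votes through its best-scoring strategy.
        turns = [strategies[a][max(range(s), key=lambda k: scores[a][k])][history]
                 for a in range(n_agents)]
        heading += 0.1 * sum(turns) + rng.gauss(0.0, 0.2)  # net bend + noise
        deviation = math.remainder(heading - target, 2 * math.pi)
        error = abs(deviation)
        total_error += error
        # Outcome bit: 1 if this step improved alignment with the target.
        good = 1 if error < prev_error else 0
        prev_error = error
        # Reward strategies that recommended the error-reducing direction.
        good_turn = -1 if deviation > 0 else 1
        for a in range(n_agents):
            for k in range(s):
                if strategies[a][k][history] == good_turn:
                    scores[a][k] += 1
        history = ((history << 1) | good) % n_hist
    return total_error / steps

for m in (1, 3, 6, 9):
    print(m, round(simulate(m), 3))
```

Averaging such runs over many random seeds would be the natural way to look for the poor-small-memory, good-middle, poor-large-memory pattern the paper reports.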

The team claims the work might provide a new way of thinking about how evolution jumped from decentralized natural designs, such as bacteria, to centralized ones, such as humans. The implication is that in a decentralized design there may be a limit to how "smart" components can get before switching to centralized control becomes worthwhile. The group next plans to investigate how knocking out parts of larvae's neuronal circuitry with lasers (a bit like incapacitating individual rowers) affects movement. The team also wants to explore the model's behavior by doing the equivalent of tethering two rowers together or dropping in one with a super memory among the dullards. Ultimately, Johnson hopes to pursue possible medical implications. Future research will explore whether giving limited feedback to some muscles (for instance, based on whether they have overshot their target) could help dampen tremors caused by impaired control signals from the brain in conditions such as Parkinson's. "We'll be putting in a grant application to do this precisely with general motor neuronal disease in mind," Johnson says. "We don't know whether it would be practical, but I think we've shown it's theoretically possible."


Other areas where this research might be applied include autonomous vehicle design and organizational performance. Wolpert is cautious, however. The study does not compare the model with any other, and so tells us little about the relative merits of decentralized over centralized control, he says. He notes engineered systems can mitigate the vulnerability of a single controller simply by having duplicates. One scenario where this would not apply is a team of robot soldiers on a special mission requiring radio silence while working as a unit. "The robots aren't allowed to communicate, so they've got to be run in a decentralized way," he notes. "These results suggest that as the [design] engineer you at least consider restricting the robots' cognitive capabilities in the interest of achieving the overall goal."