How does natural selection create so much complexity so fast? A bold new theory says it learns and remembers past solutions just as our brains do

Yehrin Tong

A FEATHER isn’t just pretty: it’s pretty useful. Strong, light and flexible, with tiny barbs to zip each filament to its neighbours, it is fantastically designed for flight. The mammalian eye, too, is a marvel of complex design, with its pupil to regulate the amount of light that enters, a lens to focus it onto the retina, and rods and cones for low light and colour vision – all linked to the brain through the optic nerve. And these are just the tip of the iceberg of evolution’s incredible prowess as a designer.

For centuries, the apparent perfection of such designs was taken as self-evident proof of divine creation. Charles Darwin himself expressed amazement that natural selection could produce such variety and complexity. Even today, creationism and intelligent design thrive on intuitive incredulity that an unguided, unconscious process could produce such intricate contraptions.

We now know that intuition fails us: feathers, eyes and all living things are the product of an entirely natural process. But at the same time, current ways of thinking about evolution give a less-than-complete picture of how that works. Any process built purely on random changes faces a vast space of possible variants to explore. So how does natural selection come up with such good solutions to the problem of survival so quickly, given the population sizes and number of generations available?

Random processes are enough to make the all-seeing eye (Image: Westend61/Getty)

A traditional answer is through so-called massive parallelism: living things tend to have a lot of offspring, allowing many potential solutions to be tested simultaneously. But a radical new addition to the theory …
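The power of massive parallelism can be seen in a toy simulation. The sketch below is not from the article; it is in the spirit of Richard Dawkins's classic "weasel" demonstration, with all parameters (target string, mutation rate, population size) chosen purely for illustration. Each generation, a whole population of mutated offspring is "tested" at once and the fittest survives, so a good solution is found in hundreds of generations rather than the astronomically many trials blind random sampling would need.

```python
import random

# Toy goal: evolve a string to match a target, after Dawkins's demo.
# All parameters here are illustrative assumptions, not biology.
TARGET = "METHINKS IT IS LIKE A WEASEL"
CHARS = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "

def fitness(s):
    # Count positions that already match the target.
    return sum(a == b for a, b in zip(s, TARGET))

def mutate(s, rate=0.05):
    # Each character independently has a small chance of changing.
    return "".join(random.choice(CHARS) if random.random() < rate
                   else c for c in s)

def evolve(pop_size=100, seed=0):
    random.seed(seed)
    parent = "".join(random.choice(CHARS) for _ in range(len(TARGET)))
    generations = 0
    while parent != TARGET:
        # Massive parallelism: many offspring are tested simultaneously,
        # and the fittest (keeping the parent, so progress never reverses)
        # founds the next generation.
        offspring = [mutate(parent) for _ in range(pop_size)] + [parent]
        parent = max(offspring, key=fitness)
        generations += 1
    return generations
```

Cumulative selection over a parallel population reaches the 28-character target in a few hundred generations, whereas pure random sampling would need on the order of 27^28 draws, a number far beyond any realistic population or timescale.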