If you share my view that technology drives history more than any other factor, then you will probably agree that the 21st century is going to be significantly shaped by the outcome of a single question: Will synthetic biology achieve radical success or not? In this column I’ll describe an early warning sign to watch for that will give us a clue about which way this important new field is headed.

Synthetic biology is the current term for the outer reaches of ambition in biotechnology. More often than not, the notion includes making artificial biology more like digital computation. It could hardly be otherwise, for computers are central to most of the prior art we have for building highly complicated structures from scratch. Computers also symbolize the ultimate in freedom through technology. You can hypothetically program a computer to do virtually anything with its input and output devices. If we could only find the right computer program to operate robotic medical devices, for instance, we could create a robot surgeon to cure any disease. If we could do the same with DNA and the other chemicals of life, we could create a huge variety of novel creatures or transform ourselves into astonishing new forms.

But if we entertain the idea that biotechnology is going to become more like computation, we aren’t being very specific, because there is more than one kind of computation. In particular, it might be more revealing to ask if synthetic biology is more likely to turn out like digital hardware or software. That’s an excellent candidate to be the most important question of the century.

From a mathematician’s point of view, hardware and software are practically interchangeable. You can almost always emulate a chip in software or implement a program as a chip. In practice, though, the two things could hardly be more different. Chips get faster and cheaper at a predictable, accelerating rate that is so reliable it is known as a law—the famous Moore’s law. Software typically gets worse over time.

It’s true that faster computers enable new software algorithms that weren’t possible before, like ones for machine vision (see Jaron’s World: Computer Evolution), but old programs don’t necessarily get better as hardware improves. In fact, they often lose efficiency at such a breathtaking rate that they effectively cancel out Moore’s law when they are adapted to run on new, faster machines. Try opening the same word-processor document on an old computer and a new one: The performance is often comparable, even though the hardware has improved a thousandfold. How can this be? Software is so difficult to work with that in practice it almost never achieves its theoretical potential.

If synthetic biology turns out to improve in the accelerating way that computer hardware does, we will be in for quite a ride. It’s hard to predict how weird things could get, so one is tempted to max out deliriously as a futurist. Imagine an artfully designed fungus that looks like a hat; when you put it on, it digests your head and turns it into a still-conscious, rubbery Super Ball an inch across, suitable for easy launch into space. Once there, another fungus might then reconstitute your head and form a protective life-sustaining bubble around it. (This prediction may go too far, but the point is that it’s hard to say by what margin.)

If synthetic biology instead turns out to be more like software, it will still be amazing but in a more incremental, less predictable way. We will witness a succession of plateaus of achievement in areas like medicine and bioenergy. After a decade or two, we might have engineered bacteria that make fuel out of old garbage dumps, or maybe even a substantially artificial cell that acts like a doctor, swimming through the body and fixing our own aging human cells.

Then again, reality often violates our preconceived notions, and synthetic biology could turn out to have a character that doesn’t resemble hardware or software. Natural biology is certainly unlike either of those! It is flexible, as software ought to be from a naive point of view, but it is not as fragile as software. Synthetic biology may very well introduce a fourth kind of design complexity that has some of the qualities of all three precedents.

Lately I’ve had the good fortune to be able to investigate this possibility as a visitor at a remarkable lab in Berkeley called the Molecular Sciences Institute, or MolSci, headed by Roger Brent. He and his team are describing biological phenomena at a minute level of detail, which could pave the way for synthetic biologists to come. One of MolSci’s innovations is a “tadpole,” a human-designed molecule with a protein head and a DNA tail that can precisely count the number of rare molecules inside a cell. The kinds of data that can be gathered at labs like MolSci are exactly what’s needed if we are ever to understand what is going on inside a cell from a computational point of view.

The key to understanding complicated things like synthetic biology is being able to break them into simpler things. Let’s call this modularity. Going back to the difference between hardware and software: The problem a chip designer has to solve is modularized completely within a tight conceptual box. The logic design of a chip is perfectly specified, and the parameters of the physical environment in which it will operate, such as the temperature, can be carefully constrained.

Software, in contrast, makes contact with the wild world outside the limits of comfortable abstractions. Even when you think you’ve considered every condition that a piece of software will encounter, the rebellious nature of reality (including the foibles of human users) will come up with something to violate your assumptions. E-mail programs, for instance, were originally written without foreseeing that some people would one day write viruses to exploit them.

If you completely understand your problem, as chip designers do, you’re not only halfway to solving it, you can also draw on lessons you’ve learned in the past. Your knowledge compounds, and I think that’s part of why chip engineering gets better and better. If you are instead facing the wilds of nature, you have to adapt constantly, and old knowledge is not necessarily relevant to new challenges.

There’s a subtle philosophical point that needs to be made about modularity. If you look at a complicated thing, like a big computer program, there might be more than one way of interpreting how it can be broken into modules. In the case of biology, it’s most likely there are layers of biological modularity staring us in the face that we haven’t noticed yet. I suspect this because biological systems are amazingly resilient.

If you make an alphabetized list of all the possible large computer programs (up to a given size) without regard to how they might be broken into modules, the distribution of the ones that crash will be random. Not only that, you can never even be sure that you’ve found all the ones that might eventually crash. In general, you can’t learn about large-scale computer code through evolutionary experiments (making small random changes and keeping what works) because the results will be random. Dismal indeed! If biology were equally dismal, the results of evolutionary experiments would also be random, and evolution would be impossible.

But there are special cases in which it’s possible to evade the curse. If the code is encapsulated into modules that can work together in a multitude of possible combinations, then a lot of similar large programs might be expected to operate in similar ways because they are just different combinations of those modules. That means there might be areas in that long alphabetized list where a bunch of “good” large programs are clumped together. Conversely, if you find a bunch of “good” programs in close proximity to one another, you have evidence that there’s an underlying modularity helping you avoid the curse of randomness.
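The clumping argument can be made concrete with a toy experiment. The sketch below is my own illustration, not anything from the column: it compares two ways of generating candidate "programs"; random character soup versus random combinations of a handful of known-good statement modules, using only Python's own syntax check (whether the text compiles) as a crude stand-in for "doesn't crash." The names `compiles`, `random_program`, `modular_program`, and `MODULES` are all hypothetical constructs of this sketch.

```python
import random
import string

random.seed(0)  # deterministic for the sake of illustration

def compiles(src):
    """Return True if the source text is at least syntactically valid Python."""
    try:
        compile(src, "<candidate>", "exec")
        return True
    except SyntaxError:
        return False

def random_program(length=40):
    """Unstructured candidate: a string of random printable characters."""
    return "".join(random.choice(string.printable) for _ in range(length))

# A tiny library of known-good statement "modules".
MODULES = ["x = 1\n", "x = x + 1\n", "y = x * 2\n", "z = [x, y]\n"]

def modular_program(n_modules=8):
    """Modular candidate: a random combination of the modules above."""
    return "".join(random.choice(MODULES) for _ in range(n_modules))

trials = 200
soup_ok = sum(compiles(random_program()) for _ in range(trials))
modular_ok = sum(compiles(modular_program()) for _ in range(trials))
print(f"random soup compiled: {soup_ok}/{trials}")
print(f"modular combos compiled: {modular_ok}/{trials}")
```

Essentially none of the random-soup candidates survive the check, while every combination of modules does: the "good" programs clump exactly where the modular construction put them, which is the signature the paragraph above describes.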

Natural biology doesn’t have just one scheme for modularity; we already know about a multitude of them. If you make random changes to a gene, you’ll still be able to get a protein out of the result about a third of the time (though for any specific gene the ratio might be much higher or lower). Brent thinks of that as a perilous rate of failure, but biologists are inured to luxury. If computer science could generate a system in which a third of the guesses yielded programs worth testing, we could probably keep up with Moore’s law! No such luck.
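The kind of counting behind claims like that one can be sketched. The following is my own illustrative tally, not a calculation from the column, and it covers only a single failure mode: it uses the standard genetic code to classify every possible single-base substitution in a sense codon as synonymous (same amino acid), missense (different amino acid), or nonsense (a premature stop). A mutated gene can fail in many other ways (misfolding, frameshifts, regulation), so this won't reproduce the one-third figure; it only shows how forgiving the code's modular codon structure is at this one level.

```python
# Standard genetic code, DNA alphabet, in the conventional compact encoding;
# '*' marks a stop codon.
BASES = "TCAG"
AMINO = "FFLLSSSSYY**CC*WLLLLPPPPHHQQRRRRIIIMTTTTNNKKSSRRVVVVAAAADDEEGGGG"
CODE = {b1 + b2 + b3: AMINO[16 * i + 4 * j + k]
        for i, b1 in enumerate(BASES)
        for j, b2 in enumerate(BASES)
        for k, b3 in enumerate(BASES)}

# Classify every possible single-base substitution in every sense codon.
counts = {"synonymous": 0, "missense": 0, "nonsense": 0}
for codon, aa in CODE.items():
    if aa == "*":
        continue  # skip the three stop codons themselves
    for pos in range(3):
        for new_base in BASES:
            if new_base == codon[pos]:
                continue
            mutant = codon[:pos] + new_base + codon[pos + 1:]
            new_aa = CODE[mutant]
            if new_aa == "*":
                counts["nonsense"] += 1
            elif new_aa == aa:
                counts["synonymous"] += 1
            else:
                counts["missense"] += 1

total = sum(counts.values())  # 61 sense codons x 9 substitutions each = 549
for kind, n in counts.items():
    print(f"{kind}: {n}/{total} ({100 * n / total:.0f}%)")
```

Only a small minority of substitutions are nonsense mutations, and a sizable fraction are outright synonymous; the codon table is itself a modularity scheme that clumps "good" outcomes together, in miniature, the way the previous paragraphs describe.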

When computer scientists look for ways to make software suck less, we do it by trying out new kinds of modularity. But the sad truth is that computer science still hasn’t found a form of modularity that helps us clump “good” programs so that we can efficiently use experimental method (the way regular scientists do) to explore the meanings of small program tweaks. Instead, we have to slog through all the random results to make progress.

So, finally, the promised early warning sign to watch for: If biologists start reporting the discoveries of new levels of modularity—and in particular, if synthetic biologists can modify those encapsulation schemes—then watch out. The fungus hat will start to sound just a touch less crazy. If, on the other hand, what you hear about is experiments in which previously known modules like genes are swapped around, then expect a more dismal, softwarelike biotechnology. It will get better, but on software’s grudging schedule instead of hardware’s soaring trajectory.