The comments appear like clockwork every time there's a discussion of the Universe's dark side, for both dark matter and dark energy. At least some readers seem positively incensed by the idea that scientists can happily accept the existence of a particle (or particles) that has never been observed, along with a mysterious repulsive force. "They're just there to make the equations work!" goes a typical complaint.

It's a somewhat odd complaint. Physics has a long history of particles that were predicted based on the math and not detected for years, sometimes decades. And it's not just physics. Other areas of science have produced evidence that something must be present without hinting at what that something is. These situations, where scientists insert a placeholder for something they don't yet understand, have sometimes led them down the wrong path; phlogiston and aether spring to mind.

But these erroneous placeholders carry the seeds of their own destruction, since they make predictions that the natural world can't fulfill. And, possibly more often, the placeholders turn out to be right, and an understanding of the phenomena behind them revolutionizes our knowledge of the natural world. In this feature, we'll take a look at some of the most successful placeholders in the history of science, and then consider how even a placeholder that has gone wrong can help advance a field anyway.

Darwin was wrong (but not about evolution)

Charles Darwin is rightly celebrated for his Origin of Species, which provided a comprehensive account of his theory of evolution by natural selection. Darwin produced a compelling argument that variations that can improve survival and reproduction will, over time, become dominant features in a population, provided that these variations are heritable. There was just a small problem here: at this point, nobody had proposed any mechanism by which traits could actually be inherited. Darwin's theory, in short, relied on a giant placeholder.

It wasn't an especially controversial one. Most people are well aware that some traits, like eye and hair color, often run in families. But the pattern in which they appear can be bewildering in the absence of a theory of heredity, with traits vanishing one generation and reappearing the next.

Darwin, to his credit, recognized that his theory required an explanation of how favorable variations might be inherited and spread through a population, so he attempted to provide one. But his attempt, for anyone with an understanding of Mendelian genetics, is positively painful to read. The Origin of Species contains a chapter devoted to inheritance that proposes a system by which traits blend in subsequent generations, along with a proposed biological process by which the traits get passed on during fertilization.

Pretty much all of it is wrong. Even Darwin seems to have recognized it had problems, as he considered a few situations where blending doesn't seem to occur at all.

So, when the Origin was published, the issue of inheritance was filled in with a placeholder. There was ample evidence that traits were, somehow, passed on to subsequent generations, and that was enough for Darwin's theory to see very rapid acceptance among the scientists of his time. But, supported by nothing more than a bit of phenomenology, inheritance was the dark energy of its day, and most biologists would have to wait decades before any light was shed on it.

Most, but not all. Darwin's contemporary, Gregor Mendel, was busy sorting inheritance out. But his theory of genetics came with a giant placeholder of its own: the identity of the factors that eventually gave the theory its name.

Genetics without the genes

Anybody who has spent time with a biology textbook is well aware of Gregor Mendel's story. With large-scale breeding experiments that used pea plants, Mendel tracked traits across generations. He recognized that the appearance of these traits could be explained by discrete factors that assorted independently and showed dominant and recessive behavior. What he didn't identify, however, was anything at the biological level that could possibly be one of these factors (which picked up the name "genes" long after Mendel). Based on the paper in which he described his results, it's not even clear that Mendel cared.

And, since Mendel's work was promptly misplaced for a few decades, that's where things stayed. When his ideas were rediscovered, it still took a couple of years for someone to suggest that genes might reside on the chromosomes that had been observed in cells, and about a decade for someone else to confirm that idea. But even a known structure didn't make genes any more concrete than they had been when Mendel proposed them. It took roughly a century, until the discovery of the structure of DNA, before we learned much about the physical nature of genes. For all that time, a gene remained a placeholder.

Although we're in somewhat better shape today, a gene remains a pretty nebulous concept. There's no single, clean definition that accommodates all the complexity of what happens in DNA: introns, exons, overlapping and nested transcripts, enhancers, regulatory RNAs, and so forth. So even now, with all the spectacular progress genetics has made, we have a hard time pinning down exactly what constitutes a gene.

There's an odd side note to all of this. Steve Fuller, a sociologist who ostensibly studies science, is most noted for testifying in favor of intelligent design at the Dover trial. His argument was that the supernatural has a long history in science, but he had to use a pretty broad definition of supernatural: "Supernatural also applies to the level that is below observation." Since genes took a long time to be observed, they qualified. "Of course, a lot of the things that were called supernatural include things like, well, Mendel's genes or atoms," Fuller testified.

That argument actually highlights the importance of placeholders in science. Rather than enabling a refuge in the supernatural, they stand in for a natural phenomenon that hasn't yet been described.