No-shows are a commonplace, though often hidden, part of the process of scientific discovery. Theories predict. That’s their job. Ever since Isaac Newton and his co‑conspirators in the 17th century consummated their revolutionary programme of subjecting nature to mathematics, this has come to mean that particular solutions to systems of equations can be interpreted as physical phenomena. If a given mathematical representation hasn’t yet matched up with some phenomenon in the real world, it becomes a prediction waiting for its verification. But what happens when the verification never arrives – when the prediction fails to find its match in nature? When do you finally take ‘no’ for an answer?

These are constant issues in science. Take one recent example: for half a century, there was the mystery of something called the Higgs boson. The Higgs is the quantum, or the smallest possible change in energy, in what is known as the Higgs field. The Higgs concept was first proposed in the mid-1960s as part of what is now called the Standard Model of particle physics, a theory that describes the properties of the elementary particles out of which reality is built. Within the Standard Model, the Higgs boson accounts for how certain of those particles acquire the mass they are observed to possess.

Over the next several decades, the Standard Model proved phenomenally successful, its predictions matching experimental results to as many decimal places as any measurement could achieve. But not the Higgs, which stubbornly refused to appear.

It finally emerged in observations made in 2012 and 2013, following the construction near Geneva of the Large Hadron Collider (LHC), an instrument powerful enough to peer into domains invisible to earlier devices. Up until the LHC produced its data, it remained an utterly open question whether the Higgs would actually show itself at the energies the machine could produce.

What if the LHC hadn’t yielded its Higgs? The failure to find the result that the theory had anticipated, in a context that demanded some solution, would have raised deep and (for theoretical physicists) very exciting questions. It would have thrown ideas, and careers, into turmoil. And it would have provided an opening for sweeping new theories attempting to make sense of the no-show.

The Higgs is no isolated example. Take the mysteries that remain in the account of what happened as the Universe was born. So much has been discovered about that seemingly inaccessible time and process because the Big Bang – the explosive appearance of space and time, matter and energy, essentially out of nothing – left a snapshot of itself in a flash of light called the Cosmic Microwave Background (CMB). Discovered in 1964 (the same year that the Higgs idea first emerged) as a seemingly uniform hiss of microwaves, the CMB offered the chance to do something new: to measure detailed properties of the very early Universe by extrapolating backward from that microwave glow to the Big Bang process itself.

In the decades since, the interplay of cosmological theory and ever more refined observations has yielded a series of insights about that nascent Universe, along with predictions about what kinds of features should be found in the CMB. For example: just by looking around us, it becomes obvious that the present-day Universe is lumpy, with big piles of matter collected into stars and galaxies and clusters of galaxies, and giant, mostly empty spaces in between. What we see now implies that the CMB should clump too, that there should be places in the microwave picture of the Universe that shine just a little brighter than other places: hot spots that map the slightly more matter-rich neighbourhoods that could ultimately grow into galaxy clusters.

Early surveys of the microwave sky, however, showed a completely uniform, blank glow. If that were all there was, such a featureless early Universe would seem to be incompatible with what we know is out there now; this in turn would imply that what cosmologists thought they knew about cosmic evolution was wrong.

That’s how matters stood for almost three decades until 1989, when a specialised telescope called the Cosmic Background Explorer was launched into Earth orbit. By 1993, that instrument had captured enough photons to reveal exactly that: a broad pattern of light and dark – the first, out-of-focus glimpse of the original ‘seeds’ of galaxy clusters. There was a prediction based on a clearly observed fact in the contemporary Universe… and through enormous effort, it was shown to be true.

Since then, the CMB has been studied at greater and greater resolution to reveal an increasingly detailed picture of the events that turned the infant cosmos into one recognisably like our own. At the same time, theorists have made a series of predictions to be tested when and if observations of the CMB could be improved further still. One idea first proposed in the 1980s suggests that during its first instants of existence, our Universe underwent an episode called inflation, during which space itself expanded at a ferocious rate – ‘the bang of the Big Bang’, as one of its inventors, the theoretical physicist Alan Guth at the Massachusetts Institute of Technology, describes it. For more than 30 years, observations have yielded results that are consistent with inflation, but despite that growing collection of evidence, open questions remained.

The situation seemed set to change in 2014, as researchers closed in on a key expectation of the theory: that inflation’s wild ride would create what are called gravity waves – ripples in the fabric of spacetime – that would show themselves in particular (and very subtle) features that might be detectable in the CMB. There are several versions of the idea, each of which predicts somewhat different signals. In some of them, those primordial gravitational waves would leave a specific imprint on the CMB as a particular type of polarisation within the microwave background – thus revealing the first unequivocal connection between the vast, fast madness of the inflationary Universe and our own, more sedate cosmos. If such effects were found, they would form the final rung in the ladder of observations, the clinching evidence that we really do live in an inflated Universe.

That was the mission a research team set for itself with its instrument at the South Pole. The BICEP2 microwave telescope started gathering polarisation data in 2010. The team ran it for two years before beginning to study its data in earnest. It was a delicate, difficult analysis, and the stakes in the answer were so high that the researchers took every precaution they could think of to make sure they got it right. The public announcement came on 17 March 2014: a pattern known as ‘B-mode polarisation’, predicted by inflationary theory, had been observed in the CMB. The team detected the signal at a 5.9-sigma level of statistical significance – well beyond the conventional five-sigma threshold required to claim a discovery, which corresponds to odds of roughly 3.5 million to one that the signal is a mere fluke.
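For readers who want the statistics made concrete, the sigma-to-odds conversion can be sketched in a few lines of Python using only the standard library (the exact odds quoted in press accounts vary with rounding conventions):

```python
from math import erfc, sqrt

def sigma_to_odds(z):
    """Convert a z-sigma excess to 'one in N' odds that it is a
    statistical fluke (one-tailed Gaussian tail probability)."""
    p = 0.5 * erfc(z / sqrt(2))
    return 1.0 / p

# The conventional five-sigma discovery threshold:
print(f"5.0 sigma: about 1 in {sigma_to_odds(5.0):,.0f}")  # roughly 1 in 3.5 million
# BICEP2's reported significance is rarer still:
print(f"5.9 sigma: about 1 in {sigma_to_odds(5.9):,.0f}")
```

The point of the conversion is that each extra tenth of a sigma makes a chance fluctuation dramatically less likely, which is why the 5.9-sigma figure carried such weight in the announcement.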


It was a thrilling moment. The result made front pages around the world. It brought Andrei Linde, a physicist at Stanford and one of inflation’s inventors, to the verge of tears. For scientists and amateurs of science alike, it was a gift: something beautiful, strange and newly intelligible about existence on the largest scale. There was a distant resonance, an echo of what those first few must have felt in 1687, when the earliest copies of Newton’s Principia came into their hands: a kind of breathlessness, sheer wonder that human minds could penetrate such incredibly deep mysteries. One of the most persuasive readings of inflation is that we dwell not in a singular cosmos, but in just one of uncounted island universes, our little village within a vast multiverse. What a thought! No wonder that a veteran researcher was overwhelmed by the good news.

But observing at the ragged edge of technology is always a tricky business. The tiny fluctuations the BICEP2 team found within their data – the signal they claimed was the signature of inflation’s gravity waves – quickly drew informed scrutiny. Questions about their results became full-on doubts within a few weeks, as scientists from outside the team pressed them on the issue of foreground dust – ordinary debris common in galaxies such as our own Milky Way. By summer’s end, it had become clear that the filtering of light through such nearby dust might explain all of the effects visible in BICEP2 data. Multiverse or stellar schmutz?

Many measures in the Universe behave as if inflationary theory is correct, but the latest attempts to check the BICEP2 measurement confirmed that no clear answer could be drawn from it, given the confounding role of the galactic dust. What is known to date is that the BICEP2 results do not contain a reliable observation of inflation’s signature in the CMB. That doesn’t (yet) mean such traces don’t exist. Several attempts are already underway to probe the CMB with yet more precision. Those measurements will likely settle whether the predicted gravity waves really do reveal themselves in the microwave background; and even if the hoped-for polarisation effects are not found, there are versions of inflation theory that do not require a gravity-wave signature in the ancient glow of the Big Bang.

Still, even if some form of inflation remains a persuasive candidate to account for the properties we see in the Universe right now, it hasn’t closed the deal. After more than three decades, the evidence in favour of inflation is strong but largely circumstantial. Theorists firmly support it, but the cosmos could see things differently. Long gaps between prediction and observation always raise the question: what finally persuades science – scientists – to abandon a once-successful idea? The conventional response in science is: right away, or at least as soon as you’re confident of the evidence. But failure to validate a prediction is quite different from falsifying a prediction. Perhaps the failure was due simply to inadequate collection of data. There is plenty of room for stalling.

In a public talk delivered in 1963, the late physicist Richard Feynman said that science is simply ‘a special method of finding things out’. But what makes it special? The way its answers get confirmed or denied: ‘Observation is the judge’ – the only judge, as the catechism goes – ‘of whether something is so or not.’ There is a strange magic to the term ‘the scientific method’. At a minimum, it asserts a particular kind of authority: here is a systematic approach, a set of rules, that when followed will reliably advance our understanding of the material world. Such knowledge, though, is always provisional, a seeming weakness that is the real strength of science: every idea, every generalisation, every assumption is subject to question, to challenge, to refutation.

That’s how the scientific method is usually taught. Every high-school student confronts some version of Feynman’s description. The process of science rides down railroad tracks: you ‘Construct a Hypothesis’ to ‘Test with an Experiment’ (or an observation), and then you ‘Analyse Results’ and ‘Draw Conclusions’. If the results fail to support the initial hypothesis, then it’s back to step one.

Laid out like that, the scientific method can be seen as a kind of intellectual extruder. Set the dials with the right question, pour data into the funnel, and pluck knowledge from the other end. And, most important: when that outcome fails to match reality, then you go back to the beginning, work the dials into some new configuration, try again.

This isn’t just cartoon stuff either, a caricature told to children who might never dive more deeply into science than a Coke-Mentos volcano. Even for those who penetrate into more and more advanced ideas and approaches, the same message gets dressed up in more formal language. Here’s a typical ‘Introduction to the Scientific Method’ aimed at college students: ‘The scientific method requires that a hypothesis be ruled out or modified if its predictions are clearly and repeatedly incompatible with experimental tests’ – pretty much what science-fair contestants are told. But the explanation goes on to echo Feynman’s point: ‘No matter how elegant a theory is, its predictions must agree with experimental results if we are to believe that it is a valid description of nature. In physics, as in every experimental science, “experiment is supreme”.’

In other words, when a long-anticipated outcome fails to materialise, more than a single prediction lies in peril. If gravity waves don’t show up in ever more acute CMB measurements, then at some point the strand of inflation theory that requires them will be in trouble. Within the myth of the scientific method, there should have been no choice about the next move. ‘Experiment is supreme’… ‘Observation is the judge.’ We hold this truth to be self-evident: the hard test of nature trumps even the most beloved, battle-tested, long-standing idea. Does history behave like that? Do human beings?

No: real life and cherished fables routinely diverge. One of the starkest examples is the strange story of the planet Vulcan. In 1859, the French mathematician Urbain Le Verrier – the man who predicted the location of Neptune – calculated a property called the precession of the perihelion of Mercury’s orbit. It is just a measure of how the planet’s oval orbit shifts, with its point closest to the sun (perihelion) changing direction slightly from year to year. After accounting for the gravitational pull of all the known planets, Le Verrier was left with an error of 38 arcseconds per century. That is about 1/100th of a degree. Tiny, yes, but it wasn’t zero. To account for the discrepancy, Le Verrier hypothesised ‘a planet, or if one prefers a group of smaller planets circling in the vicinity of Mercury’s orbit’. The unseen object came to be known as Vulcan.
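The size of Le Verrier’s residue is easy to verify with a one-line conversion – a sanity check on the figures above, nothing more:

```python
# Le Verrier's unexplained residue in Mercury's perihelion advance:
residual_arcsec = 38.0

# One degree is 60 arcminutes of 60 arcseconds each, i.e. 3,600 arcseconds:
residual_degrees = residual_arcsec / 3600.0
print(f"{residual_degrees:.4f} degrees per century")  # ~0.0106, about 1/100th of a degree
```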


The total eclipse of the sun observed July 29, 1878, at Creston, Wyoming Territory. From The Trouvelot Astronomical Drawings 1881-1882. A group of astronomers hoped that the eclipse would make visible an intra-mercurial planet, provisionally named Vulcan.

Here was a concrete prediction, and a spectacular no-show. Vulcan refused to appear, decade after decade, even though its presence had been deduced from that icon of the scientific revolution, Newton’s theory of gravity. Meanwhile, Mercury continued to misbehave. The American astronomer Simon Newcomb was the most authoritative student of the solar system in the last years of the 19th century. In 1882, he redid Le Verrier’s calculation and showed that Mercury’s excess perihelion advance was even slightly larger than Le Verrier had originally determined. But the dramatic failure of an 1878 eclipse observation in Wyoming, intended to look for new planets close to the Sun, left astronomers with few choices. Vulcan, whether imagined as a single planet or a flock of asteroids, was no longer plausible as the source of Mercury’s anomaly. What to do?


After July 1878, almost all of the astronomical community abandoned the idea that a planet or planets of any appreciable size existed between the Sun and Mercury. But that broad consensus did not lead to any radical reassessment of Newtonian gravitation. Instead, a few researchers tried to salvage the core of the idea with ad‑hoc explanations for Mercury’s motion.

The historian of science N T Roseveare catalogued the struggle, dividing it into two main strands. Newcomb followed his recalculation of Mercury’s orbit with a review of the ‘matter’ alternatives – Vulcan-like explanations that depended on coming up with a source of mass that for some good reason remained undetected but could generate enough gravitational tug to produce the perihelion advance. He took Vulcan itself as clearly refuted, but he catalogued a number of more subtle suggestions: perhaps the Sun was sufficiently oblate – fat around its middle – that such an unequal distribution of matter could solve the problem. Alas, the record of solar observation persuaded Newcomb that our star is pretty nearly spherical (as it is). Other proposals – matter rings, like those around Saturn, or enough of the dust that was known to exist near the Sun – fell to a variety of other objections.

After more than a decade of thinking about the problem, Newcomb came to his uncomfortably necessary conclusion: within the framework of the inverse square law of gravity, there was no plausible trove of matter near the Sun that could account for the motions of Mercury. With that, if science as lived matched the stories scientists tell about it, Newtonian theory should have been for the chop. In the fairytale version of the search for knowledge, Newcomb’s verdict – that there was a persistent, unrepentant anomaly that current theory could not explain – would compel researchers to question its status as a valid description of nature.

In any myth there’s at least a hint of some deeper truth, and so, as matter-based ideas fell, Newton’s version of gravity did come under a bit of scrutiny. One astronomer suggested that Newton’s law might be only an approximation: the force of gravity could vary directly with the masses involved and inversely with the distance between them raised not exactly to the power of 2, but to 2 plus a tiny amount: .0000001574. That would bring Mercury’s motion into perfect agreement with the maths, but there were several obvious objections. For one, it was such a messy move: why would the exponent in the law of gravity ‘choose’ to be so close to a perfect integer, and yet refuse to settle on exactly two?

To be sure, nature sometimes just is, in ways that can seem both arbitrary and unlovely. Even now, there are several numbers in fundamental theories of the large and small that are set by observation alone. In some cases they are just as odd as – or weirder than – an inverse 2.0000001574 power law. Even so, simplicity, elegance and, above all, consistency have proved to be pretty good rough-and-ready measures of theoretical insight, even if they offer no guarantees. An inverse-not-quite-two law was ugly enough that very few researchers took it seriously. The idea finally went away in the 1890s, when it was shown to account for Mercury’s motion, but not for that of Earth’s moon.
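Why that exponent in particular? Classical mechanics supplies a check: for a near-circular orbit under a central force falling off as 1/rⁿ, the apsidal angle is π/√(3 − n), so a tiny excess over n = 2 makes the perihelion creep forward on every orbit. A rough sketch – Mercury’s 87.969-day period is a standard reference value, not a figure from the text:

```python
from math import pi, sqrt

# The proposed exponent: gravity falling off as 1/r^n, with n a shade above 2.
n = 2.0000001574

# For a near-circular orbit under a 1/r^n central force, the apsidal angle
# is pi / sqrt(3 - n); the perihelion therefore advances per orbit by:
advance_per_orbit = 2 * pi * (1 / sqrt(3 - n) - 1)   # radians

orbits_per_century = 36525.0 / 87.969                # Mercury's period in days
arcsec_per_radian = 180 * 3600 / pi

advance = advance_per_orbit * orbits_per_century * arcsec_per_radian
print(f"{advance:.1f} arcseconds per century")       # lands near Mercury's anomaly
```

Run the numbers and the tweaked exponent reproduces an advance of a few tens of arcseconds per century – exactly why the value was chosen, and exactly why its arbitrariness rankled.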

A few more attempts to tweak Newton followed. Some added another term to the classic inverse square law to better fit theory to nature; others explored the idea that a body’s speed might change its gravitational attraction. None gained significant support from either physicists or astronomers, and all eventually collapsed under a variety of fatal flaws.

By the turn of the 20th century, most researchers had given up. There was still no explanation for Mercury’s behaviour, but no one seemed to care. There was so much new to think about. X-rays and radioactivity had opened up the empire of the atom. Planck’s desperate creation of the quantum theory was about to transform the study of both energy and the fundamental nature of matter. The decades-in-the-making confirmation that the speed of light (in a vacuum) was truly constant was beginning to hint that extremes of speed might produce some very interesting effects. At the Paris Exhibition of 1900, the American historian Henry Adams marvelled at the practical applications of the new science of electricity. In 1903, the Wright brothers’ experiments on a beach in North Carolina would usher in an age in which, among much else, long-pondered and very difficult questions in physics – such as the motion of air over a surface – took on literally life-and-death significance.

Through it all, good old Newtonian theory worked a treat, pretty much all the time. Its laws of motion described the experience of the real world close to perfectly and, if Mercury acted up a little (so little, those few arc-seconds per century!), comets and Jupiter and falling apples and just about everything else that could be observed proceeded on their way in calm agreement with the rules laid down in the Principia. Amid all this – the tumult of the new and the excellence of the old – Vulcan itself dwindled into a mostly forgotten embarrassment, the physical sciences’ crazy uncle in the attic. There it sat (or rather, didn’t), hooting in the rafters, and yet no one seemed to hear.


That willful disregard eventually changed, but only after a young man in Switzerland named Albert Einstein started to think about something else entirely, nothing to do with any confrontation between a planet and an idea. He was contemplating the relationship between space, time, acceleration and gravity. He ended up creating the general theory of relativity, and in the process finally explained the anomalous motion of Mercury’s orbit: it was due not to another planet or asteroid, but to the previously unknown effects of the warping of space around the Sun. The improved calculation of Mercury’s orbit was, in fact, a crucial first test of Einstein’s new theory.
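General relativity’s answer can itself be checked on the back of an envelope. The leading-order relativistic perihelion advance per orbit is 6πGM/(a(1 − e²)c²); plugging in standard reference values for the Sun and Mercury (the constants below come from reference tables, not from the text) recovers the observed anomaly:

```python
from math import pi

# Standard reference values (IAU / NASA fact-sheet numbers):
GM_sun = 1.32712440018e20   # Sun's gravitational parameter, m^3/s^2
c      = 2.99792458e8       # speed of light, m/s
a      = 5.7909e10          # Mercury's semi-major axis, m
e      = 0.2056             # Mercury's orbital eccentricity

# Einstein's perihelion advance per orbit, in radians:
advance_per_orbit = 6 * pi * GM_sun / (a * (1 - e**2) * c**2)

orbits_per_century = 36525.0 / 87.969      # Mercury's period: 87.969 days
arcsec = advance_per_orbit * orbits_per_century * (180 * 3600 / pi)
print(f"{arcsec:.1f} arcseconds per century")   # ~43: Mercury's anomalous advance
```

The calculation yields roughly 43 arcseconds per century – Newcomb’s refined figure for the anomaly – with no extra planet, no dust ring, and no tinkering with the exponent.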

What moral to draw, then, from the non-existence of Vulcan and the subsequent triumph of general relativity? At the least this: science is unique among human ways of knowing because it is self-correcting. Every claim is provisional, which is to say each is incomplete in some small or, occasionally, truly consequential way. But in the midst of the fray, it is impossible to be sure what any gap between knowledge and nature might mean. We know now that Vulcan could never have existed; Einstein has shown us so. But no route to such certainty existed for Le Verrier, nor for any of his successors over the next half-century. They lacked not facts, but a framework, some alternative way of seeing, through which Vulcan’s absence could be understood.

Such insights do not come on command. And until they do, the only way any of us can interpret what we find is through what we already know to be true. For more than two centuries, humankind lived in the cosmos that Newton discovered. In the end, that cosmos was demolished not by a failure of prediction, but by a more complete theory. Vulcan’s non-existence did not overthrow Newton’s theory. Rather, it became the marker on which the theory’s passing is written.

Adapted from the book THE HUNT FOR VULCAN by Thomas Levenson. Copyright © 2015 by Thomas Levenson. Reprinted by arrangement with Random House, a division of Penguin Random House LLC.