This story originally ran November 30, 2009.

Sometimes, even as a person pisses you off, they make a point that you can't ignore. In a recent forum discussion that I was involved in, scientists were accused of making pronouncements from on high. The argument was that scientists jump to a conclusion that seems desirable to them, and then treat it as an infallible truth.

Of course, my initial reaction was to pronounce that I, as a practicing scientist, never make pronouncements. But, looking at my articles from the perspective of someone who really knows absolutely nothing about science—as a practice or as a body of knowledge—I can see how one might find little more than a list of assertions. The truth is more complicated, of course, but it's a truth that science writers find challenging to convey. Science is impossibly broad, and the leading edge sits, precariously balanced, on a huge, solid, and, above all, old body of knowledge. To illustrate this problem, I am going to tell you the story of how the speed of light came to be the ultimate speed limit for the entire universe.

What I want you to remember from this story is that any new fact or change in our understanding sits upon generations of accumulated knowledge. Most of that knowledge is now trusted as "mostly correct," though some of it still lies in the "probably not too badly wrong" category. Sitting beneath that is a body of work stretching back some 6,000 years, some of which is still highly relevant.

My overall point is that, even if I were to extend each of my peer-reviewed articles by some 3,000 words—I already get complaints about the length of some of my articles—I still would not have covered the science of an entire subject. By choosing a starting point for the knowledge described in an article, I really am pronouncing from on high that everything before that point is established, trusted knowledge, while everything after it will be explained to some extent.

So, how do we measure stuff anyway?

My arbitrary beginning for this story—and make no mistake, it is a story that leaves out any number of complications—is Galileo. Apart from being a telescope builder extraordinaire, Galileo also had an important insight into the process of measurement. He saw that if he was on a moving boat and fired a cannon forward, he could measure the speed of the cannonball and come up with a number. But the poor guy on the receiving end of the cannonball—giving up his life in the name of science—would, when making the same measurement, come up with a different answer.

Needless to say, a violent disagreement might ensue (provided the target survived the cannonball) over whose measurement was correct. Galileo saw that the difference between the two measurements was the speed of the boat: the person receiving the cannonball sees it moving a bit faster than Galileo does, because, from the target's point of view, the cannon that fired the ball was itself moving. Once this extra speed is taken into account, the two measurements agree, and Galileo could return to upsetting other people.

The key point that Galileo made clear was that measurements are always relative to some benchmark. We measure the speed of a car relative to the ground, and we measure the speed of stars relative to each other (including the Sun). This principle underlies a lot of modern physics, and it's so fundamental that we don't even give it a name when we teach it anymore.
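To put Galileo's bookkeeping into modern notation (the symbols here are my own shorthand, not anything Galileo wrote down): if the boat moves at speed $v_{\text{boat}}$ relative to the shore, and the cannonball leaves the cannon at speed $v_{\text{ball}}$ as measured on the boat, then the observer on shore measures

$$v_{\text{shore}} = v_{\text{ball}} + v_{\text{boat}}$$

Subtract the boat's speed and the two observers agree again. It is exactly this simple addition rule that will eventually turn out to fail for light.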

But it turns out that this principle is, in fact, sometimes wrong. Showing how we found out that it is wrong, and how we know, is what this story is really all about.

Another arbitrary beginning: the story of light

Galileo was not the only person into optics and telescopes. Newton and Huygens both made huge contributions to our understanding of light—Newton demonstrated that white light contains all the colors of the rainbow, while Huygens created a model that explained the patterns light forms after passing by a sharp edge.

But these two giants of science disagreed about what light actually was. Newton thought that light was a particle, while Huygens thought that light was a wave. Critically, all observed phenomena could be explained by both models, so each had its adherents and critics. Note that this dispute took place a bit before 1700, yet the issue remained unresolved until the middle of the 19th century.

That is not to say that no one cared or did anything about it. On the contrary, evidence for the wave theory of light accumulated, and the particle theory had to be modified to accommodate the new findings; as it became more complicated, the number of people who supported it shrank.

The straw that broke the camel's back when it came to support for light as a particle was Young's experiment that demonstrated that light, like water waves and sound waves, could be made to interfere—one of the reasons this took so long is that Young needed a relatively modern light source to make his observations. In the meantime, an important question remained unanswered: if light was a wave, what was doing the waving?
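To give a flavor of what interference means quantitatively (in modern notation, not Young's own): for light of wavelength $\lambda$ passing through two slits separated by a distance $d$, bright fringes appear at angles $\theta$ that satisfy

$$d \sin\theta = m\lambda, \qquad m = 0, 1, 2, \ldots$$

Alternating bright and dark bands like these fall naturally out of a wave picture, while a simple stream of particles offers no obvious way to produce them.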

Yet another arbitrary beginning: the story of electricity

Off in a disregarded corner, people with names like Faraday and Gauss had begun to get interested in why, after you had rubbed a cat with a bit of amber, bits of paper would stick to both the cat and the amber, but not to each other. Equally interesting was why compass needles pointed north. Although these phenomena had been known for a long, long time, no one had really investigated them—or, if they had, their findings had been lost. In any case, scientists got interested in static electricity and magnetism.

They discovered that some materials conducted electricity, that a moving magnet could cause an electric current to flow, and that currents could be used to create magnets. The two were clearly linked, but no one really knew how. Empirical laws were derived that allowed electricity and magnetism to be exploited—dynamos, electric motors, and alternators were all in the process of revolutionizing life, though their effects would take a while to percolate through society. But, despite the applications, the underlying principles remained obscure—we had laws, but no theory.

There were two problems with the laws developed for electricity and magnetism. First, they didn't shed any light on what electricity or magnetism were or why they were linked—the concept of charge had been introduced, but no one knew what a charge might be. Second, they weren't predictive: whenever anyone found a new magnetic or electrical phenomenon, a new law was required.

That's where things stood until the second half of the 19th century, when Maxwell decided to use some new-fangled math to describe electricity and magnetism. He found a common set of equations that described both phenomena and how they were linked to each other.
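For the curious, here is what that common set of equations looks like in the compact vector form we use today (this notation is largely due to Heaviside; Maxwell's own presentation ran to many more equations):

$$\nabla \cdot \mathbf{E} = \frac{\rho}{\varepsilon_0}, \qquad \nabla \cdot \mathbf{B} = 0$$

$$\nabla \times \mathbf{E} = -\frac{\partial \mathbf{B}}{\partial t}, \qquad \nabla \times \mathbf{B} = \mu_0 \mathbf{J} + \mu_0 \varepsilon_0 \frac{\partial \mathbf{E}}{\partial t}$$

Here $\mathbf{E}$ and $\mathbf{B}$ are the electric and magnetic fields, $\rho$ and $\mathbf{J}$ are the charge and current densities, and $\varepsilon_0$ and $\mu_0$ are constants measured in the laboratory. The details don't matter for this story; the point is that electricity and magnetism show up together in one small set of equations, tied to each other by those two constants.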

Maxwell's work didn't win instant acceptance. For one thing, it did nothing about the first problem: Maxwell's equations offer no insight into the origin of electricity or magnetism beyond the concept of charge. Meanwhile, there were other theories floating around that were purely mechanistic—they solved the first problem, but failed to be predictive (or, at least, accurately predictive). In addition, Maxwell's work introduced a series of new problems.