Suppose you were an electrician. You’ve trained, apprenticed, passed all your certifications, and you’ve wired houses and commercial buildings for years, so you feel pretty confident in your trade. One day you finish wiring a light switch, dust off your hands, and flip the switch to test it. But instead of the light turning right on, you find that it flickers dimly. You’re pretty sure you wired things correctly, so what do you do?

You’d probably go through the common causes first. Replace the bulb. Test the switch to see if it’s faulty. Double-check the wiring, and so on. What you probably wouldn’t do is presume the entire national electric grid is fundamentally flawed. One flickering bulb is not enough to justify rebuilding everything from scratch. This isn’t being closed-minded; it’s a judgment based on a solid understanding of electrical circuitry. The electric grid has been tested at multiple levels, and our understanding of how electricity works within that system is supported by a confluence of observational evidence. An experienced electrician knows this quite well. Perhaps the electrical grid is faulty and needs to be rebuilt, but an experienced electrician will explore easy solutions before moving to more difficult ones.

A similar thing occurs in science. Occasionally a “flickering light” of data shows up that seems strange. It doesn’t fit the expectations of our theories. When this happens, the first thing scientists do is check the wiring. How was the data collected? Is the interpretation of the data appropriate? Has the result been tested in other ways? They do this because the alternative is to rebuild fundamental scientific theories from scratch. Of course, there are those who say scientists are being closed-minded, that they are ignoring clear evidence in order to hold on to their flawed theories.

As an example, consider the curious case of the fluctuating speed of light. It is an example made famous (or infamous) in a TEDx talk by Rupert Sheldrake, of Morphic Resonance fame. If you look at the recommended values of the speed of light during the 20th century, you’ll notice a sharp drop in its value from 1930 to 1945, a drop much larger than the stated uncertainties. This, Sheldrake claims, is evidence that the speed of light is not constant.

While it looks like a pretty clear case of fluctuating speeds, it is much like the flickering light example. Something is clearly odd, but what is the cause? Sheldrake’s solution is to attack the entire scientific infrastructure. Physics assumes the speed of light is a constant, he argues, when the official values clearly show that it is not. Physicists are ignoring the evidence while clinging to their dogma.

Maybe Sheldrake is right, and a fundamental tenet of physics is wrong, but before we jump to that conclusion let’s look at some less radical solutions. The first thing to note is that these are recommended values rather than actual measurements. The recommended values are derived from various experimental results rather than being direct observations. What we really need to do is look at the actual measurements. When we do, we find that measurements of the speed of light are all over the map until about 1960. By that point we had developed laser interferometry, which is much, much more accurate than earlier methods. You can see this in the graph by the lack of obvious error bars from that point on. The measurements stop at 1983, because by then speed-of-light measurements were so precise and so consistent that we defined the meter in terms of light speed and time instead of the other way around.
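To see why that 1983 redefinition makes further “measurements” of the speed of light moot, here is a minimal sketch in Python (purely illustrative, not from the article): once c is fixed by definition, the meter is whatever distance light covers in the corresponding fraction of a second, so measuring c in SI units just returns the defined value.

```python
# Since 1983, the SI fixes the speed of light at an exact value,
# and the meter is defined from it rather than the other way around.
C = 299_792_458  # meters per second, exact by definition

# One meter is the distance light travels in 1/299,792,458 of a second.
travel_time = 1 / C          # seconds light takes to cross one meter
one_meter = C * travel_time  # distance covered in that time: 1 meter, by construction
print(one_meter)
```

Any experiment done in SI units after 1983 can only refine the realization of the meter, not the value of c itself.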

Before 1960 the values are all over the map. You can see the small dip in the 1930 to 1945 range, but you can also see larger variations as you go further back in time. It’s only natural to see larger uncertainties with older, less precise instruments, but why the big fluctuations in values? It turns out that these measurements all depend upon other physical measurements as well. As measurements of various physical constants led to new accepted values, the measured speed of light shifted accordingly.

Given all that, the evidence for fluctuating light speeds doesn’t seem very strong. But maybe it just so happened that the speed of light fluctuated until about 1960, when laser interferometry was developed. Aren’t scientists being closed-minded if they aren’t open to that possibility? That’s where the confluence of evidence comes in. From the direct light measurements alone we can’t be sure that the speed of light has always been constant. But there are lots of other experiments that support that idea. Maxwell’s equations, the theory that describes light, show that light speed depends on two other physical constants (the permeability and permittivity of free space) which also appear unchanged over time. Special relativity is derived from a constant light speed, and has been experimentally validated to the limits of observation. General relativity, based upon special relativity, has also been experimentally confirmed. Then there are astronomical tests that have looked for changes in the speed of light relative to other physical constants, and we find that billions of years ago it had the same value it has today.
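That Maxwell relation can be checked numerically. Here is a minimal sketch in Python; the CODATA values plugged in for the vacuum permeability and permittivity are assumptions for illustration, not figures from the article:

```python
import math

# CODATA 2018 values (measured, approximate) for the vacuum constants
mu0 = 1.25663706212e-6   # vacuum permeability, in N/A^2
eps0 = 8.8541878128e-12  # vacuum permittivity, in F/m

# Maxwell's equations give the speed of light as c = 1 / sqrt(mu0 * eps0)
c = 1 / math.sqrt(mu0 * eps0)
print(c)  # approximately 2.998e8 m/s
```

So an independent check on the constancy of c is whether these two electromagnetic constants drift over time, and measurements show no such drift.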

Now, perhaps the speed of light was constant for billions of years, then slowed down between 1930 and 1945 before returning to its original value, but that doesn’t seem very likely. There isn’t much evidence to support the idea, and there is a confluence of evidence opposing it.

And that’s the key. When we have a wide range of experimental results that point to a particular conclusion, it isn’t reasonable to overthrow that conclusion based upon a single strange result without checking for easier explanations. That’s why, when a team measures faster-than-light neutrinos, or notices fluctuations in the speed of light, or gets any of a host of other strange results, we look to change the light bulb before we change the universe.