I’ve used the term “error cascade” on this blog several times, notably in referring to AGW hysteria. A commenter has asked me to explain it, and I think that’s a good idea as (a) the web sources on the concept are a bit confusing, and (b) I’ll probably use the term again — error cascades are all too common where science meets public policy.

In medical jargon, an “error cascade” is something very specific: a series of escalating errors in diagnosis or treatment, each one amplifying the effect of the previous one. This is a well-established term in the medical literature; this abstract is quite revealing about the context of use.

There’s a slightly different term, information cascade, which is used to describe the propagation of beliefs and attitudes through crowd psychology. Information cascades occur because humans are social animals and tend to follow the behavior of those around them. When the social incentives are right, humans will substitute the judgment of others for their own.

A useful, related concept is preference falsification, the act of misrepresenting one’s desires or beliefs under perceived social pressures. Preference falsification amplifies information cascades — humans don’t just substitute the judgment of others for their own, they talk themselves into beliefs most around them don’t actually hold but have become socially convinced they should claim to hold!

I use the term “error cascade” with a meaning halfway between the restricted sense of the medical literature and “information cascade”, and I apply it specifically to a kind of bad science, especially bad science recruited in public-policy debates. A scientific error cascade happens when researchers substitute the reports or judgment of more senior and famous researchers for their own, and incorrectly conclude that their own work is erroneous or must be trimmed to fit a “consensus” view.

But it doesn’t stop there. What makes the term “cascade” appropriate is that those errors spawn other errors in the research they affect, which in turn spawn further errors. It’s exactly like a cascade from an incorrect medical diagnosis. The whole field surrounding the original error can become clogged with theory that has accreted around the error and is poorly predictive or over-complexified in order to cope with it.

Here’s a classic example of missing what’s in front of your face (which, incidentally, I first learned of from James Blish’s Cities In Flight; never let anyone tell you reading SF isn’t useful). For a couple of decades, cell biologists ignored the evidence of their own eyes when counting human chromosomes. The correct number is 46, but a very respected researcher incorrectly “corrected” his early count of 46 to 48 and the error persisted. At least this one was relatively harmless; yes, the wrong number hung around in textbooks for a while, but there wasn’t any generative theory that depended on it in a big way.

For a cascade with wider theoretical consequences in its field, there’s the tale of Robert Andrews Millikan and the charge of the electron. The famed oil-drop experiment of 1909 demonstrated that electrical charge was quantized, and by implication proved the existence of subatomic particles. For this he deservedly got the physics Nobel in 1923 — but his value for the electron’s charge was significantly wrong. It was too low, because he had relied on a faulty figure for the viscosity of air; and since the electron’s mass was derived from that charge via the charge-to-mass ratio, the mass came out wrong too.

Because Millikan was such an eminence, it took a long time and a lot of confusion and thrashing to correct this. If you get the mass of the electron wrong it has lots of consequences; all theories that use it have at least to include unphysical bugger factors to cancel the error. You end up with even applied science getting screwed up; if I recall correctly what I first read long ago about this debacle, it caused some problems for the then-new technique of spectroscopy.
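However the original mis-measurement arose, the arithmetic of the propagation is simple and worth making concrete: any fractional error in a measured constant carries straight through, undiminished, into every quantity derived from it. A minimal sketch in Python (the numbers are rounded illustrative values, not Millikan’s published figures):

```python
# Illustrative only: how a systematic error in one measured constant
# propagates into every quantity derived from it.
# Round values: the modern electron charge, a slightly-low historical
# value, and the charge-to-mass ratio from deflection experiments.

E_TRUE = 1.602e-19        # electron charge, coulombs (modern rounded value)
E_LOW = 1.592e-19         # a value ~0.6% too low, as Millikan's was
E_OVER_M = 1.7588e11      # charge-to-mass ratio, C/kg

def electron_mass(charge, charge_to_mass_ratio):
    """Derive the electron mass from a measured charge and e/m ratio."""
    return charge / charge_to_mass_ratio

m_true = electron_mass(E_TRUE, E_OVER_M)
m_bad = electron_mass(E_LOW, E_OVER_M)

# The fractional error in the charge reappears, undiminished,
# in the derived mass -- and in anything computed from that mass.
err_charge = (E_TRUE - E_LOW) / E_TRUE
err_mass = (m_true - m_bad) / m_true
print(f"charge error: {err_charge:.3%}, derived-mass error: {err_mass:.3%}")
```

The point of the toy calculation is the cascade mechanism itself: one bad constant at the root quietly poisons every downstream result until someone audits the root.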

And yes, preference falsification distorts individuals’ models of what others around them actually believe even in hard science. I once tripped over this in an amusing way, when I volunteered to be on a panel on cosmology and dark matter at some SF convention (might have been Arisia 2004). I did this in the belief that I’d probably be the lone dark-matter skeptic on the panel — the stuff smells altogether too damned much like phlogiston to me. But all four of the other panelists (all of them working physicists or astronomers) also turned out to be dark-matter skeptics, surprising not only me but each other as well!

For anybody who wonders, I favor the alternative explanation of why galaxies don’t fly apart that gravity departs from inverse square at sufficiently long distances (admittedly, this is a purely aesthetic difference, because that theory is not yet testable). But I digress. I didn’t tell that story to argue for this theory, but to illustrate how social pressure to falsify preferences can lead scientists to get stuck in erroneous models of what their peers believe, and even to ignore experimental evidence.
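To see why a long-range departure from inverse square could do the job, compare circular-orbit speeds under the two force laws. This is a cartoon sketch only, assuming a hypothetical law in which the acceleration falls off as 1/r beyond some transition radius (purely illustrative; not a statement of any specific published theory, mine or anyone else’s):

```python
import math

# Toy comparison: circular-orbit speed v(r) under plain inverse-square
# gravity versus a crude long-range modification in which the
# acceleration falls off as 1/r beyond a transition radius R0.
# Units and values are arbitrary; this is a cartoon, not a model.

GM = 1.0   # gravitational parameter of the central mass (arbitrary units)
R0 = 10.0  # hypothetical radius beyond which the force law changes

def v_newtonian(r):
    """Keplerian speed: v = sqrt(GM / r), falling off with distance."""
    return math.sqrt(GM / r)

def v_modified(r):
    """Same inside R0; beyond it a = GM/(R0*r), so v^2 = GM/R0, constant."""
    if r <= R0:
        return math.sqrt(GM / r)
    return math.sqrt(GM / R0)

# Far outside R0 the Newtonian speed keeps dropping, while the
# modified law produces a flat rotation curve of the kind galaxies
# actually display (the observation dark matter was invented to explain).
for r in (10.0, 40.0, 160.0):
    print(f"r={r:6.1f}  newtonian={v_newtonian(r):.4f}  modified={v_modified(r):.4f}")
```

Either a halo of unseen mass or a changed force law can flatten the curve; the dispute is over which extra assumption is the less ugly one.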

In extreme cases, entire fields of inquiry can go down a rathole for years because almost everyone has preference-falsified almost everyone else into submission to a “scientific consensus” theory that is (a) widely but privately disbelieved, and (b) doesn’t predict or retrodict observed facts at all well. In the worst case, the field will become pathologized — scientific fraud will spread like dry rot among workers overinvested in the “consensus” view and scrambling to prop it up. Yes, anthropogenic global warming, I’m looking at you!

But climatology is far from the only field to get stuck in a rathole. I have reason to suspect, for example, that Noam Chomsky’s theory of universal grammar may have done something similar to comparative linguistics. I have spoken with linguists who will mutter, if no colleague can hear them, that Chomskian “universal grammar” has Indo-European biases and has to be chopped, diced, and bent out of shape to fit languages outside that group, to the point where it becomes vacuous (and effectively unfalsifiable). The gods alone know what distorting effects this rathole has had on analysis of language morphology (which would be like electron-mass measurements or chromosome counts in this case), but we’re not likely to be shut of them until Chomsky is dead.

There’s an important difference between the AGW rathole and the others, though. Errors in the mass of the electron, or the human chromosome count, or structural analyses of obscure languages, don’t have political consequences (I chose Chomsky, who is definitely politically active, in part to sharpen this point). AGW theory most certainly does have political consequences; in fact, it becomes clearer by the day that the IPCC assessment reports were fraudulently designed to fit the desired political consequences rather than being based on anything so mundane and unhelpful as observed facts.

When a field of science is co-opted for political ends, the stakes for diverging from the “consensus” point of view become much higher. If politicians have staked their prestige and/or hopes for advancement on being the ones to fix a crisis, they don’t like to hear that “Oops! There is no crisis!” — and where that preference leads, grant money follows. When politics co-opts a field that is in the grip of an error cascade, the effect is to tighten that grip to the strangling point.

Consequently, scientific fields that have become entangled with public-policy debates are far more likely to pathologize — that is, to develop inner circles that collude in actual misconduct and suppression of refuting data rather than innocently perpetuating a mistake. The CRU “team” isn’t the only example of this. The sociological literature attacking civilian firearms possession has been rife with fraud for decades. In a more recent example, prominent sociologist Robert Putnam has admitted that he sat for years on data indicating that increases in ethnic diversity result in a net loss of trust and social capital, because he feared that publishing it would give aid and comfort to political tendencies he disliked.

So…how do you tell when a research field is in the grip of an error cascade? The most general indicator I know is consilience failures. Eventually, one of the factoids generated by an error cascade is going to collide with a well-established piece of evidence from another research field that is not subject to the same groupthink.

Here’s an example: Serious alarm bells rang for me about AGW when the “hockey team” edited the Medieval Warm Period out of existence. I knew about the MWP because I’d read Annalist-style histories that concentrated on things like crop-yield descriptions from primary historical sources, so I knew that in medieval times wine grapes — implying what we’d now call a Mediterranean climate — were grown as far north as southern England and the Lake Mälaren region of Sweden! When the primary historical evidence grossly failed to match the “hockey team’s” paleoclimate reconstructions, it wasn’t hard for me to figure which had to be wrong.

Actually, my very favorite example of an error cascade revealed by consilience failure isn’t from climatology: it’s the oceans of bogus theory and willful misinterpretations of primary data generated by anthropology and sociology to protect the “tabula rasa” premise advanced by Franz Boas and other founders of the field in the early 20th century. Eventually this cascade collided with increasing evidence from biology and cognitive psychology that the human mind is not in fact a “blank slate” or completely general cognitive machine passively accepting acculturation. Steven Pinker’s book The Blank Slate is eloquent about the causes and the huge consequences of this error.

Consilience failures offer a way to spot an error cascade at a relatively early stage, well before the field around it becomes seriously pathologized. At later stages, the disconnect between the observed reality in front of researchers’ noses and the bogus theory may increase enough to cause problems within the field. At that point, the amount of peer pressure required to keep researchers from breaking out of the error cascade increases, and the operation of social control becomes more visible.

You are well into this late stage when anyone invokes “scientific consensus”. Science doesn’t work by consensus, it works by making and confirming predictions. Science is not democratic; there is only one vote, only Mother Nature gets to cast it, and the results are not subject to special pleading. When anyone attempts to end debate by insisting that a majority of scientists believe some specified position, this is the social mechanism of error cascades coming into the open and swinging a wrecking ball at actual scientific method right out where everyone can watch it happening.

The best armor against error cascades is knowing how this failure mode works so you can spot the characteristic behaviors. Talk of “deniers” is another one; that, and the moralistic quasi-religious language that it goes with, is a leading indicator that scientific method has left the building. Sound theory doesn’t have to be buttressed by demonizing its opponents; it demonstrates itself with predictive success.

UPDATE: Kudos to Bore Patch for pointing out a real humdinger of an example error cascade: canals on Mars.