Physicist R. F. Streater defined a “lost cause” as a research topic that once seemed promising but has proven to be a dead end, at least in the view of the large majority of people working in the field, yet nevertheless continues to be pursued by a small but dedicated research community. Streater wrote a whole book on lost causes in physics. A while back, statistician Larry Wasserman began a series of posts on lost causes in statistics.

“Lost causes” are of course meat and potatoes to philosophers of science interested in Kuhnian ideas of “paradigm shifts” or Lakatosian ideas of “research programs”. People working on lost causes can be viewed as out of step with the current paradigm, either because they got left behind when the paradigm shifted, or because they’re trying and failing to shift the current paradigm. And they can be viewed as sticking with a research program that’s no longer productive, at least in the eyes of their colleagues.

Lost causes are interesting because they highlight just how much science is a matter of making judgment calls, which while not purely “objective” are not purely “subjective” either. Scientists who pursue lost causes aren’t bad scientists–indeed, they’re often unusually smart and thoughtful. Larry’s posts and Streater’s book discuss lost causes that were taken up by famous, brilliant people. If you think science is purely a matter of totally objective decision-making, such that any competent person will agree on the right decision, it’s very hard to explain how brilliant scientists could make objectively terrible decisions about what research to pursue. But conversely, if decisions about what research is worth pursuing were purely a matter of arbitrary, subjective personal preference, like one’s choice of hobby, or like a preference for chocolate ice cream over vanilla, we wouldn’t see any line of research as a lost cause. If my hobby is birdwatching, nobody whose hobby is woodworking is going to tell me that birdwatching is a lost cause. And if I buy a cone of chocolate ice cream, you’re not going to sadly shake your head and wonder to yourself how a brilliant ice cream chooser like me could possibly choose a dead-end flavor like chocolate over an obviously more promising flavor like vanilla.

Probably every field has examples of lost causes. Tabletop cold fusion in chemistry, for instance. Stephen Wolfram’s attempt to derive all of science from cellular automata models is another lost cause, this one from physics. And there are various ideas and schools of thought in economics that probably qualify as lost causes, such as Austrian economics and Marx’s “labor theory of value”.

Many of these examples of lost causes share some features. Many arise when researchers reject one or more axioms or foundational assumptions of the field, and doggedly pursue the consequences of some alternative assumption. Larry Wasserman’s statistical example, Austrian economics, Marx’s labor theory of value, and some of Streater’s physics examples all fit that description. This feature of lost causes probably helps explain why they continue to be pursued. If people agree on foundations, then in principle any disagreement between them ought to be straightforwardly resolvable (in practice it’s another matter, of course, for all sorts of reasons). Think for instance of conducting an experiment to resolve the disagreement between alternative explanations for some phenomenon–that’s something scientists do all the time. But if scientists disagree on something more foundational, our usual methods for resolving disagreement don’t work, thus making foundational disagreements more likely to persist. Of course, this doesn’t mean that rejecting an axiom or foundational assumption implies that you’re working on a lost cause. Rejecting one of Euclid’s axioms leads to non-Euclidean geometry–which, far from being a mathematical lost cause, is key to Einstein’s theory of relativity.

There are other features lost causes don’t all share. For instance, some lost causes get recognized as lost causes pretty quickly (tabletop cold fusion, for instance), while others need to be pursued for a long time before it becomes apparent that they’re dead ends. And some are pursued by lone individuals, while others are pursued by decent numbers of people (which means you can’t just dismiss all lost causes as reflecting the idiosyncrasies of isolated oddballs).

Lost causes seem like cousins to “fringe science” and “pseudoscience” in some ways. I’ve talked in the past about how there’s no clear bright line separating science from pseudoscience. Lost causes illustrate this, because they share features with both mainstream science and pseudoscience. See for instance this interesting piece on the forensic scientist who claims to have sequenced the genome of “Bigfoot”.* In a lot of ways, she’s a perfectly competent, careful scientist. But she’s also convinced of the value of a certain non-mainstream idea (here, that Bigfoot exists) and isn’t prepared to give it up easily. So she interprets evidence in a way that fits with that idea and suggests that further research will be productive. For instance, weird data and irregularities in PCR performance that anyone else would consider signs of contamination and sample degradation, she interprets as signs that Bigfoot is a hybrid between humans and some distantly-related species. And this is despite the fact that she is well aware of contamination issues and takes a lot of careful procedural steps to avoid contamination (the trouble is, she then assumes that those steps worked). She’s also uninterested in resolving apparent conflicts between her conclusions and well-established results from other areas of science, a trait she shares both with pseudoscientists like Velikovsky and with many scientists pursuing lost causes. They often tend to focus only on a few lines of evidence and a few implications of their results, to the exclusion of others.

Now, there are times when it makes sense to temporarily set to one side conflicts with existing ideas and knowledge–but I think those times are pretty limited. I think most good scientists see apparent conflicts between different ideas or different lines of evidence as attractive research opportunities that cry out to be explored. No research program is without its flaws and limitations, but actively focusing on those flaws and limitations so as to address them is surely a big part of any good research program. Ignoring conflicts with other ideas and evidence, or setting them to one side in the abstract hope that someday someone will resolve them, isn’t pragmatism, I don’t think. Rather, it’s mostly a way of walling off one’s own ideas from everybody else’s. Although Brian may disagree with me on this–we have a good-natured, long-running debate going on how much scientists should worry about the known flaws and limitations of their approaches (see, e.g., the comments on this post).

Lost causes also seem like cousins to alternative schools of thought, which gets back to my shout-outs to Kuhn and Lakatos above. Members of one school of thought often (not always) have difficulty seeing the value or the point of other schools of thought. For instance, I bet many population and community ecologists agree with Bob Paine when he writes off NEON as a hypothesis-free waste of money that should’ve been recognized as a dead end based on previous experience with the IBP. So maybe in some cases a “lost cause” is just a school of thought that members of some other school of thought don’t like? Although not all lost causes correspond to schools of thought, or to research programs pursued by only a very small number of people. For instance, in the second post in his series Larry Wasserman identifies noninformative priors as a lost cause. He compares them to perpetual motion machines: “[E]veryone wants one, but they don’t exist.” But I don’t think there’s a whole separate school of statistical thought associated with trying to identify noninformative priors. Rather, as far as I know the search for noninformative priors has been a main line of research in Bayesian statistics, which if anything cuts across several different schools or sub-schools of Bayesian thought.

Maybe sometimes lost causes just reflect the sunk cost fallacy, also called the Concorde fallacy or “throwing good money after bad”. You’ve already invested so much in this line of research, and all that previous investment would feel wasted if you just gave up, so you keep going (see here for an ecological example). But I don’t think that most of the examples discussed above are like that.

Are there examples of lost causes in ecology or evolution? In evolution, the late Lynn Margulis’ search for genetic material in other organelles besides mitochondria and chloroplasts, and her ideas about “symbiogenesis” more broadly, seem like a classic example of a lost cause. And fluctuating asymmetry seems like it has many of the elements of a lost cause (see here). I have a couple of other possible examples in mind, but I wouldn’t expect everyone to agree with them. That’s kind of the point of lost causes. If everyone agreed they were lost causes, nobody would pursue them and they wouldn’t exist. Indeed, at least one of the lost causes in physics that Streater discusses (deriving fundamental physics from information theory) is one that I myself wouldn’t consider to be lost, though of course I’m hardly an expert. And conversely, if everyone agreed that a lost cause was worth pursuing, it wouldn’t be a “lost” cause.

One thing I wonder about is whether researchers pursuing what others regard as lost causes ever recognize the “symptoms” that cause others to see their research as a lost cause. For instance, do those pursuing a lost cause ever recognize that progress seems slow or difficult, or that many blind alleys have been explored in pursuit of the lost cause, but nevertheless find some reason to keep on keepin’ on? Say, because the payoff of success is sufficiently large to make the cause worth pursuing, even if the probability of success is small? Or do lost causes generally seem productive, progressive, and likely to pay off to those pursuing them? I can imagine that one could find examples of both sorts of cases.

I’m also wondering if anyone can think of an example, in ecology or any other field, of a line of research that once seemed like a lost cause but eventually panned out. I’m sure there must be some. Which of course is probably part of why people pursuing lost causes continue to pursue them. Every once in a while, lost causes turn out not to be lost after all. Of course, it’s difficult to think of examples here because, once a line of research proves successful, it’s hard to look back and see how it could ever have been reasonably regarded as a lost cause. For instance, I think you could argue that natural selection was a lost cause that panned out. It wasn’t an especially popular explanation for evolution during Darwin’s lifetime, and not without reason. And during the famous “eclipse of Darwinism” in the late 19th and early 20th century it was a distinctly minority view. But history is written by the winners, and so looking back it’s the then-dominant alternative view that evolution proceeds along pre-determined, goal-directed lines (analogous to embryonic development) that looks like the lost cause. This gets into tricky issues that have long bedeviled historically-oriented philosophy of science. How do you tell how productive a line of research really is, or how productive it could’ve been had it been further pursued, without just defining “productive” as “whatever the majority of scientists do” or “whatever today’s scientists do”?

*Just to be clear, I think Bigfoot research is best termed pseudoscience rather than a lost cause, since it’s hard to see why Bigfoot research would ever be considered promising in the first place. But I think you can see the “family resemblance” between Bigfoot research and something like cold fusion research, or the work of Stephen Wolfram, or Lynn Margulis’ more extreme ideas about symbiogenesis. All involve people getting attached to an idea and pursuing it well past the point where others would have stopped. And in the process going off the rails in their everyday scientific judgments, such as how to interpret evidence of PCR contamination, or (in Larry Wasserman’s first example) how much importance to attach to the law of total probability.