Science may have a replication problem.

One of the goals of scientific skepticism is to examine the process of science itself, often through the lens of pseudoscience. I find this remarkably helpful, and it is something that many mainstream scientists do not appreciate.

By closely examining pseudoscience as a phenomenon, we can see clear examples of how science goes wrong, how the process of science is subverted, and all the different ways scientists can make mistakes or bias their results. We can then apply this knowledge to legitimate science, flushing out more subtle manifestations of the same problems.

Said another way – if we explore all the reasons that a scientist can come to the conclusion that homeopathy works (when it clearly doesn’t) we will learn much about all the possible ways to fail when people do science (or think they are doing science).

For example, examining pseudoscience has really brought home for me the critical importance of independent replication. We often hear impressive-sounding results from single studies that appear to support one pseudoscience or another. It may not be possible to tell from the published report where the researchers went wrong. The only way to really know is to independently replicate the results. If the researchers were genuine visionaries ready to change science, their results should replicate reliably.

Perhaps the best example of this is the psi research of Daryl Bem. He published a series of studies which he claims provide evidence for precognition, or future events affecting current cognitive processes. This is one of those claims in which it is fair to say, if we know anything in science, we know that this is impossible. This is reversing the arrow of causation. To say that such results are a paradox is an understatement.

Of course, I would be willing to accept such results if they were iron-clad. The results would have to be so robust as to make their falsity more of a paradox than their accuracy. What we got, however, were razor-thin effect sizes with a terrible signal-to-noise ratio, from a researcher who has endorsed questionable research practices. Just a tiny bit of “researcher degrees of freedom” is all that is necessary to explain the results.
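The effect of "researcher degrees of freedom" can be made concrete with a quick simulation. The sketch below (a toy illustration, not Bem's actual protocol) generates pure noise and compares an honest analysis, which tests one pre-specified outcome, with a flexible one, which measures several outcomes and reports whichever gives the smallest p-value. Even with no real effect, the flexible strategy produces "significant" results far more often than the nominal 5%.

```python
import math
import random

random.seed(42)

def p_value(sample):
    """Two-sided p-value for 'mean differs from zero',
    using a normal approximation to the t-test."""
    n = len(sample)
    mean = sum(sample) / n
    var = sum((x - mean) ** 2 for x in sample) / (n - 1)
    se = math.sqrt(var / n)
    return math.erfc(abs(mean / se) / math.sqrt(2))

def experiment(n_outcomes, n=50):
    """One null experiment: all data is pure noise. With more than
    one outcome, report only the smallest p-value (cherry-picking)."""
    return min(p_value([random.gauss(0, 1) for _ in range(n)])
               for _ in range(n_outcomes))

trials = 2000
honest = sum(experiment(1) < 0.05 for _ in range(trials)) / trials
flexible = sum(experiment(5) < 0.05 for _ in range(trials)) / trials
print(f"false-positive rate, one pre-specified outcome: {honest:.2f}")
print(f"false-positive rate, best of five outcomes:     {flexible:.2f}")
```

The honest rate hovers near the nominal 0.05, while picking the best of five outcomes pushes it toward 1 − 0.95⁵ ≈ 0.23. A modest amount of this kind of flexibility is enough to manufacture a steady stream of small "effects" from nothing.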

The real test of these results, however, came in the replication. Several researchers tried to replicate one or more of Bem’s protocols, with mostly negative results. Not surprising. Far more important than Bem’s unlikely claims and unimpressive research was the reaction of journal editors to these replications.

Richard Wiseman and his colleagues submitted one such replication to the psychology journal that published Bem’s original studies, the Journal of Personality and Social Psychology. The journal’s response was that it does not publish exact replications.

Wiseman’s response was to create a website where researchers can publish their replications of Bem’s studies.

The response by the journal is the real story here. Journal editors put a low priority on publishing replications of previous studies. They are not exciting. They don’t grab headlines or improve impact factor. That, in turn, decreases the incentive for researchers to carry out replications.

This is a systemic problem. Doing good replications is the only real way to know if a finding is reliable. In addition, with online publishing, journals no longer have the excuse of limited space in a print journal.

To me this is a problem of stoichiometry in science, to use a nerdy metaphor. In order for scientific progress to be optimal we need to have the perfect mix of researchers doing new and speculative research vs doing confirmatory research or applied research. This is like having the right mix of gas and oxygen to produce the hottest flame.

Right now I think the incentives are biased toward new and speculative research and away from confirmatory research. This may mean that we are wasting our time on lots of new ideas that will ultimately lead nowhere, and those ideas hang around longer than they should because we are not confirming them with replications.

This is not just my opinion but an increasingly recognized problem within science. One solution is to dedicate space in existing journals, or even make entirely new journals, for publishing replications. This critical component of science needs to be given a higher priority.

One journal editor is doing just that.

The contradictory results—along with successful confirmations—will be published by F1000Research, an open-access, online-only publisher. Its new “Preclinical Reproducibility and Robustness channel,” launched today, will allow both companies and academic scientists to share their replications so that others will be less likely to waste time following up on flawed findings, says Sasha Kamb, senior vice president for research at Amgen in Thousand Oaks, California.

The channel is a project of the biotech company Amgen Inc. and biochemist Bruce Alberts, created in response to evidence that many scientific findings that are still relied upon cannot be replicated.

One recent commentary published in Nature noted that of the preclinical cancer research studies its authors attempted to replicate, only 11% of the results held up.

Another project, which attempted to replicate 100 psychology studies, found that only 39 of them were successfully replicated.

Conclusion

I don’t want to overstate the problem. There is a lot of replication going on in science; it is still standard procedure. Typically, when I research any medical issue, there are multiple studies, and we can look to see what the consensus of results shows. Eventually replications are done.

But I don’t think we are at an optimal mix, because of perverse publishing incentives. Doing exact replications of studies should be looked upon not as boring but as the gold standard of science. I hope we have more journals dedicated to publishing these studies, and a higher priority placed on exact replications by all the major science journals.