More than half of biomedical findings cannot be reproduced – we urgently need a way to ensure that discoveries are properly checked

(Image: Andrzej Krauze)

REPRODUCIBILITY is the cornerstone of science. What we hold as definitive scientific fact has been tested over and over again. Even when a fact has been tested in this way, it may still be superseded by new knowledge. Newtonian mechanics became a special case of Einstein’s general relativity; molecular biology’s mantra “one gene, one protein” became a special case of DNA transcription and translation.

One goal of scientific publication is to share results in enough detail to allow other research teams to reproduce them and build on them. However, many recent reports have raised the alarm that a shocking amount of the published literature in fields ranging from cancer biology to psychology is not reproducible.

Pharmaceutical company Bayer, for example, recently revealed that it fails to replicate about two-thirds of published studies identifying possible drug targets (Nature Reviews Drug Discovery, vol 10, p 712).


Bayer’s rival Amgen reported an even higher rate of failure – over the past decade its oncology and haematology researchers could not replicate 47 of 53 highly promising results they examined (Nature, vol 483, p 531). Because drug companies scour the scientific literature for promising leads, this is a good way to estimate how much biomedical research cannot be replicated. The answer: the majority.

The reasons for this are myriad. The natural world is complex, and experimental methods do not always capture all possible variables. Funding is limited, and the pressure to publish quickly keeps growing.

There are human factors, too: the pressure to cut corners, the temptation to see what one wants and believes to be true, the urge to extract a positive outcome from months or years of hard work, and the impossibility of being an expert in every experimental technique a high-impact paper requires.

The cost of this failure is high. As I have experienced at first hand as a researcher, attempts to reproduce others’ published findings can be expensive and frustrating. Drug companies have spent vast amounts of time and money trying and failing to reproduce potential drug targets reported in the scientific literature – resources that should have contributed towards curing diseases.

Failed replications also quite often go unpublished, thereby leading others to repeat the same failed efforts. In the modern fast-paced world, the normal self-correcting process of science is too slow and too inefficient to continue unaided.

Many have wrung their hands and proposed various penalties for scientific studies that cannot be reproduced. But instead of punishing investigators, what if there were a way of rewarding them for pursuing independent replication of their most significant scientific results – the ones they want to see cited and built on – before or shortly after publication? I believe this could be a substantial boon to science and society, which is why I started the Reproducibility Initiative.

I am the co-founder and CEO of Science Exchange, which is part of the initiative. Science Exchange is an online marketplace that connects scientific services, such as DNA sequencing, with the people who need them. The exchange lists more than 1000 experts in techniques including sequencing, electron microscopy and mass spectrometry. They mostly provide services to their own institutions, but are open to other work on a fee-paying basis.

Thinking about the reproducibility problem, I realised that Science Exchange could help by providing investigators with the means and incentives to obtain independent validation of their results.

Here’s how it works. Scientists submit studies to us that they would like to see replicated. Our independent scientific advisory board – all members of which are leaders in their fields as well as advocates on the reproducibility problem – selects studies for replication. Service providers are then selected at random to conduct the experiments, and the results are returned to the original investigators, who can then publish them in a special issue of the open-access journal PLoS ONE. We will issue a “certificate of reproducibility” for studies that are successfully replicated.

In our pilot phase, we expect to attempt to replicate 40 to 50 studies. We also plan to publish an analysis of the overall success of what is essentially an experiment in reproducibility.

Initially, investigators must bear the cost of replications, which we estimate will be approximately one-tenth the cost of the original study. If we are successful, we believe funders will eventually see the value of supporting these replication studies. In fact, we are in discussions with numerous public and private funders who believe our mechanism may meet their own acknowledged need for independent validation.

We hypothesise that the success rate for replications will be quite high, mainly because investigators will submit studies that they are confident can be replicated. And that is one of the points we want to make – we want to identify the most robust, important findings and mark them in a highly visible way.

What we are not doing – a point that many have misunderstood – is trying to police the entire scientific literature. Nor are we calling for a doubling of the budgets required to repeat every experiment, every time. We also won’t demand the publication of reproducibility failures – although, for obvious reasons, we and PLoS encourage investigators to publish all outcomes.

Our goal is to provide a much-needed imprimatur of robustness that will ultimately increase the efficiency of research and development and bring us one step closer to perfecting the scientific method, for the benefit of all.