Reproducibility is a key part of science, yet almost nobody does the same experiment twice. A lab will generally repeat an experiment several times and look for consistent results before publishing. But once a paper is out, people tend to test reproducibility in other ways: probing the consequences of a finding, or extending it to new contexts and different populations. Almost nobody goes back and directly repeats something that's already been published, though.

But maybe they should. At least that's the thinking behind a new effort called the Reproducibility Initiative, a project hosted by the Science Exchange and supported by Nature, PLoS, and the Rockefeller University Press.

There are good reasons that scientists usually don't do straight-up repeats of published experiments. Funding agencies have little interest in paying for work that's redundant or derivative, and few journals are willing to run something that's essentially a do-over. Plus, as a researcher, it's simply hard to get excited about doing an experiment when you think you already know the answer. With so little incentive for reproducing results, it's not surprising that most people only try to reproduce something if they suspect the original report was wrong.

How does the Reproducibility Initiative hope to get past this? It has a partial solution. PLoS ONE has agreed to create a special reproducibility section, where it will publish both the original finding and any results from attempts to reproduce it. That gives researchers the possibility of getting a second paper out of a single set of results. If the paper being reproduced originally appeared in a Nature or Rockefeller Press publication, that journal will link to the report of the reproduction. Data from the verification will be hosted on the Figshare site.

That still leaves a couple of big issues: who does the work, and how does it get paid for? This is where a bit of enlightened self-interest may be at play. The Initiative is hosted by the Science Exchange, which makes money by linking researchers in need of expertise to labs that have it. A researcher could advertise that they need a specific assay done—say, a challenging bit of mouse genotyping—and labs that are good at genotyping can submit bids to perform the work. When a bid is accepted, Science Exchange takes a cut of the price.

Science Exchange is interested in the Reproducibility Initiative because of how the project is set up: when a lab wants to see its own work reproduced, it is supposed to find a contractor to do so through the company's service.

The missing piece? Someone willing to pay to see an experiment replicated as precisely as possible. The site promises announcements soon about groups willing to put up the money, but so far there are no specifics.

If that can be sorted out, then there's no reason this wouldn't work. Researchers have an incentive—a second publication for minimal effort—and the people who actually do the experiments get paid for doing something they're presumably good at.

But is it really necessary? Here, the answer is a bit more complicated. In principle, it would be good to know what percentage of results can actually be reproduced. But my expectation is that the rates would vary dramatically from field to field. A lot of behavioral studies are done on small populations of undergrads from a single university, and it's probably safe to assume that undergrads in Beijing, Boston, and at BYU could produce significantly different results. But that's probably a minimal risk in the case of something like structural biology.

There, unless someone messes up the data or an algorithm, it's hard for things to go wrong, since the work is mostly a matter of well-understood calculations. In that and similar fields, reproducibility problems mostly center on the code that performs these calculations, which can be restricted by a variety of licenses that may or may not allow others to even look at it.

Between these extremes, the value of direct reproduction is probably going to be hit or miss. A highly significant result will end up being tested in various ways anyway, simply as a result of different labs following up on it. But some wrong ideas have stuck around and influenced thinking for years, and sorting those out quickly through reproduction could move science along faster than it would have gone on its own.

Whether or not it succeeds, the effort is a tacit admission that, given the huge volume of scientific publication and continuing problems with both honest mistakes and outright fraud, it's time to at least consider ways to provide a greater degree of confidence in scientific findings.