Elizabeth Iorns had the same thought, and she saw a way to do a better and more transparent job. She had founded a start-up called Science Exchange, which uses a large network of contract labs to provide research support to scientists—and, in some cases, to check their work. She contacted the COS, and together, they launched the Reproducibility Project: Cancer Biology—an initiative that used the Science Exchange labs to replicate key results from the 50 most cited papers in cancer biology published between 2010 and 2012. (The COS recently used the same model for psychology studies to good effect.)

The results from the first five of these replication attempts were published today—and they offer no clean answers. Two of them largely (but not entirely) confirmed the conclusions of the original studies. One failed to do so. And two were inconclusive for technical reasons—the mouse strains or cancer cell lines that were used in the original studies didn’t behave in the same way the second time round. These uncertainties mean that it’s very hard to say whether each replication attempt “worked,” or whether each original study was actually reproducible.

“Everyone wants us to paint the project in black and white,” says Errington. “What percent of these papers replicate? I’ve been asked that so many times, but it’s not an easy question.” To him, the project’s goal isn’t to get a hard percentage, but to understand why two seemingly identical attempts at the same experiment might produce different results, and to ultimately make it easier for one group of scientists to check another’s work.

The Reproducibility Project team pre-registered all of their work. That is, for each targeted paper, they wrote up their experimental plans in full, ran them past the original authors, and submitted them to the journal eLife for peer review. Only then did they start the experiments. Once the results were in, they were reviewed a second time, before being published.

The hardest part, by far, was figuring out exactly what the original labs actually did. Scientific papers come with methods sections that theoretically ought to provide recipes for doing the same experiments. But often, those recipes are incomplete, omitting important steps, details, or ingredients. In some cases, the recipes aren’t described at all; researchers simply cite an earlier study that used a similar technique. “I’ve done it myself: you reference a previous paper and that one references a paper and that one references a paper, and now you’ve gone years and the methodology doesn’t exist,” admits Errington. “Most people looking at these papers wouldn’t even think of going through these steps. They’d just guess. If you asked 20 different labs to replicate a paper, you’d end up with 10 different methodologies that aren’t really comparable.”