Yesterday, the scientific journal Nature published an editorial announcing a new policy that targets a long-standing problem in the biological sciences: reproducibility. The editors apparently recognize that papers published in Nature describe experiments that are often difficult to repeat, or that don't give the same results when they are repeated. This happens either because the details of the experiment are left out or because the experiments themselves aren't as solid as they should be. As a result, Nature and the publisher's other journals are instituting a mix of new policies about the contents of their research papers. There's even a checklist that authors have to complete in order to get published.

Part of the problem is structural. The Nature journals have a very strict word limit that keeps papers extremely short. As a result, it's often difficult to squeeze in all the results, let alone things like the context of the work or its implications. Experimental methods were often fourth or fifth on the priority list, so many papers in these journals were published without any methods section at all. Some offered only a vague hint of the experiments that were done, squeezed into the bits of text that describe an image.

That part was relatively easy to fix: strict word limits remain in place for most of the paper, but Nature "will abolish space restrictions on the methods section." Researchers will now be expected to say exactly what they've done and provide a source for any material or equipment used in their paper. More detailed experimental procedures can be uploaded to a site hosted by the publisher called Protocol Exchange.

So, any researcher should now be able to repeat the experiments described in a Nature paper. Should they expect to get the same answer? Evidently, the editors are concerned that they might not. The new checklist that authors have to fill out suggests that, previously, authors weren't even mentioning the number of samples examined in an experiment or describing the criteria for including or excluding different samples. In other words, the papers didn't say whether a given result was a fluke or typical of the sorts of things the authors had seen hundreds of times.

To help readers judge exactly how reliable a given result is, Nature will now provide authors with a statistician to consult (which, really, they should have arranged for themselves before even writing the paper). Authors will also be encouraged to provide the underlying data for any charts or graphs in the paper.

The author checklist will be updated as the journals' editors get more experience with it.

Overall, it's hard to see this as anything other than a good idea. Even if it is partly meant to solve a problem that the journals' limits on paper length helped create, the new policy goes well beyond that. It tries to ensure that the data presented in these journals is reliable. There's obviously more that publications could do—a strict policy and check for image manipulation, like the one used by Rockefeller University Press, seems like an obvious choice—but this appears to be a solid step in the right direction.