Many fields of study suffer from "publication bias," since journals are usually happier to run articles that describe positive results. This is especially problematic in the case of medical studies, where negative results can be extremely useful—it's valuable to know which drugs don't work—and commercial interests are more than happy to keep failed trials quiet. In 2005, medical journals adopted a policy meant to ensure that any clinical trial would be registered in a public database before it took place. A new study in this week's edition of JAMA suggests that the impact of this policy has been a bit limited: not every clinical trial is being registered, and many trials result in publications that differ significantly from the experiments described in these databases.

The policy in question was crafted to ensure that any clinical trial would be visible to the public and the research community, regardless of whether its results were ultimately deemed worthy of publication. The journals agreed that they would refuse to publish results from any trial that hadn't been registered in advance in a clinical trial database, such as the US government's ClinicalTrials.gov.

This provides a very strong incentive to register every trial. Nobody starts a trial expecting it to fail, and both researchers and drug companies have strong incentives to publish positive results, so it makes sense to register everything in advance simply to preserve the option of publication later. As a result, the trial registries should become a valuable resource both to the research community, which gets a better sense of work that might otherwise be invisible, and to regulators, who can tell when a company might be sitting on informative results.

Of course, all of this depends on things working as intended: journals can't break ranks and publish studies that haven't been registered, and researchers have to register an accurate description of the trial that actually takes place.

The JAMA study looked into how well this is working out. Its authors searched PubMed for papers published in 2008 that described the results of a randomized clinical trial in one of three fields: cardiology, rheumatology, and gastroenterology. In many cases, the publications included information on the trial's registration. If they didn't, the authors of the paper were contacted and, failing that, the trial's details were run through a variety of national and international registration sites in an attempt to locate a matching entry.
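
As a rough illustration of that first step, here's a minimal sketch of how such a literature search might be automated against NCBI's public E-utilities API. The query terms are an approximation of mine, not the study's actual search strategy.

```python
import requests

# NCBI E-utilities endpoint for searching PubMed (public; no key needed
# for light use). The query below only approximates the study's search.
ESEARCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def find_trials(specialty_term: str) -> list[str]:
    """Return PubMed IDs for 2008 randomized controlled trials in a field."""
    params = {
        "db": "pubmed",
        # [pt] = publication type, [dp] = date of publication
        "term": f"{specialty_term} AND randomized controlled trial[pt] AND 2008[dp]",
        "retmax": 500,
        "retmode": "json",
    }
    resp = requests.get(ESEARCH, params=params, timeout=30)
    resp.raise_for_status()
    return resp.json()["esearchresult"]["idlist"]

for field in ("cardiology", "rheumatology", "gastroenterology"):
    pmids = find_trials(field)
    print(f"{field}: {len(pmids)} candidate trials")
```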

Right away, they ran into problems. Of the 323 published trials they identified, a full 89 (27 percent) hadn't been registered at all, as far as the authors could tell. Another 39 had been registered, but with a primary outcome (the main result a trial is designed to measure) that was too vaguely specified. "For example, 'blood pressure' is not a clearly specified outcome," the authors note. "Ideally, we sought an unambiguous definition (e.g., change in systolic pressure from baseline at 12 months)." Three of the trials were actually registered only after the study had been completed.

That left them with only about half of the initially identified studies to work with. But even here, there were notable problems. In 46 cases, the trial was registered with a primary outcome that didn't match the one described in the publication. In 15 of these, the publication never even mentioned the primary outcome the trial was registered to study. In the others, the intended primary outcome was demoted to a secondary result, or a planned secondary outcome was promoted to the primary focus of the eventual publication.
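
To make that comparison concrete, here's a small sketch of the sort of bookkeeping involved. The function, data shapes, and example are hypothetical; the JAMA authors coded the actual trial records by hand.

```python
def classify_discrepancy(registered: set[str],
                         published_primary: set[str],
                         published_secondary: set[str]) -> str:
    """Compare a trial's registered primary outcomes with the published ones.

    The category names mirror the discrepancies described above, but this
    coding scheme is illustrative, not the study's actual methodology.
    """
    if registered == published_primary:
        return "consistent"
    if not registered & (published_primary | published_secondary):
        return "registered primary outcome omitted entirely"
    if registered & published_secondary:
        return "registered primary outcome demoted to secondary"
    if published_primary - registered:
        return "new or promoted primary outcome"
    return "other discrepancy"

# Hypothetical example: the trial registered a change in systolic blood
# pressure as its primary outcome, but the paper reports it only as a
# secondary result behind a different headline finding.
print(classify_discrepancy(
    registered={"change in systolic BP at 12 months"},
    published_primary={"composite cardiovascular events"},
    published_secondary={"change in systolic BP at 12 months"},
))  # -> "registered primary outcome demoted to secondary"
```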

Looking at why these changes occurred, the researchers found that the switch generally happened when the results for the planned outcome weren't statistically significant while those for the replacement were. So there's still a publication bias towards positive results; it now shows up as a shift in emphasis within a publication, rather than a failure to publish at all.

For the most part, it's hard to fault papers for focusing on positive results, even if they weren't really the ones that the researchers set out to examine. Still, the fact that the intended outcomes weren't mentioned at all in over 10 percent of the cases is cause for concern.

Overall, the mere fact that this study could be pursued is an indication that the new system is having an effect: a significant number of researchers are now registering their clinical trials, and doing so well enough that the intended outcomes can be compared with what gets published. Still, the significant discrepancies indicate that it can be viewed as a work-in-progress rather than a job well done.

It's obvious that the journals are still publishing papers based on trials that haven't been registered. This means that their editors need to become stricter about rejecting papers that don't conform to the policy, and apply pressure to the editors of journals that haven't yet adopted it.

But it seems that peer reviewers will have to shoulder some of the responsibility as well. It's their job to provide scientific vetting, and they're best positioned to identify when a trial registration is too vague to be valuable, or describes experiments that wind up at odds with the eventual publication. The journals could obviously help by requiring that authors supply their trial registration details and by instructing reviewers to compare the registration with the results, but, ultimately, the reviewers will have to follow through.

JAMA, 2009. 302(9): 977-984. (DOI unavailable)