Scientists often bemoan journalists’ shoddy reporting of research findings. The writer and physician Ben Goldacre has even made a career dissecting shaky scientific claims that appear in British newspapers.

But a new study suggests that scientifically illiterate hacks in desperate need of a story might be only partly to blame. It found that journals themselves are more likely to issue press releases publicising the findings of what may be deemed weaker studies than those of larger, more rigorous trials.

Looking at seven of the world’s most prestigious medical journals, researchers found that half of published observational studies were the subject of a press release, compared with just 17 per cent of randomised controlled trials (RCTs), despite the latter being seen as the only way to reliably test a hypothesis.

There was a similar pattern when looking at the most reliable type of research: RCTs with large numbers of participants. These were given a press release just 14 per cent of the time, compared with 38 per cent of those with smaller samples and observational trials.

Journalists, it appears, actually evened up this discrepancy: agencies and newspapers reported on such “strong” and “weak” research in roughly equal measure, despite the journals giving more publicity to the latter.

“RCTs represent a higher level of evidence than observational studies. Consequently, it might be expected that academic commentary and media coverage would occur more frequently for randomised research than observational research,” concluded “Media Coverage, Journal Press Releases and Editorials Associated with Randomized and Observational Studies in High-Impact Medical Journals: A Cohort Study”.

“However, journal press releases, which influence the content of subsequent news stories, were more common for observational studies than RCTs,” the study, published in the journal Plos One, reported.

Mark Bolland, a co-author and associate professor at the University of Auckland’s School of Medicine, said the most likely explanation was that observational studies, not being controlled in the same way as RCTs, were more likely to produce positive, quirky or exciting findings, and journals therefore expected them to receive more coverage.

Previous research by the same authors found that press releases were failing to alert journalists to the limitations of observational trials, he said.

“So if you have positive results from an observational study (in which numerous hypotheses might have been examined), and the limitations are not being presented, it is easy to see how that can be turned into a press release that the journal thinks people will find interesting,” Professor Bolland said.

Journals should focus their press releases on research that can actually inform clinical practice, he said, which normally means large RCTs.

“Second, if they do a press release on an observational study, they should state the limitations prominently – generally that causality can’t be inferred, that findings from observational research often are not reproduced in clinical trials,” he suggested.

A spokeswoman for the British Medical Journal, one of the journals analysed, said that senior editors selected research to press release “purely on the basis of its news potential, not by study design”.

“We are careful to avoid making inappropriate statements about cause and effect. And, whenever possible, we present absolute risks rather than relative risks,” she added.

david.matthews@tesglobal.com