Just as scientific methods can be used to examine how science is communicated, science's own tools can be turned on the research process itself. Stanford's John Ioannidis has made a career of doing exactly that, and what he's found isn't always pretty: bias and unreliable results appear to be far too common.

This week, he's back with another study, one looking at potential biases in the results reported for biological and behavioral sciences. A survey of results shows that, on the biological side of things, US research is relatively bias-free. But as studies edge over into behavioral science, US-based researchers tend to produce more extreme results—ones that are biased toward supporting their authors' hypotheses.

The new work relied on a large collection of meta-analyses, in which someone aggregates all the research on a particular question and then combines the results to produce an answer with more statistical power. Ioannidis, along with collaborator Daniele Fanelli at the University of Edinburgh, realized that these meta-analyses provided a chance to look for bias in the results of the individual studies they contained. Since the end result of the analysis is a "typical" answer, an unbiased set of studies that looked at the question should cluster evenly around that answer. Deviations from that even clustering could be a sign of bias, conscious or otherwise.
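The logic above can be sketched in a few lines of code. This is a minimal illustration of the idea, not the authors' actual statistical method: it assumes a simple fixed-effect (inverse-variance weighted) summary and made-up effect sizes, then asks what fraction of each subgroup's results land above the pooled answer. Even clustering gives roughly half; a large excess on the hypothesized side is the kind of asymmetry the study looked for.

```python
import random

def pooled_estimate(effects, variances):
    """Fixed-effect (inverse-variance weighted) meta-analytic summary."""
    weights = [1.0 / v for v in variances]
    return sum(w * e for w, e in zip(weights, effects)) / sum(weights)

def fraction_above(effects, summary):
    """Share of a subgroup's results landing above the pooled summary.

    Even clustering gives a value near 0.5; a large excess on the side
    of the hypothesized effect suggests biased reporting.
    """
    return sum(1 for e in effects if e > summary) / len(effects)

random.seed(42)
true_effect = 0.3
# Unbiased labs scatter symmetrically around the true effect...
unbiased = [random.gauss(true_effect, 0.1) for _ in range(300)]
# ...while a hypothetically biased subgroup reports inflated effects.
biased = [random.gauss(true_effect + 0.08, 0.1) for _ in range(100)]

all_effects = unbiased + biased
summary = pooled_estimate(all_effects, [0.01] * len(all_effects))

print(round(fraction_above(unbiased, summary), 2))  # near one half
print(round(fraction_above(biased, summary), 2))    # clear excess above
```

The key design point, which mirrors the paper's setup, is that each subgroup is compared against the summary computed from the whole pool of studies, so a biased subgroup stands out even though it drags the summary slightly toward itself.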

They determined the addresses of the authors who submitted each of the papers included in the meta-analyses and then broke them down by region: US, EU, Asia, and "other." They also defined three categories of study: non-behavioral biology, behavioral studies with biological readouts (such as heart rate), and purely behavioral studies.

When it came to biological studies, the results tended to cluster around the typical answer produced by the meta-analysis. There were a number of outliers, but they weren't biased for or against any specific hypothesis. In general, US-based labs produced fewer outliers, in part because US studies had much larger samples, which decreased the variance.
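The sample-size effect is just the familiar behavior of the standard error of a mean, which shrinks with the square root of the sample size. A quick illustration with made-up numbers:

```python
import math

def standard_error(sd, n):
    """Standard error of a sample mean: sd / sqrt(n)."""
    return sd / math.sqrt(n)

# Same underlying variability, two illustrative sample sizes:
print(round(standard_error(1.0, 20), 3))   # small study -> 0.224
print(round(standard_error(1.0, 200), 3))  # 10x larger study -> 0.071
```

A tenfold increase in sample size cuts the expected spread of results by a factor of about 3.2, so well-powered studies naturally produce fewer extreme outliers.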

This tight clustering went away when it came to the behavioral studies, though, which had a much greater frequency of extreme results. And here, US-based researchers showed a clear indication of bias. Rather than being evenly distributed around the typical answer, the results were much more likely to support the experimental hypothesis.

Why do so many positive results get published? Fanelli and Ioannidis suggest it's because the behavioral sciences lack a robust set of unifying theories, in contrast to traditional biology (which has things like evolution and genetics). Without one, researchers can be very flexible about the hypotheses they propose and the methods they use to test them. They also argue that the "publish or perish" mentality that drives US scientists motivates people to report positive results. The two combine, they argue, to make "US researchers potentially more likely to express an underlying propensity to report strong and significant findings."

There may be nothing more nefarious here than writing a paper as if you expected the results you got all along. However, the bias that's been identified could also encompass things like choosing experimental procedures that make the desired results more likely. The study simply can't discriminate between the two.

In any case, lest the rest of the world get smug about this, the authors note that other countries are moving toward a publication-based evaluation system for their researchers as well. So, it may be that a similar bias will crop up in other locations. At the same time, however, it's possible that further research will go some way toward allowing a consensus to form about the appropriate methods for handling certain questions.

PNAS, 2013. DOI: 10.1073/pnas.1302997110