Researchers don’t hide findings that fail to support the prevailing view of human-caused, CO₂-based climate change, according to the first large study to look for so-called publication bias in this branch of the scientific literature. Even so, they may spin results in subtle ways.

Reviews of various scientific disciplines have found that researchers are less likely to report—and journal editors are less likely to publish—studies with negative or non-significant results. This bias may make the public distrust science, particularly in the case of controversial topics such as climate change.

Two previous studies suggested that the climate change literature does show evidence of publication bias. But those efforts were small and limited in scope.

In the new study, researchers assessed 1,154 experimental results from 120 studies of the effect of climate change on marine organisms. The papers were published in 31 scientific journals between 1997 and 2013.

But wait a second. How could the researchers know whether negative results are under-reported, if those results might never make it into the scientific literature in the first place? They graphed their data using funnel plots, which take on a characteristic asymmetry when results are skewed. This approach has been used to detect publication bias for at least 30 years.
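The idea behind a funnel plot can be sketched in a few lines of code. One common way to quantify funnel asymmetry is Egger's regression test: regress each study's standardized effect on its precision, and check whether the intercept differs from zero. The simulation below is a minimal illustration, not the method from the paper itself; the study counts, effect size, and error ranges are invented for the example.

```python
import numpy as np
from scipy.stats import linregress

rng = np.random.default_rng(0)

# Simulate an unbiased literature: 120 studies with a true effect of 0.3,
# each with its own sampling error (bigger studies -> smaller standard error).
n_studies = 120
se = rng.uniform(0.05, 0.5, n_studies)   # per-study standard errors
effects = rng.normal(0.3, se)            # observed effect sizes

# Egger's regression test: regress the standardized effect (effect / SE)
# on precision (1 / SE). A symmetric funnel gives an intercept near zero;
# a large intercept signals asymmetry, i.e. possible publication bias.
res = linregress(1.0 / se, effects / se)
print(f"Egger intercept: {res.intercept:.3f} "
      f"(SE {res.intercept_stderr:.3f}), slope: {res.slope:.3f}")
```

Because no studies were dropped from this simulated literature, the intercept comes out close to zero, and the slope recovers the underlying effect; selectively deleting the small, non-significant "studies" before running the regression would push the intercept away from zero.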

And in this case, the nice, symmetrical funnel plots show no evidence of bias. In other words, the researchers report in the journal Climatic Change, findings indicating that climate change has no effect on marine ecosystems are not under-represented in the scientific literature.

However, the researchers did find some evidence of bias—not in what results are reported but in how they are reported. By analyzing results found in abstracts versus the body of papers, they identified a sort of scientific version of clickbait headlines: Abstracts tend to emphasize significant results and the largest effects, while non-significant findings and those with a smaller ‘wow’ factor are relegated to the depths of the results section.

In addition, the most high-profile scientific journals tend to publish studies showing the largest effects (and based on rather small sample sizes). The difference between the results reported in the abstracts and the bodies of papers is greatest for these top journals.

That could shape the public and scientific debate around climate change. For example, when non-technical readers look at scientific papers, they may be likely to focus on the abstracts and to look at the most famous journals.

Nor are scientific readers immune from these effects. “The practice of sensationalizing abstracts may bias scientific consensus too, assuming many scientists may also rely too heavily on abstracts during literature reviews and do not spend sufficient time delving into the lesser effects reported elsewhere in articles,” the researchers write. In addition, studies published in top journals are by definition likely to be cited more often than those that appear elsewhere.

The study isn’t designed to identify what’s behind these patterns or why scientists and journal editors make these choices. But the results do provide a kind of reality check, an outside view reminding us “that science is a human construct, often driven by human needs to tell a compelling story, to reinforce the positive, and to compete for limited resources,” the researchers write.

Source: Harlos C et al. “No evidence of publication bias in climate change science.” Climatic Change. 2017.