As Retraction Watch readers will likely recall, Paul Brookes ran Science-Fraud.org anonymously until early 2013, when he was outed and faced legal threats that forced him to shut down the site. There are a lot of lessons to be drawn from the experience, some of which Brookes discussed with Science last month.

Today, PeerJ published Brookes’ analysis of the response to critiques on Science-Fraud.org. It’s a compelling examination that suggests public scrutiny of the kind found on the site — often harsh, but always based solidly on evidence — is linked to more corrections and retractions in the literature.

Brookes looked at

497 papers for which data integrity had been questioned either in public or in private. As such, the papers were divided into two sub-sets: a public set of 274 papers discussed online, and the remainder a private set of 223 papers not publicized.

His results?

For primary outcomes, the public set exhibited a 6.5-fold higher rate of retractions, and a 7.7-fold higher rate of corrections, versus the private set. Combined, 23% of the publicly discussed papers were subjected to some type of corrective action, versus 3.1% of the private non-discussed papers. This overall 7-fold difference in levels of corrective action suggests a large impact of online public discussion.

Brookes acknowledged several limitations, as Nature reports:

It is hard to know whether the unpublicized allegations, coming as the site grew more popular, were as well substantiated as the ones Brookes blogged about early on. The privately discussed papers were about three years older, on average, than the public ones, and Brookes speculates that there might have been less pressure to correct them, as the US Office of Research Integrity has a six-year statute of limitations for investigating allegations of misconduct. And papers in the private set might catch up with the public set in time, although this seems unlikely, he says.

Another limitation surfaced when Ivan was asked to review this paper. (We are occasionally asked to peer review, and given the inherent conflicts in reviewing and covering a particular study, we only accept in cases in which the journal will allow us to say that we reviewed the paper, and publish our comments.) One of the issues that came up at the time was that Brookes decided not to make the primary data — in this case, the critiques, along with the titles and authors of the papers in question — available to readers. (A de-identified data set is now included as supplemental information.)

Brookes did not make that decision — which is certainly understandable, given the legal concerns — lightly. In Ivan’s comments (made available on PeerJ), he wrote:

While I appreciate the sensitive issues around this manuscript, and welcome all attempts to correct the scientific literature, I am reluctant to offer a review without being able to see the data upon which the findings are based. The decision to not make those data available is based on sound reasoning, but it still means that this paper is not being held to the same standard as others. If we demand deposition of data, it should be for all papers. This doesn’t mean I think the author should necessarily reverse his decision, just that I would be uncomfortable making a decision without access to the data.

Brookes acknowledges in the paper that because of these limitations, “it is unlikely that the study can be reproduced independently.” And Ferric Fang, whose work on retractions will be familiar to Retraction Watch readers, told Nature:

“It’s a real limitation,” says Fang, who adds that the same problem beset two recent, widely cited studies in which scientists at pharmaceutical firms said that much high-profile academic research could not be replicated. “I don’t have any reason to question the interpretation, though it would be more persuasive to see it with one’s own eyes,” Fang says.

Still, it’s hard to disagree with the paper’s conclusions, given what we’ve seen of many editors’ responses to allegations:

…journals and other institutions may not wish to engage in dealing with such matters. Many journals do not respond to allegations from anonymous correspondents as a matter of policy, and while there are several reasons for this (e.g., not wishing to allow scientific competitors to sabotage rivals’ work), it is clear that journals do have some leeway in determining whether to respond to anonymous correspondents. Aside from the issue of anonymity, these anecdotes are diagnostic of a corrective system that is far from perfect. While it is beyond the scope of this manuscript to speculate on ways to improve the corrective system in the scientific literature, recent developments such as PubPeer and PubMed Commons are seen as steps in the right direction, toward universal and open post-publication peer review.
