Kelly Cobey

Journal stings come in various shapes and sizes. There are the hilarious ones in which authors manage to get papers based on Seinfeld or Star Wars published. There are those that play a role in the culture wars. And then there are some on a massive scale, with statistical analyses.

That’s how we’d describe the latest paper by Ottawa journalologists Kelly Cobey, David Moher and colleagues. We asked Cobey and Moher to answer some questions about the recently posted preprint, “Stress testing journals: a quasi-experimental study of rejection rates of a previously published paper.”

Retraction Watch (RW): What prompted you to do this study?

Kelly Cobey and David Moher (KC and DM): In 2013 John Bohannon conducted a large sting study in which he submitted obviously problematic papers to 304 journals and tracked their responses. Some of the journals were presumed to be predatory, while others were presumed to be legitimate open access journals. Of the 304 submissions, 157 were eventually accepted, and acceptances came from both types of journals. Since then there have been many small sting studies showing how easy it is to get problematic work published in predatory journals.

We wanted to build on these studies, and do something more methodologically rigorous. It has been several years since Bohannon’s study, so our study allows us to consider how the scholarly landscape may have changed over time. We also conceptualized our study as a stress test: did journals, regardless of how they are classified, have sufficient journalology defences in place to withstand inappropriate publication practices?

RW: Describe what you did.

KC and DM: We submitted a paper we previously published in Nature to more than 600 journals from three groups: presumed predatory journals, open access journals, and subscription-based journals. Approximately half of the journals we submitted to received the publisher’s formatted version in PDF, while the others received the accepted manuscript version prepared in Microsoft Word. We tracked journal decisions on the submissions over a period of about one month. This study used deception; we lied during the submissions of the article and indicated it was not published previously. We received ethics approval from our research institute and permission to use the Nature paper from Nature.

RW: You posted your protocol in advance, which is of course good scientific practice but means that some journal editors may have been on the lookout for a manuscript including David Moher’s name. Does that concern you?

David Moher

KC and DM: We did post our protocol in advance, but we embargoed it so that it was not publicly available until the study was complete. This form of registration allowed us to be transparent while maintaining the deception necessary for the study.

RW: We imagine that some critics of Beall’s list may find the choice of it as a source of predatory journals to be problematic. How would you respond?

KC and DM: Beall’s lists (one for single-journal publishers, one for multiple-journal publishers) have many shortcomings. For example, it is unclear to us how he discovered new journals and publishers to consider for his lists, and the criteria he used to list journals and publishers were not always transparent. We considered this when designing our study, but there were few other options that were practical. One challenge is that many presumed predatory journals are not indexed, which makes them hard to identify. The only other list of predatory journals we know of is from Cabell’s; that list is behind a paywall and may suffer from shortcomings similar to Beall’s. Our work was unfunded, so we didn’t pay to access it.

At the time we conducted our work, there was no consensus definition of what a predatory journal is. We now have a definition, and could use it in future work to assess journals to determine if they meet the definition.

RW: What were your findings? Did any surprise you?

KC and DM: We received correspondence back from 308 of the 602 journals we submitted to. Just 4 journals accepted our paper, and all of these were suspected predatory journals. Three accepted the Word version, and one accepted the PDF version. Thirteen journals requested a revision of the article (1 presumed predatory journal, 6 open access journals, 6 subscription-based journals).

We were surprised by just how few journals accepted our paper. The rate of acceptance was much lower than in the earlier Bohannon sting study. We thought that we might see some open access and subscription-based journals accept the paper too, but they did not.

RW: Some 40% of the journals rejected the manuscript because it was out of scope. Do you have any information on whether this was just a way to reject a problematic paper without making an allegation of duplication or plagiarism?

KC and DM: We do not know if editors who rejected our paper for being out of scope also identified issues with plagiarism. If they did, they failed to express these concerns. We think that failure is problematic since journals play an important role in maintaining publication ethics. COPE provides guidance on what editors should do when they suspect plagiarism in a submission; journals have a responsibility to act.

RW: You write that “a very small number of editors contacted the institution of the corresponding author to express their concerns about the submission.” What happened in these cases?

KC and DM: Our study received ethical approval from our institution and Nature gave us permission to use our paper for the purposes of the study. We also informed the head of our program and the CEO of our institution that the work was happening. When editors contacted the institution expressing concern about the paper, whoever received the correspondence at our institution did not respond until the end of the study period. This was done to ensure the validity of the study – we did not want word to spread about the study within the editorial community. All journals we submitted to were sent a debrief form at the end of the study. The chair of the ethics committee was the conduit for responding, regardless of whether the complaint was made to the CEO of Dr. Moher’s research institute or to university faculty (the Dean of Medicine).

RW: You write that your “findings suggest that all three types of journals may not have adequate safeguards in place to recognize and act on plagiarism or duplicate submissions.” It will likely come as no surprise to hear that we agree. What kinds of safeguards would you recommend?

KC and DM: Journals should apply plagiarism detection software to help address this issue. We recognize, however, that this tool comes at a cost and may not be feasible for some under-resourced journals.

Journals should also engage in education. Many editors receive very little training before assuming their roles. The same is true of peer reviewers. Making those involved in the peer review process better informed of publication ethics issues is essential. They also need to be provided with clear guidance in terms of how to act when they suspect an issue.

Adopting open science practices may also help journals ensure that what they are publishing has not been published previously. For example, if all studies were registered and corresponding articles/preprints/materials were digitally linked to registrations, it would be possible to check what work has been published related to the registration.
