To the best of our knowledge, there is virtually no evidence that ‘scooping’ of research via preprints exists, even in communities that have broadly adopted the arXiv server for sharing preprints since 1991. If the unlikely case of scooping emerges as the preprint system continues to grow, it can be dealt with as academic malpractice. One preprint FAQ includes a series of hypothetical scooping scenarios, concluding that the overall benefits of using preprints vastly outweigh any potential issues around scooping 3. Indeed, the benefits of preprints, especially for early-career researchers, seem to outweigh any perceived risk: rapid sharing of academic research, open access without author-facing charges, establishing priority of discoveries, receiving wider feedback in parallel with or before peer review, and facilitating wider collaborations [11]. That being said, in research disciplines which have not yet widely adopted preprints, scooping should still be acknowledged as a potential threat, and protocols should be implemented in the event that it occurs.

Preprints provide a time-stamp at the time of publication, which establishes the “priority of discovery” for scientific claims ([13], Figure 1). Thus, a preprint can act as proof of provenance for research ideas, data, code, models, and results [14]. The fact that the majority of preprints come with a form of permanent identifier, usually a Digital Object Identifier (DOI), also makes them easy to cite and track; and articles published as preprints tend to accumulate more citations at a faster rate [15]. Thus, if one were ‘scooped’ without adequate acknowledgment, this could be pursued as a case of academic misconduct and plagiarism.

Moreover, preprints protect against scooping [11]. Considering the differences between traditional peer-review-based publishing models and deposition of an article on a preprint server, ‘scooping’ is less likely for manuscripts first submitted as preprints. In a traditional publishing scenario, the time from manuscript submission to acceptance and final publication can range from a few weeks to years, with several rounds of revision and resubmission before final publication ([12], see Figure 1). During this time, the same work will have been extensively discussed with external collaborators, presented at conferences, and read by editors and reviewers in related areas of research. Yet, there is no official open record of that process (e.g., peer reviewers are normally anonymous, and reports remain largely unpublished), and if an identical or very similar paper were published while the original was still under review, it would be impossible to establish provenance.

A persistent issue surrounding preprints is the concern that work may be at risk of being plagiarized or ‘scooped’—meaning that the same or similar research will be published by others without proper attribution to the original source—if the original is publicly available without the stamp of approval from peer reviewers and traditional journals [10]. These concerns are often amplified as competition increases for academic jobs and funding, and are perceived to be particularly problematic for early-career researchers and other higher-risk demographics within academia.

A ‘preprint’ is typically a version of a research paper that is shared on an online platform prior to, or during, a formal peer review process [6–8]. Preprint platforms have become popular in many disciplines due to the increasing drive towards open access publishing, and can be publisher- or community-led. A range of discipline-specific or cross-domain platforms now exist [9].

A number of regional focal points and initiatives now provide and suggest alternative research assessment systems, including key documents such as the Leiden Manifesto 6 and the San Francisco Declaration on Research Assessment (DORA) 7. Recent developments around ‘Plan S’ call for adopting and implementing such initiatives alongside fundamental changes in the scholarly communication system 8. There is thus little basis for connecting JIFs with any measure of quality, and the inappropriate association of the two will continue to have deleterious effects. As more appropriate measures of quality for authors and research, concepts of research excellence should be remodeled around transparent workflows and accessible research results [21,34].

Despite its inappropriateness, many countries regularly use JIFs to evaluate research [18,31], which creates a two-tier scoring system that automatically assigns a higher score (e.g., type A) to papers published in journals with a JIF or in internationally indexed journals, and a lower score (e.g., type B) to those published locally. Most recently, the organization that formally calculates the JIF released a report outlining its questionable use 5. Despite this, outstanding issues remain around the opacity of the metric and the fact that it is often negotiated by publishers [32]. Yet these integrity problems appear to have done little to curb its widespread misuse.

Empirical evidence shows that misuse of the JIF—and journal ranking metrics in general—creates negative consequences for the scholarly communication system. These include conflation of the reach of a journal with the quality of individual papers, and insufficient coverage of the social sciences and humanities, as well as of research outputs from Latin America, Africa, and South-East Asia [28]. Additional drawbacks include marginalizing research in vernacular languages and on locally relevant topics, inducing unethical authorship and citation practices, and fostering a reputation economy in academia based on publishers’ prestige rather than actual research qualities such as rigorous methods, replicability, and social impact. Using journal prestige and the JIF to cultivate a competition regime in academia has had deleterious effects on research quality [29].

About ten years ago, national and international research funding institutions pointed out that numerical indicators such as the JIF should not be considered a measure of quality 4. In fact, the JIF is a highly manipulated metric [22–24], and the justification for its continued widespread use beyond its original narrow purpose seems to owe more to its simplicity (an easily calculable and comparable number) than to any actual relationship with research quality [25–27].

However, this usage of the JIF is fundamentally flawed: by the early 1990s it was already clear that the use of the arithmetic mean in its calculation is problematic, because the distribution of citations is skewed. Figure 2 shows citation distributions for eight selected journals (data from Larivière et al., 2016 [19]), along with their JIFs and the percentage of citable items below the JIF. The distributions are clearly skewed, making the arithmetic mean an inappropriate statistic for saying anything about individual papers (and the authors of those papers) within them. More informative and readily available article-level metrics can be used instead, such as citation counts or ‘altmetrics’, along with other qualitative and quantitative measures of research ‘impact’ [20,21].
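The statistical point can be illustrated with a small, entirely hypothetical citation distribution: a single heavily cited paper pulls the arithmetic mean (the JIF-style statistic) far above what a typical paper in the journal experiences, while the median barely moves. The numbers below are invented for illustration and do not come from any real journal.

```python
import statistics

# Hypothetical citation counts for 10 papers in one journal:
# most papers are cited a handful of times, one is cited heavily.
citations = [0, 1, 1, 2, 2, 3, 3, 4, 5, 120]

mean = statistics.mean(citations)      # pulled up by the single outlier
median = statistics.median(citations)  # reflects the typical paper

below_mean = sum(1 for c in citations if c < mean)
print(f"mean (JIF-style) = {mean:.1f}")              # prints "mean (JIF-style) = 14.1"
print(f"median = {median}")                          # prints "median = 2.5"
print(f"{below_mean}/{len(citations)} papers below the mean")  # prints "9/10 papers below the mean"
```

In this made-up sample, nine of the ten papers sit below the mean, mirroring the pattern in Figure 2, where a large percentage of citable items fall below the journal's JIF.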

The journal impact factor (JIF) was originally designed by Eugene Garfield as a metric to help librarians decide which journals were worth subscribing to. The JIF aggregates the number of citations to articles published in each journal, and then divides that sum by the number of published and citable articles. Since then, the JIF has come to be treated as a mark of journal ‘quality’ and has gained widespread use in the evaluation of research and researchers, even at the institutional level. It thus has a significant impact on steering research practices and behaviors [16–18].
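The calculation just described is simple arithmetic: a sum of citations divided by a count of citable items. A minimal sketch, using a hypothetical function name and made-up figures, might look like:

```python
# Illustrative sketch of the JIF calculation described above:
# citations received to items the journal published in a prior window,
# divided by the number of "citable" items published in that window.
# The function name and all figures are hypothetical.

def journal_impact_factor(total_citations: int, citable_items: int) -> float:
    """Total citations divided by citable items: an arithmetic mean per article."""
    return total_citations / citable_items

# A made-up journal: 1,200 citations to 400 citable articles.
jif = journal_impact_factor(1200, 400)
print(f"JIF = {jif:.1f}")  # prints "JIF = 3.0"
```

Note that what counts as "citable" sits in the denominator only, which is one reason the metric's opacity and negotiability (discussed below) matter: shrinking the denominator inflates the result without any change in citations.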

Scientists understand that peer review is a human process, with human failings, and that despite its limitations, we need it. But these subtleties are lost on the general public, who often only hear that being published in a peer-reviewed journal is the “gold standard” and can erroneously equate published research with the truth. Thus, more care must be taken over how peer review, and the results of peer-reviewed research, are communicated to non-specialist audiences; particularly during a time in which a range of technical changes and a deeper appreciation of the complexities of peer review are emerging [48–52]. This will be needed as the scholarly publishing system confronts wider issues such as retractions [39,54] and replication or reproducibility ‘crises’ [55–57].

Another problem that peer review often fails to catch is ghostwriting, a process by which companies draft articles for academics who then publish them in journals, sometimes with few or no changes [46,47]. These studies can then be used for political, regulatory, and marketing purposes. In 2010, the US Senate Finance Committee released a report finding that this practice was widespread, that it corrupted the scientific literature, and that it increased prescription rates 11. Ghostwritten articles have appeared in dozens of journals, involving professors at several universities 12. Recent court documents have shown that Monsanto ghost-wrote articles to counter government assessments of the carcinogenicity of the pesticide glyphosate and to attack the International Agency for Research on Cancer 13. Thus, peer review seems largely inadequate for exposing or detecting conflicts of interest and mitigating their potential impact.

At times, peer review has been exposed as a process orchestrated for a preconceived outcome. Investigative journalists gained access to confidential peer review documents for studies sponsored by the National Football League (NFL) that were cited as scientific evidence that brain injuries do not cause long-term harm to its players 10. During the peer review process, the authors of the study stated that all NFL players were part of a study, a claim that the reporters found to be false by examining the database used for the research. Furthermore, the reporters noted that the NFL sought to legitimize the studies’ methods and conclusions by citing a “rigorous, confidential peer-review process”, despite evidence that some peer reviewers seemed “desperate” to stop their publication. Such behavior represents a tension between reviewers wishing to prevent the publication of flawed research and publishers wishing to publish highly topical studies. Recent research has also demonstrated that widespread industry funding for published medical research often goes undeclared, and that such conflicts of interest are not appropriately addressed by peer review [44,45].

Multiple examples across several areas of science show that scientists have elevated the importance of peer review for research that was questionable or corrupted. For example, climate change skeptics have published studies in a peer-reviewed journal in an attempt to undermine the body of research showing how human activity impacts the Earth’s climate. Politicians in the United States who downplay the science of climate change have then cited this journal on several occasions in speeches and reports 9.

Researchers have peer-reviewed manuscripts prior to publication in a variety of ways since the 18th century [35,36]. The main goal of this practice is to improve the relevance and accuracy of scientific discussions by contributing knowledge, perspective, and experience. Even though experts often criticize peer review for a number of reasons, the process is still often considered the “gold standard” of science [37,38]. Occasionally, however, peer review approves studies that are later found to be wrong, and only rarely are deceptive or fraudulent results discovered prior to publication [39,40]. Thus, there seems to be an element of discord between the ideology behind peer review and its actual practice. By failing to communicate effectively that peer review is imperfect, the message conveyed to the wider public is that studies published in peer-reviewed journals are “true” and that peer review protects the literature from flawed science. Yet, a number of well-established criticisms exist of many elements of peer review [41–43]. In the following, we describe cases of the wider impact of inappropriate peer review on public understanding of the scientific literature.

2.4. Topic 4: Will the quality of the scientific literature suffer without journal-imposed peer review?

Peer review, without a doubt, is integral to scientific scholarship and discourse. Too often, however, this central scholarly component is co-opted for administrative goals: gatekeeping, filtering, and signaling. Its gatekeeping role is believed to be necessary to maintain the quality of the scientific literature [58,59]. Furthermore, some have argued that without the filter provided by peer review, the literature risks becoming a dumping ground for unreliable results, researchers will not be able to separate signal from noise, and scientific progress will slow [60,61]. These beliefs can be detrimental to scientific practice.

The previous section argued that the existing journal-imposed peer-review gatekeeper is not effective, and that ‘bad science’ frequently enters the scholarly record. A possible reaction is to think that the shortcomings of the current system can be overcome with more oversight, stronger filtering, and more gatekeeping. A common argument in favor of such initiatives is the belief that a filter is needed to maintain the integrity of the scientific literature [62,63]. But if the current model is ineffective, there is little rationale for doubling down on it. Instead of more oversight and filtering, why not less?

The key point is that if anyone has a vested interest in the quality of a particular piece of work, it surely is the author. Only the authors could have, as Feynman (1974) 14 puts it, the “extra type of integrity that is beyond not lying, but bending over backwards to show how you’re maybe wrong, that you ought to have when acting as a scientist.” If anything, the current peer review process and academic system penalizes, or at least fails to incentivize, such integrity.

Instead, the credibility conferred by the "peer-reviewed" label diminishes what Feynman calls the culture of doubt necessary for science to operate as a self-correcting, truth-seeking process [64]. The troubling effects of this can be seen in the ongoing replication crisis, hoaxes, and widespread outrage over the inefficacy of the current system [35,41]. The issue is exacerbated by the fact that it is rarely just experts who read or use peer-reviewed research (see Topic 3), and the wider public impact of this problem remains poorly understood (see, for example, the anti-vaccination movement). It is common to think that more oversight is the answer, but peer reviewers are not at all lacking in skepticism. The issue is not the skepticism of the select few who determine whether an article passes through the filter; it is the validation, and accompanying lack of skepticism, from both the scientific community and the general public, that comes afterwards 15. Here again, more oversight only adds to the impression that peer review ensures quality, thereby further diminishing the culture of doubt and counteracting the spirit of scientific inquiry 16.

Quality research—even some of our most fundamental scientific discoveries—dates back centuries, long before peer review took its current form [35,65]. Whatever peer review existed centuries ago took a different form than it does now, without the influence of large commercial publishing companies or a pervasive publish-or-perish culture [65]. Though in its initial conception it was often a laborious and time-consuming task, researchers took on peer review nonetheless, not out of obligation but out of a duty to uphold the integrity of their own scholarship. They managed to do so, for the most part, without the aid of centralized journals, editors, or any formalized or institutionalized process. Modern technology, which makes it possible to communicate instantaneously with scholars around the globe, only makes such scholarly exchanges easier, and presents an opportunity to restore peer review to its purer scholarly form: a discourse in which researchers engage with one another to better clarify, understand, and communicate their insights [51,66].

A number of measures can be taken towards this objective, including posting results to preprint servers, preregistration of studies, open peer review, and other open science practices [56,68]. In many of these initiatives, however, the role of gatekeeping remains prominent, as if it were a necessary feature of all scholarly communication. The discussion in this section suggests otherwise, but such a “myth” cannot be fully disproven without a proper, real-world implementation to test it. The new and ongoing developments around peer review [43] demonstrate researchers’ desire for more than what many traditional journals can offer. They also show that researchers can be entrusted to perform their own quality control independent of journal-coupled review. After all, the outcry over the inefficiencies of traditional journals centers on their inability to provide rigorous enough scrutiny, and on the outsourcing of critical thinking to a concealed and poorly understood process. Thus, the strong coupling between journals and peer review, framed as a requirement to protect scientific integrity, seems to undermine the very foundations of scholarly inquiry.

To test the hypothesis that filtering is unnecessary for quality control, many traditional publication practices must be redesigned, editorial boards must be repurposed, and authors must be granted control over the peer review of their own work. Putting authors in charge of their own peer review serves a dual purpose. On the one hand, it removes the conferral of quality within the traditional system, thus eliminating the prestige associated with the simple act of publishing; perhaps paradoxically, the removal of this barrier might actually increase the quality of published work, as it eliminates the cachet of publishing for its own sake. On the other hand, readers, both scientists and laypeople, would know that there is no filter, and so would have to interpret anything they read with a healthy dose of skepticism, thereby naturally restoring the culture of doubt to scientific practice [69–71].

In addition to concerns about the quality of work produced by well-meaning researchers, there are concerns that a truly open system would allow the literature to be populated with junk and propaganda by vested interests. Though a full analysis of this issue is beyond the scope of this section, we once again emphasize how the conventional model of peer review diminishes the healthy skepticism that is a hallmark of scientific inquiry, and thus confers credibility upon subversive attempts to infiltrate the literature. As we have argued elsewhere, there is reason to believe that allowing such “junk” to be published makes individual articles less reliable but renders the overall literature more robust by fostering a “culture of doubt” [72].

We are not suggesting that peer review should be abandoned. Indeed, we believe that peer review is a valuable tool for scientific discourse, and that a proper implementation will improve the overall quality of the literature. One essential component of such an implementation is an open dialogue between authors and readers: a forum in which readers can explain why they disagree with the authors’ claims, authors have the opportunity to revise and improve their work, and non-experts get a clue as to whether the results in an article are reliable.