Josh Salvi is a biomedical fellow in the Laboratory of Sensory Neuroscience at The Rockefeller University and a student in the Tri-Institutional MD-PhD Program. He also serves as Executive Director of the Weill Cornell Community Clinic. You can read more of his posts on his blog, Musings of a MudPhud, and can follow him on Twitter (@joshsalvi).

A key component of science is communication. We hope that this communication is accurate, conveys its intended meaning, and remains archived for future reference. The medium by which the message is conveyed must therefore be regulated.

Peer review is the process by which members of a field evaluate the work of other members of the same field as a form of regulation. This increases credibility and, presumably, quality within the field. The term can refer to the review of manuscripts for publication, the review of teaching methods by other educators, or, within the medical profession, the creation and maintenance of health care standards. For the purposes of this post, my focus will be on scholarly peer review, particularly on methods of peer review in publication rather than in the clinical setting. Technical peer review in fields such as engineering, and standardization within education, will not be discussed here. Remember, however, that “peer review” is a broad term encompassing many fields. The purpose of this post is to provide historical context and to bring into focus the benefits and drawbacks of our current system.

In 1665, Henry Oldenburg created the first scientific journal to undergo peer review, the Philosophical Transactions of the Royal Society. Peer review in this journal differed from the kind we see today: whereas today’s articles are reviewed by professionals in the same field, often in competing labs, articles in this journal were reviewed by the Council of the Society. The journal nonetheless created a foundation for the papers we see today, disseminating peer-reviewed work and archiving it for later reference. In the 18th century, peer review developed into a process in which other professionals, often experts in the field, performed the review, as opposed to the editorial review of the aforementioned journal. This form of scholarly peer review did not become institutionalized until closer to the 20th century. Professional peer review, however, such as that performed by physicians, dates back to the 9th and 10th centuries, when one physician would comment on the ethical decisions or procedures of another.

Since that time, scholarly peer review has become a mainstay of academic publication. It is remarkable that this regulatory process has been firmly established for less than a century. The procedure, however, does not come without significant criticism (though what topic in science is not heavily criticized?).

First, though, let us consider the benefits of scholarly peer review. One, mentioned above, is the improved quality of published work. Simply put, peer review first presents a barrier that authors must overcome in order to be published; critiques from reviewers are then addressed by the authors to improve the quality of the manuscript. These suggestions may include additional experiments that further test the work. The process filters out scientific error, improving the accuracy of published information, and poor-quality work is rejected. Additionally, work is stratified by journal quality, and peer review routes papers to the appropriate tier. In effect, peer review is at the heart of scientific critique.

One of the most common critiques of peer review is that it remains untested, as argued in a 2002 article in JAMA. The Cochrane Collaboration concluded in 2003 (and reconfirmed in 2008) that there exists “little empirical evidence to support the use of editorial peer review as a mechanism to ensure quality of biomedical research, despite its widespread use and costs.” They recommended, “A large, well-funded programme of research on the effects of editorial peer review should be urgently launched.” Additionally, one study took an article about to be published in the British Medical Journal (BMJ), deliberately introduced a number of errors, and found that reviewers detected only about 25% of the errors, with no single reviewer identifying more than 65% of them. This study is particularly interesting because it was headed by Dr. Fiona Godlee, who later went on to critique the lack of external peer review of the Cochrane Collaboration itself. Her pioneering work in this field has stimulated much interest.

Finally, single-blind peer review is open to bias: bias based on nationality, language, specialty, gender, or competition. There is also a common bias toward positive results. Double-blind review may help to overcome this critique.

Alternatives to single-blind review include double-blind review, post-publication review, and open review. In double-blind review, neither the authors nor the reviewers know the other party’s identity, which would presumably reduce the aforementioned biases; surveys have shown a preference for double-blind review. Post-publication review would be an excellent supplement to the current system, improving the rate at which errors in publications are corrected. Finally, open peer review, in which the reviewer’s identity is known, might also reduce bias. However, a reviewer may be less willing to critique work by a senior author in the field, and Nature’s pilot of open review in 2006 was far from successful.

The question is not, “Is peer review an effective system?” I believe it is. Instead, the question is, “Why does peer review sometimes fail to meet our lofty expectations?” This is a question that can be answered with rigor.

At this stage, the system is the best we have, and the problems lie less in the peer-review process itself than in access to scholarly work, which too often requires a costly subscription. Discontent in the field does not translate to a desire for any one of the alternative methods described above. Nonetheless, we should be critical of our process, much in the same way the process itself is critical.