by Stephen Curry, Professor of Structural Biology, Imperial College (@Stephen_Curry)

As the song goes – and I have in mind the Beatles’ 1963 cover version of Money (That’s What I Want) – “the best things in life are free.” But is peer review one of them? The freely given service that many scientists provide as validation and quality control of research papers submitted for publication has its critics. Richard Smith, who served as the editor of the British Medical Journal from 1991 to 2004, considered peer review to be “ineffective, largely a lottery, anti-innovatory, slow, expensive, wasteful of scientific time, inefficient, easily abused, prone to bias, unable to detect fraud and irrelevant.” Although my own experience, and that of many colleagues, is that peer review mostly provides valuable clarification and polishing of submitted manuscripts, Smith is worth listening to because there are growing concerns about the inability of peer review to provide a sufficient test of the integrity of the scientific record. That trend should worry everyone involved in scholarly publication.

Ultimately the problem is money – and prestige – as embodied in the publish-or-perish culture that has allowed preoccupations with journal-based ‘measures’ of esteem to gain an overweening influence in bids for jobs, promotion, and research funding. The overheated demand for publications incentivises scientific fraud, reinforces the well-known bias towards positive results, and erodes the duty of care that we teach our students to take over the controls and statistical power of experiments – all of which present severe tests for the capabilities of even the most conscientious reviewers. Lately, an alarming new phenomenon has emerged: peer-review cartels that allow authors or their friends to review their own manuscripts, revealing that reviewers themselves are sometimes deliberately undermining the quality of published papers.

The increasing awareness of nefarious practices within academia – even if such practices remain very much in the minority – has triggered several attempts to straighten things out. These include a renewed emphasis on robust procedures of research evaluation (e.g. the San Francisco Declaration on Research Assessment (DORA) and the Leiden Manifesto), initiatives in pre-registration and testing reproducibility, and moves towards greater transparency among journals, funders and scholarly societies.

The mechanisms of peer review have themselves come under renewed scrutiny. The debates around blinding, anonymity, and openness have been stirred once again, alongside a newer question: should researchers be credited for doing peer review? The matter is less one of a direct material reward than public acknowledgement of the quantity and quality of the reviewing work that researchers undertake. While peer review is considered to be an important function within the research community – one that it routinely brags about to the general public – it is still conducted anonymously and out of sight. But the invisibility cloak that generally surrounds it has a doubly negative effect: it shields those who do a poor job from broader scrutiny and it means that those who review with great care and commitment are not recognised for their efforts.

Is crediting reviewers the answer? I think there is certainly a strong case for making reviews open by publishing them alongside the revised paper, a practice that has already been adopted (with permission of the authors and reviewers) by some journals. The rising popularity of preprints is also enabling greater transparency throughout the publication process – as exemplified by journals such as Atmospheric Chemistry and Physics, and the F1000 Research platform. Even if reviewers’ reports remain anonymous and uncredited, my view is that such openness is a significant spur to higher-quality reviews.

Nevertheless I retain reservations about calls for all reviews to be signed. Anonymity affords early career researchers a useful degree of protection, so that they can give a frank assessment of the work of more senior scientists who might later be sitting in judgement over them. We might hope for a better future where everyone acts professionally, but we should be realistic about the flaws of our human nature.

As regards the various services that have popped up in recent years to provide credit (and in some cases rewards and prizes) for review activity, we need to remain vigilant about unintended effects. At first sight, outfits like Publons, Reviewer Credits, Peerage of Science, and Elsevier’s Reviewer Recognition appear to be an unfettered good. They help journals find conscientious reviewers and ease the pathway to providing reviewer credit where it is obviously due, whether the reviewing task is performed anonymously or not. Such services usefully validate researchers’ claims, usually made on CVs, to their work as reviewers.

In most cases, however, it is quantity, not quality, that is being recorded (unless the reviewer, with the permission of the journal, has made the review open). Publons, which was acquired earlier this year for an undisclosed sum by Clarivate Analytics, the purveyor of the Journal Impact Factor, states that its mission is to “turn peer review into a measurable indicator of a researcher’s expertise and contributions to their field.” The metricisation of review is a worry, particularly since one of the company’s future goals is a commitment to “assessing the quality and performance of peer review around the world.” In itself that would be a laudable goal, but I would not want to see it lead to the creation of a Reviewer Quality Score quoted to three decimal places. It would therefore be helpful if companies offering reviewer credit opened themselves up for discussion on how they mean to tackle unintended effects.

In the end, while I would like to see greater credit given to reviewers, I would like to see that done through peer review – that is, with nuanced judgement. I share David Crotty’s reservations about the trend towards enumerating yet another component of academic endeavour. I favour a greater push towards publication of reviews, preferably with a DOI so that they can be captured by ORCID, but at the very least with a validating mechanism that might allow even anonymous reviewers to cite their work confidentially (e.g. in internal promotion procedures).

I am with Merton in seeing peer review as a ‘communalist’ activity that should be done in the spirit of amateurism that still pervades the research community. Ultimately, of course, we need to have a conversation with that community – in labs and common rooms across the globe – to find out if most people still also see peer review as a labour of love (which, as the Beatles contended, can’t be bought with money). I hope that the forthcoming ASAPbio meeting in 2018 will be an opportunity to have that conversation.