In 2010, researchers Jeremy Fox and Owen Petchey issued a stark warning about the sustainability of scientific peer review. “The peer review system is breaking down and will soon be in crisis,” they wrote in The Bulletin of the Ecological Society of America [1]. As the number of published articles was rising each year, demand for reviews would soon outstrip supply, they said.

But a new calculation argues that there’s no need for alarm. In 2015, the number of scientists in the life sciences far exceeded the demand on them for peer review, according to Michail Kovanis, a computational physicist at the French National Institute of Health and Medical Research (INSERM) in Paris, and his colleagues. In fact, the supply of scientists is rising faster than the demand for reviewers, the researchers say.

Yet their study also suggests – based on data obtained from a rapidly growing website of peer-review activity – that 20% of the scientists undertook between 69% and 94% of reviews last year, lending credence to some researchers’ complaints that they are overburdened. “These ‘peer-review heroes’ may be overworked, with risk of downgraded peer-review standards,” Kovanis and colleagues write in a paper published in PLOS One earlier this month [2].

Modelling peer review

Kovanis’s team examined published papers listed in Medline, a database of life-sciences research, for 1990–2015. For each year, they estimated the number of reviews required by making assumptions about what fraction of papers go through multiple rounds of review. For the 1.1 million papers published in 2015, for example, the researchers estimated that those studies needed 9 million reviews (although they added that the figure could be as high as 30 million). All those reviews would suck up a total of 63.4 million hours, they say, basing those estimates on earlier peer-reviewer surveys by Dutch publisher Elsevier.

To find out how many manuscripts a scientist typically reviews each year, the team used data from the website Publons, covering the activity of more than 70,000 researchers who had uploaded peer-review records (in part, to receive recognition for their work). On that basis, scientists who are active reviewers conduct around five reviews each year on average, the team says, although some get through dozens and others only do one. In 2015, they suggest, 1.8 million reviewers were needed to meet demand.

The number of scientists available always exceeds the demand for reviewers, say the researchers. In 2015, for example, around 6.4 million authors were listed on life-science research papers – giving journal editors a vast over-supply of people to ask. Even if editors only asked specific authors to be reviewers — for example, those in prestigious first and last positions in the author list — they still had 2.1 million candidates.
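The supply-and-demand arithmetic above can be reproduced in a few lines of Python. All the input figures are those quoted in the article; the hours-per-review and reviews-per-paper values are derived from them here for illustration and are not statistics quoted in the paper itself:

```python
# Back-of-envelope reproduction of the 2015 supply-vs-demand estimate.
# Input figures come from the article; derived ratios are illustrative.

papers_2015 = 1.1e6          # Medline-listed life-science papers, 2015
reviews_needed = 9e6         # central estimate (upper bound: 30 million)
total_hours = 63.4e6         # estimated total reviewing time (Elsevier surveys)

reviews_per_reviewer = 5     # average annual reviews per active reviewer (Publons)
reviewers_needed = reviews_needed / reviews_per_reviewer

authors_2015 = 6.4e6         # authors listed on life-science papers, 2015
first_last_authors = 2.1e6   # restricting to first- and last-author positions

print(f"Reviews per paper (derived):  {reviews_needed / papers_2015:.1f}")
print(f"Hours per review (derived):   {total_hours / reviews_needed:.1f}")
print(f"Reviewers needed:             {reviewers_needed / 1e6:.1f} million")
print(f"Supply/demand, all authors:   {authors_2015 / reviewers_needed:.1f}x")
print(f"Supply/demand, first/last:    {first_last_authors / reviewers_needed:.2f}x")
```

Even under the restrictive first-and-last-authors assumption, the supply of candidate reviewers (2.1 million) still exceeds the 1.8 million reviewers needed, which is the basis for the paper’s central claim.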

Unequal burden

In general, “the maths makes sense, and the numbers should hold true,” says Tim Vines, who runs the peer-review agency Axios Review in Vancouver, Canada, although experts will argue with some of the assumptions of Kovanis’s model, he says. Phil Davis, a publishing consultant in Ithaca, New York, says that while the so-called ‘crisis’ may indeed not be affecting elite journals, lesser-known journals that receive a lot of badly written manuscripts may well be struggling to find reviewers, a possibility that the PLOS One paper doesn’t analyse.

Petchey, now at the University of Zurich in Switzerland, says that it’s good to know there are enough reviewers available, and notes that his earlier critique was based on anecdotal, not quantitative, evidence. “That means that it is OK for me to say ‘no’ to an editor from time to time,” he says.

But the Publons data also suggest that a small number of reviewers do most of the work. That, says Vines, holds true in his experience as managing editor of the journal Molecular Ecology between 2008 and 2015. At that time, he says, the journal’s top 300 reviewers — who were 8% of the journal’s reviewing pool — took more than a quarter of the manuscripts. This was in part because some researchers refused to review, but also because editors leant heavily on reviewers they knew and trusted, he says.

Bernd Pulverer, chief editor of the EMBO Journal in Heidelberg, Germany, says he is not sure that the data from Publons accurately represents how many reviews researchers actually do, since many scientists aren't yet registered with the site. Still, he says, editors do need to broaden their referee pool to include younger researchers, as well as people in Asia. “We are too restrained in where we turn for reviewers,” he says. A 2014 Elsevier survey found that Chinese researchers wrote substantially more papers than they reviewed, simply because they were not asked to be reviewers.

Martijn Arns, a neuroscientist at Utrecht University in the Netherlands, cautions that even if there are enough potential reviewers available, this doesn’t mean that they conduct their reviews thoroughly. As their administrative tasks pile up, pressured scientists may feel they can skimp on time put into reviews, causing a crisis of poor reviewing. “Quantity does not equal quality,” he says.