“It takes a researcher 3–5 hours to review a manuscript,” editors quip, “whether you give them a week or six months!”

I’ve heard many variations on this joke, but the principle remains: most people are motivated by deadlines and need periodic reminders to meet them.

One academic discipline that could use stronger motivation is economics, where researchers commonly wait months, if not years, for their manuscripts to be published (Ellison, 2002). In the early 1990s, economists were toying with the idea of paying reviewers to speed up the process (Mason, 1992). At present, the American Economic Association is one of the few publishers that pays reviewers. The B.E. Journal of Economic Analysis & Policy continues to use a banking model in which researchers, by reviewing other manuscripts, accumulate credits that they can apply toward having their own manuscripts reviewed. This is similar to the PubCred banking model proposed for ecology.
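To make the mechanics of such a credit bank concrete, here is a minimal sketch in Python. The class, credit amounts, and rules are my own illustrative assumptions, not the actual accounting used by The B.E. Journal or the PubCred proposal:

```python
# Illustrative sketch of a review-credit "banking" ledger, loosely modeled on
# the idea described above. The credit amounts and rules are hypothetical --
# neither journal's actual accounting is described in this post.

class ReviewBank:
    REVIEW_EARNS = 1      # credits earned per completed review (assumed)
    SUBMISSION_COSTS = 2  # credits spent per submission (assumed)

    def __init__(self):
        self.balances = {}  # researcher name -> credit balance

    def complete_review(self, researcher: str) -> None:
        """Credit a researcher for reviewing someone else's manuscript."""
        self.balances[researcher] = self.balances.get(researcher, 0) + self.REVIEW_EARNS

    def submit_manuscript(self, researcher: str) -> bool:
        """Debit credits for a submission; refuse if the balance is too low."""
        if self.balances.get(researcher, 0) < self.SUBMISSION_COSTS:
            return False  # must review more manuscripts first
        self.balances[researcher] -= self.SUBMISSION_COSTS
        return True

bank = ReviewBank()
bank.complete_review("Dr. A")
bank.complete_review("Dr. A")
assert bank.submit_manuscript("Dr. A")      # 2 credits earned, 2 spent
assert not bank.submit_manuscript("Dr. A")  # balance exhausted
```

The design choice worth noting is that submissions cost more credits than a single review earns, which is how such schemes oblige each author to review more manuscripts than they submit.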

With the exception of the BMJ, few editors feel comfortable experimenting on their own journal, relying instead on anecdotal evidence or personal preference to guide journal practices. When it comes to publishing, everyone believes they are an expert.

In a recent paper, “How Can We Increase Prosocial Behavior? An Experiment with Referees at the Journal of Public Economics,” Raj Chetty, Harvard economist and editor of the Journal of Public Economics, decided to experiment on his own journal, testing whether shortened deadlines and cash incentives increased the speed and quality of peer reviews.

Over a 20-month period, 1,500 of the journal’s referees were randomly assigned to one of four groups:

1. A control group with a six-week (45-day) deadline to submit a referee report
2. A group with a shortened four-week (28-day) deadline
3. A cash incentive group rewarded with $100 for meeting the four-week deadline
4. A social incentive group in which referees were told that their review times would be posted publicly

Chetty reported that shortening the reviewer deadline from six to four weeks reduced median review times by 12 days (from 48 to 36 days). Most reviewers still miss their deadline, but much of the speedup occurs in the week just before it. Writing on the journal’s website, the authors explain:

If you shorten the deadline by two weeks you receive reviews two weeks earlier on average. In fact, we noticed that whatever timeframe you give, most people submit their review just prior to the deadline.

Providing a $100 cash incentive for submitting a report within four weeks reduced review times by a further 8 days. Moreover, there was no evidence that reviewers in this group reverted to slower reviewing once the cash incentive stopped.

The social incentive treatment reduced median review times by just 2.5 days, a smaller effect than the other treatments. However, Chetty explains, it had a larger effect on tenured faculty, who are less sensitive to deadlines and cash incentives.

Their Kaplan-Meier plot reveals the differences in review times among the four groups. Note how the cash-incentive group rushes to submit reviews just before the deadline: nearly 50% of referees in this group submitted their reports between the reminder email and the deadline.
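For readers unfamiliar with the technique, a Kaplan-Meier curve tracks the fraction of reports still outstanding after a given number of days. Here is a minimal sketch of how such a plot could be built with the Python lifelines library; the data below are made up for illustration, and only the technique mirrors the paper’s figure:

```python
# Minimal Kaplan-Meier sketch in the spirit of the figure described above.
# Requires `pip install lifelines`. All numbers are invented.
import pandas as pd
import matplotlib.pyplot as plt
from lifelines import KaplanMeierFitter

# Hypothetical referee-level data: days until the report arrived, whether it
# ever arrived (event=1) or the referee never submitted (event=0, censored),
# and the randomly assigned treatment arm.
df = pd.DataFrame({
    "days":  [50, 44, 36, 27, 28, 26, 70, 41, 33, 60],
    "event": [1, 1, 1, 1, 1, 1, 0, 1, 1, 0],
    "arm":   ["control", "control", "short", "short", "cash",
              "cash", "control", "social", "social", "short"],
})

ax = plt.subplot(111)
kmf = KaplanMeierFitter()
for arm, group in df.groupby("arm"):
    kmf.fit(group["days"], event_observed=group["event"], label=arm)
    # Survival function = share of reports still outstanding after t days.
    kmf.plot_survival_function(ax=ax)

ax.set_xlabel("Days since review invitation")
ax.set_ylabel("Fraction of reports not yet submitted")
plt.show()
```

In a plot like this, the deadline effects described above show up as steep drops in each curve just before that group’s deadline.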

If you’re worried that reviewers will rush to send a few perfunctory comments in exchange for quick cash, the researchers reported that shorter deadlines did somewhat reduce the length of submitted reports, but had no effect on review quality (as measured by whether editors followed the reviewers’ recommendations).

The researchers also looked at whether changing the incentives at one journal affected reviewers’ performance at other journals. If one journal offered cash incentives, for example, would that reduce referees’ willingness to review for a competing journal? Restricting the comparison to 20 other economics journals published by Elsevier, Chetty found no detectable effect on referees’ willingness to review manuscripts, nor on their review times at those journals.

If paying reviewers does become commonplace among economics journals, I do wonder whether it would change reviewers’ decision-making, spark competition between journals over reviewer compensation, and put journals that cannot afford to pay reviewers at a distinct disadvantage. I believe there are many things we currently do on a voluntary basis that we would stop doing once we knew a functioning market existed for our services.

A straw poll I conducted on The Scholarly Kitchen last year found that some respondents deplored monetary rewards for peer review, while others viewed them as either a solution to a growing problem or a potential source of additional income. Monetary and reputational rewards can coexist in the same scholarly marketplace and attract different participants for different reasons. There is no reason to believe there will be just one future model for peer review.

At a dinner party, I asked whether faculty should be paid for reviewing papers and, as expected, got a variety of answers. Two biologists were fundamentally opposed to the idea, whereas the business school professor was willing to entertain it. The sociologist thought context was vitally important and wondered who would ultimately bear the cost. These Cornell and Ithaca College faculty assumed they would still be reviewing relevant, high-quality, well-written manuscripts. When I asked whether they would be willing to review an unintelligible, poorly written manuscript, no one thought $100 was worth their time.

When priced too low, cash incentives and other discounts meant to reward voluntary work can have the opposite of their intended effect, as one systems biologist gripes after being offered a $10 book coupon for his work. In that case, a simple thank-you letter would have been more appropriate. Likewise, the Company of Biologists discontinued its practice of paying reviewers $25 as a token of appreciation when the transaction costs on both ends didn’t seem worth the effort and the payment served only to frustrate and infuriate some reviewers. If you have any experience with incentivizing reviewers (successes or failures), please let us know in the comments section.

What I like about the Chetty paper is that it approaches the timeliness of the review process as a complex problem, tests various solutions rigorously, and proposes small changes that can nudge the system toward a more desirable state, at least for economics.

The “dismal science” can offer the rest of us some important lessons.