The British Medical Journal is one of the oldest and most prestigious scientific journals. Photo by BMJ/NIH.

TORONTO, Dec. 23 (UPI) -- Peer review is the quality control process designed to ensure scientific journals publish only the most sound research. Each review board is tasked with protecting a journal's integrity and reputation, and with confirming each submitted paper's scientific bona fides.

But does it work? And at what cost?

A new study suggests the current model may succeed in keeping out the scientific riff-raff, but its maintenance of the status quo comes with a drawback, the study's authors argue -- the regular rejection of cutting-edge work.

A team of researchers, led by scientists at the University of Toronto, concluded as much after looking at more than 1,000 papers submitted to three prominent medical journals -- the Annals of Internal Medicine, the British Medical Journal and The Lancet. Only 62 were published. The majority of those rejected went on to be published elsewhere.

As predicted, peer review did a good job of ensuring a certain level of quality. Papers accepted by one of the three elite journals earned more citations on average than those that were rejected and then accepted by another journal.

But researchers found that out of the 1,000-plus papers -- all published more than 10 years ago -- those with the most citations were more likely to have been first rejected. In fact, the 14 most highly cited papers in the study were initially rejected. Twelve of the 14 didn't even make it to peer review, bounced outright by the editors who control what the board reviews.

Some suggest the results prove the current system neglects pioneering work in favor of prudence.

"This finding raises concerns regarding whether peer review is ill-suited to recognize and gestate the most impactful ideas and research," the study's authors wrote in the abstract of the new paper, published this week in the journal PNAS.

"The market dynamics that are at work right now tend to a certain blandness," Michèle Lamont, a sociologist at Harvard University who did not participate in the study, told Nature.

But some journal editors say the number of citations a paper tallies isn't necessarily the best assessment of its uniqueness or merit. Even the new study's lead author, sociologist Kyle Siler, admits citations may suggest a piece of research is broad in scope, not necessarily innovative.

"The analogy I use is, 'Is Nickelback the best band in the world?'" Siler explained to Science Magazine. "No, not necessarily, but they've got that lowest common denominator, middle-of-the-road niche. The best research, in some cases, might be more esoteric."

Still, there are plenty of critics of the peer review system, and most are happy when new analysis stirs the pot and gets other scientists talking about the best way to facilitate and encourage cutting-edge research.

"Many people think the system is full of weaknesses," said Lamont. "It's not perfect, but it's the best we have."