The peer review process – long considered the gold standard of quality scientific research – is a “sacred cow” that should be slaughtered, the former editor of one of the country’s leading medical journals has said.

Richard Smith, who edited the British Medical Journal for more than a decade, said there was no evidence that peer review was a good method of detecting errors and claimed that “most of what is published in journals is just plain wrong or nonsense”.

Research papers considered for scientific and medical journals undergo a process of scrutiny by experts before they can be published. Hundreds of thousands of new studies are published around the world every year, and the peer review process exists to ensure that readers can have confidence that published findings are scientifically sound.

But Dr Smith said pre-publication peer review was slow, expensive and, perhaps ironically, lacking in evidence that it actually works in its chief goal of spotting errors.

Speaking at a Royal Society event earlier this week, he said an experiment conducted during his time at the BMJ, in which eight deliberate errors were included in a short paper sent to 300 reviewers, had exposed how easily the peer review process could fail.

“No-one found more than five, the median was two, and 20 per cent didn’t spot any,” he was quoted as saying by Times Higher Education. “If peer review was a drug it would never get on the market because we have lots of evidence of its adverse effects and don’t have evidence of its benefit.”

He said the process of peer review before publication could also work against innovative papers, was open to abuse, and should be done away with in favour of “the real peer review” of the wider scientific community post-publication.

“It’s time to slaughter the sacred cow,” he said, while acknowledging that doing so would likely be “too bold a step” for a journal editor to take.

Dr Smith, who edited the BMJ between 1991 and 2004, is a longstanding critic of the pre-publication peer review process. In the past he has bemoaned the delays that the process can bring, in some cases of more than two years, between a paper being completed and its final publication.

His comments come at a time of serious soul-searching within the scientific community over the quality of much published research.

The editor of the country’s other leading medical journal, Dr Richard Horton of The Lancet, wrote in an editorial earlier this month that “much of the scientific literature, perhaps half, may simply be untrue”, blaming, among other things, studies with small sample sizes, researchers’ conflicts of interest and “an obsession” among scientists with pursuing fashionable trends of dubious importance.

“The apparent endemicity of bad research behaviour is alarming,” he wrote. “In their quest for telling a compelling story, scientists too often sculpt their data to fit their preferred theory of the world.”

Dr Horton also suggested reform of the peer review process – to improve it, not scrap it – potentially with the introduction of incentives for scientists who peer review more critically.