PEER review, many boffins argue, channelling Churchill, is the worst way to ensure the quality of research, except all the others. The system, which relies on papers being vetted by anonymous experts prior to publication, has underpinned scientific literature for decades. It is far from perfect: studies have shown that referees, who are not paid for their services, often make a poor fist of spotting serious mistakes. It is also open to abuse, with reviewers tempted to derail rivals or pinch their ideas. But it is as good as it gets. Or is it? Marcus Munafò, of Bristol University, believes it could be improved—by injecting a dose of subjectivity. The claim, which he and his colleagues present in a (peer-reviewed) paper just published in Nature, is odd. Science, after all, purports to be about seeking objective truth (or at least avoiding objective falsity). But it is done by scientists, who are human beings. And like other human endeavours, Dr Munafò says, it is prone to bubbles. When the academic herd stampedes towards the right answer, that is fine and dandy. Less so if it rushes towards the wrong one.

To arrive at their counterintuitive conclusion the researchers compared computer models of reviewer behaviour. Each begins with a scientist who holds an initial opinion as to which of two opposing hypotheses is more likely to be true. The more controversial the issue, the lower his confidence. He then sends a manuscript supporting one of the hypotheses to a reviewer, who has a prior opinion of his own about its veracity, and who recommends either accepting or rejecting the submission. (In this simple model journal editors are assumed to follow reviewers' advice unquestioningly, which is not always the case in practice.) The reviewer then writes a paper of his own advocating one of the hypotheses, submits it to the journal, and the process repeats itself.

Each reviewer thus faces two decisions. First, does he endorse the paper? Second, which hypothesis does he put forward in his own submission? Dr Munafò modelled two ways in which reviewers decide whether to recommend a paper for publication. In the first, the reviewer's subjective opinion about the truth of the hypothesis (itself shaped by the publication history) influences the decision alongside objective criteria such as robust methods. In the second, the decision rests on the objective criteria alone.

It turns out that herding occurs in both models: some scientists end up submitting manuscripts advocating hypotheses which disagree with their initial opinions. This happens regardless of how controversial the hypothesis was to begin with. But in the objective model the herding is irreversible: after about 15 cycles the alternative hypothesis disappears from the literature. In the subjective model, meanwhile, a tiny proportion of hold-outs persists, leaving room for a revision of the received wisdom.
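The publication cycle described above can be turned into a toy simulation. Every number here (community size, acceptance rates, the rate at which beliefs drift towards the published record) is our own invention for illustration; the sketch shows the mechanism of herding, not the paper's actual results.

```python
import random

def run_chain(n_cycles=200, subjective=True, seed=1):
    """One publication chain: an author submits a paper, a reviewer
    accepts or rejects it, then the reviewer becomes the next author.
    Published papers pull the community's beliefs towards the
    majority view. All parameters are illustrative assumptions."""
    rng = random.Random(seed)
    beliefs = [rng.random() for _ in range(50)]  # belief in hypothesis A
    published_a = published_b = 0
    author = rng.randrange(len(beliefs))
    for _ in range(n_cycles):
        supports_a = beliefs[author] >= 0.5      # author backs what he believes
        reviewer = rng.randrange(len(beliefs))
        methods_sound = rng.random() < 0.8       # most submissions are sound
        if subjective:
            agreement = beliefs[reviewer] if supports_a else 1 - beliefs[reviewer]
            accept = methods_sound and rng.random() < 0.5 + 0.5 * agreement
        else:
            accept = methods_sound               # objective rule ignores belief
        if accept:
            published_a += supports_a
            published_b += not supports_a
            share_a = published_a / (published_a + published_b)
            # the published record nudges everyone towards the majority view
            beliefs = [0.9 * b + 0.1 * share_a for b in beliefs]
        author = reviewer                        # reviewer writes the next paper
    majority_is_a = published_a >= published_b
    holdouts = sum((b >= 0.5) != majority_is_a for b in beliefs)
    return published_a, published_b, holdouts
```

Running the chain with `subjective=False` and `subjective=True` and comparing the `holdouts` count gives a rough feel for how a subjective rule can leave dissenters in the literature where a purely objective one sweeps them away.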

The proposal has its problems. One reviewer noted that herding was an inevitable upshot of the assumption, built into both models, that people alter their decisions based on what others do. Dr Munafò, a psychologist, insists this is a reasonable thing to assume—and that it need not in principle have led to pervasive herding. The models also imply that an author can decide freely which hypothesis to support, regardless of what the data say. As other studies (by both Dr Munafò and others) have shown, scientists often enjoy enough discretion over the interpretation of their data to do precisely that.

What does this mean in practice? Dr Munafò thinks that editors might, say, ask reviewers not just to appraise papers but also to offer a personal judgement about the truth of the hypothesis under investigation. Such short-term subjectivity may be a small price to pay for the long-term objectivity of science.