Published online 6 May 2011 | Nature | doi:10.1038/news.2011.270

Column: Muse

Reputations emerge in a collective manner. But does this guarantee that fame rests on merit, asks Philip Ball.

Does everyone in science get the recognition they deserve?

Obviously, your work hasn't been sufficiently appreciated by your peers, but what about everyone else? Yes, I know he is vastly over-rated, and it's a mystery why she gets invited to give so many keynote lectures, but that aside — is science a meritocracy?

How would you judge? Reputation is often a word-of-mouth affair; grants, awards and prizes offer a rather more concrete measure of success. But increasingly, scientific excellence is measured by citation statistics, not least by the ubiquitous h-index1, which is intended to quantify the impact of your literary oeuvre. Do all or any of these things truly reflect the worth of one's scientific output?
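The h-index itself has a simple definition: a researcher has index h if h of their papers have each been cited at least h times. A minimal sketch of the calculation (the function name and sample citation counts are illustrative):

```python
def h_index(citations):
    """Return the h-index: the largest h such that h papers
    have at least h citations each."""
    # Rank papers by citation count, most-cited first.
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, count in enumerate(ranked, start=1):
        if count >= rank:
            h = rank  # the paper at this rank still clears the bar
        else:
            break
    return h

# Example: five papers cited 10, 8, 5, 4 and 3 times.
print(h_index([10, 8, 5, 4, 3]))  # prints 4: four papers have >= 4 citations
```

Note that the index rewards sustained output over a single blockbuster: one paper with a thousand citations still yields an h-index of only 1.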

Many would probably say: "sort of".

That's to say, most good work gets recognized eventually, and most Nobel prizes are applauded and deemed long overdue, rather than denounced as undeserved. But not always. Sometimes important work goes unnoticed in the author's lifetime, and it is a fair bet that some never comes to light at all. There is surely an element of chance (not to mention politics) in the establishment of reputations.


Rich get richer

A paper, published online in PLoS ONE on 4 May, by physicist Santo Fortunato at the Institute for Scientific Interchange in Turin, Italy, Dirk Helbing of ETH in Zurich, Switzerland, and their coworkers aims to shed some light on the mechanism by which citations are accrued2. They have found that some landmark papers of Nobel laureates quite quickly give their authors a sudden boost in citation rate — and that this boost extends to the author's earlier papers too, even if they were in unrelated areas.

Several older papers by John Fenn saw a sudden increase in the number of citations after his landmark paper was published (figure from ref. 2).

For example, citations of a pivotal 1989 paper by future chemistry Nobel laureate John Fenn on electrospray ionization mass spectrometry3 took off exponentially, but citations of at least six of Fenn's older papers also rose (see graph). These peaks in citation rate stand out remarkably clearly for several laureates (some of whom have more than one peak), and might be a useful indicator both of important breakthroughs and of scientific performance.

This behaviour could seem reassuring or disturbing, depending on your inclination. On the one hand, some of these researchers were not particularly well known before they published their landmark papers — and yet the value of the work does seem to have been recognized, overcoming the rich-get-richer effect by which those who are already famous accrue further fame more easily4. This boost could help innovative new ideas to take root. On the other hand, such a rise to prominence brings a new rich-get-richer effect, for it awards 'unearned' citations to the researcher's other papers.

And the findings seem to imply that citations are sometimes selected not because they are necessarily the best or most appropriate but to capitalize on the prestige and presumed authority of the person cited. This further distorts a picture that already contains a rich-get-richer element among citations themselves. An earlier analysis suggested that some citations may become common largely by chance, benefiting from a feedback effect in which they are chosen simply because others have chosen them before5.
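That feedback can be illustrated with a toy preferential-attachment model (a deliberately simple sketch, not the analysis of ref. 5): each new citation goes to a paper with probability proportional to its current citation count plus one, so early random advantages compound.

```python
import random

def simulate_citations(n_papers=100, n_citations=5000, seed=42):
    """Toy rich-get-richer model of citation accumulation.
    Each new citation picks a paper with probability proportional
    to (citations so far + 1); the +1 lets uncited papers still
    be discovered occasionally."""
    rng = random.Random(seed)
    counts = [0] * n_papers
    for _ in range(n_citations):
        weights = [c + 1 for c in counts]
        paper = rng.choices(range(n_papers), weights=weights)[0]
        counts[paper] += 1
    return counts

counts = sorted(simulate_citations(), reverse=True)
# Although all papers start identical, a handful end up with a
# disproportionate share of the citations purely through feedback.
print(counts[:5], round(sum(counts[:10]) / sum(counts), 2))
```

Since every paper in this model is of identical 'quality', any inequality in the final citation counts is produced by the feedback alone — a caricature of the chance element the earlier analysis described.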

Collective search engine

What the new findings underscore is that science is a social enterprise, with all the consequent quirks and nonlinearities. That has potential advantages, but also drawbacks. In an ideal world, every researcher would reach an independent judgement about the value of a paper or a body of work, and the sum of these judgements should then reflect something fundamental about its worth.

That, however, is no longer an option, not least because there is simply too much to read — no one can hope to keep up with all that happens in their field, let alone in related ones. As a result, the scientific community must act as a collective search engine that hopefully alights on the most promising material, a task that the Faculty of 1000 tries to formalize in biology. The question is whether this social network is harnessed efficiently, avoiding blind alleys while not overlooking gems.


No one really knows the answer to that. But some social-science studies highlight the possible consequences. For example, it seems that selections made ostensibly on merit are somewhat capricious when others' choices are taken into account: objectively 'good' and 'bad' material still tends on average to be seen as such, but feedbacks can create a degree of randomness in what succeeds and fails6. Doubtless the same effects operate in the political sphere — so that democracy is a somewhat compromised meritocracy — and also in economics, which is why prices frequently deviate from their 'fundamental' value.

But Helbing suggests that there is probably an optimal balance between independence and group-think. Helbing's earlier work on a computer model of people leaving a smoky crowded room in a fire shows that the room empties most efficiently when there is just the right amount of follow-the-crowd herding7. Are scientific reputations forged in this optimal regime? And if not, what would it take to engineer more wisdom into this particular crowd?