The ripples of a retraction are felt throughout a scientific community, resulting in fewer citations and less funding for studies on related topics, according to an analysis of 1,104 retractions mined from the PubMed biomedical database.

The study, published this week as a National Bureau of Economic Research working paper, used the PubMed Related Citations Algorithm (PMRA) to fish out papers that are topically similar to retracted articles but written by different authors. After a retraction, the rate at which these related papers were cited dropped by 5.7% relative to a selection of control papers that were not related to a retraction.

More striking, however, were the estimated effects on funding. PubMed also tracks US National Institutes of Health (NIH) grants that are acknowledged in publications. These data showed that retractions that undermine the validity of published results, whether owing to fraud or ‘honest’ mistakes such as irreproducible findings, were followed by a 50–73% drop in NIH funding for related studies. “The funding response is massive,” says lead author Pierre Azoulay, an economist at the Massachusetts Institute of Technology Sloan School of Management in Cambridge.

Why should a retraction cost unaffiliated researchers in lost citations and cash? Azoulay and his colleagues reasoned that there are at least two possible explanations for the ripple effect: either scientists perceive that a field besmirched by a retraction holds limited potential, or they fear being tainted by association with a ‘contaminated’ area of study. Several lines of evidence pointed to the latter. The shift away from a field was larger when the retraction involved scientific misconduct. And researchers at for-profit firms, where scientific motivation tends to centre on developing a commercial product regardless of a field’s broader reputation, rarely penalized related articles by citing them less often.

If that is the case, society pays a price as well when scientists pull away from fields worthy of study. One solution, the authors suggest, is to develop a universal coding scheme for retraction notices, so that the cause underlying a retraction — something often left unclear in published notices — is unambiguous. It is a nice idea, but one that Azoulay and his colleagues acknowledge may not be popular among journal editors, who would then be forced to make a clear determination of what went wrong and why.