There is a crisis of reproducible research in neuroscience. At least, that is the view of the British Neuroscience Association (BNA). The BNA has witnessed huge scientific progress since its establishment in the 1960s, but contends that “Recent decades, however, have seen an increasing pressure to publish as many papers as possible, and incentives to publish only surprising and novel findings”. A generalised disdain for confirmatory results is seemingly eroding advances in the field. To tackle this situation, the BNA put forward its action plan in a Manifesto for Credibility in Neuroscience, launched at a Parliamentary Reception in London, UK, on Nov 25, 2019.

The document outlines the BNA's commitment to lead a “shift in research culture”, to provide the instruments to achieve it, and to reward best practices. The BNA recognises that postgraduate and PhD students are the most vulnerable to the damaging circumstances created by years of “hyped expectations” and by the factors that “threaten the credibility of research”. Those factors are the biases and distortions of an academic system that puts the emphasis on breakthrough discoveries, high-impact publications, and hyper-competitive recruitment. The BNA believes that the current state of affairs “jeopardises the translation of research to real-world applications”, and instead wants to promote confirmatory research and replication studies. The BNA also wants to end the use of the journal impact factor as a measure of research quality and to incentivise collaborative approaches and open science (that is, freely accessible reports and data), as part of its overarching approach to fix neuroscience.

But neuroscience is an umbrella term encompassing the endeavours of many scientists who are trying to understand the nervous system. From the biophysicists discerning neuronal growth to the neurologists testing repair therapies, the field uses a plethora of methods and hypotheses. The credibility crisis might not have the same causes, nor affect different specialists in the same way. For instance, validation is a well established process in the clinical neurosciences; otherwise, medical regulatory agencies would not allow diagnostics or treatments to move into clinical practice. That rigour might not be generally applied in psychological science, however, in which a crisis of reproducibility is apparently widespread. Indeed, several international research networks, such as the ongoing Psychological Science Accelerator, were set up a few years ago to tackle reproducibility issues in this area.
Large networks (ie, team science) and prespecified protocols can help improve sample sizes, data collection, and methodology, and hence lead to robust findings. However, the capacity of research networks to tackle distorted incentives in the academic system is debatable, particularly if a researcher's performance is measured according to traditional standards of success, such as their publication record.

The dissemination of findings is a crucial step of the scientific process, and academic journals share the obligation to improve its efficiency. The quality and impact of a report are commonly used as proxies to evaluate scientists' accomplishments. A bias against negative and confirmatory results, inadequate peer review, and poor or selective reporting are some of the most common failings for which editors bear responsibility. Over a decade ago, publication biases were already identified as a major cause of research waste by Iain Chalmers and Paul Glasziou in a landmark Viewpoint. They estimated that, because of the cumulative effects of flawed production and reporting of research, more than 85% of the research investment could be wasted, implying that “the dividends from tens of billions of dollars [of investment in research] are lost every year because of correctable problems”. Fixing such problems is therefore not just an academic requisite, but a social need. The Lancet Series on “Increasing value and reducing waste” in biomedical research came up with recommendations that could prove useful for the neuroscience community at large.

The Lancet journals require the registration of all interventional trials, adhere to strict reporting guidelines, do not reject studies because of negative findings—if their data and methodology are robust—and support open science. But only a small proportion of submissions is published in popular academic journals (about 5% in The Lancet Neurology). This highly stringent selection means that, for most scientists, the path towards publication can be time-consuming and arduous, regardless of the potential relevance of their findings.

When publication is the most important measure of success, many talented people are left behind. To accelerate scientific progress, the field must develop performance metrics that recognise achievement in the new culture that the BNA aspires to create.