By Robyn A Grant, Lecturer in Environmental Physiology and Behaviour, Manchester Metropolitan University.

I was lucky enough to attend the Society for Neuroscience (SfN) Meeting this year in Washington DC. Over 30,000 neuroscientists met to discuss exciting new research findings, along with ethical and scientific issues in the neurosciences. One such discussion took place in a Sunday morning session entitled "Enhancing Reproducibility of Neuroscience Studies". It was a symposium chaired by Story Landis (past Director of NINDS) and Thomas Insel (Director of NIMH), with a good balance of speakers representing funders, publishers and scientists, including Francis Collins (Director of NIH), Veronique Kiermer (Nature Publishing), Huda Zoghbi (Baylor College of Medicine) and John Morrison (Mount Sinai Hospital).

This meeting got me thinking more about reproducibility in research. While we may all nod along and assume that our methodologies are entirely reproducible, in the neurosciences this just isn't so. In the biosciences, cancer research was the first area identified as having a reproducibility problem, with only two out of three studies being reproduced. Button et al. (2013), in Nature Reviews Neuroscience, went on to identify neuroscience as the next problem area, owing to over-estimated effect sizes and low reproducibility of results. This is mainly due to a lack of detail in the methods sections of articles. Surgical procedures, for instance, can vary dramatically and are not explained in detail within publications. Systematic bias is also prevalent: we just can't help but pick the good neuron to trace or measure in order to make our experiment that little bit easier. A recent study even found that laboratory mice would only really complete tasks for female researchers, as the mice were disturbed by male researchers' pheromones. The only way to keep track of these biases is to keep good lab books, take plenty of notes throughout experiments and write full, comprehensive methods sections. Nature Publishing has addressed this in part by extending the length limits on methods sections in its journals.

Making science publicly accessible is one part of reproducibility. Being able to reproduce figures from available data and to run analyses using available code is certainly a step in the right direction. Some publishers insist on this: Nature, for example, publishes code, and Royal Society publications require data to be made publicly available. Certainly, many code and data repositories are moving in this direction. However, at universities we often have neither the time nor the money to develop the infrastructure to support data or code sharing. It is great that these public databases are being successfully developed, but perhaps the fundamental problem in reproducibility lies with the methods sections themselves, rather than with data availability.

Now, let me just make it clear that reproducibility is not about addressing fraud, which is rare; rather, it is about good science. As lecturers, we all tell our students to write methods sections detailed enough that the experiments could be carried out again and again, but do we do this ourselves? It is up to us to set a good example for our own students and postdocs. We must give them the time to develop their methodologies and to make sure they are repeatable. Allocating two students to one experiment might even help get around biases. Try not to sway your students with your own biases or predictions: students and postdocs might feel pressure to find support for your hypotheses, rather than simply to carry out good science. In the US, the NIH has a best-practice course for PhD students, one unit of which, Research Misconduct, focusses on keeping good records during experiments. In the UK, we also have PhD training programmes at every institution, but it might be worth checking that these are as rigorous as you would like.

Overall, academics feel continual pressure to publish research and apply for funding. There is simply a lack of replication in research, because there is no impact to be gained from doing things twice. As academics, we are under a lot of pressure to produce high-impact work in short amounts of time, which often compromises reproducibility and good science. But we must remember that we are also the reviewers of the grants and papers, and that we sit on the promotion panels. We can therefore take control of scientific rigour, making sure that methods sections are up to scratch and allowing publication of reproduced research. Perhaps we need to secure more support from senior faculty members, funding councils and publishers for this to happen. Ultimately it is up to us to set good examples and enforce good practice in the scientific community.

Thanks to SfN for making me think; I will strive to do this myself.