Prof. John Antonakis caught our attention when he coined the term "the five diseases of academic research." As a global information analytics company specializing in health, we are deeply invested in research getting the all clear.

Dr. Antonakis is Professor of Organizational Behavior in the Faculty of Business and Economics at the University of Lausanne and Editor-in-Chief of The Leadership Quarterly. The five diseases he identified not only plague the production of useful research but infect people with doubt about the accuracy, and even the truth, of scientific discovery.

His diagnosis suggests that the cure could lie in the hands of the institutions that disseminate the research. “Science is not necessarily self-correcting,” he warns. So, just as a spoonful of sugar helps the medicine go down, concrete incentives to encourage healthier research could be just what the doctor ordered.

But what are the five diseases? Below we take a brief look at each of them and explore what Elsevier is doing to administer preventive medicine. If you’d like to read about the diseases in full, Dr. Antonakis’ position paper On doing better science: From thrill of discovery to policy implications is free to access online until 31 December 2017.

Join the conversation

We welcome your ideas on this subject. Please join the discussion in the comments section below.

1. Significosis – “Doctor, doctor, my stats aren’t big enough!”

The research landscape is suffering from a statistical inflammation caused by an unhealthy obsession with statistically significant results. What is actually out there in terms of effect sizes is obscured because statistically significant results are much more likely to get published. Once biased results are published, the basis for future research and policy is set and the inflammation spreads.

Adam Fraser, Publisher in Psychology and Cognitive Science at Elsevier, confirms this diagnosis, saying there is a misplaced and often unbalanced focus on statistically significant results in journal publishing. “Making poorly substantiated claims about the significance of your product is the job of 24-hour shopping channels, not scientific journals,” he proclaims. “We’ve got greater standards to uphold.”

When he seeks a second opinion from his colleague Jennifer Franklin, Associate Publisher in Developmental Psychology, she explains that when diagnosing the significance of a paper, it is necessary to consider the layers of evidence supporting it within the collective body of literature: “Statistics only make up one piece of the puzzle.”

Journals in psychological science are making significant moves towards fighting Significosis by encouraging transparency of data, methods, and reporting. See, for example, Professor Roger Giner-Sorolla’s editorial in Journal of Experimental Social Psychology, Approaching a fair deal for significance and other concerns. The journal has introduced policies to improve standards and the cure is spreading. Rich Lucas and M. Brent Donnellan at Journal of Research in Personality recently articulated some clear guidelines for authors to ensure that the journal’s papers have better discussions of the authors’ sample size decisions and the effects of these decisions on statistical power and precision.

2. Neophilia – “Doctor, doctor, you’ve never heard this before!”

Neophilia is an all-consuming obsession with novelty. Symptoms include deafness to replication studies, null results, and solid extensions of a known theory, despite loud agreement that there’s much to learn from these. To cure this disease, we need a prescription of value for nuanced results.

Dr. William Gunn, Director of Scholarly Communications at Elsevier, agrees that there need to be policies that allow for nuanced results:

It’s important that publishers like Elsevier actively support the proposals outlined in “A manifesto for reproducible science”. We can help raise the bar on reproducibility by lowering barriers for researchers to publish replication studies, empowering researchers to share their methods and data, championing rigorous and transparent reporting, and creating outlets — in journals such as Contemporary Clinical Trials Communications and Heliyon — for research that upholds reproducibility.

3. Theorrhea – “Doctor, doctor, you’ve never thought this before!”

Theorrhea is characterized by a mania for new theory. Patients suffering with Theorrhea are often found “HARKing” (hypothesizing after the results are known). Under pressure to make a theoretical contribution, they compose a self-validating theory to fit around the existing data rather than using the data to test a hypothesis.

Franklin says a new article type can help address this: Results Masked Review articles. “The paper is initially peer reviewed without the results or discussion being present. We hope that this will contribute to addressing issues such as over-analysis of data and publication bias.”

Results Masked Review articles are currently being piloted by a number of journals in business, management and organizational psychology. “These journals are committed to publishing important results, whatever they may be,” Franklin says.

Registered Reports is another article type that can help salve this ailment in academic publishing. Fraser points to the initiative Cortex launched four years ago to publish Registered Reports, splitting the review process into two stages. This article type shifts the focus from the results to the importance of the research question and how well thought out a researcher’s methods are. “I’m encouraged to learn that this practice is spreading; some 50 journals are now publishing Registered Reports,” Fraser says.

However, he cautions that this approach is not a silver bullet because it does not lend itself to all types of research. But the early signs are good, and it does point to a route with incremental gains.

4. Arigorium – “Doctor, doctor, my theory can’t stand up!”

This malady affects the rigor of theory. Its cure? Phenomena need to be tested more rigorously, and researchers must ensure the correct causal specification of the models they test. For that, we prescribe strong theory sections. Researchers need to be more precise and transparent about their predictions. “If research findings can be replicated, they are more trustworthy and reliable,” Dr. Gunn explains.

Sounds easy, but the perception that editors don’t want to publish replication studies is a condition that also needs curing. Elsevier is taking active steps to treat this misconception. They include promoting replication studies when they’re published, issuing calls for replication papers, and developing a new article type specifically for replication studies.

5. Disjunctivitis – “Doctor, doctor, I just can’t stop!”

Researchers who catch disjunctivitis are susceptible to producing quantity over quality. Early career researchers are an especially at-risk group. In some fields, disjunctivitis has led to an increase in short and rapid publications. This can fragment results within and between disciplines as authors seek to defend a position rather than proceed in an integrative, paradigm-driven manner. Sufferers of disjunctivitis are also vulnerable to neophilia and significosis.

Francesca Buckland, a Senior Product Manager at Elsevier, is familiar with disjunctivitis, having witnessed it in her work with the research community. Yet she has also seen the emergence of positive initiatives from the research community to counter the problem. New technologies leveraging big data offer an alternative, holistic approach to evaluating research performance. She explains:

There are many new tools in the scholarly publishing environment to help researchers to distinguish quality research. An example is the plethora of article recommendation tools now available on the market, including Mendeley Suggest, which was launched last year and uses large volumes of anonymised readership information to create personalised article suggestions. We're in the age of information and communication, and the way research has changed reflects broader trends across society in this respect.

Tools that make the search for quality more efficient can help neutralize the effects of quantity over quality.

“Authors, editors and scholarly gatekeepers need to ensure that they make the best use of existing and new technologies,” Buckland says, “ensuring the metrics by which research is judged are the right ones: transparent and replicable.”