Near the end of April, my colleagues and I published an unusual scientific paper — one reporting a failed experiment — in Genome Biology. Publishing my work in a well-regarded peer-reviewed journal should’ve been a joyous, celebratory event for a newly minted PhD holder like me. Instead, navigating three other journals and countless revisions before finding the paper a home at Genome Biology revealed to me one of the worst aspects of science today: its toxic definition of ‘success’.

Our work started as an attempt to use the much-hyped CRISPR gene-editing tool to make cassava (Manihot esculenta) resistant to an incredibly damaging viral disease, cassava mosaic disease. (Cassava is a tropical root crop that is a staple food for almost one billion people.) However, despite previous reports that CRISPR could provide viral immunity to plants by disrupting viral DNA, our experiments consistently showed the opposite result.

In fact, our paper also showed that using CRISPR as an ‘immune system’ in plants probably led to the evolution of viruses that were more resistant to CRISPR. And although this result was scientifically interesting, it wasn’t the ‘positive’ result that applied scientists like me are taught to value. I had set off on my PhD trying to engineer plants to be resistant to viral diseases, and instead, four years later, I had good news for only the virus.

Every peer reviewer agreed that our study was methodologically sound, but it soon became apparent that the finding was a message no one wanted to share. Why was it so hard for reviewers and editors to publish a single report showing a limited failure of CRISPR technology?

Scientists have become so accustomed to celebrating only success that we’ve forgotten that most technological advances stem from failure. We all want to see our work saving lives or solving world hunger, and I think the collective bias towards finding positive results in the face of failure is a dangerous motivation. Additionally, in fields such as genetic engineering, antiscience activists are always ready to seize on any hint of failure as an indictment of the field as a whole. My work, when published, was dutifully misrepresented by some who were eager to damage the reputation of genetic engineering.

And even if my own research were flawed, the broader problem remains: the scientific world largely ignores negative results. Data from a 2012 study of more than 4,000 published papers show that the scientific literature as a whole is trending towards more positivity. The study’s author, Daniele Fanelli, found that the frequency at which papers testing a hypothesis returned a positive conclusion increased by more than 22% from 1990 to 2007. By 2007, more than 85% of published studies claimed to have produced positive results. Fanelli concluded that scientific objectivity in published papers is declining.

When negative results aren’t published in high-impact journals, other scientists can’t learn from them and end up repeating failed experiments, wasting public funds and delaying genuine progress. My study did not solve the scourge of viral disease in cassava, but it did show researchers where not to look for a solution, and that’s important for real progress. At the same time, young scientists like me are bombarded only with stories of scientific success, at conferences and in journals, exacerbating ‘impostor syndrome’ when our own work doesn’t match these expectations.

The pressure to publish a positive story can also lead scientists to spin their results in a better light, and, in extreme instances, to commit fraud and manipulate data. In fields such as biotechnology and genomics, social scientists have already pointed out that hyping up the science could foster unrealistic expectations in an already sceptical public, counter-intuitively leading to greater distrust when real-world advances come at a slower pace.

The problem is worsened by funding agencies that reward only those researchers who publish positive results, when, in my view, it’s the scientists who report negative results who are more likely to move a field forward.

We need reviewers and publishers to commit to publishing negative results in their journals. We need academic conferences to embrace honest discussions of failed experiments. We need funding agencies to support scientists who produce sound negative results. And, as scientists, we must acknowledge that all important work should be recognized, irrespective of its outcome.

Simply put, we need more honesty in science.