Science journals are a great way to learn about discoveries across a wide variety of fields, and reading them keeps you scientifically literate and up to date on new advances. Journals such as Nature and Science have become increasingly popular outside their core “hard science” constituencies, partly thanks to successful outreach efforts including podcasts, videos, and free web content. These growth initiatives have not diluted the publications’ top-tier scientific stature; they remain at the pinnacle of the scientific community. Because of the journals’ exclusivity and reputation, most readers naturally assume the findings featured in their articles are true. But the reliability of published scientific research has recently come under scrutiny, ironically in a paper published in Science itself, which called into question papers published in leading psychology journals.

The study’s design was simple: take 100 papers published in top psychology journals and try to replicate their results. Replication is a keystone of science: if I get one result and you get another doing the same study or experiment, then we can’t say for sure that either result is true. The scientific method has relied on replication for roughly a thousand years, all the way back to the Muslim scientists of the medieval Islamic world. So when modern scientists replicate old psychology studies, you can expect them to get the same results, right?

Well, that’s the problem. Many of these studies are so hard to replicate that the attempt is never even made. Part of the problem is incentives: no scientist gets a headline study published with a title like “Blah Blah Study From a Year Ago Done Again, and the Same Thing Happened!” New, exciting studies are what get published, and getting published is what scientists want. But suppose not all researchers are like this, and some do science purely to advance and confirm knowledge. It’s really hard to be that person, mainly because replicating science is just not easy; it’s easier to run your own studies and report your own results.

So when this effort to replicate 100 psychology studies was completed, the results were disappointing. By one of the publication’s indicators of replication success, only 36% of the studies returned the same results as they did the first time. In other words, nearly two-thirds of the studies published in top psychology journals failed a central test of the scientific method when replicated. This calls the whole field of psychology into question. Should more researchers be replicating past studies? Certainly. As Brian Nosek, a psychologist who worked on the Science study, put it:

“It would be great to have stronger norms about being more detailed with the methods… If I can rapidly get up to speed, I have a much better chance of approximating the results.”

Basically, scientists and researchers should include a detailed description of how they conducted a study, so that any other scientist could easily repeat it and confirm or refute the results. Even Ibn al-Haytham, a Muslim scientist born around 965, did this, including incredibly detailed instructions for carrying out his experiments in his Book of Optics; scientists today should put the same care into enabling replication. The problem even has a name: the reproducibility crisis. What makes it especially worrying is that psychology concerns our minds and how we behave, so studies like these, fewer than half of which could be replicated, can dramatically affect how people live their lives. Going forward, scientists (and journals) must take more care with the reproducibility of their studies, because without replicating studies, we can never really know what’s true and what’s not.