"The majority of papers that get published, even in serious journals, are pretty sloppy," said John Ioannidis, professor of medicine at Stanford University, who specializes in the study of scientific studies.



This sworn enemy of bad research published a widely cited article in 2005 entitled "Why Most Published Research Findings Are False." Since then, he says, only limited progress has been made.



"Diet is one of the most horrible areas of biomedical investigation," professor Ioannidis added -- and not just due to conflicts of interest with various food industries. "Measuring diet is extremely difficult," he stressed. How can we precisely quantify what people eat?



In this field, researchers often trawl huge databases for correlations without so much as a starting hypothesis. Even when the methodology is sound, with the gold standard being a study in which participants are assigned at random, the execution can fall short.



A famous 2013 study on the benefits of the Mediterranean diet against heart disease had to be retracted in June by the most prestigious of medical journals, the New England Journal of Medicine, because not all participants had been randomly recruited; the results were revised downwards.



So what should we take away from the flood of studies published every day?



Ioannidis recommends asking the following questions: Is this something that has been seen just once, or in multiple studies? Is it a small or a large study? Is it a randomized experiment? Who funded it? Are the researchers transparent?



These precautions are fundamental in medicine, where bad studies have contributed to the adoption of treatments that are at best ineffective, and at worst harmful.



In their book "Ending Medical Reversal," Vinayak Prasad and Adam Cifu offer terrifying examples of practices adopted on the basis of studies that went on to be invalidated, such as opening a brain artery with stents to reduce the risk of a new stroke.



It was only after 10 years that a robust, randomized study showed that the practice actually increased the risk of stroke.

Remember, this is the standard by which Richard Dawkins and Sam Harris believe truth should be measured. Never forget that science cannot be considered reliable until it is called "engineering." Until then, the most that one can accurately assume is that it has about a fifty percent chance of actually being correct. The fact that some physicists got some very accurate results in the 1950s says precisely nothing about that study published by a biologist or a medical researcher or an economist 70 years later.

Labels: science