Findings "disappointing"

Problem widespread in science

(NaturalNews) An article in a major scientific journal casts doubt on how seriously the findings of any single psychological or other scientific study should be taken. To contribute to an ongoing debate about the reliability of psychological research, 270 researchers on five continents repeated 100 experiments that had been published in major psychology journals in 2008. They were able to replicate the original findings in only 36 of the 100 cases.

"The key caution that an average reader should take away is any one study is not going to be the last word," said lead researcher Brian Nosek of the University of Virginia. "Science is a process of uncertainty reduction, and no one study is almost ever a definitive result on its own."

The researchers replicated studies falling into one of two general categories: social psychology, which concerns topics such as identity, self-esteem, prejudice and social interactions; and cognitive psychology, which concerns basic operations of the mind such as memory, perception and attention. Only half the results of cognitive psychology studies could be replicated, and just 25 percent of the results from social psychology experiments.

Even in cases where the researchers did replicate the findings of a prior study, they nearly always found a weaker effect: on average, the effect size in the replications was about half that of the original studies.

"There is no doubt that I would have loved for the effects to be more reproducible," Nosek said. "I am disappointed, in the sense that I think we can do better."

One social psychology experiment that was successfully replicated showed that people are equally accurate at recognizing pride in faces from different cultures.
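The effect-size shrinkage described above has a well-known statistical explanation: when only studies that cross a significance threshold attract attention, the reported estimates tend to overstate the true effect, and careful replications then come in smaller. The following is a minimal simulation sketch of that selection effect; the function name, sample sizes and effect values are illustrative assumptions, not figures from the study.

```python
import random
import statistics

def selection_inflation(true_d=0.3, n=30, n_studies=4000, z_crit=1.96, seed=1):
    """Simulate many small two-group studies of the same true effect and
    compare the average effect across all studies with the average among
    only those that reached 'significance' (the ones likely to be published)."""
    rng = random.Random(seed)
    all_effects, significant = [], []
    for _ in range(n_studies):
        treated = [rng.gauss(true_d, 1) for _ in range(n)]
        control = [rng.gauss(0, 1) for _ in range(n)]
        d = statistics.mean(treated) - statistics.mean(control)
        se = (statistics.variance(treated) / n + statistics.variance(control) / n) ** 0.5
        all_effects.append(d)
        if d / se > z_crit:  # crude z-test: keep only 'significant' results
            significant.append(d)
    return statistics.mean(all_effects), statistics.mean(significant)

overall, published = selection_inflation()
print(overall, published)  # the 'significant-only' average clearly exceeds the overall average
```

Under these assumed parameters, the studies that clear the significance bar average a substantially larger effect than the true one, so honest replications would look like the effect had "shrunk."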
A cognitive psychology study that was replicated showed which regions of the brain show increased activity when people receive fair offers in a financial game. An example of a study that was not replicated was a social psychology experiment finding that people were more likely to cheat if they were encouraged to believe there is no such thing as free will.

There are many possible reasons why any given study might fail to replicate. The most obvious is that the original study simply yielded a false positive: statistically, even a well-designed study will produce an incorrect result a small proportion of the time, typically 1 to 5 percent. The replication might also have been performed under slightly different conditions or with a slightly different methodology; this is why studies are supposed to be replicated many times before their conclusions are considered sound.

Small changes to data analysis can also explain a failure to replicate, or a change in the magnitude of the effect found, as when scientists exclude portions of the data that undermine their hypotheses. Journals themselves may contribute to the problem by selecting only the strongest effects for publication, thereby encouraging scientists to massage their numbers.

The problems are not limited to psychological studies, said co-author Marcus Munafo of Bristol University. "I think it's a problem across the board, because wherever people have looked, they have found similar issues," he said.

Part of the problem, Munafo said, is built into the structure of the profession. "If I want to get promoted or get a grant, I need to be writing lots of papers," Munafo said. "But writing lots of papers and doing lots of small experiments isn't the way to get one really robust right answer. What it takes to be a successful academic is not necessarily that well aligned with what it takes to be a good scientist."
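The false-positive arithmetic mentioned above can be checked directly: if the conventional significance threshold lets through about 5 percent of null results, then running many studies of a nonexistent effect should yield "positive" findings roughly 5 percent of the time. Here is a minimal sketch of that check; the function name and parameters are illustrative assumptions, not taken from the article.

```python
import random
import statistics

def false_positive_rate(n_studies=2000, n=50, z_crit=1.96, seed=42):
    """Simulate studies in which the null hypothesis is true (both groups
    drawn from the same population) and count how often a two-sample
    z-test still declares a 'significant' difference."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_studies):
        a = [rng.gauss(0, 1) for _ in range(n)]
        b = [rng.gauss(0, 1) for _ in range(n)]
        se = (statistics.variance(a) / n + statistics.variance(b) / n) ** 0.5
        z = (statistics.mean(a) - statistics.mean(b)) / se
        if abs(z) > z_crit:  # the study reports a spurious 'effect'
            hits += 1
    return hits / n_studies

print(false_positive_rate())  # hovers near 0.05, the conventional alpha level
```

In other words, a field running thousands of such studies will accumulate a steady stream of spurious findings even if every experiment is conducted flawlessly, which is why single unreplicated results warrant caution.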
Conflicts of interest between the goal of accurate research and scientists' financial or political desires — including ties with the industries or government agencies involved with GMOs, vaccines or other contentious areas of science — create further incentives for scientists to either deliberately or unknowingly distort their research.