UA students in an evidence-based medicine class are trying to replicate experimental results found by their peers in a previous semester: that having a woman sit on a man’s lap while he bench pressed allowed him to do more repetitions.

The inspiration behind the original experimental project was a circulating Internet meme claiming that women helped increase testosterone levels in men, which in turn led to more repetitions. However, there was no data to back up the claim.

The goal of the former students was to test the validity of the meme, and they found positive results; the goal of the current class is to follow up and test the validity of those results.

“If you can’t reproduce results, you can’t tell if they are true or not,” said Samantha DiBaise, a senior studying molecular and cellular biology, physiology and Spanish, and the leader of the project. “You can’t say anything about it.”

Reproducibility is a major issue in science today, said Joanna Masel, an associate professor in the department of ecology and evolutionary biology and instructor of the course. For example, Masel cited the work of John Ioannidis, a professor of medicine at Stanford University, who took a set of research studies and examined whether they had been reproduced and what happened when they were. He found that many of the studies did not hold up: their results could not be reproduced, or the effect sizes were smaller than originally claimed.

“The academic culture of science has created an incentive to be the first one to publish something which convinces people wrongly that reproducing an effect is not a valuable pursuit,” said Parris Humphrey, a graduate student studying ecology and evolutionary biology and teaching assistant for the course.

In the past, it was difficult for researchers to get published in a good journal if they only reproduced a study, Masel said.

“If you got the same thing a second time, people say they already knew it, and if you didn’t get it, that doesn’t help either,” Masel said.

But a single study that achieves statistical significance and reports an important result is not enough to establish that the result is true. What matters is the likelihood of seeing that result across all the times it has been investigated, Humphrey said.

“If you do a little trial with 10 or 20 patients in each group, you might get a result that is interesting enough to be worth exploring further, but you wouldn’t want to change clinical practice on the basis of it,” Masel said.
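Masel’s caution about small trials can be illustrated with a quick simulation (a hypothetical sketch, not part of the class’s work): even when there is no real effect at all, small two-group trials will occasionally clear the usual 5 percent significance threshold by chance, so a single “significant” result from 10 patients per group says little on its own.

```python
import random
import statistics

random.seed(42)

def welch_t(a, b):
    """Welch's t statistic for two independent samples."""
    ma, mb = statistics.mean(a), statistics.mean(b)
    va, vb = statistics.variance(a), statistics.variance(b)
    return (ma - mb) / ((va / len(a) + vb / len(b)) ** 0.5)

def false_positive_rate(n_trials=10000, n_per_group=10, crit=2.101):
    # crit ~ two-sided 5% cutoff for t with roughly 18 degrees of freedom.
    # Both groups are drawn from the SAME distribution, so any
    # "significant" difference is a false positive by construction.
    hits = 0
    for _ in range(n_trials):
        a = [random.gauss(0, 1) for _ in range(n_per_group)]
        b = [random.gauss(0, 1) for _ in range(n_per_group)]
        if abs(welch_t(a, b)) > crit:
            hits += 1
    return hits / n_trials

rate = false_positive_rate()
```

Roughly 1 in 20 of these no-effect trials still comes out “significant,” which is why a lone small study is a reason to investigate further, not to change practice.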

However, reproducibility is not a new concept; it has always been part of science, Masel said. She noted that the difference now is that scientists recognize how little of it they have actually been doing.

The scientific community has been trying to remedy this. For example, there are an increasing number of journals, such as PLOS ONE, where scientists can publish anything that is methodologically sound.

“It doesn’t have to meet some editor’s concept of what is or isn’t interesting,” Masel said.

There has also always been some awareness of it in medicine. For a drug to get approved by the Food and Drug Administration, there needs to be results from two clinical trials to support it, Masel said.

“Science is a confidence-building exercise,” Humphrey said. “And we can only achieve confidence when we have tested and retested the same set of ideas and see how they hold up under different circumstances.”

_______________

Follow Julie Huynh on Twitter.

