With each set of hands study results pass through, there is a chance for another layer of spin to be added or chipped away. Adding is far more common, though. By the time results get to the press release stage, the spin can be snowballing fast. The narrative that takes hold can end up awfully far removed from what the data bear out.

There are just so many temptations along the way, aren’t there? A couple of years ago, a bioethics commission called it a “hype pipeline”. Every player, from funders to universities, from researchers to journals and the media, seems to have incentives to puff out science hype. A study can even have a hyped-up name before it starts: IMPROVE-IT and MIRACLE, I’m looking at you!

Are we stuck with it? I think we are, as we are with all forms of bias. I think a sizable proportion of the players on the science and journalism sides don’t see themselves as part of the problem, wouldn’t want to do what needs to be done, or just don’t know how. And I don’t think we know how to change that.

But even if we can’t stop science hype, we can whittle away at it, and reduce its impact. You would think that how to do that effectively would be a really high research priority. But neither reducing research spin nor reducing the cognitive biases that fuel it and amplify its impact is being studied with anything like the effort needed. Scott Lilienfeld and colleagues wrote this years ago:

…we have made far more progress in cataloguing cognitive biases than in finding ways to correct them.

That’s still true. And it’s true for the research on research spin, too.

Let’s start with what we do know.

Research spin is common in journals

For example, there was over-interpretation in about a third of papers on diagnostic studies, over half of papers on molecular diagnostic studies, two-thirds of clinical trials with non-significant findings for their primary outcomes, and over 80% of non-randomized studies of interventions.

Spin in journals can bend readers’ opinions

We don’t know how many readers are susceptible to spin, but some clearly are. Here’s a randomized trial by Isabelle Boutron and colleagues that showed spin can work on clinician readers. And a set of randomized trials that showed it affects consumers, too. (That’s the first publication for a planned prospective meta-analysis of 16 trials.)

“Journalist” spin is often a megaphone of spin from journals and press releases, not the origin

There is a bunch of observational studies now suggesting that a chunk of what seems to be media spin could be originating with scientists, not the journalists. For example, this one found that about half of the spin in articles in the general media was also in the journal reports and press releases. And so did this one.

This one found that exaggeration in press releases predicted exaggerations in the news – but didn’t increase the chances that a study would get news coverage. And this one found that spin in a press release almost always resulted in spin in the news, but spin was uncommon in the news if the press release didn’t start it.

Because these are all observational studies, though, we don’t know how much of this is journalists just broadcasting scientists’ spin, and how much would have happened even if the scientists stuck to the straight and narrow. Chris Chambers, who is a co-author of some of these studies, said that finding out that most media misreporting is actually because of scientists, their institutions, and journals “would make grumbling scientists look like hypocrites”. (That quote comes from a Twitter thread that’s gripping reading.)

Chambers & co went that next step, and did a randomized study. They found that hype in a press release was more likely to lead to hype in newspapers – and that a study didn’t need to be hyped to get covered. There might not be a downside to authors, journals, and universities being more cautious about what they claim.

There isn’t an evidence base, though, to show what works to reduce hyping of research results.

Boutron and colleagues tout author reporting guidelines as a spin-limiter. But I don’t see how that could help much. Recommending the use of reporting guidelines hasn’t had a powerful impact on the core problem they are meant to address – fully reporting a study’s key elements – so a big impact on spin seems unlikely. You can fully report results and still fire up a lot of spin.

The bioethics commission I mentioned earlier had recommendations along the lines of: just stop it with the buzzwords, people! And for members of the public, advice to watch out for the buzzwords and increase your scientific literacy. But if even people who do science for a living have trouble making correct inferences and claims from their data, we’re expecting rather a lot from non-scientists, aren’t we?

In a way, calling all this “spin” spins it towards the implication that this is about deliberate manipulation, doesn’t it? Exaggeration and misinterpretation of results are often, perhaps even mostly, caused by people not knowing better or letting their enthusiasm carry them away. Or just copying what others are doing.

The language and techniques of various types of spin are so widespread that people absorb them into their own practice. When something becomes a norm, people even teach it. Here’s an example of participants at an AAAS meeting being taught about abstracts: “one good result can lead to acceptance”. Another sign of the underlying problem: Min Qi Wang and colleagues found a shockingly high rate of biostatisticians reporting that they had been asked to do inappropriate analyses or reports, including at “top tier” universities.

A couple of years ago, Steve Goodman argued that if research institutions were serious about reliable research, they would supply enough statisticians and methodologists to support all study design and data analysis – about 40 for a major institution, or 1 for every US$10m of research funding. And they would have an ombudsperson, too.
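Goodman’s rule of thumb is simple enough to sketch out. The function below just applies the one-per-US$10m ratio from the paragraph above; the US$400m funding figure is my own illustrative assumption for a “major institution”, not a number from his piece.

```python
def statisticians_needed(annual_funding_usd: float,
                         usd_per_statistician: float = 10_000_000) -> int:
    """Rough staffing estimate: one statistician/methodologist
    per US$10m of research funding (Goodman's rule of thumb)."""
    return round(annual_funding_usd / usd_per_statistician)

# Hypothetical example: an institution with US$400m in annual
# research funding would need about 40 statisticians.
print(statisticians_needed(400_000_000))  # → 40
```

A back-of-the-envelope calculation like this is the whole point of a rule of thumb: it makes the scale of the resourcing gap concrete enough to argue about.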

He’s a statistician, you could think, so he would say that. Well, I’m not, and I think he’s right. We know that peer review by statisticians is one of the few things shown to improve scientific papers. Ramping up their numbers and roles makes a lot of sense – and it’s a testable strategy, too. Statisticians and journalists are getting closer in a variety of ways around the world, and that’s great.

New role! First-ever Senior Advisor for Statistics Communication & Media Innovation at the American Statistical Association. It will soon be a joint appointment with a local university to focus on stats, journalism & communication – details still under negotiation, so stay tuned! — Regina Nuzzo (@ReginaNuzzo) June 27, 2019

Kudos to anyone who goes down this route, especially the key player here: universities. It wouldn’t be easy, though. The shift in resources to “produce” enough statisticians alone would be a heavy lift. The recent spirited Twitter debate on whether you need statisticians to teach statistics to psychologists is a good introduction to the kind of academic headwind to expect – as is the Chambers Twitter thread I mentioned earlier.

Provocative question to all psychologists teaching statistics: Shouldn't you just stop & let mathematically trained statisticians take over? Empirical research shows that psych's teaching statistics don't know statistics well enough. So how can still teaching it be justified? — Rebecca Willén (@rmwillen) June 29, 2019

All of this has implications for the communications and teaching skill sets statisticians need, too. I was recently at the World Conference of Science Journalists in Switzerland. The session I was speaking in was really well-attended, and that was encouraging. On the other hand, it was a reminder that the stereotypes of people who are into “words” versus those who are into “numbers” are rooted in reality. One of the ways this divide can be broken down is if more statisticians get better at communicating with the number-phobic. A lot is riding on it.

[Update 29 July 2019] A study of press releases and news stories in the UK and Netherlands found that only 7 to 7.5% of stories quoted an independent expert not included in the press release – and that was associated with less exaggeration.

~~~~

Disclosure: My travel and attendance at the 2019 World Conference of Science Journalists in Lausanne was paid by the conference/World Federation of Science Journalists. (My slides for my presentation on reporting meta-analyses are here, and Jop de Vrieze’s write-up of the whole session is here.)

The cartoons are my own (CC BY-NC-ND license). (More cartoons at Statistically Funny and on Tumblr.)

On the steampunk cartoon: check out this interesting read on “-trons”, their history in science and culture. Why steampunk? Because scientists were celebrities by the late 19th century and sensationalist newspapers were booming – and the printing presses that enabled this boom were steam-powered.