In 2011, Petroc Sumner of Cardiff University and his colleagues published a brain imaging study with a provocative result: Healthy men who have low levels of a certain chemical in a specific area of their brains tend to get high scores on tests of impulsivity.

When the paper came out, thousands of people across England were rioting because a policeman had shot a young black man. “We never saw the connection, but of course the press immediately saw the connection,” Sumner recalls. Brain chemical lack ‘spurs rioting’, blared one headline. Rioters have ‘lower levels’ of brain chemical that keeps impulsive behaviour under control, said another.

“At the time, like most scientists, we kind of instinctively blamed the journalists for this,” Sumner says. His team called out these exaggerations in The Guardian, and started engaging in debates about science and the media. “We quickly began to realize that everyone was arguing on the basis of anecdote and personal experience, but not evidence. So we decided to back off, stop arguing, and start collecting data.”

And the data, published today in BMJ, surprised Sumner. His team found that more than one-third of academic press releases contain exaggerated claims. What’s more, when a study is accompanied by an exaggerated press release, it’s more likely to be hyped in the press.

Because press releases are almost always approved by a study’s leaders before being distributed, Sumner’s findings suggest that scientists and their institutions play a bigger role in media hype than they might like to acknowledge.

“We’re all under pressure as scientists to have our work exposed,” Sumner says. “Certainly I think a lot of us would be quite happy not to take responsibility for that — just to say, ‘Well, we can’t do anything about it, if they’re going to misinterpret that’s up to them but it’s not our fault’. And I guess we’d like to say, it is really important and we have to do something more about it.”

Sumner and his colleagues looked at 462 health or medicine-related press releases issued by 20 British universities in 2011. For each press release, the researchers also analyzed the scientific study it was based on, as well as news articles that described the same findings.

The researchers limited the analysis to health and medicine partly because (as I’ve written about before) these stories tend to influence people’s behavior more than, say, stories about dinosaurs or space. They focused on three specific ways that press releases can distort or exaggerate: by implying that a study in animals is applicable to people; by making causal claims from observational data; and by advising readers to change their behaviors (“these results suggest that aspirin is safe and effective for children,” say, or, “it’s dangerous to drink caffeine during pregnancy”).

More than one-third of the press releases did each of these things, and the misinformation showed up in the media, too. For example, among press releases that gave exaggerated health advice, 58 percent of subsequent news articles also contained exaggerated health advice. In contrast, among press releases that didn’t make exaggerated recommendations, only 17 percent of news articles did so. The researchers found similar trends for causal claims and for inferring that animal work applies to people.

“We certainly don’t want to be blaming press officers for this,” Sumner says. “They’re part of the system. The academics probably don’t engage as much as they should.”

I called Matt Shipman, a science writer and press information officer at North Carolina State University, to ask what he thought of the findings. Shipman has been a press officer for seven years, and before that he was a journalist. “The numbers are very powerful,” he said, and they underscore the importance of press releases at a time when reporters often don’t have the time or resources for thorough reporting. (Shipman has just signed on with Health News Review to rigorously evaluate the quality of health-related press releases.)

Shipman also brought up an important caveat. Because this study is observational, it doesn’t prove that press releases are themselves the cause of hype. “If a researcher is prone to exaggeration, which leads to exaggerated claims in a news release, the researcher is likely to also be prone to exaggeration when conducting interviews with reporters,” Shipman says. “The news release may be a symptom of the problem, rather than the problem itself.”

When he writes press releases, Shipman says he almost always begins by meeting with the researcher in person and asking him or her to explain not only the findings, but what work led to them, why they’re interesting, and what other experiments they might lead to. Then Shipman writes a draft of the release and sends it back to the researcher for approval. He asks the scientist to check not only for factual inaccuracies, but for problems in emphasis, context, or tone. Press officers at other institutions, however, write press releases using far less rigorous methods, as I have learned by swapping stories with them over the years. And some press officers are judged by the quantity of stories that come out in big outlets, which naturally creates an incentive to make research seem newsworthy, even when it might not be.

“What I think is probably the case is that all of the variables at play here — the researchers, the press officers, and the journalists — are all humans,” Shipman says. “And all of them are capable of making mistakes, intentionally or unintentionally.”

So. Is there any concrete way to reduce those mistakes?

In an editorial accompanying the BMJ study, author and doctor Ben Goldacre makes two suggestions. First, the authors of press releases and the researchers who approved them should put their names on the releases, he writes. “This would create professional reputational consequences for misrepresenting scientific findings in a press release, which would parallel the risks around misrepresenting science in an academic paper.” That seems reasonable to me.

Second, to boost transparency, press releases shouldn’t only be sent to a closed group of journalists, Goldacre writes. “Instead, press releases should be treated as a part of the scientific publication, linked to the paper, referenced directly from the academic paper being promoted, and presented through existing infrastructure as online data appendices, in full view of peers.”

That sounds good, but “would require a significant shift in the culture,” according to Shipman. Press officers would have to be brought into the process much earlier than they are now, he says. And scientists would have to be far more invested in press releases than many of them are now.

I think we journalists need to own our portion of the blame in this mess, too. Let’s go back to Sumner’s 2011 brain-imaging study, for example. His university’s press release didn’t have any wild exaggerations, and it certainly didn’t make a connection between the research and the riots. That came from the journalists (and/or their editors).

“But that actually doesn’t happen very often, it turns out,” Sumner says. “Most of the time, the media stories stay pretty close to what’s in the press release.”