This recent study by Lauri Nummenmaa and colleagues has been a popular topic of discussion on science blogs and in the popular press, due mostly to the study’s striking figures.

Reading the captions for said figures reveals that they were designed to display data related to subjects’ ratings of body sensations during different emotional states and not, as you might assume without context, data from some type of biological imaging. Charting the perceptual experience of different emotional states may not be the most glamorous path of inquiry in psychological research (social neuroscience probably still holds that dubious honor), but it is an important piece of basic research nonetheless.

Trawling through discussion threads on Reddit and on NPR’s Shots blog, I found that the results of this study seem to have generated an unusual amount of consternation for a piece of basic research. Again, mostly due to its figures.

Looking at the paper, it is very clear that Nummenmaa et al. were interested in answering questions regarding the experience of emotions rather than anything related to physiology. As a former neuroscience researcher (and, before that, a former psychophysiology researcher), I find this perfectly reasonable. Previous work on the physiology of emotion has been generally inconclusive (for a detailed review of the subject, look here), and valid data related to perceptual experience holds tremendous translational potential for researchers and other professionals involved in the treatment of mood disorders.

Nummenmaa et al.’s sample appears to be sufficiently large, and their manipulations and statistics seem sound. In the paper itself and in figure captions shown in press reports, it is made quite clear that the figures reflect subject ratings, not physiological measures. Despite this, I’ve seen scores of comments expressing outrage that the data is being misrepresented as some sort of biological (and therefore somehow “more objective”) measure.

Taking a step back, Nummenmaa and colleagues could probably have put more thought into how their figures would be perceived when taken without context. This may sound like a strange thing for scientists to worry about, but if scientists are to embrace methods of communication like Twitter, they will need to think about how their content will be perceived when separated from any context. Digressions aside, and despite the opinions displayed in many of the comments I’ve encountered, I think the authors provided more than sufficient context with their figure captions and clear statements of their research questions in the abstract, introduction, and discussion sections of their paper.

From what I can tell, this isn’t a case of scientists misrepresenting data. Rather, it is simultaneously a case of reporters not doing their due diligence in giving accurate descriptions of scientific material (c’mon reporters, sensation ≠ perception) and of readers not giving said descriptions more than a perfunctory glance before making disparaging comments (c’mon readers, I know the figure for shame looks like Spiderman, but read the captions before going on an anti-science diatribe).

Science should be collaborative. This does not just mean that scientists need to work together; it also means that scientists, reporters of science, and readers of science need to work together to drive public understanding forward. I am the first to criticize scientists for communicating poorly or misrepresenting data to an unsuspecting audience. But, in this case, the blame lies elsewhere. Trusting reporters to actually report and readers to actually read is not the same as misrepresenting data.