Several readers have called my attention to a new paper in PLoS ONE by Christine Ma-Kellams and Jim Blascovich, psychologists from the University of California at Santa Barbara, supposedly showing that reading about science in an experimental study makes one behave more morally and altruistically. They also claim to show that studying science improves your moral judgment (reference below; the download is free). I say “supposedly” and “claim” because I don’t find the paper terribly convincing.

The authors did four experiments, three of them using the same protocol to “prime” the subjects with science or with a “neutral” non-science task. The authors predicted that “the notion of science as part of a broader moral vision of society [i.e., Enlightenment values] facilitates moral and prosocial judgments and behaviors.” And that’s what they found, in all four studies. I’ll briefly describe the experiments and results.

Study 1. This used 48 undergraduates from UCSB. All of them first read a date-rape story in which a guy drives a woman home, the woman invites him in for a drink, and then he has “nonconsensual sex” (a euphemism for “rape”, I guess) with her. Afterwards, the participants answered questions about their field of study and how wrong they thought the man’s act was (on a scale from 1 = completely right to 100 = completely wrong). They also answered the question “How much do you believe in science?” on a scale from 1 (not at all) to 7 (very much).

Results: Field of study was correlated with greater moral condemnation of rape, with science students being significantly (p = 0.01) more condemning than nonscience students. Belief in science was also positively correlated with moral condemnation (p < 0.001).

Problems here include use of a small pool of college undergraduates, correlation that doesn’t show causation (perhaps more ‘moral’ students tend to gravitate to or are more accepting of science), and lack of replication, as this was a one-time study. It’s also not clear whether the science-friendly students would actually behave more morally. Answering one question doesn’t show that you are generally “more moral” than others.
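To make the correlation-versus-causation worry concrete, here’s a toy simulation of my own (not from the paper, and the numbers are invented): a single lurking trait that drives both affinity for science and moral condemnation manufactures a healthy correlation between the two, even though neither one causes the other.

```python
import random

random.seed(1)

n = 1000
# A latent "moral disposition" drives BOTH science affinity and
# condemnation of the scenario; there is no causal link between them.
moral = [random.gauss(0, 1) for _ in range(n)]
science = [m + random.gauss(0, 1) for m in moral]
condemn = [m + random.gauss(0, 1) for m in moral]

def corr(x, y):
    """Pearson correlation coefficient, computed from scratch."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

r = corr(science, condemn)
print(round(r, 2))  # a substantial correlation with zero causation
```

With these (arbitrary) variances the expected correlation is 0.5, which is exactly the kind of association that a one-shot correlational study could mistake for “science makes you moral.”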

Studies 2, 3, and 4. These studies used 33 undergraduates, 32 volunteers from the area, and 43 participants between 18 and 22 from the university’s “research participation pool,” respectively.

In all three studies, the participants got a list of five scrambled words from which they had to choose four to make a complete sentence. The “science” condition contained words like “logical”, “hypothesis”, “theory”, “laboratory” and “scientists.” The controls had a list of five nonscience words; they give the example of “shoes give replace old the.” There were thus two primes: a science one and a “control” one. Then each of the three groups was subject to a different test, described under “results” below.

Results, study 2. After their prime, students read the same date-rape scenario and made their 1-100 moral judgment. The subjects primed with science words were more condemnatory of rape (p = 0.04).

Problems here include small sample size again, a probability that is barely significant (0.05 is the cut-off level), and a worry that this ranking (82 for control primes, 96 for science primes) isn’t a good indicator of moral behavior.
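As a rough sanity check of my own (this calculation is not in the paper), one can back out the approximate standardized effect size implied by a reported two-tailed p-value and sample size, using a normal approximation to the t distribution and assuming roughly equal group sizes:

```python
from math import sqrt
from statistics import NormalDist

def implied_cohens_d(p_two_tailed, n1, n2):
    """Rough effect size (Cohen's d) implied by a two-tailed p-value,
    using a normal approximation to the t distribution."""
    z = NormalDist().inv_cdf(1 - p_two_tailed / 2)
    return z * sqrt(1 / n1 + 1 / n2)

# Study 2: 33 subjects total, p = 0.04; assume ~16 vs. 17 per condition
d = implied_cohens_d(0.04, 16, 17)
print(round(d, 2))  # → 0.72
```

A d around 0.7 would be a medium-to-large effect, which is exactly what one expects from small samples: only inflated effects clear the significance bar, so the published estimate is likely an overestimate of any true effect.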

Results, study 3. Students, after priming, completed a “prosocial intentions measure,” which asked “the likelihood of engaging in each of several behaviors in the following month,” including prosocial activities (donating to charity, giving blood, volunteering) and distractor activities (attending a party, going on vacation, seeing a movie). The subjects primed with science reported greater prosocial intentions relative to controls (p = 0.024).

Problems are again a small sample unrepresentative of the general population, a probability that isn’t all that impressive, and the fact that reporting your intentions doesn’t mean you’ll actually fulfill them.

Results, study 4. After priming, students took a standard “economic exploitation” test; each was given five one-dollar bills and told to divide the money between themselves and another anonymous participant (there was no opportunity for the other person to reject the dosh). After the experiment was over, both participants got $5 anyway. In this case alone there was an effect of gender, with women keeping more money for themselves than did men (p = 0.03), but there was no gender × prime interaction, and those primed with science gave away significantly more money than those receiving the control prime (p = 0.046).

Problems are again small sample size, a nonrepresentative population, and a probability that is barely significant (again, 0.05 is the cutoff).

Note that in the last three studies, the possibility that science-y people were more moral a priori was not an issue, as the priming tests were allocated randomly among participants, regardless of their area of study.

The authors conclude that “Taken together the present results provide support for the idea that the study of science itself, independent of the specific conclusions reached by scientific inquiries, holds normative implications and leads to moral outcomes.” Well, only study #1 had anything to say about the “study of science,” as it was the only one that compared science students with non-science students. “Priming” with words in the other three studies has nothing to do with “the study of science.”

What this study shows is simply a need for further studies, as sample sizes were small and probabilities often marginal. But I am wary of assessing how moral or altruistic someone is from their response to a single test. That itself would need validation by correlating test performance with moral or altruistic behavior in the real world, something that has never been done, much less shown to be doable.

Further, tests like these are in severe need of replication. For example, the authors mention the paper of Vohs and Schooler (free download at link) showing that reading about humans’ lack of free will made them more likely to cheat on a subsequent task. That paper got a lot of attention. But, as I’ve written about before, the Vohs and Schooler paper was not replicated in a subsequent study by Rolf Zwaan at the University of Rotterdam. Zwaan found no difference in cheating behavior between participants who read a piece by Francis Crick on the illusory nature of free will and those who read a “control” piece by Crick. Ma-Kellams and Blascovich don’t mention the failure of replication. And of course their own tests need to be replicated on larger and more diverse populations.
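The statistical reason to expect replication trouble can be made concrete with a small simulation of my own (assuming normal data, equal groups, known unit variance, and a simple two-tailed z-test, for the sake of the sketch): if the true effect is exactly the effect a study barely detected at p ≈ 0.05, then an identical replication is essentially a coin flip.

```python
import random
from math import sqrt

random.seed(0)

def replication_power(d=0.7, n=16, sims=5000, cutoff=1.96):
    """Fraction of exact replications (n subjects per group, true
    standardized effect d, unit variance) whose two-sample z statistic
    clears the two-tailed significance cutoff."""
    hits = 0
    for _ in range(sims):
        ctrl = [random.gauss(0, 1) for _ in range(n)]
        sci = [random.gauss(d, 1) for _ in range(n)]
        z = (sum(sci) / n - sum(ctrl) / n) / sqrt(2 / n)
        hits += abs(z) > cutoff
    return hits / sims

power = replication_power()
print(round(power, 2))  # roughly 0.5: a coin flip
```

A t-test on estimated variances gives much the same answer; the point is that a result squeaking in at p = 0.04 or 0.046 carries only about 50% power for its own replication, even in the best case where the published effect size is the true one.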

I’ve been hard on this study precisely because it produced the results I’d like to see: studying science is good for your behavior. The connection between the two is not as obvious to me as to the authors, but maybe it’s true. But the differences were small, and I’m not sure what the implications would be even if the results were real. As Feynman said, the main purpose of science is to keep you from fooling yourself, so we must be extra cautious about accepting results that meet our preconceptions.

Note, too, that the paper was published in PLoS ONE, which is a journal that doesn’t review papers for novelty or generality. The journal will publish anything so long as the experiments seem to have been performed properly. There have been some good papers in that journal, but I regard it largely as a dumping ground for papers that can’t meet the more rigorous standards of other journals. With increasing pressure to publish, scientists can turn to journals like this to publish nearly anything, so long as the experiment was properly designed and properly analyzed. I’m not saying that this paper was not interesting, but if the results were so momentous why did the authors send them to PLoS ONE?


_____________

Ma-Kellams, C., and J. Blascovich. 2013. Does science make you moral? The effects of priming science on moral judgments and behavior. PLoS ONE 8(3):e57989.