That’s the title of a new paper by Paul Smaldino and Richard McElreath which presents a sort of agent-based model that reproduces the growth in the publication of junk science that we’ve seen in recent decades.

Even before looking at this paper I was positively disposed toward it, for two reasons. First, because I do think there are incentives that encourage scientists to follow the forking paths toward statistical significance and that encourage journals to publish this sort of thing. And I also see incentives for scientists and journals (and even the Harvard University public relations office; see the P.P.S. here) to simply refuse to even consider the possibility that published results are spurious. The second reason I liked this paper before even reading it is that the second author recently wrote an excellent textbook on Bayesian statistics, which in fact I just happened to recommend to a student a few hours ago.

I have some problems with the details of Smaldino and McElreath’s model—in particular, I hate the whole “false positives” thing, and I’d much prefer a model in which effects are variable, as discussed in this recent paper. But overall I think this paper could have useful rhetorical value; I place it in the same category as Ioannidis’s famous “Why most published research findings are false” paper, in that I agree with its general message, even if it’s working within a framework that I don’t find congenial.

In short, I agree with Smaldino and McElreath that there are incentives pushing scientists to conduct, publish, promote, and defend bad science, and I think their model is valuable in demonstrating how that can happen. People like me who have problems with the particulars of the model can create their own models of the scientific process, and I think they (we) will come to similar conclusions.
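To make that concrete, here is a toy simulation in that spirit. It is emphatically not Smaldino and McElreath's actual model; every parameter value and functional form below (the base rate of true hypotheses, the link between effort and false positives, the publish-the-worst/copy-the-best selection rule) is made up for illustration. The point is just that when labs that cut methodological corners produce more positive results, and positive results drive who gets imitated, average rigor erodes without anyone intending it:

```python
import random

random.seed(0)  # fixed seed so the toy run is reproducible

N_LABS = 50        # population of labs
GENERATIONS = 500  # selection steps
BASE_RATE = 0.1    # assumed fraction of tested hypotheses that are true
POWER = 0.8        # assumed chance of detecting a true effect
PUB_NEG = 0.05     # assumed chance a negative result gets published

def false_positive_rate(effort):
    # Hypothetical link: more methodological effort -> fewer false positives.
    return 0.05 + 0.45 * (1.0 - effort)

def run_study(effort):
    """One study: return 1 if it yields a publication, else 0."""
    true_effect = random.random() < BASE_RATE
    if true_effect:
        positive = random.random() < POWER
    else:
        positive = random.random() < false_positive_rate(effort)
    if positive:
        return 1
    return 1 if random.random() < PUB_NEG else 0

# Each lab starts with a random "effort" level in (0.1, 1.0).
efforts = [random.uniform(0.1, 1.0) for _ in range(N_LABS)]
initial_mean = sum(efforts) / N_LABS

for _ in range(GENERATIONS):
    # Payoff = publications from 10 studies this generation.
    payoffs = [sum(run_study(e) for _ in range(10)) for e in efforts]
    # Selection: the least-published lab is replaced by a noisy copy
    # of the most-published lab's methods.
    worst = payoffs.index(min(payoffs))
    best = payoffs.index(max(payoffs))
    efforts[worst] = min(1.0, max(0.01, efforts[best] + random.gauss(0, 0.02)))

final_mean = sum(efforts) / N_LABS
print(initial_mean, final_mean)  # mean effort declines under selection
```

No lab in this sketch ever "decides" to do bad science; the decline in effort is purely a population-level consequence of rewarding publication counts. Raising `PUB_NEG` (i.e., making negative results publishable) weakens the selection pressure against rigor.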

I hope the Smaldino and McElreath paper gets lots of attention because (a) it can motivate more work in this area, and (b) by giving a systemic explanation for the spread of junk science, it lets individual scientists off the hook somewhat. This might encourage people to change the incentives, and it also gives a sort of explanation for why all these well-meaning researchers can be doing so much bad work. One reason I've disliked discussions of "p-hacking" is that they make the perpetrators of junk science out to be bad guys, which in turn leads individual researchers to think, "Hey, I'm not a bad guy and my friends aren't bad guys, we're not p-hacking, therefore our work is ok." I'm hoping that ideas such as the garden of forking paths and this new paper will give researchers permission to critically examine their own work and the work of their friends and consider the possibility that they're stuck in a dead end.

I fear it’s too late for everyone involved in himmicanes, beauty and sex ratios, ovulation and clothing, embodied cognition, power pose, etc.—but maybe not too late for the next generation of researchers, or for people who are a little less professionally committed to particular work.