We had a post on this a couple years ago, but the topic came up again, and here are my latest thoughts.

Psychology has several features that contribute to the replication crisis:

– Psychology is a relatively open and uncompetitive field (compared, for example, to biology). Many researchers will share their data.

– Psychology is low budget (compared to biomedicine). So, again, there’s not much incentive to hoard data or lab procedures. There’s no “Robert Gallo” in psychology who would take someone’s virus sample in order to get a Nobel Prize.

– The financial rewards are lower within psychology, hence the incentive is not to set up your own company using secret technology but rather to get your idea known far and wide so you can get speaking tours, book contracts, etc. Sure, most research psychologists don’t attempt this, but to the extent there are financial rewards, that’s where they are.

– In psychology, data are generally not proprietary (as in business) or protected (as in medicine). So there’s a norm of sharing. In bio, if you want someone’s data, you have to beg. In psychology, they have to give you a reason not to share.

– In psychology, experiments are easy to replicate (unlike econ or poli sci, where you can’t just run a bunch more recessions or elections) and cheap to replicate (unlike medicine, which involves doctors and patients). So replication is a live option, so much so that some have suggested preregistered replication be a requirement in some cases.

– Finally, hypotheses in psychology, especially social psychology, are often vague, and data are noisy. Indeed, there often seems to be a tradition of casual measurement, the idea perhaps being that it doesn’t matter exactly what you measure because if you get statistical significance, you’ve discovered something. This is different from econ, where there seems to be more of a tradition of large datasets, careful measurements, and theory-based hypotheses. Anyway, psychology studies often (not always, but often) feature weak theory + weak measurement, which is a recipe for unreplicable findings.

To put it another way, p-hacking is not the cause of the problem; p-hacking is a symptom. Researchers don’t want to p-hack; they’d prefer to confirm their original hypotheses. They p-hack only because they have to.
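The arithmetic behind this is worth seeing. Here’s a minimal simulation (a hypothetical setup, not modeled on any particular study) of a researcher who measures several noisy outcomes per study and reports whichever one comes out significant, even though the true effect is exactly zero everywhere:

```python
import math
import random

random.seed(42)

def z_test_pvalue(sample):
    """Two-sided z-test of 'mean = 0', assuming known sd = 1."""
    n = len(sample)
    z = (sum(sample) / n) * math.sqrt(n)
    # Normal CDF via the error function; p = 2 * (1 - Phi(|z|))
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

def run_study(n_subjects=20, n_outcomes=5):
    """One 'study': several noisy outcome measures, true effect = 0 for all.
    Returns the smallest p-value, i.e. the best-looking result."""
    pvals = []
    for _ in range(n_outcomes):
        sample = [random.gauss(0, 1) for _ in range(n_subjects)]
        pvals.append(z_test_pvalue(sample))
    return min(pvals)

n_studies = 5000
hits = sum(run_study() < 0.05 for _ in range(n_studies))
rate = hits / n_studies
print(f"'Significant' findings under a true null: {rate:.0%}")
```

With five outcomes per study, the chance of at least one p < 0.05 under the null is about 1 − 0.95⁵ ≈ 23%, more than four times the nominal 5% rate. And this is the honest-looking version: with vague hypotheses, any of those five outcomes can be framed as confirming the theory, which is the "weak theory + weak measurement" recipe described above.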