The other day someone pointed me to this article by James Kaufman and Vlad Glăveanu in a psychology journal which begins:

How does the current replication crisis, along with other recent psychological trends, affect scientific creativity? To answer this question, we consider current debates regarding replication through the lenses of creativity research and theory. Both scientific work and creativity require striking a balance between ideation and implementation and between freedom and constraints. However, current debates about replication and some of the emerging guidelines stemming from them threaten this balance and run the risk of stifling innovation.

This claim is situated in the context of a fight in psychology between the traditionalists (who want published work to stand untouched and respected for as long as possible) and replicators (who typically don’t trust a claim until it is reproduced by an outside lab).

Rather than get into this debate right here, I’d like to step back and consider the proposal of Kaufman and Glăveanu on its own merits.

I’m 100% with them on reducing barriers to creativity, and I think that journals in psychology and elsewhere should start by not requiring “p less than 0.05” to publish things.

Nothing is stopping researchers such as the authors of the above paper from publishing their work without replication. So I’m not quite sure what they’re complaining about. They don’t like that various third parties are demanding they replicate their work, but why can’t they just ignore these demands?

Indeed, as I wrote above, I think the barriers to publication should be lowered, not raised. And if an Association for Psychological Science journal doesn’t want to publish your article (perhaps because you don’t have personal connections with the editors; see P.S. below), then you can publish it in some other journal.

If you flip a coin six times and get four heads, and you’d like to count that as evidence for precognition or telekinesis, and publish that somewhere, then go for it.

As long as you clearly and openly present your data, evidence, and argument, it seems fine to me to publish whatever you’ve got. And if others care enough, they can do their own replications. Not your job, no problem.

What strikes me is that the authors of the above article, and other people who present similar anti-replication arguments, are not merely saying they want the freedom to be creative. They, and their colleagues and students, already have that freedom. And they already have the freedom to publish un-preregistered, un-replicated work in top journals; they do it all the time.

So what’s the problem?

It seems that what these people really are pushing for is the suppression of criticism. It’s not that they want to publish in Psychological Science (which, in its online version, could theoretically publish unlimited numbers of papers); it’s that they don’t want the rest of us publishing there.

It’s all about status (and money and jobs and fame). Publishing in Psychological Science and PNAS has value because these journals reject a lot of papers. They’re yammering on about creativity—but nothing’s getting in the way of them being creative and conducting and publishing their unreplicated studies. No, what they want is to be able to: (a) perform these unreplicated studies, (b) publish them as is, (c) get tenure, media exposure, etc., and (d) deny legitimacy to criticism from outside. The key steps are (c) and (d), and for these they need to play gatekeeper, to maintain scarcity by preserving their private journals such as Psychological Science and PNAS for themselves and their friends, and to shout down dissenting voices from inside and outside their profession.

Innovation is not being “stifled.” What’s being stifled is their ability to have their shaky work celebrated without question within academia and the news media, their ability to dole out awards and jobs to their friends, etc.

Freedom of speech means freedom of speech. It does not mean freedom from criticism.

P.S. I have no idea how much reviewing happened on the above-linked paper before it was published. Here’s what it says at the end of the article:

And here’s something that the first author of the article posted on the internet recently:

This last bit is interesting as it suggests that Kaufman does not understand the Javert paradox. He’s criticizing people who “devote their time” to criticism, without recognizing that, in the real world, if you care about something and want it to be understood, you have to “devote time” to it. In the particular case under discussion, people criticized Sternberg’s policies quietly, and Sternberg responded by brushing the criticism aside. Then the critics followed up with more criticism. Sure, they could’ve just given up, but they didn’t, because they thought the topic was important.

Flip it around. Why did Kaufman and Glăveanu write the above-linked article? It’s because they think psychology is important—important enough that they want to stop the implementation of policies that they think will slow down research in the field. Fair enough. One might disagree with them, but we can all respect the larger goal, and we can all respect that these authors think the larger goal is important enough that they’ll devote time to it. Similarly, people who have criticized Sternberg’s policy of filling up journals with papers by himself and his friends, and suppressing dissent, have done so because they too feel that psychology is important—important enough that they want to stop the implementation of policies that they think will slow down research in the field. It’s the same damn thing. Susan Matthews put it well: “We need to normalize the pursuit of accuracy as a good-intentioned piece of the scientific puzzle.”

P.P.S. I think I see another problem. In the reference list to the above-linked paper, I see this book:

Kaufman A. B., Kaufman J. C. (Eds.). (2017). Pseudoscience: The conspiracy against science. Cambridge, MA: MIT Press.

P.P.P.S. Just to clarify my recommendation to “publish everything”: I do think reviewing is valuable. I just think it should be done after publication. Put everything on arXiv-like servers, then “journals” can do the review process, where the positive outcome of a review is “endorsement,” not “publication.” Post-publication reviewers can even ask for changes as a condition of endorsement, in the same way that journals currently ask for changes as a condition of publication.

The advantages of publishing first, reviewing later, are: (a) papers aren’t sitting in limbo for years during the review process, and (b) post-publication review can concentrate on the most important papers, rather than, as now, so much of the effort going into reading and reviewing papers that just about no one will ever read.

For more, see:

An efficiency argument for post-publication review

and

Post-publication peer review: who’s qualified?