In 2011, an article by the social psychologist Daryl Bem caused a commotion in the scientific community. In a series of psychological experiments with over 1,000 participants, Bem showed that people were, on average, able to predict at above-chance levels the outcome of future events that could not otherwise be anticipated by “normal” means. For example, in one of these experiments heterosexual male college students had to guess behind which of two curtains, presented on a computer screen, an erotic photo was hidden. The chance of guessing the curtain with the stimulating photo was 50%. The participants picked the right curtain above chance level: not by much, but, in the statistical terminology of experimental psychology, significantly so. All but one of the nine experiments in that article showed this small but statistically significant above-chance effect. Following this publication, an ongoing debate started about the possibility or impossibility of precognition, a form of looking into the future. Were the students really able to use some information about the future (or about their own future state) to press the right button?
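To make the statistical claim concrete: with enough trials, even a small deviation from the 50% chance baseline becomes statistically significant. The numbers below are hypothetical, not Bem's actual data; the sketch simply shows an exact one-sided binomial test against chance.

```python
from math import comb

def binom_pvalue_one_sided(hits, trials, p=0.5):
    """Exact one-sided binomial p-value: P(X >= hits) under chance level p."""
    return sum(comb(trials, k) * p**k * (1 - p)**(trials - k)
               for k in range(hits, trials + 1))

# Hypothetical numbers: 530 correct guesses out of 1,000 two-choice trials,
# i.e. a hit rate only 3 percentage points above chance.
pval = binom_pvalue_one_sided(530, 1000)
print(f"hit rate 53.0%, one-sided p = {pval:.4f}")
```

A 53% hit rate looks negligible, yet over a thousand trials it falls below the conventional .05 threshold — which is exactly the sense in which Bem's small effects were “significant.”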

The discussion of whether these effects are genuine parapsychological effects or artifacts quickly becomes very technical, and psychological theories are not easily checked or falsified. I will therefore try to summarize the statistically oriented part of the discussion as it has evolved over the past few years. Individual researchers have voiced the opinion that, regarding parapsychological effects (I am paraphrasing), “if the statistics show a significant effect, then the statistics must be wrong.” Accordingly, psychologists like Eric-Jan Wagenmakers advise the use of a different set of statistical methods called Bayesian analysis. With this alternative method, some of the initially positive effects indeed had to be discarded. But that is not the end of the story: there is now a debate about the right use of the Bayesian method. You see, it is not so much a question of whether experiments show positive findings (they often do); the problem lies in the selection of the statistical method. It is quite complicated, and not a question of “let's do an experiment and see what the result is.”

As usual in science, an initial finding has to be replicated by other researchers. Some studies show an effect, others don't. Meta-analyses combine all these studies and test whether a presumed effect holds across them. Such analyses, for example one performed by Daryl Bem and colleagues, show overall positive effects of “looking into the future,” even when Bayesian analysis is applied. But there is no such thing as the Bayesian analysis: the choice of method rests on assumptions one can debate endlessly (for the initiated: the choice of priors). Again, there is no clear-cut, rational way of deciding which method to use.
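The prior-sensitivity point can be illustrated with a toy calculation. For two-choice guessing data, one standard Bayesian test compares H0: p = 0.5 against an H1 that places a Beta(a, a) prior on the hit probability; the resulting Bayes factor depends on how wide that prior is. The data here are hypothetical (530 hits in 1,000 trials), a minimal sketch and not any published reanalysis.

```python
from math import lgamma, log, exp

def log_beta(a, b):
    """Log of the Beta function, computed via log-gamma for numerical stability."""
    return lgamma(a) + lgamma(b) - lgamma(a + b)

def bayes_factor_10(hits, trials, a):
    """Bayes factor for H1: p ~ Beta(a, a) versus H0: p = 0.5,
    given binomial data (the binomial coefficient cancels out)."""
    log_m1 = log_beta(hits + a, trials - hits + a) - log_beta(a, a)  # marginal likelihood under H1
    log_m0 = trials * log(0.5)                                       # likelihood under chance
    return exp(log_m1 - log_m0)

# Same hypothetical data, three priors of different width:
for a in (1.0, 10.0, 100.0):
    print(f"Beta({a:g},{a:g}) prior -> BF10 = {bayes_factor_10(530, 1000, a):.2f}")
```

With a flat Beta(1,1) prior the Bayes factor for these data favors the null, while a prior tightly concentrated near chance (Beta(100,100)) mildly favors the effect — even though the same 530/1,000 hit rate is significant at the .05 level in a one-sided frequentist test. This latitude is exactly what the debate about “the use of priors” is about.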

In this blog post, I want to highlight an additional criticism beyond the one concerning the right use of statistics. This type of critique becomes apparent in a series of discussion articles that appeared in March 2018 in the journal Psychology of Consciousness: Theory, Research, and Practice.

In their article “Precognition as a form of prospection: A review of the evidence,” Julia Mossbridge and Dean Radin from the Institute of Noetic Sciences in California provide an overview of the current empirical evidence on precognition. Each section of the article, covering a different form of prospection, starts with a historical account of 20th-century research and presents both positive and null findings from experiments on precognition, including computer-based studies. Two sections deal with potential physiological and psychological mechanisms to explain the results. Of course, one could say that the presentation in this article favors the veridicality of precognitive effects, but the same holds true for any summary in which researchers present their ideas. If the results concerned a mainstream topic, the significant findings would probably pass without a stir.

What I want to turn to is the style of argument voiced in the two articles in this series that represent the critical side of the debate. The articles by Schwarzkopf and by Houran and colleagues are telling in that they categorically deny the possibility of scientific investigation of precognition.

To cite Schwarzkopf: “No matter how strong the statistical evidence, if the hypothesis is impossible, it must necessarily be false … I certainly will not be convinced of its existence by some implausible observations, no matter how significant the meta-analysis.” This stops any scientific debate; that is ideology. Houran et al. write about the “crap factor” visible in parapsychological effect sizes (as if they knew which effect sizes would not fall under a “crap factor”). These are gut reactions, similar to the one Eric-Jan Wagenmakers reportedly had when he read Daryl Bem's 2011 paper: “Reading it made me physically unwell.”

Why was there such an uproar over positive psi findings? To be sure, the presented evidence goes against what we learn in school; it is an astonishing hypothesis. But should we react in such an emotional manner? The debate over whether there are true parapsychological effects will continue. Researchers should work in a non-judgmental way when weighing the evidence for even extreme claims. That is actually the approach taken by Schooler and colleagues in this series of articles. And that is how new and astonishing findings in science have been made; it is also how astonishing hypotheses have been falsified. A personal gut feeling may be a strong driver in research, but it has to be complemented by the rigorous application of scientific methods. The overview by Julia Mossbridge and Dean Radin of the evidence for precognition presents a strong claim that demands further research.