Last year a mainstream psychology researcher called Daryl Bem published a competent academic paper, in a well-respected journal, showing evidence of precognition – the ability to see the future. Instead of designing new studies to see whether people could consciously tell you about the future, he ran some classic psychology experiments backwards.

In experiments on subliminal influence, participants are presented with two mirror images of the same picture and asked which they prefer. They are less likely to choose the image on the side where a negative subliminal image is flashed up for milliseconds before they make their choice. In the Bem study, the negative images were flashed up after they made their choice, but participants were still less likely to choose the image on the side with the nasty subliminal image.

This was all pretty kosher, and statistically significant, and I wasn't very interested, for the same reasons you weren't. If humans really could see the future, we'd probably know about it already; and extraordinary claims require extraordinary evidence, rather than one-off findings. There's plenty of amazing stuff in our infinitely distracting universe and I'll pay attention to the cheesy precognition stuff when the evidence is good and replicated.

Now the study has been replicated. Three academics – Stuart Ritchie, Chris French, and Richard Wiseman – have re-run three of these backwards experiments, just as Bem ran them, and found no evidence of precognition. They submitted their negative results to the Journal of Personality and Social Psychology, which published Bem's paper last year, and the journal rejected their paper out of hand. We never, they explained, publish studies that replicate other work.

This squabble illustrates two problems facing all of science, which have never been adequately addressed.

The first is the problem of context: these positive results may have happened purely by chance, against a backdrop of negative results that never saw the light of day. Researchers and academic journals, just like newspaper journalists, are more likely to publish eye-catching positive results. We also know that if you analyse one study's results in lots of different ways, you increase the likelihood of getting a positive finding purely by chance. So replicating these findings was key – Bem himself said so in his paper – and keeping track of the negative replications is vital too. For clinical trials, there is a system of registering your trial before you recruit participants, to reduce the risk of negative results being buried (it's imperfect, as I've written, but it exists). Outside of trials, people tend not to bother, which puts whole fields at risk of spurious positive findings: Wiseman has set up a register for people to declare that they are attempting to replicate Bem's work.

But the second issue is how people find out about stuff. We exist in a blizzard of information, and stuff goes missing. Publishing a follow-up in the same venue that made an initial claim is one way of addressing this problem (and when the journal Science rejected the replication paper, even they said: "Your results would be better received and appreciated by the audience of the journal where the Daryl Bem research was published.")

The New York Times ran a long piece on the original precognition finding, New Scientist covered it twice, the Guardian joined in online, and the Telegraph wrote about it three times over. It's hard to picture many of these outlets giving equal prominence to the new, negative findings now emerging, just as newspapers often fail to return to a scare once it has been debunked. The most interesting problems around information today are about how to cope with the overload. For some eye-catching precognition research, this stuff probably doesn't matter. What's interesting is that the information architectures of medicine, academia and popular culture are all broken in exactly the same way.