"Danger from just seven cups of coffee a day," said the Daily Express on Wednesday. "Too much coffee can make you hallucinate and sense dead people, say sleep experts. The equivalent of just seven cups of instant coffee a day is enough to trigger the weird responses." The story appeared in almost every national newspaper.

This was weak observational data. That's just the start of our story, but first you should know exactly what the researchers did. They emailed students an invitation to fill out an online survey, and 219 agreed.

The survey is still online (in all its time-consuming glory; I just clicked answers randomly to get to the next question). It asks about caffeine intake in vast detail, then uses one scale to measure how prone you are to feeling persecuted, and another, the Launay-Slade Hallucination Scale (LSHS), a set of 16 questions designed to measure "predisposition to hallucination-like experiences".

Some of these questions are about having hallucinations and seeing ghosts, but some really are a very long way from there. Heavy coffee drinkers could have got higher scores on this scale by responding positively to questions like: "No matter how hard I try to concentrate on my work, unrelated thoughts always creep into my mind"; "Sometimes a passing thought will seem so real that it frightens me"; or "Sometimes my thoughts seem as real as actual events in my life". That's not seeing ghosts or hearing voices.

There could have been alternative explanations for the observed correlation between caffeine intake and very slightly higher LSHS scores. Maybe some students who drink a lot of coffee are also sleep deprived, and marginally more prone to hallucinations because of that. Maybe they are drinking coffee to help them get over last night's marijuana hangover. Maybe people who take drugs instrumentally to have fun and distort their perceptions also take drugs like caffeine instrumentally to stay alert. You can think of more, I'm sure. The researchers were keen to point out this shortcoming in their paper. The Express and many others didn't seem to care.

If you read the academic paper you find that the associations reported are weak. For the benefit of those who understand "regression" (and it can make anybody's head hurt): 18% of the variance in the LSHS score is explained by gender, age and stress. When you add in caffeine, 21% of the variance in the LSHS score is explained: only an extra 3%, so caffeine adds very little. The finding is statistically significant, as the researchers point out, so it is unlikely to be due to chance; but it is still weak, explaining only a tiny amount of the overall variance in scores on the "predisposed-to-hallucinations" scale.
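The arithmetic here is an "incremental R-squared" comparison: fit the regression with the baseline covariates alone, fit it again with caffeine added, and see how much the proportion of variance explained goes up. Here is a minimal sketch with simulated data; every number and variable below is invented for illustration, not taken from the study:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 219  # the survey's sample size, used here purely for flavour

# Stand-ins for the baseline covariates (gender, age, stress) and for caffeine.
covariates = rng.normal(size=(n, 3))
caffeine = rng.normal(size=n)

# A simulated LSHS-like score: mostly noise, with a little signal from each block.
score = covariates @ np.array([0.4, 0.3, 0.3]) + 0.2 * caffeine + rng.normal(size=n)

def r_squared(X, y):
    """R^2 from an ordinary least-squares fit with an intercept."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

r2_base = r_squared(covariates, score)
r2_full = r_squared(np.column_stack([covariates, caffeine]), score)
print(f"baseline R^2: {r2_base:.2f}  with caffeine: {r2_full:.2f}  "
      f"increment: {r2_full - r2_base:.2f}")
```

Because the full model contains every predictor of the baseline model, its R-squared can never be lower; the question is whether the increment, 3 percentage points in the paper, is big enough to matter.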

Lastly, most newspapers reported a rather dramatic claim: that seven cups of coffee a day is associated with a three times higher prevalence of hallucinations. This figure does not appear in the paper. It seems to be an ad hoc calculation done afterwards by the researchers and put into the press release, so you cannot tell how they did it, or whether they controlled appropriately for problems in the data, like something called "multiple comparisons".

Here is the problem. Apparently this three times greater risk is for the top 10% of caffeine consumers, compared with the bottom 10%. They say that heavy caffeine drinkers were three times more likely to have answered affirmatively to just one LSHS question: "In the past, I have had the experience of hearing a person's voice and then found that no one was there."

Now this poses massive problems. Imagine that I am standing facing a barn, holding a machine gun, blindfolded, firing off shots whilst swinging my whole body from side to side and laughing maniacally. I then walk up to the barn, find three bullet holes that happen to be very close together, and draw a target around them, claiming I am an excellent shot.

You can easily find patterns in your data once it's collected. Why choose 10% as your cut-off? Why not the top and bottom quarters? Maybe they have accounted for this problem. You don't know; I don't know. They say they have, to me, in emails, but it wasn't in the paper, so we can't all see the details. I don't think that's satisfactory for a headline finding and the first claim of a press release.
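This worry can be made concrete with a small simulation. Suppose caffeine has no effect whatsoever, and each of 16 yes/no questions is answered "yes" a quarter of the time regardless of coffee drinking. With a sample of 219 and a top-10%-versus-bottom-10% split, how often does at least one question show a doubled "yes" rate purely by chance? (All the numbers here are assumptions for illustration, not the study's data.)

```python
import numpy as np

rng = np.random.default_rng(1)
n, n_questions, runs = 219, 16, 2000

hits = 0
for _ in range(runs):
    caffeine = rng.normal(size=n)
    top = caffeine >= np.quantile(caffeine, 0.9)     # "top 10%" of consumers
    bottom = caffeine <= np.quantile(caffeine, 0.1)  # "bottom 10%"
    # Answers are pure noise: "yes" with probability 0.25, independent of caffeine.
    answers = rng.random((n, n_questions)) < 0.25
    top_rate = answers[top].mean(axis=0)
    bottom_rate = answers[bottom].mean(axis=0)
    # Count the run if any question's "yes" rate is at least doubled in the top group.
    if np.any((top_rate >= 2 * bottom_rate) & (top_rate > 0)):
        hits += 1

print(f"runs where some question 'doubled' by chance alone: {hits / runs:.0%}")
```

With a free choice of cut-off and 16 questions to pick from, a striking-looking ratio on some question is close to guaranteed even in pure noise, which is exactly why the cut-off choice and any correction for multiple comparisons belong in the paper, not in private emails.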

There is another problem: putting a finding in the press release but not into the paper is a subversion of the peer review process. People will read this coverage, they will be scared, and they will change their behaviour. But the researchers' key reported claim, with massive popular impact, was never peer reviewed, and crucially the technical details behind it are not in the public domain.

I'm sorry to see that academics are not blameless in this dreary situation.