OK, here's something weird. Every week in Bad Science we either victimise some barking pseudoscientific quack or take apart a big science story in a national newspaper. Now, tell me, why are these two even mentioned in the same breath? Why is science in the media so often pointless, simplistic, boring, or just plain wrong? Like a proper little Darwin, I've been collecting specimens, making careful observations, and now I'm ready to present my theory.

It is my hypothesis that in their choice of stories, and the way they cover them, the media create a parody of science, for their own ends. They then attack this parody as if they were critiquing science. This week we take the gloves off and do some serious typing.

Science stories usually fall into three families: wacky stories, scare stories and "breakthrough" stories. Last year the Independent ran a wacky science story that generated an actual editorial: how many science stories get the lead editorial? It was on research by Dr Kevin Warwick, purporting to show that watching Richard and Judy improved IQ test performance (www.badscience.net/?p=84). Needless to say it was unpublished data, and highly questionable.

Wacky stories don't end there. They never end. Infidelity is genetic, say scientists. Electricity allergy real, says researcher. I've been collecting "scientists have found the formula for" stories since last summer, carefully pinning them into glass specimen cases, in preparation for my debut paper on the subject. So far I have captured the formulae for: the perfect way to eat ice cream (AxTpxTm/FtxAt + VxLTxSpxW/Tt = 20), the perfect TV sitcom (C=[(RxD)+V]xF/A+S), the perfect boiled egg, love, the perfect joke, the most depressing day of the year ([W+(D-d)]xTQ÷MxNA), and so many more. Enough! Every paper - including this one - covers them: and before anyone bleats excuses on their behalf, these stories are invariably written by the science correspondents, and hotly followed, to universal jubilation, with comment pieces, by humanities graduates, on how bonkers and irrelevant scientists are.

A close relative of the wacky story is the paradoxical health story. Every Christmas and Easter, regular as clockwork, you can read that chocolate is good for you (www.badscience.net/?p=67), just like red wine is, and with the same monotonous regularity, in breathless, greedy tones, you will hear how it's scientifically possible to eat as much fat and carbohydrate as you like, for some complicated reason, but only if you do it at "the right time of day". These stories serve one purpose: they promote the reassuring idea that sensible health advice is outmoded and moralising, and that research on it is paradoxical and unreliable.

At the other end of the spectrum, scare stories are - of course - a stalwart of media science. Based on minimal evidence, and expanded with poor understanding of its significance, they help perform the most crucial function for the media, which is selling you, the reader, to their advertisers. The MMR disaster was a fantasy entirely of the media's making (www.badscience.net/?p=23), which failed to go away. In fact the Daily Mail is still publishing hysterical anti-immunisation stories, including one calling the pneumococcus vaccine a "triple jab", presumably because they misunderstood that the meningitis, pneumonia, and septicaemia it protects against are all caused by the same pneumococcus bacterium (www.badscience.net/?p=118).

Now, even though popular belief in the MMR scare is - perhaps - starting to fade, popular understanding of it remains minimal: people periodically come up to me and say, isn't it funny how that Wakefield MMR paper turned out to be Bad Science after all? And I say: no. The paper always was and still remains a perfectly good small case series report, but it was systematically misrepresented as being more than that, by media that are incapable of interpreting and reporting scientific data.

Once journalists get their teeth into what they think is a scare story, trivial increases in risk are presented, often out of context, but always using one single way of expressing risk, the "relative risk increase", that makes the danger appear disproportionately large (www.badscience.net/?p=8). This is before we mention the times, such as last week's Seroxat story, or the ibuprofen and heart attack story last month, when in their eagerness to find a scandal, half the papers got the figures wrong. This error, you can't help noticing, is always in the same direction.
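The arithmetic behind that trick is worth spelling out. Here is a toy calculation - the numbers are invented, purely for illustration - showing how the same negligible change in risk looks tiny expressed absolutely, and alarming expressed relatively:

```python
# Hypothetical figures: suppose some exposure raises the risk of a rare
# side effect from 2 in 10,000 people to 3 in 10,000 people.
baseline_risk = 2 / 10_000   # risk without the exposure
exposed_risk = 3 / 10_000    # risk with the exposure

# Absolute risk increase: the extra risk to any one person.
absolute_increase = exposed_risk - baseline_risk   # 1 extra case per 10,000

# Relative risk increase: the same change, expressed as a fraction of the
# (tiny) baseline - this is the number that makes headlines.
relative_increase = (exposed_risk - baseline_risk) / baseline_risk

print(f"Absolute risk increase: {absolute_increase:.4%}")   # 0.0100%
print(f"Relative risk increase: {relative_increase:.0%}")   # 50%
```

The same single extra case per 10,000 people can truthfully be reported either as a 0.01 percentage-point rise or as "risk up 50%"; the coverage described above reliably picks the second.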

And last, in our brief taxonomy, is the media obsession with "new breakthroughs": a more subtly destructive category of science story. It's quite understandable that newspapers should feel it's their job to write about new stuff. But in the aggregate, these stories sell the idea that science, and indeed the whole empirical world view, is only about tenuous, new, hotly-contested data. Articles about robustly-supported emerging themes and ideas would be more stimulating, of course, than most single experimental results, and these themes are, most people would agree, the real developments in science. But they emerge over months, across several pieces of evidence, not from single rejiggable press releases. Often, a front page science story will emerge from a press release alone, and the formal academic paper may never appear, or appear much later, and then not even show what the press reports claimed it would (www.badscience.net/?p=159).

Last month there was an interesting essay in the journal PLoS Medicine, about how most brand new research findings will turn out to be false (www.tinyurl.com/ceq33). It predictably generated a small flurry of ecstatic pieces from humanities graduates in the media, along the lines of science is made-up, self-aggrandising, hegemony-maintaining, transient fad nonsense; and this is the perfect example of the parody hypothesis that we'll see later. Scientists know how to read a paper. That's what they do for a living: read papers, pick them apart, pull out what's good and bad.

Scientists never said that tenuous small new findings were important headline news - journalists did.

But enough on what they choose to cover. What's wrong with the coverage itself? The problems here all stem from one central theme: there is no useful information in most science stories. A piece in the Independent on Sunday from January 11 2004 suggested that mail-order Viagra is a rip-off because it does not contain the "correct form" of the drug. I don't use the stuff, but there were 1,147 words in that piece. Just tell me: was it a different salt, a different preparation, a different isomer, a related molecule, a completely different drug? No idea. No room for that one bit of information.

Remember all those stories about the danger of mobile phones? I was on holiday at the time, and not looking things up obsessively on PubMed; but off in the sunshine I must have read 15 newspaper articles on the subject. Not one told me what the experiment flagging up the danger was. What was the exposure, the measured outcome, was it human or animal data? Figures? Anything? Nothing. I've never bothered to look it up for myself, and so I'm still as much in the dark as you.

Why? Because papers think you won't understand the "science bit", all stories involving science must be dumbed down, leaving pieces without enough content to stimulate the only people who are actually going to read them - that is, the people who know a bit about science. Compare this with the book review section, in any newspaper. The more obscure references to Russian novelists and French philosophers you can bang in, the better writer everyone thinks you are. Nobody dumbs down the finance pages. Imagine the fuss if I tried to stick the word "biophoton" on a science page without explaining what it meant. I can tell you, it would never get past the subs or the section editor. But use it on a complementary medicine page, incorrectly, and it sails through.

Statistics are what cause the most fear for reporters, and so they are usually just edited out, with interesting consequences. Because science isn't about something being true or not true: that's a humanities graduate parody. It's about the error bar, statistical significance, it's about how reliable and valid the experiment was, it's about coming to a verdict, about a hypothesis, on the back of lots of bits of evidence.

But science journalists somehow don't understand the difference between the evidence and the hypothesis. The Times's health editor Nigel Hawkes recently covered an experiment which showed that having younger siblings was associated with a lower incidence of multiple sclerosis. MS is caused by the immune system turning on the body. "This is more likely to happen if a child at a key stage of development is not exposed to infections from younger siblings, says the study." That's what Hawkes said. Wrong! That's the "Hygiene Hypothesis", that's not what the study showed: the study just found that having younger siblings seemed to be somewhat protective against MS: it didn't say, couldn't say, what the mechanism was, like whether it happened through greater exposure to infections. He confused evidence with hypothesis (www.badscience.net/?p=112), and he is a "science communicator".

So how do the media work around their inability to deliver scientific evidence? They use authority figures, the very antithesis of what science is about, as if they were priests, or politicians, or parent figures. "Scientists today said ... scientists revealed ... scientists warned." And if they want balance, you'll get two scientists disagreeing, although with no explanation of why (an approach at its most dangerous with the myth that scientists were "divided" over the safety of MMR). One scientist will "reveal" something, and then another will "challenge" it. A bit like Jedi knights.

The danger of authority figure coverage, in the absence of real evidence, is that it leaves the field wide open for questionable authority figures to waltz in. Gillian McKeith, Andrew Wakefield, Kevin Warwick and the rest can all get a whole lot further, in an environment where their authority is taken as read, because their reasoning and evidence is rarely publicly examined.

But it also reinforces the humanities graduate journalists' parody of science, for which we now have all the ingredients: science is about groundless, incomprehensible, didactic truth statements from scientists, who themselves are socially powerful, arbitrary, unelected authority figures. They are detached from reality: they do work that is either wacky, or dangerous, but either way, everything in science is tenuous, contradictory and, most ridiculously, "hard to understand".

This misrepresentation of science is a direct descendant of the reaction, in the Romantic movement, against the birth of science and empiricism more than 200 years ago; it's exactly the same paranoid fantasy as Mary Shelley's Frankenstein, only not as well written. We say descendant, but of course, the humanities haven't really moved forward at all, except to invent cultural relativism, which exists largely as a pooh-pooh reaction against science. And humanities graduates in the media, who suspect themselves to be intellectuals, desperately need to reinforce the idea that science is nonsense: because they've denied themselves access to the most significant developments in the history of western thought for 200 years, and secretly, deep down, they're angry with themselves over that.

That's what I'd have said three years ago. But now I'm on the inside, I can add a slightly different element to the story. I'm an all right-looking bloke, I get about: maybe I'm not the most popular bloke at science journalist parties, but I'm certainly talkative. For many months I had a good-spirited row with an eminent science journalist, who kept telling me that scientists needed to face up to the fact that they had to get better at communicating to a lay audience. She is a humanities graduate. "Since you describe yourself as a science communicator," I would invariably say, to the sound of derisory laughter: "isn't that your job?" But no, for there is a popular and grand idea about, that scientific ignorance is a useful tool: if even they can understand it, they think to themselves, the reader will. What kind of a communicator does that make you?

There is one university PR department in London that I know fairly well - it's a small middle-class world after all - and I know that until recently, they had never employed a single science graduate. This is not uncommon. Science is done by scientists, who write it up. Then a press release is written by a non-scientist, who runs it by their non-scientist boss, who then sends it to journalists without a science education who try to convey difficult new ideas to an audience of either lay people, or more likely - since they'll be the ones interested in reading the stuff - people who know their way around a t-test a lot better than any of these intermediaries. Finally, it's edited by a whole team of people who don't understand it. You can be sure that at least one person in any given "science communication" chain is just juggling words about on a page, without having the first clue what they mean, pretending they've got a proper job, their pens all lined up neatly on the desk.

Of course a system like that will cock up. The proof is in Bad Science, every week.

· Bad Science will be continuing in the Guardian next week