On Channel 4 News, scientists have found a new treatment for Duchenne muscular dystrophy. "A study in the Lancet today shows a drug injected weekly for three months appears to have reduced the symptoms," they say. "While it's not a cure, it does appear to reduce the symptoms."

Unfortunately, the study shows no such thing. The gene for making a muscle protein called dystrophin is damaged in patients with DMD. The Lancet paper shows that a new treatment led to some restoration of dystrophin production in some children, in a small unblinded study.

That's not the same as symptoms improving. But Channel 4 reiterates its case, with the mother of two participants in the study. "I think for Jack … it maintained his mobility … with Tom, there's definitely significant changes … more energy, he's less fatigued."

Where did these positive anecdotes come from? Disappointingly, they come from the Great Ormond Street Hospital press release (which was tracked down online by evidence-based policy wonk Evan Harris). It summarises the dystrophin results accurately, but then, once more, presents an anecdotal case study that goes much further: "Our whole family noticed a marked difference in their quality of life and mobility over that period. We feel it helped prolong Jack's mobility and Tom has been considerably less fatigued."

There are two issues here. Firstly, anecdotes are a great communication tool, but only when they accurately illustrate the data. The anecdotes here plainly go beyond that. Great Ormond Street deny this is problematic (though they have changed their press release online). I strongly disagree (and this is, of course, not the first time an academic press release has been suboptimal).

But this story is also a reminder that we should always be cautious with "surrogate" outcomes. The biological change measured was important, and good grounds for optimism, because it shows the treatment is doing what it should in the body. But things that work in theory do not always work in practice, and while a measurable biological indicator is a hint something is working, such outcomes can often be misleading.

Examples are easy to find, even in some of the biggest diseases in medicine. The Allhat (Antihypertensive and Lipid-Lowering Treatment to Prevent Heart Attack Trial) was vast, comparing various blood pressure drugs. One part compared 9,000 patients on doxazosin against 15,000 on chlorthalidone. Both were known to lower blood pressure, and people assumed they would also lower the risk of real-world outcomes, such as stroke and heart attack.

But patients on doxazosin turned out to have a higher risk of stroke and cardiovascular problems than patients on chlorthalidone – even though both drugs lowered blood pressure – to such an extent that the doxazosin arm of the trial was stopped early. Blood pressure, in this case, was not a reliable surrogate for assessing the drugs' benefits on real-world outcomes.

This is not an isolated example. In the treatment of diabetes, HbA1c is often monitored, as it is an indicator of average blood glucose levels over the preceding two to three months. Many drugs, such as rosiglitazone, have been licensed on the grounds that they reduce your HbA1c level. But this, again, is just a surrogate outcome: what we really care about in diabetes are real-world outcomes such as heart attacks and death. And when these were finally measured, it turned out that rosiglitazone, while lowering HbA1c levels effectively, massively increased your risk of heart attack. (It has now been suspended.)

So improvements on surrogate biological outcomes that can be measured in the body are a strong hint that something works – and I hope this new DMD treatment does turn out to be effective – but even with the most well-established surrogate measures, and drugs, these endpoints can turn out to be misleading. People writing press releases, and shepherding misleading patient anecdotes into our living rooms, might want to bear that in mind.