The BBC, the Guardian and Reuters this week widely reported that British researchers, publishing in the Journal of Neuroscience, have developed a brain scan which can detect autism in adults with 90% accuracy.

Dr Christine Ecker, the lead author, showed her imaging technique was able to detect which people in her group had autism. "If we get a new case, we will also hopefully be 90% accurate," she said.

Pretty simple, then: you turn up, have the test, and you have a 90% chance of finding out whether you have autism.

Well, nothing could be further from the truth.

To determine if a test is accurate, it might appear reasonable to recruit a disease-positive group and a disease-free group, which is what happened in the brain scan study. An example of how this strategy raises false hopes is the story of carcinoembryonic antigen (CEA), which was measured in 36 people with known advanced cancer of the colon or rectum. 35 patients (97%) showed elevated results, slightly more than the 90% in the autism study. At the same time, lower levels were found in people with other diseases and without cancer.

From these results it would seem CEA is a useful diagnostic test. However, in later studies of patients with less advanced cancer, or with symptoms similar to those of colon cancer, the accuracy of CEA plummeted and its use in diagnosis was abandoned.

The authors of the current study report: "The existence of an ASD biomarker such as brain anatomy might be useful to facilitate and guide the behavioural diagnosis. This would, however, require further extensive exploration in the clinical setting."

To obtain a useful result, a diagnostic study needs to include a broad spectrum of the diseased, from mild to severe. A study also needs an independent, blind comparison of the test results (in this case the brain scan) with a reference standard (the current tests for autism), in a consecutive series of patients suspected (but not known) to have the target disorder, along with replication of the findings in other settings.

But this isn't my main concern with the reporting of the results. Even if the findings stand up to scrutiny, adopting brain scans widely in the population would be an expensive waste. In those with a positive test, autism would be diagnosed with an accuracy of only about 5%, potentially leading to more harm than good.

Dr Ecker said she hoped the findings might result in a widely available scan to test for autism.

Wait a minute, what has happened? One minute the world news is reporting a test that has 90% accuracy, and I'm saying it is only 5% accurate.

Gerd Gigerenzer in his classic BMJ paper on simple tools for understanding risks tells us: "A glance at the literature shows a shocking lack of statistical understanding of the outcomes of modern technologies, from standard screening tests for HIV infection to DNA evidence."

How the brain scan results are portrayed is one of the simplest mistakes to make in interpreting diagnostic test accuracy. What has happened is that the sensitivity [1] has been taken to be the positive predictive value [2], which is what you actually want to know: if I have a positive test, do I have the disease? Not: if I have the disease, do I have a positive test? It would help if the results included a measure called the likelihood ratio (LR), which is the likelihood that a given test result would be expected in a patient with the target disorder, compared with the likelihood that the same result would be expected in a patient without that disorder. In this case the LR is 4.5. We've put up an article if you want to know more about how to calculate the LR.
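To see where the 4.5 comes from: the positive likelihood ratio is simply the sensitivity divided by the false positive rate (one minus the specificity). A minimal sketch, using the 90% sensitivity and 80% specificity reported for the Ecker study:

```python
# Positive likelihood ratio (LR+): how much more likely a positive
# result is in someone with the disorder than in someone without it.
# Figures from the Ecker study: sensitivity 90%, specificity 80%.
sensitivity = 0.90
specificity = 0.80

false_positive_rate = 1 - specificity
lr_positive = sensitivity / false_positive_rate

print(round(lr_positive, 1))  # 4.5
```
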

In the general population the prevalence of autism is 1 in 100; a positive test makes the chances of having the disease 4.5 times more likely. This gives a positive predictive value of about 4.5%: roughly 5 in every 100 people with a positive test would have autism.
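The arithmetic above can be sketched in a few lines: convert the 1-in-100 prevalence to pre-test odds, multiply by the likelihood ratio, then convert the post-test odds back to a probability:

```python
# Prevalence -> pre-test odds, multiply by the likelihood ratio,
# then convert the post-test odds back to a probability (the PPV).
prevalence = 0.01        # 1 in 100 in the general population
lr_positive = 4.5        # from the study's sensitivity and specificity

pre_test_odds = prevalence / (1 - prevalence)    # 1/99
post_test_odds = pre_test_odds * lr_positive     # 4.5/99
ppv = post_test_odds / (1 + post_test_odds)

print(round(ppv * 100, 1))  # 4.3 -- the "about 5 in every 100"
```
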

For those still feeling confused and unconvinced, let's think of 10,000 children. Of these, 100 (1%) will have autism; 90 of those 100 would have a positive test, and 10 are missed because they have a negative test. There's the 90% accuracy reported by the media.

But what about the 9,900 who don't have the disease? 7,920 of these will test negative (the specificity [3] in the Ecker paper is 80%). The real worry, though, is the number without the disease who test positive. This will be substantial: 1,980 of the 9,900 without the disease. This is what happens at very low prevalences: the number falsely diagnosed rockets. Alarmingly, of the 2,070 with a positive test, only 90 will have the disease, which is roughly 4.5%.
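The whole 10,000-children example amounts to filling in a two-by-two table. A short sketch, using the figures above (1% prevalence, 90% sensitivity, 80% specificity):

```python
# The 10,000-children example as a two-by-two table, using integer
# arithmetic: 1% prevalence, 90% sensitivity, 80% specificity.
population = 10_000

with_autism = population * 1 // 100                # 100
without_autism = population - with_autism          # 9,900

true_positives = with_autism * 90 // 100           # 90 correctly detected
false_negatives = with_autism - true_positives     # 10 missed
true_negatives = without_autism * 80 // 100        # 7,920 correctly cleared
false_positives = without_autism - true_negatives  # 1,980 false alarms

all_positives = true_positives + false_positives
ppv = true_positives / all_positives

print(all_positives)        # 2070 positive tests in total
print(round(ppv * 100, 1))  # 4.3 -- only about 1 in 23 positives is real
```
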

"Some experts say further research will be needed before the new technique can be widely used."

In a direct email communication from Dr Ecker, she states: "It is currently unknown how these values generalise to the entire population, and across all dimensions of the autistic spectrum, which is why we have clearly stated that we are not yet ready to make our approach available in the NHS just yet."

I should hope so.

Carl Heneghan is director of the Centre for Evidence Based Medicine, University of Oxford

Notes:

1. Sensitivity is the proportion of people with a disease who have a positive test.

2. Positive predictive value is the proportion of people with a positive test who have the disease.

3. Specificity is the proportion of people free of a disease who have a negative test.