A consultant anesthetist at Torbay Hospital on England’s south coast has developed statistical methods to help spot signs of fraud in medical research.

Kevin McConway, The Open University

John Carlisle is a consultant anesthetist at Torbay Hospital on England’s south coast. Unless you’ve been one of his patients, you’ve probably never heard of him. But he’s a researcher too, and he’s developed statistical methods to help spot signs of fraud in medical research.

There’s a public image of medical researchers as being trustworthy people, working hard to make us all healthier. Scientists and doctors come right at the top of an international survey on the trustworthiness of professions. I’d say that image is deserved. But there are cases of very questionable research practice, or even downright fraud.

People usually remember the discredited paper published in The Lancet in 1998, linking the MMR vaccine to autism. Because of the major problems with that research, its lead author, Andrew Wakefield, was struck off the medical register.

The Wakefield case is unusual, though. Most cases of research fraud are harder to spot and aren’t so widely publicized.

Raft of retractions

Carlisle and other anesthetists became suspicious, more than a decade ago, about studies by a Japanese researcher, Yoshitaka Fujii. He published results from a series of randomized controlled trials (RCTs) investigating medicines to prevent nausea and vomiting in patients after surgery. Carlisle and others thought the data was too tidy to be true. He showed that it was extremely unlikely that some of the patterns in Fujii’s data had occurred by chance. Following this and further investigation, Fujii lost his university job.

No fewer than 183 of his papers were retracted, that is, effectively “unpublished” by the journals concerned. That’s far more retractions than any other individual has had.

Since then, Carlisle has developed his methods further. In 2017, he produced an analysis of over 5,000 clinical trials. Most were published in journals in his own field, anesthetics, but he also included two top-ranking American medical journals, the Journal of the American Medical Association (JAMA) and the New England Journal of Medicine (NEJM). He found suspect data in about 90 papers.

In some cases, there were innocent explanations. But there were several retractions. For instance, a major Spanish trial investigating whether a Mediterranean diet could help prevent heart disease and strokes had to be retracted. The random allocation of people to different diets had, in some cases, been done wrongly. A revised trial report, omitting the wrongly randomized participants, appeared later.

Adopted by medical journals

Carlisle’s methods are now routinely used by at least two medical journals to screen reports of RCTs submitted for publication: Anaesthesia, in Carlisle’s own specialty, and the prestigious NEJM. Others may well follow.

Carlisle’s method does not definitely say whether a trial report is fraudulent. It’s a screening method that suggests that some trial reports need to be examined more thoroughly to check whether anything untoward is going on. There could, sometimes, be innocent explanations for the unusual patterns of data that the method detects.

Andrew Klein, the Cambridge-based anesthetist who is editor-in-chief of the journal Anaesthesia, told me via email that the journal receives about 500 submissions of reports on RCTs each year. All are checked using Carlisle’s method, and more than one in every 40 is flagged as potentially fraudulent. Not all of these will turn out to be fraudulent, but the journal asks to see the original patient data, checks it, and takes further action if necessary.

Carlisle’s method builds on particular features of how randomized clinical trials are run. A simple RCT might compare how good two different drugs, A and B, are at curing a certain disease. Patients with the disease are divided into two groups. One group gets drug A, the other drug B. Then they are all followed up to see who is cured.

The key feature is that the division into groups is made at random. This is to ensure that the two groups of patients are similar, on average, in all respects. Then, if patients on drug A do better, one can be confident that this is because they took drug A rather than B, and not because of some other difference.
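The balancing effect of random allocation can be seen in a small simulation. This is an illustrative sketch only, not part of any real trial's procedure; the patient numbers and the baseline measure (age) are invented for the example.

```python
import random
import statistics

def randomize(values, seed=0):
    """Randomly split one list of patients into two equal-sized trial arms."""
    rng = random.Random(seed)
    shuffled = list(values)
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return shuffled[:half], shuffled[half:]

# Hypothetical baseline measurement (say, age in years) for 200 patients:
rng = random.Random(42)
ages = [rng.gauss(55, 12) for _ in range(200)]
arm_a, arm_b = randomize(ages, seed=1)

# Random allocation leaves the two arms similar, on average, at baseline:
print(round(statistics.mean(arm_a), 1), round(statistics.mean(arm_b), 1))
```

The shuffle makes no reference to any patient characteristic, so every characteristic, measured or not, tends to balance out between the arms.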

In publishing the trial results, the researchers must report “baseline” comparisons between the two groups, made before the treatments start. Carlisle’s method uses p-values, which measure how likely it is that a discrepancy between the groups at least as large as the one observed would arise by chance alone. He then combines all the baseline p-values in a trial into a single measure.
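Carlisle’s published procedure is more elaborate than this, but the core idea of combining many baseline p-values into one summary, and then worrying about both extremes, can be sketched with a generic technique, Stouffer’s Z method. The example p-values below are invented for illustration; this is not Carlisle’s actual code.

```python
import math
from statistics import NormalDist  # standard library, Python 3.8+

def stouffer_combine(p_values):
    """Combine independent p-values into one via Stouffer's Z method.

    A combined value near 0 suggests the groups are too *different* to be
    chance variation; a value near 1 suggests they are too *similar*.
    """
    nd = NormalDist()
    z = sum(nd.inv_cdf(1.0 - p) for p in p_values) / math.sqrt(len(p_values))
    return 1.0 - nd.cdf(z)

# Hypothetical baseline p-values from one trial (age, weight, and so on):
plausible     = [0.40, 0.55, 0.62, 0.31, 0.48]  # ordinary chance variation
too_similar   = [0.97, 0.99, 0.98, 0.96, 0.99]  # groups suspiciously alike
too_different = [0.01, 0.02, 0.01, 0.03, 0.02]  # groups suspiciously far apart

print(round(stouffer_combine(plausible), 3))    # a middling value
print(round(stouffer_combine(too_similar), 6))  # close to 1
print(round(stouffer_combine(too_different), 6))  # close to 0
```

Unlike an ordinary significance test, both tails matter here: a combined value near 0 flags groups that are implausibly different, while one near 1 flags groups that are implausibly alike.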

His method flags trials where the two groups appear either too similar to be true or too different to be true. Either pattern indicates that, possibly, the data has been invented or interfered with.

Carlisle’s method does make some statistical assumptions that aren’t always appropriate. But it’s a fairly simple approach that suggests some trials deserve more scrutiny. It’s a valuable part of the efforts to stop fraud in medical research. The question of why a small number of researchers should commit these frauds is complicated. I don’t believe all fraud will ever be eliminated from clinical research, but that’s no reason not to be vigilant.

Kevin McConway, Emeritus Professor of Applied Statistics, The Open University

This article is republished from The Conversation under a Creative Commons license. Read the original article.