It is well known that we cannot trust the data the drug companies publish, and in psychiatric drug trials the manipulation of the data seems particularly pronounced. As just one example, half of the deaths and half of the suicides that occur in randomised trials are not published.

When the FDA in 2006 published its meta-analysis of 100,000 patients who had received depression pills or placebo in randomised trials, the suicide rate on the pills was 1 per 10,000 patients. The FDA had simply asked the companies how many suicides had occurred in their trials, without checking the veracity of the information. However, five years earlier, Thomas Laughren, who chaired the large FDA meta-analysis, had published his own meta-analysis of the drugs, based on data in the FDA's possession, and that time the suicide rate on the pills was 10 per 10,000 patients, or 10 times as many. It is difficult to comprehend discrepancies of this magnitude, but what is abundantly clear, and what many researchers have demonstrated, is that the companies have deliberately concealed many cases of suicide and suicide attempts in their trials and in their reports to the drug regulators. In many cases, this has amounted to fraud.

When, very rarely, independent researchers get the opportunity to analyse the trial data themselves, the results are often markedly different from those the companies have published. This was, for example, the case in the re-analysis of GlaxoSmithKline trial 329 in children and adolescents. The company had reported that paroxetine given to children and adolescents was effective and well tolerated, but none of this was true. Paroxetine was ineffective and harmful, and it is also harmful in adults.

The Lancet 2018 network meta-analysis by Cipriani et al.

Fraud and selective reporting are of course not limited to the most serious outcomes but also affect other trial outcomes. Several of the authors of a 2018 network meta-analysis in the Lancet are well aware that published trial reports of depression pills cannot be trusted, and I therefore do not understand why they are authors on this paper. Erick Turner, for example, has been a reviewer for the FDA, and he showed in 2008 that the effect of depression pills was 32% larger in published trials than in all trials in the FDA's possession. And John Ioannidis published a paper in 2008 called "Effectiveness of antidepressants: an evidence myth constructed from a thousand randomized trials?", whose abstract states that "selective and distorted reporting of results has built and nourished a seemingly evidence-based myth on antidepressant effectiveness."

Yet the authors included 421 trials from the database search, 86 unpublished studies from trial registries and pharmaceutical company websites, and 15 from personal communication or from hand-searching other review articles. By far the most data came from published trial reports, which we know are seriously unreliable for depression trials. The authors' meta-analytic exercise is academic, with no clinical value, and they drown the many biases in the trials in statistics so complicated that it is impossible to know where all this leads. But we do know that statistical manoeuvres cannot make unreliable trials reliable.

The authors included both head-to-head comparisons of drugs and comparisons of drugs with placebo. They found an effect compared to placebo corresponding to a standardised mean difference (SMD) of 0.30, which is very similar to the findings of numerous earlier meta-analyses.
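To make the scale concrete: the SMD is simply the difference in mean improvement between drug and placebo divided by the pooled standard deviation. A minimal sketch, using hypothetical Hamilton-scale numbers that I have made up for illustration (they are not data from the Lancet analysis):

```python
import math

def smd(mean_drug, mean_placebo, sd_drug, sd_placebo, n_drug, n_placebo):
    """Standardised mean difference (Cohen's d): the difference in
    group means divided by the pooled standard deviation."""
    pooled_var = (((n_drug - 1) * sd_drug ** 2
                   + (n_placebo - 1) * sd_placebo ** 2)
                  / (n_drug + n_placebo - 2))
    return (mean_drug - mean_placebo) / math.sqrt(pooled_var)

# Hypothetical example: the drug group improves by 10.0 Hamilton points,
# the placebo group by 7.6 points, both with a standard deviation of 8.
effect = smd(10.0, 7.6, 8.0, 8.0, 100, 100)  # -> 0.30
```

An SMD of 0.30 thus corresponds to a between-group difference of only about 2.4 points on the Hamilton scale when the standard deviation is 8, which helps explain why such an effect can be statistically significant yet clinically irrelevant.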

However, they went much further than this. Despite the doubtful effect, which is far below what is clinically relevant, they ranked the drugs according to their effect and acceptability (drop-out for any reason).

This is a futile exercise, and when I first saw this network meta-analysis, my thought was that the authors had rewarded those companies that had cheated the most with their trials. My suspicion was strengthened when I looked at the results in their abstract. The authors claim, for example, that in head-to-head trials, agomelatine, escitalopram, and vortioxetine were more effective than the other antidepressants, and that the same three drugs were also better tolerated than the other antidepressants. One doesn't need to be a clinical pharmacologist to know that this seems too good to be true. Drugs that are more effective than others (which is often a matter of giving them in higher, non-equipotent doses) will usually also be more poorly tolerated. It is highly unlikely that some depression pills are both more effective and better tolerated than others. I therefore took a closer look at these three drugs.

I cannot know, of course, whether the companies behind these drugs were worse than others or whether the odd findings just reflect fundamental errors in the network meta-analysis. I do not accuse anyone; I just give some facts below.

Agomelatine

Agomelatine is marketed by Servier. It was touted as an outstanding drug in the Lancet in 2011 by two authors, including the leading Australian psychiatrist Ian Hickie, who had numerous financial conflicts of interest. The authors claimed that fewer patients relapsed on agomelatine (24%) than on placebo (50%), but a systematic review by other psychiatrists found no effect on relapse prevention, no effect as evaluated on the Hamilton depression scale, and that none of the negative trials had been published. Three pages of letters to the editor in the Lancet (21 January 2012), an extraordinary number, pointed out the many flaws in Hickie's review.

Escitalopram and vortioxetine

These drugs are marketed by Lundbeck. It is really far-fetched to believe that escitalopram can be better than citalopram, because the active substance is the same. Citalopram is a racemate consisting of an active enantiomer and an inactive mirror-image molecule, and escitalopram contains only the active enantiomer. Yet when studied by Lundbeck in its own head-to-head trials, the active molecule was better than itself.

However, when independent researchers performed a meta-analysis based on indirect comparisons, comparing escitalopram with placebo and citalopram with placebo, there was no difference. Their results are very telling. In a meta-analysis of seven head-to-head trials (2,174 patients), efficacy was significantly better for escitalopram than for citalopram (odds ratio 1.60; 95% confidence interval 1.05 to 2.46). In the adjusted indirect comparison of 10 citalopram and 12 escitalopram placebo-controlled trials (2,984 and 3,777 patients, respectively), escitalopram was no better than citalopram (indirect OR 1.03; 0.82 to 1.30). A similar discrepancy was found for treatment acceptability. Such results cast serious doubt on the reliability of network meta-analyses of depression pills.
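An adjusted indirect comparison of this kind is typically done with the Bucher method: the two drugs' odds ratios against the common placebo comparator are divided, and the uncertainties are combined on the log scale. A minimal sketch, with hypothetical ORs versus placebo as inputs (the paper reports only the resulting indirect OR, not these inputs, so the numbers below are made up):

```python
import math

def bucher_indirect(or_a, ci_a, or_b, ci_b, z=1.96):
    """Adjusted indirect comparison (Bucher method):
    OR(A vs B) = OR(A vs placebo) / OR(B vs placebo),
    with standard errors recovered from the 95% CIs
    and combined on the log scale."""
    se_a = (math.log(ci_a[1]) - math.log(ci_a[0])) / (2 * z)
    se_b = (math.log(ci_b[1]) - math.log(ci_b[0])) / (2 * z)
    log_or = math.log(or_a) - math.log(or_b)
    se = math.sqrt(se_a ** 2 + se_b ** 2)
    return (math.exp(log_or),
            (math.exp(log_or - z * se), math.exp(log_or + z * se)))

# Hypothetical ORs vs placebo for escitalopram (A) and citalopram (B):
or_ab, ci = bucher_indirect(1.55, (1.30, 1.85), 1.50, (1.20, 1.88))
```

Because the indirect comparison combines two uncertainties, its confidence interval is wider than either direct interval, and in this sketch it straddles 1: no demonstrable difference between the two drugs.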

Vortioxetine seems to be a very poor drug. When independent researchers compared vortioxetine with duloxetine and venlafaxine in meta-analyses, these drugs were significantly more effective than vortioxetine at three of the four dose levels tested. It is noteworthy that every author of all the published short-term trials had significant commercial ties to Lundbeck. This is a sure way of ensuring that what gets published supports the company's marketing ambitions.

Such ties were also apparent in Lundbeck's meta-analysis comparing escitalopram with citalopram. All three authors worked for Forest, Lundbeck's US partner, one as a consultant and the other two as employees. What are we supposed to make of a paper published in a bought supplement to a journal edited by a person who is also bought by the company? Nothing.

Network meta-analyses of published trial data are not reliable

In a study of network meta-analyses (NMAs), the authors used data from 74 FDA-registered placebo-controlled trials of 12 depression pills and their 51 matching publications. For each dataset, an NMA was used to estimate the effect sizes for all 66 possible pair-wise comparisons of these drugs. To assess how reporting bias affecting only one drug may affect the ranking of all the drugs, they also performed 12 hypothetical NMAs: in each of these, they used published data for one drug and FDA data for the 11 other drugs. They found that the pair-wise effect sizes derived from the NMA of published data and those derived from the NMA of FDA data differed in absolute value by at least 100% in 30 of the 66 pair-wise comparisons (45%).
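The core of that exercise can be illustrated without fitting a full NMA: compare every pair-wise contrast between drugs under published effect sizes and under regulatory effect sizes, and count how often the contrast changes by at least 100% in absolute value. A simplified sketch with hypothetical numbers (the drug names and effect sizes are made up, and the real study used NMA estimates rather than raw contrasts):

```python
from itertools import combinations

# Hypothetical effect sizes (SMD vs placebo) per drug, from the two sources.
published = {"A": 0.45, "B": 0.38, "C": 0.30, "D": 0.52}
fda_data  = {"A": 0.30, "B": 0.35, "C": 0.28, "D": 0.33}

def discordant_pairs(pub, reg, threshold=1.0):
    """Count pair-wise drug contrasts whose size differs by at least
    100% (threshold=1.0) between published and regulatory data."""
    count = 0
    for a, b in combinations(pub, 2):
        d_pub = pub[a] - pub[b]  # contrast from published data
        d_reg = reg[a] - reg[b]  # contrast from regulatory data
        if abs(d_pub - d_reg) >= threshold * max(abs(d_reg), 1e-9):
            count += 1
    return count
```

With these made-up numbers, 5 of the 6 contrasts are discordant; in the real study it was 30 of 66. The point is that even modest per-drug reporting bias can flip or inflate many of the between-drug comparisons that the rankings rest on.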

Extreme media hype

The Lancet network meta-analysis contains nothing new, and what it claims to be new is so unreliable that we should ignore it. These facts did not prevent the first author of the network meta-analysis, Andrea Cipriani, from hyping the paper to the extreme, e.g. in BBC News:

“Lead researcher Dr Andrea Cipriani, from the University of Oxford, told the BBC: ‘This study is the final answer to a long-standing controversy about whether anti-depressants work for depression’ … Scientists say they have settled one of medicine’s biggest debates after a huge study found that anti-depressants work. The study … showed big differences in how effective each drug is.”

“The authors of the report, published in the Lancet, said it showed many more people could benefit from the drugs … The Royal College of Psychiatrists said the study ‘finally puts to bed the controversy on anti-depressants’.”

“Researchers added … At least one million more people in the UK would benefit from treatments, including anti-depressants.”

What is the reality?

It is still the reality that, despite serious flaws in depression trials (the most important of which are lack of blinding because of the conspicuous adverse effects of the pills, cold turkey in the placebo group because people were already on depression pills before they were randomised, industry funding, selective reporting, and data massage), the average effect is considerably below what is clinically relevant. In the Lancet network meta-analysis, the mean baseline severity score on the Hamilton Depression Rating Scale was 25.7, which is considered very severe depression according to the American Psychiatric Association's Handbook of Psychiatric Measures. The oft-heard claim that these drugs work for very severe depression is therefore wrong. Furthermore, when meta-analyses sometimes find that the effect seems to be larger in severe than in moderate depression, this is likely just a mathematical artefact: the higher the depression score at baseline, the more the unblinding bias will distort the result.

If the balance between benefits and harms of depression pills were positive, fewer people would drop out while on drug than while on placebo. The network meta-analysis did not report the average drop-out ratio for drug versus placebo anywhere, but it seemed to be very close to 1, which means that the drugs are no better than placebo. But it is worse than this. We have access to clinical study reports from the European drug regulators for depression pills, which are more reliable than what the companies publish and which also allowed us to include patients the drug companies had excluded from analysis. Despite the many biases in the trials, which included introducing withdrawal effects in the placebo group, we found significantly more drop-outs on drug than on placebo (Tarang Sharma, personal communication, submitted for publication). This means that placebo is the better drug.

Use psychotherapy for depression, not pills, and help people come off the pills

My conclusion is that patients should not be treated with depression pills. I no longer call them antidepressants, as they are ineffective and increase the risk of suicide and violence, which in the worst case can lead to homicide, with no upper age limit. Further, they have numerous other serious harms, e.g. sexual dysfunction in about half of the patients who had a normal sex life before they started on the pills.

The patients should be treated with psychotherapy, which halves the risk of a new suicide attempt in those who have been admitted after a suicide attempt. Those who are currently on depression pills should be offered help to taper off them slowly and safely, and we should all focus on offering withdrawal courses (see www.iipdw.com and www.deadlymedicines.dk). One of my PhD students and I lecture at such courses and we have an approved title and have submitted a protocol for a Cochrane review of studies of withdrawal of depression pills.