A group of researchers recently found serious bias in the reporting of harm from adverse events in clinical trials of antidepressant medications. They report that although overall dropout rates for the drug and placebo groups were comparable and accurately reported, participants who were randomly assigned to receive the drug were 2.4 times more likely to leave the study because of adverse events.

More strikingly, serious adverse events were very poorly reported in journal articles. Even when they were reported, discrepancies were often found between the data submitted to the FDA and the data reported in the published literature, a form of “spinning data”: using language to present results in a way that favors the drug while minimizing its negative effects.

“In 79% of all journal articles, SAE data was incomplete or missing. Almost two-thirds failed to mention SAEs entirely, and an additional 16% of articles provided incomplete information regarding SAEs. For instance, some articles provided only the number of SAEs, without any description. Given the idiosyncratic nature of SAEs, such numbers have little meaning,” the researchers write.

In clinical trials, the FDA defines an adverse event as any “untoward medical occurrence” that can be associated with the use of a drug, whether or not the trial’s investigator thinks the medical event is drug related. Examples of adverse events include headaches, drowsiness, nausea, and vomiting. A serious adverse event (SAE) is one “that results in death, hospitalization, disability or permanent damage, a birth defect, or any other life-threatening situation.”

All such events that occur while the drug is being actively taken, or up to 30 days after it is stopped, must be reported to the FDA. For patients and clinicians to make informed decisions about whether to take or prescribe a drug, the full picture of its possible risks and benefits must be presented.

In this study, the researchers looked at all clinical trials for selective serotonin reuptake inhibitors (SSRIs), serotonin-norepinephrine reuptake inhibitors (SNRIs), and three other antidepressants (mirtazapine, bupropion, and nefazodone) approved between 1987 and 2008.

“We included a total of 133 trials, consisting of data from 31,296 participants, of whom 18,904 were treated with antidepressants and 12,392 with placebo,” the researchers said.

They compared the reports of these studies published in the academic literature with the corresponding reviews of the same studies submitted to the FDA, looking at two things: the reporting of SAEs in general, and discrepancies between the two data sources. With respect to reporting, information on SAEs was missing for 43% (57 of 133) of trials. This was also reflected in the published articles, as the quotation above shows.

In terms of discrepancies, only 21 of the 36 published articles that mentioned SAEs (58%) could be directly compared with FDA data; for the rest, there was “insufficient information in the FDA review.” Of those 21, only 6 articles had no discrepancies with the corresponding FDA review. In 7 of the 21, the discrepancies made the placebo group appear to have more numerous or more severe SAEs, wrongly suggesting that the drug had a more favorable safety profile.
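The headline proportions follow directly from the raw counts the study reports; a quick sketch reproducing them (the variable names are ours, not the authors’):

```python
# Counts reported in the study (labels are our own shorthand)
trials_total = 133
trials_missing_sae_info = 57        # SAE information missing
articles_mentioning_saes = 36       # published articles that mentioned SAEs
articles_comparable = 21            # directly comparable with the FDA review
articles_no_discrepancy = 6         # comparable articles with no discrepancies
articles_favoring_drug = 7          # discrepancies made placebo SAEs look worse

# Reproduce the reported percentages
print(f"{trials_missing_sae_info / trials_total:.0%}")        # 43%
print(f"{articles_comparable / articles_mentioning_saes:.0%}") # 58%
```

Put another way, of the 21 comparable articles, 15 (21 minus 6) disagreed with the FDA review in some respect, and 7 of those disagreements tilted the safety picture in the drug’s favor.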

The way this was done in published reports ranged from omitting or underreporting SAEs to failing to describe what the SAE actually was (e.g., using the words “emotional lability” to mask a suicide attempt). One article even overreported the number of SAEs for the placebo group, in direct contrast to the number in the FDA review. These discrepancies are often explained as the result of site investigators excluding SAEs they deem unrelated to the medication; however, such judgments are highly subjective and introduce further bias into the reporting of SAEs.

The researchers rightly point out that this is an alarming trend given the association of antidepressant medication with suicidality (particularly in children and adolescents) and with violent crime. Moreover, these results do not even begin to address the long-term side effects of antidepressants, which are troubling given that these drugs are increasingly prescribed for long-term use.

A topic the article does not address is how adverse events are actually measured. For example, sexual dysfunction is a commonly reported side effect of antidepressant medication, yet researchers have documented that trials rely on participants spontaneously reporting such effects rather than systematically asking about them. This leads to an overestimation of drug safety.

The researchers note the limitations of their analysis: they had to work with incomplete FDA data, and they did not examine common (non-serious) adverse events. They conclude, “We found that reporting of the actual discontinuation rates was unbiased, but many journal articles conclude, in their abstract, that the antidepressant was ‘safe,’ ‘well-tolerated,’ or both, even though antidepressant-treated participants were, on average, 2.4 times more likely to discontinue due to adverse events than placebo-treated patients.”

****

de Vries, Y. A., Roest, A. M., Beijers, L., Turner, E. H., & de Jonge, P. (2016). Bias in the reporting of harms in clinical trials of second-generation antidepressants for depression and anxiety: A meta-analysis. European Neuropsychopharmacology. doi:10.1016/j.euroneuro.2016.09.370