After many lawsuits and a 2012 U.S. Department of Justice settlement, an independent review found last month that the antidepressant drug Paxil (paroxetine) is not safe for teenagers. The finding contradicts the conclusions of the initial 2001 drug trial, which the manufacturer GlaxoSmithKline had funded and whose results it then used to market Paxil as safe for adolescents.

The original trial, known as Study 329, is but one high-profile example of pharmaceutical industry influence known to pervade scientific research, including clinical trials the U.S. Food and Drug Administration requires pharma companies to fund in order to assess their products. For that reason, people who read scientific papers as part of their jobs have come to rely on meta-analyses, supposedly thorough reviews summarizing the evidence from multiple trials, rather than trust individual studies. But a new analysis casts doubt on that practice as well, finding that the vast majority of meta-analyses of antidepressants have some industry link, with a corresponding suppression of negative results.

The latest study, published in the Journal of Clinical Epidemiology, evaluated 185 meta-analyses and found that one third of them were written by pharmaceutical industry employees. “We knew that the industry would fund studies to promote its products, but it’s very different to fund meta-analyses,” which “have traditionally been a bulwark of evidence-based medicine,” says John Ioannidis, an epidemiologist at Stanford University School of Medicine and co-author of the study. “It’s really amazing that there is such a massive influx of influence in this field.”

Almost 80 percent of meta-analyses in the review had some sort of industry tie, either through sponsorship, which the authors defined as direct industry funding of the study, or conflicts of interest, defined as any situation in which one or more authors were either industry employees or independent researchers receiving any type of industry support (including speaking fees and research grants). Especially troubling, the study showed about 7 percent of researchers had undisclosed conflicts of interest. “There’s a certain pecking order of papers,” says Erick Turner, a professor of psychiatry at Oregon Health & Science University who was not associated with the research. “Meta-analyses are at the top of the evidence pyramid.” Turner was “very concerned” by the results but did not find them surprising. “Industry influence is just massive. What’s really new is the level of attention people are now paying to it.”

The researchers considered all meta-analyses of randomized controlled trials, published between 2007 and March 2014, for all approved antidepressants, including selective serotonin reuptake inhibitors, serotonin and norepinephrine reuptake inhibitors, atypical antidepressants, monoamine oxidase inhibitors and others.

If the authors did not report any conflict of interest, as is typically required, the researchers examined random samples of articles published by the corresponding author in the same year for relevant declarations of conflicts. Two investigators who were unaware of the authors’ names and potential conflicts assessed whether the meta-analysis included any negative or warning statements about the drug in the abstract or conclusion of the article.

Although a third of the papers were written by industry employees, the majority of authors, 60 percent, were independent, university-affiliated researchers with conflicts of interest. Of the 53 meta-analyses whose authors were not industry employees and reported no conflicts of interest, 25 percent had unreported conflicts that the researchers identified in their search and included in their evaluation. “The meta-analyses that have industry links are very different from those that don’t have industry links,” Ioannidis says. Those with industry ties had much more favorable coverage and fewer caveats. “Conversely, when no employees were involved, almost 50 percent had caveats,” Ioannidis says.

Meta-analyses by industry employees were 22 times less likely to include negative statements about a drug than those run by unaffiliated researchers. The degree of bias in the results is similar to that of a 2006 study examining industry impact on clinical trials of psychiatric medications, which found that industry-sponsored trials reported favorable outcomes 78 percent of the time, compared with 48 percent in independently funded trials.

Ioannidis believes that pharmaceutical companies should be barred from funding meta-analyses to safeguard objectivity. He is fine with industry funding for other types of research, “but not when it comes to the final appraisal of whether patients should take this drug or not,” he says.

All of the major pharmaceutical companies were represented in the review, including GlaxoSmithKline; Eli Lilly and Co., maker of the popular antidepressant Prozac (fluoxetine); and Pfizer, which makes Zoloft (sertraline chloride). “As to meta-analyses,” Pfizer is an “active participant” in the conversation “about how to define scientifically robust frameworks for reanalysis of data,” wrote Dean Mastrojohn, Director of Global Media Relations at Pfizer, when reached for comment.

By definition, a meta-analysis should be “as comprehensive as possible a review,” says Andrea Cipriani, a psychiatry professor at the University of Oxford who was not involved with the study. “Clinicians are bombarded by information” and turn to meta-analyses “because they don’t have the time to do a full critical appraisal for themselves. The word means ‘shortcut to a lot of evidence.’”

Cipriani agrees that it is important to point out the pharmaceutical industry’s manipulation of meta-analyses. “We need to highlight that these meta-analyses are more a marketing tool than a science,” he says. But Cipriani, who had seven articles flagged in the review for reported conflicts of interest, thinks that it is an oversimplification to condemn all studies with industry ties. Rather, Cipriani advocates transparency and says that the main problem is the lack of disclosure. To his credit, even with conflicts of interest present, Cipriani included caveats in the conclusion or abstract of two of his papers. He was one of the few researchers with stated conflicts to do so, however.

According to Cipriani, academic journals, the gatekeepers of scientific evidence, are the ones who should be responsible, both for looking into conflicts of interest and weeding out those studies whose conclusions do not match up with the supplied data. That was part of the problem with Study 329, led by Martin Keller, then a professor of psychiatry and human behavior at Brown University, which reported all data accurately but misleadingly downplayed the teen suicide risk and exaggerated the benefits in the conclusions.

But journals often have their own conflicts of interest, something Cipriani acknowledges. Ioannidis and his colleagues originally tried to publish their latest study in psychiatry journals that they thought would be more pertinent, but the reception was cold. “Some people felt pretty angry about it, and many of their editors have strong ties to the industry,” Ioannidis says.

Publication bias, in which journals favor new, positive and exciting results over replication of past studies—an essential part of the scientific process—is also a widespread problem within scientific publishing. This trend exists regardless of funding source or treatments assessed. In a study also published last month, Turner found publication bias and inflated results in several National Institutes of Health–funded studies on psychotherapy.

Antidepressants are one of the largest pharmaceutical markets, with sales of $9.4 billion in the U.S. in 2013. Cipriani and Ioannidis believe the problem extends to other drugs with high market value, such as heart and cancer medications. “The whole field needs some soul searching,” Ioannidis says.