A new review of the abstracts published in leading psychology and psychiatry journals has identified substantial spin in the reporting of non-significant clinical trial results. This cross-sectional review comes in the wake of similar studies and serves as a warning against the uncritical consumption of new research by scientists, clinicians, and the general public.

The authors of this review, led by Dr. Samuel Jellison of Oklahoma State University, specifically looked at studies from January 2012 to December 2017 and found that the majority of abstracts contained some form of spin. This means that the authors of these abstracts alluded to treatment benefits that were unsupported by the evidence. This trend is worrisome as studies show that many clinicians depend on research abstracts to guide their decisions in practice.

“Adding spin to the abstract of an article may mislead physicians who are attempting to draw conclusions about treatment for patients,” the authors write. “Most physicians read only the article abstract the majority of the time, while up to 25% of editorial decisions are based on the abstract alone.”

Spin, defined in the review as the “use of specific reporting strategies, from whatever motive, to highlight that the experimental treatment is beneficial, despite a statistically nonsignificant difference for the primary outcome, or to distract the reader from statistically nonsignificant results,” is not new, nor is it restricted to psychology and psychiatry. Other studies have pointed to its presence in health research, in research on the use of antidepressants for anxiety, and even in the famous RAISE study.

This comes at an especially critical time for psychology, as the field has lost the trust of many in the wake of the recent replication crisis, which raised concerns about some of its most popular experiments. At the same time, the consumption of science journalism is at an all-time high, driven partly by author-generated press releases and media reports that thrive on the newest in science, often with embarrassing results.

Ethical standards in the field mandate that researchers report the results of their studies clearly and completely and follow their pre-registered protocols, which specify how primary and secondary endpoints are to be reported. Despite this, misrepresentation of data in studies, abstracts, and consequently media reports has been rampant – neither neuroscience nor psychotherapy is immune to researchers, whatever their motivation, using language that gives the appearance of benefit where none exists.

While randomized controlled trials are considered the gold standard in research, the reporting of their results, as the current study found, is not free from bias either. Publication bias, outcome reporting bias, p-hacking, and misuse of statistical techniques are among the many ways in which trial results are misrepresented.

In light of these issues, the current study answers some pressing questions about misreporting and spin in the fields of psychology and psychiatry. Authors can often choose how they interpret and report results in their abstracts, and their misreporting can have dire consequences for clinicians who base their treatment and care on these findings.

The current study, along with many others, raises concerns about funding pressures on researchers and the uncritical consumption of medical journalism. Because future funding depends on significant results, and medical manuscripts have to catch readers’ attention, positive results are often reported despite weak or nonexistent supporting evidence.

Jellison and the other authors of the review used the PubMed database to find randomized controlled trials in top journals such as JAMA Psychiatry, the American Journal of Psychiatry, the Journal of Child Psychology and Psychiatry, Psychological Medicine, and the British Journal of Psychiatry. The inclusion criteria were randomized human trials in which an intervention was tested for statistical significance across two or more groups and the primary endpoints were non-significant. The study title, the abstract’s results and conclusion sections, and the endpoints selected for reporting were all examined for evidence of spin. The authors explain:

“We considered there to be evidence of spin if trial authors focused on statistically significant results, interpreted statistically nonsignificant results as equivalent or noninferior, used favorable rhetoric in the interpretation of nonsignificant results (e.g., “trend toward significance”), or claimed the benefit of an intervention despite statistically nonsignificant results.”

Significance of results was judged against the alpha value and confidence intervals established by each study. Of the 116 trials included in the review, the authors found evidence of spin in 65 (56%). Spin appeared in titles (2%), abstract results (21%), and conclusions (49%), making conclusions the section most affected by misrepresentation. Spin was also most prevalent in trials that compared treatment-as-usual and placebo groups to a comparator group.
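The headline figures here are simple proportions. As a quick sanity check on the arithmetic (a minimal sketch, using only the counts stated in the article), the overall spin rate can be recomputed directly:

```python
# Counts as reported in the review (taken from the article text).
trials_total = 116
trials_with_spin = 65

def pct(count: int, total: int) -> int:
    """Return a proportion as a whole-number percentage."""
    return round(100 * count / total)

spin_rate = pct(trials_with_spin, trials_total)
print(f"{trials_with_spin}/{trials_total} trials showed spin: {spin_rate}%")  # 56%
```

This confirms that 65 of 116 trials corresponds to the 56% figure the review reports.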

Researchers misreported their results, intentionally or unintentionally, in many ways. For example, some chose to focus on secondary endpoints with significant results instead of reporting primary endpoints that showed non-significance. Others resorted to partial reporting, emphasizing one significant primary endpoint while ignoring another that failed to achieve significance. A few claimed equivalence for non-significant results, while others used misleading language that alluded to significance where there was none (“trends towards significance”).

The authors report that they found no association between industry funding and spin; studies were considered industry-funded if they described their funding source as “industry” or “multiple with industry.” In this review, spin was most commonly associated with public funding. Even so, documenting the effect of industry funding on research outcomes remains imperative, given the industry’s past ethical transgressions and the influence of conflicts of interest on spin (e.g., hiring people on a company’s payroll to serve as experts in media reports).

Researchers are ethically bound to report their findings accurately and completely. The authors of this review suggest inviting external reviewers to look for spin before studies are published.

While it is true that scientists face immense pressure and that positive results are more likely to get published, they still have an ethical responsibility to the people who are affected by these findings and the clinicians who rely on them. At the same time, these findings also offer a cautionary tale to other scientists, journalists, clinicians, and patients to be aware of personal bias and conflicts of interest in the research they read and consume.

****

Jellison, S. S., Roberts, W., Bowers, A., Combs, T., Beaman, J., Wayant, C., & Vassar, M. (2019). Evaluation of spin in abstracts of papers in psychiatry and psychology journals. BMJ Evidence-Based Medicine. Published Online First: 5 August 2019. doi: 10.1136/bmjebm-2019-111176