Abstract

Background: We explore whether the number of null results in large National Heart, Lung, and Blood Institute (NHLBI) funded trials has increased over time.

Methods: We identified all large NHLBI-supported RCTs between 1970 and 2012 evaluating drugs or dietary supplements for the treatment or prevention of cardiovascular disease. Trials were included if direct costs were >$500,000/year, participants were adult humans, and the primary outcome was cardiovascular risk, disease, or death. The 55 trials meeting these criteria were coded for whether they were published prior to or after the year 2000, whether they were registered in ClinicalTrials.gov prior to publication, whether they used an active or placebo comparator, and whether or not the trial had industry co-sponsorship. We tabulated whether the study reported a positive, negative, or null result on the primary outcome variable and for total mortality.

Results: 17 of 30 studies (57%) published prior to 2000 showed a significant benefit of intervention on the primary outcome, in comparison to only 2 of the 25 trials (8%) published after 2000 (χ2 = 12.2, df = 1, p = 0.0005). There was no change in the proportion of trials that compared treatment to placebo versus an active comparator. Industry co-sponsorship was unrelated to the probability of reporting a significant benefit. Pre-registration in ClinicalTrials.gov was strongly associated with the trend toward null findings.

Conclusions: The number of NHLBI trials reporting positive results declined after the year 2000. Prospective declaration of outcomes in RCTs, and the adoption of transparent reporting standards, as required by ClinicalTrials.gov, may have contributed to the trend toward null findings.

Citation: Kaplan RM, Irvin VL (2015) Likelihood of Null Effects of Large NHLBI Clinical Trials Has Increased over Time. PLoS ONE 10(8): e0132382. https://doi.org/10.1371/journal.pone.0132382
Editor: Silvio Garattini, Mario Negri Institute for Pharmacology Research, ITALY
Received: March 25, 2015; Accepted: May 21, 2015; Published: August 5, 2015
Copyright: This is an open access article, free of all copyright, and may be freely reproduced, distributed, transmitted, modified, built upon, or otherwise used by anyone for any lawful purpose. The work is made available under the Creative Commons CC0 public domain dedication.
Data Availability: The data are available in the Supplemental Materials that are included with the manuscript.
Funding: The work was completed while both authors were employees of the National Institutes of Health. The work was supported by the NIH intramural program.
Competing interests: The authors have declared that no competing interests exist.

Introduction Large randomized clinical trials (RCTs) provide the best evidence to justify new treatments or to identify treatments that do not improve patient outcomes. Gordon and colleagues reported that most large NHLBI-funded trials produce null results [1], but their analysis considered only papers published after 2000. Considering all large trials over the last 40 years, we explore whether there has been a trend toward null findings in recent years and consider potential explanations for trends in observing null outcomes.

Method Sample of Studies We identified all large RCTs involving drugs or supplements funded between 1970 and 2012. To avoid non-publication bias, we focused on large trials, where non-reporting of outcomes is rare [1]. The search process is summarized in a PRISMA diagram (S1 Fig). Two independent searches were conducted to improve the probability of accurately capturing all related trials: one by the study authors and a second by NHLBI. We searched three different NIH grant databases (QVR, NIH RePORTER, and CRISP) for RCTs that were primarily funded or administered by NHLBI. QVR is an internal NIH database, but readers can replicate our search using NIH RePORTER and CRISP, which are publicly available resources listing all grants and associated publications. Inclusion criteria were: RCTs funded from 1970–2012; grants or contracts; direct costs large enough to require special authorization (>$500,000/year); the word “trial” appearing in the study objectives or abstract; and a primary outcome of cardiovascular risk factor, event, or death. Exclusion criteria were: project still active; no human subjects protocol required; pediatric studies; animal studies; non-RCTs (i.e., observational, cohort, case-control, genetic or proteomic, measurement, or basic clinical research); or interventions that did not involve a drug or supplement (i.e., behavior change, devices, surgeries). An expanded methods section is available in the Supplemental Materials.
We coded the following variables: start year (earliest funding noted); publication year of the main outcome study; whether funding was through a contract or cooperative agreement from NHLBI; type of comparator (placebo, active comparator, usual care); whether the primary outcome was specified; whether a CONSORT diagram was included in the publication; whether funding was exclusively from NIH versus joint industry/NIH funding (including industry-contributed medications); and whether the publication listed any other significant results that were neither the primary outcome nor side effects of the drug. In addition, we considered whether studies were registered in ClinicalTrials.gov prior to publication. Each trial was categorized as showing significant benefit, null, or significant harm for the primary outcome and for total mortality (see Tables 1 and 2). The analysis was standardized by re-computing the relative risk (RR) with 95% confidence intervals (CI) for all trials. A null result was defined as a 95% confidence interval for the RR that included 1.0, corresponding to a two-tailed test with alpha set at 0.05.
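As an illustration of this classification rule (a sketch only: the event counts below are hypothetical, and the Katz log-scale method for the CI is our assumption rather than the authors' documented computation), the RR, its 95% CI, and the benefit/null/harm verdict can be derived from a trial's 2×2 event counts:

```python
import math

def relative_risk_ci(events_tx, n_tx, events_ctl, n_ctl, z=1.96):
    """Relative risk with a CI computed on the log scale (Katz method).

    Returns (rr, ci_low, ci_high). z = 1.96 gives a 95% CI,
    matching a two-tailed test with alpha = 0.05.
    """
    rr = (events_tx / n_tx) / (events_ctl / n_ctl)
    # Standard error of ln(RR) for two independent proportions
    se = math.sqrt(1 / events_tx - 1 / n_tx + 1 / events_ctl - 1 / n_ctl)
    log_rr = math.log(rr)
    return rr, math.exp(log_rr - z * se), math.exp(log_rr + z * se)

# Hypothetical trial: 10/100 events on treatment vs. 20/100 on control
rr, lo, hi = relative_risk_ci(10, 100, 20, 100)
# Null = the 95% CI for the RR includes 1.0
verdict = "null" if lo <= 1.0 <= hi else ("benefit" if hi < 1.0 else "harm")
```

Here the point estimate (RR = 0.5) suggests benefit, but the CI crosses 1.0, so under the paper's rule the trial would be classified as null.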

Table 1. Study characteristics and overall effect for main outcome and total mortality for studies not registered in ClinicalTrials.gov prior to publication. https://doi.org/10.1371/journal.pone.0132382.t001

Table 2. Study characteristics and overall effect for main outcome and total mortality for studies registered in ClinicalTrials.gov prior to publication. https://doi.org/10.1371/journal.pone.0132382.t002

Results Among 4,089 individual years of grant funding, almost half were excluded as multiple years of the same grant, and over 20% were excluded because they were single sites in multi-site trials, coordinating centers, or ancillary studies of the same trial. An additional 1,176 grant abstracts did not match our criteria and were excluded (see S1 Table for detailed reasons). Main outcome papers were searched for 84 trials; 10 were not published and 25 did not match search criteria and were excluded (see S1 Fig for the PRISMA diagram and S1 Table for the number of studies excluded by reason). Following exclusions, we identified a total of 49 funded grants. Four of these grants resulted in multiple unique trials (ACCORD Blood Pressure, Diabetes, and Lipid; ALLHAT-BP, DOX, and LLT; WHI Estrogen and Estrogen-Progestin; and WHS aspirin and vitamin E). A total of 55 trials were analyzed: 30 were published prior to 2000 and 25 were published in 2000 or later (see S2 Table for the list of included trials; a complete list of the references also appears in the Supplemental Materials). Fig 1 plots the relative risks of the primary outcome by the publication year of the main outcome paper. Because it was an extreme outlier, the CAST study is excluded from the figure. Prior to 2000, studies often showed benefits of treatments, with the notable exception of CAST (not shown in the figure). Following 2000, confidence intervals for relative risk ratios included 1.0 in all cases, with the exceptions of the PREVENT and SANDS trials (benefit) and the Women’s Health Initiative (harm). In addition, the variability in RRs was considerably reduced after the year 2000 (Fig 2).

Fig 1. Relative risk of showing benefit or harm of treatment by year of publication for large NHLBI trials on pharmaceutical and dietary supplement interventions. Positive trials are indicated by plus signs, while trials showing harm are indicated by a diagonal line within a circle. Prior to 2000, when trials were not registered in ClinicalTrials.gov, there was substantial variability in outcomes. Following the imposition of the requirement that trials preregister in ClinicalTrials.gov, the relative risk on primary outcomes showed considerably less variability around 1.0. https://doi.org/10.1371/journal.pone.0132382.g001

Fig 2. Summary of results on the primary outcome in NHLBI trials on pharmaceutical and supplement interventions that were not pre-registered in ClinicalTrials.gov (panel A) and pre-registered in ClinicalTrials.gov (panel B). Trials shaded and represented by black boxes had statistically significant effects of intervention, while trials not shaded and represented by gray boxes had null effects. https://doi.org/10.1371/journal.pone.0132382.g002

Results for all-cause mortality were similar. Prior to 2000, 24 trials reported all-cause mortality: 5 reported significant reductions in total mortality (25%), 18 were null (71%), and one (CAST) reported significant harm (Table 3). Following the year 2000, no study showed a significant benefit for total mortality. An expanded presentation of the results is given in the online supplemental materials, including a figure summarizing results for all-cause mortality.

Table 3. Summary of Published Drug and Supplement NHLBI Trials, 1970–2012. https://doi.org/10.1371/journal.pone.0132382.t003

We considered a variety of explanations for the trend toward null results that emerged around 2000 (detailed tables are given in the online supplemental materials). One possibility is that more recent trials evaluated their treatment drug against clinically effective alternatives instead of placebos. We do not find this suggestion likely, because 60% of the large NHLBI trials published prior to 2000 used a placebo as the comparator, compared with 64% of trials published after 2000 (see S3 Table). Placebos were used as the comparator at about the same rate prior to and after the year 2000 (p = .979). To investigate the effect of industry co-sponsorship, we tabulated sponsorship for all reports. Unfortunately, industry co-sponsorship was not always reported prior to the year 2000, and journals did not uniformly require disclosure. After the year 2000, when the International Committee of Medical Journal Editors (ICMJE) asked for disclosure, it became apparent that industry co-sponsorship is very common. In our sample, 23 of 25 (92%) of the NHLBI trials published after 2000 had partial industry sponsorship or contribution of medications; all but two of these trials obtained null results. We also looked at previous financial relationships between investigators and industry. Prior to 2000, these relationships were reported in only 1 of the 30 trials (3%). Even after 2000, 28% of the studies did not include a disclosure section, but among articles that included disclosures, there was a financial consulting relationship between at least one author and industry in all (100%) of the cases. Industry influence would produce a bias in favor of positive results, so connections between investigators and industry are not a likely explanation for the trend toward null results in recent years.
We considered a variety of aspects of transparent reporting. Prior to 2000, 5 of the 30 published trials (17%) included a diagram that clearly accounted for the number of participants at each phase of the project. Following 2000, publications were significantly more likely to account for patients throughout the study: 14 of the 25 trials (56%) included such a flow diagram (χ2 = 9.22, p = 0.002). After the year 2000, all of the published papers clearly identified the primary outcome variable, while the primary outcome variable was not specified in 23% of the publications prior to 2000 (χ2 = 4.75, p = 0.03). A final explanation for the trend toward null reports is that current authors face greater constraints in reporting the results of their studies. In our review, the year 2000 marks the beginning of a natural experiment. After the year 2000, all (100%) of the large NHLBI trials were registered prospectively in ClinicalTrials.gov prior to publication; prior to 2000, none of the trials (0%) were prospectively registered. Although many of the earlier studies are in the ClinicalTrials.gov database, they were registered after the results had been published. Following the implementation of ClinicalTrials.gov, investigators were required to prospectively declare their primary and secondary outcome variables. Prior to 2000, investigators had a greater opportunity to measure a range of variables and to select the most successful outcomes when reporting their results. For trials published before the year 2000, we found that 17 out of 30 (57%) reported a significant benefit for their primary outcome. In the new era in which primary outcomes are prospectively declared (published post-2000), only 2 of 25 trials (8%) reported a significant benefit (χ2 = 12.2, p = 0.0005). Prospective declaration of the primary outcome variable is important because it eliminates the possibility of selectively reporting one outcome among the many different measures included in a study.
In order to investigate this issue, we looked at the statistical significance of other variables not declared as primary outcomes in the preregistered studies. Among the 25 preregistered trials published in 2000 or later, 12 reported significant, positive effects for cardiovascular-related variables other than the primary outcome. In other words, almost half of the trials might have been able to report a positive result if they had not declared a primary outcome in advance. Had the prospective declaration of a primary outcome not been required, it is possible that the number of positive studies post-2000 would have looked very similar to the pre-2000 period.
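The headline comparison (17/30 positive trials pre-2000 versus 2/25 post-2000) can be reproduced with a 2×2 chi-square test. The sketch below assumes a Yates continuity correction, which is what matches the reported χ2 = 12.2 with df = 1; it uses only the Python standard library, with the df = 1 p-value obtained via the identity P(X > x) = erfc(√(x/2)):

```python
import math

def chi_square_2x2(a, b, c, d, yates=True):
    """Chi-square test for the 2x2 table [[a, b], [c, d]], with an
    optional Yates continuity correction; returns (chi2, p) for df = 1."""
    n = a + b + c + d
    row1, row2 = a + b, c + d
    col1, col2 = a + c, b + d
    chi2 = 0.0
    for obs, row, col in ((a, row1, col1), (b, row1, col2),
                          (c, row2, col1), (d, row2, col2)):
        expected = row * col / n
        diff = abs(obs - expected)
        if yates:
            diff = max(diff - 0.5, 0.0)  # continuity correction
        chi2 += diff * diff / expected
    # Survival function of chi-square with 1 df
    p = math.erfc(math.sqrt(chi2 / 2))
    return chi2, p

# Rows: pre-2000 (17 positive, 13 not) vs. post-2000 (2 positive, 23 not)
chi2, p = chi_square_2x2(17, 13, 2, 23)
```

With these counts the function yields χ2 ≈ 12.2 and p ≈ 0.0005, matching the values reported in the text.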

Discussion Beginning in approximately 2000, the likelihood of showing a significant benefit in large NHLBI-funded studies declined. Among the explanations we evaluated, the requirement of prospective registration in ClinicalTrials.gov is most strongly associated with the observed trend toward null clinical trials. The decline is not easily explained by the increased use of active comparators or a decline in industry sponsorship. In addition to the explanations that we evaluated using reported characteristics of the trials, we considered several other suggestions. One explanation is that newer clinical trial management methodologies remove error variance and provide more precise estimates of treatment effects. If this were the explanation, refined methodologies and greater precision should have reduced error variance and ultimately increased the likelihood of finding treatment effects. But the probability of finding a treatment benefit decreased rather than increased as studies became more precise. As shown in Fig 1, variability in trial results declined systematically around the year 2000. As a result, we do not find better trial management to be a compelling explanation for the trend toward null results. It is widely noted that journals favor publication of statistically significant findings [2]. Bias in favor of publishing positive outcomes is not a likely explanation for our results. We focused on large trials because previous analyses by NHLBI reported that 97% of trials with annual budgets over $500,000/year were published [3], thus removing publication bias as a rival explanation. In our analysis, 88% of the trials were published, although there may be a slight delay in the date of publication for null trials [6]. If positive trials were more likely to be published than null trials, we would have expected more positive published reports following 2000.
A “file drawer” problem of suppressing null trial findings would result in over-reporting of positive results; our observation of a trend toward null results goes in the opposite direction. If there is a bias, it is possible that stricter reporting standards and greater rigor in reporting requirements are suppressing the declaration of positive outcomes. It has been argued that there have been few efficacious drugs in the pipeline [4,5]. Since about 1998, there has been a systematic decline in the number of approvals for new cardiovascular drugs [6]. Thus, we would expect more null trials because the rate of developing effective new agents has declined. We believe this explanation is unlikely because nearly all of the trials evaluated treatments that had been previously studied; for example, all of the treatments had been approved by the US FDA, and these approvals require early-phase trial evidence of safety and efficacy. Another explanation for the increase in null trials is the possibility that medical care and supportive therapy have improved since 2000. As a result, it has become difficult to demonstrate treatment effects because new approaches must compete with higher-quality medical care. In support of this argument is the observation that outcomes in cardiovascular diseases continue to improve despite wide variation in the specific care that patients receive. On the other hand, studies that compared treatment to an active standard-of-care comparison group achieved results quite similar to studies that compared treatment to placebo. However, we do recognize that the quality of background cardiovascular care continues to improve, making it increasingly difficult to demonstrate the incremental value of new treatments. The improvement in usual cardiovascular care could serve as an alternative explanation for the trend toward null results in recent years. Our results may also reflect greater involvement by NHLBI in trial design and execution.
Prior to 2000, most large NHLBI clinical trials were investigator initiated, while nearly 80% of the trials published after 2000 had direct involvement of NHLBI through cooperative agreements. We recognize that industry-sponsored trials may have a higher success rate; it is possible that industry conducts trials designed to demonstrate effectiveness, while NHLBI uses its resources when there is true equipoise. All post-2000 trials reported total mortality, while total mortality was reported in only about 80% of the pre-2000 trials, and many of the early trials were not powered to detect changes in mortality. The effects on total mortality were null in pooled analyses of both the trials that were registered prior to publication and those that were not (see data in the online supplement). In addition, prior to 2000 and the implementation of ClinicalTrials.gov, investigators had the opportunity to change the p level or the directionality of their hypothesis post hoc. Further, they could create composite variables by adding variables together in a way that favored their hypothesis. Preregistration in ClinicalTrials.gov essentially eliminated this possibility. Limitations Our analysis is limited to large NHLBI-funded trials and to studies of cardiovascular outcomes in adults. We focused on NHLBI because the Institute has championed transparency and allowed us full access to all trials. We emphasized large trials because we had access to outcomes of nearly all studies, thus reducing the risk of publication bias. Although we focused on cardiovascular trials, null results are common in other areas of medicine. For example, among 221 agents with the potential to modify outcomes for Alzheimer’s disease, all placebo-controlled trials registered in ClinicalTrials.gov have failed to identify positive benefits on the declared primary outcome [7]. Our analysis underscores the importance of NHLBI involvement in trials. A greater number of recent trials had direct NHLBI oversight.
The Institute is fully vetted for conflict of interest and applies high quality-control standards, including full transparency, open data access, and registration in ClinicalTrials.gov. Our conclusions may not generalize to trials sponsored by industry or to other funding agencies. We cannot say that the relationship between the trend toward null trials and preregistration in ClinicalTrials.gov is causal. Our analysis included only a small number of trials, and the design of the study does not allow causal inferences. Most importantly, many variables may have changed around the year 2000; it is likely that other variables that are unknown or unmeasured also correspond to the decline in reports of significant therapeutic treatment effects. Implications The transparency of RCTs is likely to have improved following the FDA Modernization Act of 1997, which created the ClinicalTrials.gov registry [8], a service that required registration of studies testing drugs, biologics, or devices for the treatment of serious or life-threatening diseases [9–11]. Registered studies must provide the study’s purpose, recruitment status, design, eligibility criteria, locations, and pre-specified primary and secondary outcomes [11]. The Consolidated Standards of Reporting Trials (CONSORT) were introduced in 1996 and expanded in 2001 to require greater transparency in the reporting of RCTs [12]. Shortly after 2001, many major journals began requiring prospective registration of clinical trials as a condition of publication, and the International Committee of Medical Journal Editors began requiring CONSORT reporting in all major journals in 2004 (icmje.org). NHLBI was an early adopter of trial registration: all of its large trials published after 2000 were preregistered and transparently reported. Although we cannot say that stricter reporting requirements caused the trend toward more null reports from NHLBI trials, we do find the association worthy of further investigation.
In conclusion, null findings in large RCTs may be disappointing to investigators, but they are not negative for science. Properly powered trials might identify treatments that will improve public health. A growing collection of trials suggests that promising treatments do not match their potential when systematically tested and transparently reported. Publication of these trials may lead to the protection of patients from treatments that use resources while not enhancing patient outcomes. For example, a recent economic analysis of the Women’s Health Initiative clinical trial suggested that the publication of the study may have resulted in 126,000 fewer breast cancer deaths, and 76,000 deaths from heart disease between 2003 and 2012. The economic analysis estimated that there was about $140 returned for each dollar invested in the study[13]. Transparent and impartial reporting of clinical trial results will ultimately identify the treatments most likely to maximize benefit and reduce harm.

Acknowledgments The authors greatly benefited from feedback on earlier drafts of the paper provided by Michael Lauer and David Gordon from the National Heart, Lung, and Blood Institute, Deborah Zarin, from the National Library of Medicine, Stephanie Chang and Elise Berliner from the Agency for Healthcare Research and Quality and Sheryl Thorburn from Oregon State University. Although the prepublication reviews greatly improved the quality of the manuscript, the authors take full responsibility for the content. Much of the work on this paper was completed while both authors were at the Office of Behavioral and Social Sciences Research, Office of the Director, National Institutes of Health, Bethesda, Maryland, United States of America. The views expressed in this article are those of the authors and do not necessarily reflect the official policy or position of the Agency for Healthcare Research and Quality, National Institutes of Health, or the United States government.

Author Contributions Conceived and designed the experiments: RMK VLI. Performed the experiments: RMK VLI. Analyzed the data: RMK VLI. Contributed reagents/materials/analysis tools: RMK VLI. Wrote the paper: RMK VLI.