Animal studies generate valuable hypotheses that lead to the conduct of preventive or therapeutic clinical trials. We assessed whether there is evidence for excess statistical significance in the results of animal studies on neurological disorders, which would suggest biases. We used data from meta-analyses of interventions deposited in the Collaborative Approach to Meta-Analysis and Review of Animal Data in Experimental Studies (CAMARADES). The number of observed studies with statistically significant results (O) was compared with the expected number (E), based on the statistical power of each study under different assumptions for the plausible effect size. We assessed 4,445 datasets synthesized in 160 meta-analyses on Alzheimer disease (n = 2), experimental autoimmune encephalomyelitis (n = 34), focal ischemia (n = 16), intracerebral hemorrhage (n = 61), Parkinson disease (n = 45), and spinal cord injury (n = 2). Of these, 112 meta-analyses (70%) found nominally (p≤0.05) statistically significant summary fixed effects. Assuming the effect size in the most precise study to be a plausible effect, 919 of the 4,445 results would be expected to be nominally significant, versus 1,719 observed (p<10⁻⁹). Excess significance was present across all neurological disorders, in all subgroups defined by methodological characteristics, and also under alternative assumptions about the plausible effect. Asymmetry tests also showed evidence of small-study effects in 74 (46%) meta-analyses. Significantly effective interventions with more than 500 animals and no hints of bias were seen in only eight (5%) meta-analyses. Overall, there are too many animal studies with statistically significant results in the literature of neurological disorders. This observation suggests strong biases, with selective analysis and outcome reporting biases being plausible explanations, and provides novel evidence of how these biases might influence the whole research domain of the animal literature on neurological disorders.

Studies have shown that the results of animal biomedical experiments often fail to translate into human clinical trials; this could be attributed to real differences in the underlying biology between humans and animals, to shortcomings in experimental design, or to bias in the reporting of results from the animal studies. We use a statistical technique to evaluate whether the number of published animal studies with “positive” (statistically significant) results is too large to be true. We assess 4,445 animal studies of 160 candidate treatments for neurological disorders, and observe that 1,719 of them report a “positive” result, whereas only 919 studies would a priori be expected to do so. According to our methodology, only eight of the 160 evaluated treatments should have been subsequently tested in humans. In summary, we judge that there are too many animal studies with “positive” results in the neurological disorder literature, and we discuss the reasons and potential remedies for this phenomenon.

Funding: No specific and/or direct funding was received for this study. No funding bodies played any role in the design, writing, or decision to publish this manuscript. The authors were personally salaried by their institutions during the period of writing, though no specific salary was set aside or given for the writing of this paper. There are no current external funding sources for this study.

Biases in animal experiments may result in biologically inert or even harmful substances being taken forward to clinical trials, thus exposing patients to unnecessary risk and wasting scarce research funds. It is important to understand the extent of potential biases in this field, as multiple interventions with seemingly promising results in animals accumulate in the literature. Therefore, in this paper, we probed whether there is evidence for excess statistical significance in animal studies of interventions for neurological diseases, using a large database of 160 interventions and 4,445 study datasets.

An alternative approach is the excess significance test. This examines whether too many individual studies in a meta-analysis report statistically significant results compared with what would be expected under reasonable assumptions about the plausible effect size [14]. The excess significance test has low power to detect bias in single meta-analyses with a limited number of studies, but a major advantage is its applicability to many meta-analyses across a given field. This increases the power to detect biases that pertain to larger fields and disciplines rather than just single topics. Previous applications have found an excess of statistically significant findings in various human research domains [14]–[17], but the test has not previously been applied to animal research studies.
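
To make the logic concrete, here is a minimal Python sketch of how O and E can be compared: each study's power against a plausible effect is estimated from its standard error, the powers are summed to give E, and a binomial test asks whether the observed count O is compatible with E. The normal-approximation power formula and the mean-power binomial approximation are simplifications of the published method [14], and all study data below are hypothetical placeholders.

```python
# A minimal sketch of the excess significance test [14], assuming effects are
# standardized mean differences and using a normal approximation for power.
import numpy as np
from scipy import stats

ALPHA = 0.05
Z_CRIT = stats.norm.ppf(1 - ALPHA / 2)  # two-sided critical value (~1.96)

def study_power(theta_plausible, se):
    """Approximate power of one study to detect theta_plausible at
    two-sided ALPHA, given the study's standard error."""
    z = abs(theta_plausible) / se
    return stats.norm.sf(Z_CRIT - z) + stats.norm.cdf(-Z_CRIT - z)

def excess_significance(ses, observed_significant, theta_plausible):
    """Sum per-study powers to get the expected number of significant
    studies (E) and compare with the observed number (O)."""
    powers = np.array([study_power(theta_plausible, se) for se in ses])
    expected = powers.sum()
    # One-sided binomial test, approximating the sum of non-identical
    # Bernoulli trials by a binomial with the mean power.
    p_value = stats.binom.sf(observed_significant - 1, len(ses), powers.mean())
    return expected, p_value

# Hypothetical meta-analysis: 10 studies, 8 of them nominally significant.
ses = [0.45, 0.50, 0.38, 0.60, 0.55, 0.42, 0.48, 0.52, 0.40, 0.58]
E, p = excess_significance(ses, observed_significant=8, theta_plausible=0.5)
print(f"E = {E:.1f} of 10 expected significant, O = 8, p = {p:.3f}")
```

In the analyses reported here, p≤0.10 is used as the threshold for evidence of excess significance.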

Detecting these biases is not a straightforward process. Several empirical statistical methods try to detect publication bias in meta-analyses. The most popular of these are tests of asymmetry, which evaluate whether small or imprecise studies give different results from larger, more precise ones [11]. However, these methods may not be very sensitive or specific in detecting such biases, especially when few studies are included in a meta-analysis [11]–[13].
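
As an illustration, here is a minimal sketch of the Egger regression asymmetry test, one of the most widely used such methods, assuming each study's effect estimate and standard error are available; the data below are hypothetical.

```python
# A minimal sketch of the Egger regression asymmetry test.
import numpy as np
import statsmodels.api as sm

def egger_test(effects, ses):
    """Regress standardized effects (effect/SE) on precision (1/SE);
    an intercept far from zero suggests small-study effects."""
    effects, ses = np.asarray(effects, float), np.asarray(ses, float)
    X = sm.add_constant(1.0 / ses)           # [intercept, precision]
    fit = sm.OLS(effects / ses, X).fit()
    return fit.params[0], fit.pvalues[0]     # intercept and its p-value

effects = [0.9, 0.7, 0.8, 0.4, 0.3, 0.35]   # hypothetical SMDs
ses = [0.50, 0.45, 0.40, 0.20, 0.15, 0.18]  # hypothetical SEs
intercept, p = egger_test(effects, ses)
print(f"Egger intercept = {intercept:.2f}, p = {p:.3f}")
```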

These problems are compounded by different types of reporting biases [8]. First, bias against publication of “negative” results (publication bias) or publication after considerable delay (time lag bias) may exist [9]. Such findings may not be published at all, may be published with considerable delay, or may be published in low-impact or low-visibility national journals, in contrast to studies with “positive” findings. Second, selective analysis and outcome reporting biases may emerge when many analyses can be performed but only the analysis with the “best” results is presented, resulting in potentially misleading findings [10]. This can take many forms, such as analyzing many different outcomes but reporting only one or some of them, or using different statistical approaches to analyze the same outcome but reporting only one of them. Third, in theory “positive” results may be entirely fabricated, but hopefully such fraud is not common. Overall, these biases ultimately lead to a body of evidence with an inflated proportion of published studies with statistically significant results.

Several empirical evaluations of the preclinical animal literature have shown limited concordance between treatment effects in animal experiments and subsequent clinical trials in humans [1]–[4]. Systematic assessments of the quality of animal studies have attributed this translational failure, at least in part, to shortcomings in experimental design and in the reporting of results [5]. Lack of randomization or blinding, inadequate application of inclusion and exclusion criteria, inadequate statistical power, and inappropriate statistical analysis may all compromise internal validity [6],[7].

Animal research studies make a valuable contribution to the generation of hypotheses that might be tested in preventive or therapeutic clinical trials of new interventions. These data may establish that there is a reasonable prospect of efficacy in human disease, which justifies the risk to trial participants.

Figure 1. We plotted the number of meta-analyses with a total sample size of at least 500 animals; those that showed a nominally (p≤0.05) statistically significant effect per fixed-effects synthesis; those with no evidence of small-study effects; and those with no evidence of excess significance. The numbers represent the meta-analyses that have two or more of the above characteristics, according to the respective overlapping areas.

Only 46 meta-analyses (29%) found interventions with a nominally significant effect per fixed-effects synthesis and no evidence of small-study effects or excess significance (when this calculation was based on the plausible effect being that of the most precise study) (Figure 1). Of those, only eight had a total sample size of over 500 animals: one pertained to EAE (myelin basic protein [MBP]), four pertained to focal ischemia (minocycline, melatonin, nicotinamide, nitric oxide species [NOS] donors), one pertained to ICH (stem cells), and two to PD (bromocriptine, quinpirole).

Similar results were observed in analyses according to methodological or reporting characteristics of included studies (Table 4). Under the assumption of the effect of the most precise study being the plausible effect, there was evidence of excess significance in all subgroups. However, the strongest excesses of significance (as characterized by the ratio of O over E) were recorded specifically in meta-analyses where small-study effects had also been documented (O/E = 2.94), in those meta-analyses with the least precise studies (O/E = 2.94 in the bottom quartile of weight), and in those meta-analyses where the corresponding studies included a statement about the presence of conflict of interest (O/E = 3.27). Under the assumption of the summary fixed effects being the plausible effect size, excess significance was still formally documented in the large majority of subgroups, but none had such extreme O/E ratios (Table 4).

When excess significance was examined in aggregate across all 4,445 studies (Table 3), it was present when the effect of the most precise study was assumed to be the plausible effect (p<1.0×10⁻⁹). The observed number of “positive” studies was O = 1,719, while the expected number was E = 919. Excess significance was also documented in studies of each of the six disease categories. An excess of “positive” studies was also observed when the summary fixed effect was assumed to be the plausible effect size (p<1.0×10⁻⁹).
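
Using the aggregate numbers above, a back-of-the-envelope check under a simplified binomial model (the exact calculation in the analysis may differ) illustrates how extreme this discrepancy is:

```python
# A back-of-the-envelope check of the aggregate O-versus-E comparison.
from scipy import stats

n, O, E = 4445, 1719, 919
p_value = stats.binom.sf(O - 1, n, E / n)  # P(at least O "positives")
# Underflows to 0.0 in double precision, i.e., far below 1.0x10^-9.
print(f"P(O >= {O} | n = {n}, E = {E}) = {p_value:.3g}")
```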

When the plausible effect was assumed to be that of the most precise study in each meta-analysis, there was evidence (p≤0.10) of excess significance in 49 (31%) meta-analyses (AD n = 2, EAE n = 13, focal ischemia n = 11, ICH n = 10, PD n = 12, SCI n = 1) (Table 2), despite the generally low power of the excess significance test. Under the assumption of the summary fixed effect being the plausible effect, there was evidence of excess significance in 23 meta-analyses.

There was statistically significant heterogeneity at p≤0.10 in 83 (52%) meta-analyses (Table S1). There was moderate heterogeneity (I² = 50%–75%) in 52 (33%) meta-analyses, and high heterogeneity (I²>75%) in 22 (14%). The lowest proportions of significant heterogeneity were observed in meta-analyses of ICH (36%) and PD (42%), while all other areas had proportions of significant heterogeneity above 70%. Uncertainty around the heterogeneity estimates was often large, as reflected by the wide 95% CIs of I².
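
For reference, here is a minimal sketch of how Cochran's Q and I² are computed for a set of study estimates; the effect sizes and standard errors below are hypothetical.

```python
# A minimal sketch of Cochran's Q and I^2 for inverse-variance-weighted
# estimates.
import numpy as np
from scipy import stats

def heterogeneity(effects, ses):
    """Return Cochran's Q, its chi-squared p-value, and I^2 (the
    percentage of variability beyond what chance would produce)."""
    effects = np.asarray(effects, float)
    w = 1.0 / np.asarray(ses, float) ** 2      # inverse-variance weights
    pooled = np.sum(w * effects) / np.sum(w)   # fixed-effects summary
    q = np.sum(w * (effects - pooled) ** 2)
    df = len(effects) - 1
    p = stats.chi2.sf(q, df)
    i2 = max(0.0, (q - df) / q) * 100.0 if q > 0 else 0.0
    return q, p, i2

q, p, i2 = heterogeneity([0.8, 0.3, 0.5, 1.1, 0.2],
                         [0.20, 0.25, 0.30, 0.22, 0.28])
print(f"Q = {q:.2f}, p = {p:.3f}, I^2 = {i2:.0f}%")
```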

Of the 160 meta-analyses, 112 (70%) had found a nominally (p≤0.05) statistically significant summary effect per fixed-effects synthesis; 108 of these favored the experimental intervention and only four favored the control intervention (94 and four, respectively, for random-effects synthesis). The proportion of associations with a nominally statistically significant fixed-effects summary ranged from 57% for ICH to 100% for AD, focal ischemia, and SCI. Table S1 provides information on all 160 meta-analyses. In 47 (29%) meta-analyses the most precise study had a nominally statistically significant result, as described in Table 1. The effect size of the most precise study was more conservative than the fixed-effects summary in 114 (71%) meta-analyses.
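
For readers unfamiliar with fixed-effects synthesis, this short sketch (with hypothetical data) shows how an inverse-variance summary effect and its nominal significance are obtained:

```python
# A minimal sketch of an inverse-variance fixed-effects summary and its
# nominal (p <= 0.05) significance test.
import numpy as np
from scipy import stats

effects = np.array([0.6, 0.9, 0.4, 0.7])  # hypothetical SMDs
ses = np.array([0.30, 0.35, 0.25, 0.40])  # hypothetical SEs

w = 1.0 / ses ** 2                          # inverse-variance weights
summary = np.sum(w * effects) / np.sum(w)   # pooled effect
summary_se = np.sqrt(1.0 / np.sum(w))       # SE of the pooled effect
p = 2 * stats.norm.sf(abs(summary / summary_se))  # two-sided p-value
print(f"Summary = {summary:.2f} (SE {summary_se:.2f}), p = {p:.4f}")
```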

Our database included a total of 4,445 pairwise comparisons from 1,411 unique animal studies that were synthesized in 160 meta-analyses (Table S1). Two meta-analyses (n = 1,054 comparisons) pertained to Alzheimer disease (AD), 34 meta-analyses (n = 483) to experimental autoimmune encephalomyelitis (EAE), 16 meta-analyses (n = 1,403) to focal ischemia, 61 meta-analyses (n = 424) to intracerebral hemorrhage (ICH), 45 meta-analyses (n = 873) to Parkinson disease (PD), and two meta-analyses (n = 208) to spinal cord injury (SCI). The median number of comparisons in each meta-analysis was eight (interquartile range [IQR], 3–23). The median sample size in each animal study dataset was 16 (IQR, 11–20), while the median sample size in each meta-analysis was 135 (IQR, 48–376).

Discussion

We evaluated 160 meta-analyses of animal studies of six neurological conditions, most of which had found a nominally (p≤0.05) statistically significant fixed-effects summary favoring the experimental intervention. The number of nominally statistically significant results among the component studies of these meta-analyses was too large to be true, and this evidence of excess significance was present in studies across all six neurological diseases. Overall, only eight of the 160 meta-analyses had nominally significant results, no suggestion of bias related to small-study effects or an excess of significant findings, and evidence derived from more than 500 animals.

Animal studies represent a considerable proportion of the biomedical literature, with approximately five million papers indexed in PubMed [8]. These studies are conducted to provide a first-pass evaluation of the effectiveness and safety of therapeutic interventions. However, there is great discrepancy between the intervention effects found in preclinical animal studies and those found in clinical trials in humans, with most of these interventions failing to achieve successful translation [2],[3],[18]. Possible explanations for this failure include differences in the underlying biology and pathophysiology between humans and animals, but also the presence of biases in study design or reporting in the animal literature.

Our empirical evaluation of animal studies on neurological disorders found a significant excess of nominally statistically significant studies, which suggests the presence of strong study design or reporting biases. Prior evaluations of animal studies had also noted that, alarmingly, the vast majority of published studies had statistically significant associations, and had suggested a high prevalence of publication bias [9],[19], resulting in spurious claims of effectiveness. We observed excessive nominally significant results in all subgroup categories defined by random allocation of treatment, blinded induction of treatment, blinded assessment of the outcome, sample size calculation, or compliance with animal welfare requirements. This suggests that the excess of significance in animal studies of neurological disorders may reflect reporting biases that operate regardless of study design features. It is nevertheless possible that reporting biases are worse in fields with poor study quality, although this was not clear in our evaluation. Deficiencies in random allocation, blinded induction of treatment, or blinded assessment of the outcome have been associated with inflated efficacy estimates in other evaluations of animal research [20],[21].

We also documented a very prominent excess of significant results (the observed “positive” results being three times the number expected) for interventions that also had evidence of small-study effects, and in meta-analyses with the least precise studies. Both of these observations are consistent with reporting bias being the explanation for the excess significance, with bias being more prominent in smaller studies and becoming more discernible when sufficiently precise studies are also available.

Conventional publication bias (non-publication of neutral or negative results) may exist in the literature of animal studies on neurological disorders. Our evaluation showed that 46% of the meta-analyses had evidence of small-study effects, which may signal publication bias. However, this association is not specific, and the Egger test used to evaluate small-study effects is underpowered, especially when it evaluates few and small studies in a meta-analysis [13]. It is also likely that selective outcome or analysis reporting biases exist. The animal studies on neurological disorders used many different outcomes and methods to measure each outcome, as can be seen in Table S1, and they may have used different statistical analysis techniques and applied several different rules for inclusion and exclusion of data. Thus, individual studies may have measured different outcomes, tested a variety of inclusion and exclusion criteria, and performed several statistical analyses, but reported only a few findings, guided in part by the significance of the results. Detection of such biases is difficult, and no formal, well-developed statistical test exists. Evidence is usually indirect and requires access to the study protocol or even communication with the investigators.

In contrast to the above, we found eight interventions with strong and statistically significant benefits in animal models and without evidence of small-study effects or excess significance. However, the data for these interventions may still have had compromised internal validity; having identified one of them, melatonin, as a candidate treatment for stroke, we tested its efficacy in an animal study designed specifically to avoid some of the prevalent sources of bias. Under these circumstances, melatonin had no significant effect on outcome [22].

It is interesting to discuss whether the human experimental evidence for these interventions is more promising than the generally disappointing results seen for most interventions that have previously given some signal of effectiveness in animals. A meta-analysis of 33 animal studies showed that administration of MBP reduced the severity of EAE, an animal model of multiple sclerosis; however, a phase III randomized clinical trial (RCT) in humans showed no significant differences between MBP and placebo [23]. Minocycline, a tetracycline antibiotic with potential neuroprotective effects, showed improvements in stroke scales in two human RCTs, but these were small phase II trials [24],[25] and have not been confirmed in larger studies. Several animal studies of melatonin, an endogenously produced antioxidant, have reported a beneficial effect on infarct volume [26], but no RCTs with clinical endpoints in humans exist; a small RCT did not show significant differences in oxidative or inflammatory stress parameters between melatonin and placebo [27]. Administration of nicotinamide, the amide of vitamin B3, to animals with focal ischemia reduced infarct volume [19], but RCTs have not evaluated clinical outcomes in relation to nicotinamide [28]. Several animal studies of NOS donors, such as the anti-anginal drug nicorandil, have shown reductions in infarct volume, and RCTs have also shown that nicorandil improves cardiac function in patients with acute myocardial infarction [29],[30]. Granulocyte colony-stimulating factor (G-CSF), a growth factor that stimulates the bone marrow to produce granulocytes and stem cells, has been reported in animal studies to improve the neurobehavioral score in animals with ICH; some similar evidence exists from RCTs of stroke patients, but it consists of small phase II trials [31],[32], and an unpublished phase III trial was neutral (Ringelstein P et al., International Stroke Conference, Feb 2012). Bromocriptine and quinpirole are dopamine agonists that have been used successfully in animal studies of PD [33]. Bromocriptine is approved to treat PD in humans [34], but no human trial of quinpirole exists. Despite this patchy record, interventions with strong evidence of efficacy and no hints of bias in animals may be prioritized for further testing in humans.

Some limitations of our work should be acknowledged. First, asymmetry and excess significance tests offer hints of bias, not definitive proof of it. Most individual animal studies were small, with a median total sample size of 16 animals, and there was a median of eight comparisons in each meta-analysis. Therefore, the excess significance test result for a single meta-analysis should be interpreted very cautiously. A negative test for excess significance does not exclude the potential for bias [14]. The most useful application of the excess significance test is to give an overall impression of the average level of bias affecting the whole field of animal studies on neurological disorders.

Second, the exact estimation of excess statistical significance is influenced by the choice of plausible effect size. We performed analyses using different plausible effect sizes, including the effect of the most precise study in each meta-analysis and the summary fixed effect; these yielded similar findings. Effect inflation may affect even the results of the most precise studies, since these were often not very large, may have had inherent biases themselves, or both. Thus, our estimates of the extent of excess statistical significance are possibly conservative, and the problem may be more severe.

Third, we evaluated a large number of meta-analyses on six neurological conditions, but our findings might not necessarily be representative of the whole animal literature. However, biases and methodological deficits have been described for many animal studies regardless of disease domain [1],[2],[21].

In conclusion, the literature of animal studies on neurological disorders is probably subject to considerable bias. This does not mean that none of the observed associations in the literature are true. For example, we showed evidence of eight strong and statistically significant associations without evidence of small-study effects or excess significance. However, only two of even these eight associations (NOS donors for focal ischemia, and bromocriptine for PD) seem to have convincing RCT data in humans. We support measures to minimize bias in animal studies and to maximize the successful translation of promising interventions into human applications [35]. The design, conduct, and reporting of animal studies can be improved by following published guidelines for reporting animal research [35],[36]. Publication and selective reporting biases may be diminished by preregistering experimental animal studies. Access to study protocols and also to raw data and analyses would allow verification of results and make their integration with other parallel or future efforts easier. Systematic reviews, meta-analyses, and large consortia conducting multicenter animal studies should become routine, to ensure the best use of existing animal data and to aid in selecting the most promising treatment strategies to enter human clinical trials.