Let the evidence speak for itself. All of the following are peer-reviewed scientific reviews and meta-analyses indexed by the U.S. National Library of Medicine.

The recurring findings: consistently poor study conduct and evaluation, a lack of systematic reviews, poorly written papers, a lack of transparency, publication bias, and the neglect of obvious psychological and physiological differences between species.

Highlights include: a 99.9% failure rate for drugs overall, a 99.6% failure rate for Alzheimer’s drugs, and a 90% failure rate for animal models even to contribute to or agree with clinical outcomes… and that is merely to proceed to, or match the results of, clinical trials, never mind real-world treatments.

1949: Review of Animal Experimentation in Infectious Hepatitis and Serum Hepatitis (https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2598889/)

[For] all practical purposes no animal has yet yielded satisfactory results.

2004: Where is the evidence that animal research benefits humans? (https://www.ncbi.nlm.nih.gov/pmc/articles/PMC351856/)

Much animal research into potential treatments for humans is wasted because it is poorly conducted and not evaluated through systematic reviews… Few methods exist for evaluating the clinical relevance or importance of basic animal research… The contribution of animal studies to clinical medicine requires urgent formal evaluation.

2005: Systematic review and meta-analysis of the efficacy of FK506 in experimental stroke (https://www.ncbi.nlm.nih.gov/pubmed/15703698)

[Our] estimate of effect size might be too high because of factors such as study quality and possible publication bias [for animal experimentation].

2006: Methodological quality of systematic reviews of animal studies: a survey of reviews of basic research (https://www.ncbi.nlm.nih.gov/pubmed/16533396)

There seems to be a gradient of frequency of methodological weaknesses among reviews… compared to systematic reviews of human clinical trials they are apparently poorer. There is a need for rigour when reviewing animal research.

2006: Comparison of treatment effects between animal experiments and clinical trials: systematic review (https://www.bmj.com/content/bmj/334/7586/197.full.pdf)

That there is a gap between clinical research and clinical practice is well established. Our work highlights another gap — specifically the lack of communication between those involved in animal research and clinical trialists.

2007: Translating animal research into clinical benefit (https://www.ncbi.nlm.nih.gov/pubmed/17255568)

Poor methodological standards in animal studies mean that positive results rarely translate to the clinical domain.

2007: Comparison of treatment effects between animal experiments and clinical trials: systematic review (https://www.ncbi.nlm.nih.gov/pubmed/17175568)

Discordance between animal and human studies may be due to bias or to the failure of animal models to mimic clinical disease adequately.

2007/8: Systematic reviews of animal experiments demonstrate poor human clinical and toxicological utility (https://www.ncbi.nlm.nih.gov/pubmed/18186670) / (https://www.ncbi.nlm.nih.gov/pubmed/18474018)

In 20 reviews in which clinical utility was examined, the authors concluded that animal models were either significantly useful in contributing to the development of clinical interventions, or were substantially consistent with clinical outcomes, in only two cases, one of which was contentious [a 90% failure rate]… Animal data may not generally be assumed to be substantially useful… Possible causes include interspecies differences, the distortion of outcomes arising from experimental environments and protocols, and the poor methodological quality of many animal experiments…

2009: Why animal studies are often poor predictors of human reactions to exposure (https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2746847/)

One reason why animal experiments often do not translate into replications in human trials or into cancer chemoprevention is that many animal experiments are poorly designed, conducted and analysed. Another possible contribution to failure to replicate the results of animal research in humans is that reviews and summaries of evidence from animal research are methodologically inadequate.

2010: Improving Bioscience Research Reporting: The ARRIVE Guidelines for Reporting Animal Research (https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2893951/)

The largest and most comprehensive review of published animal research undertaken to date, to our knowledge, has highlighted serious omissions in the way research using animals is reported.

2010: A gold standard publication checklist to improve the quality of animal studies, to fully integrate the Three Rs, and to make systematic reviews more feasible (https://www.ncbi.nlm.nih.gov/pubmed/20507187)

Systematic reviews are generally regarded by professionals in the field of evidence-based medicine as the highest level of medical evidence… However, they are not yet widely used nor undertaken in the field of animal experimentation.

2010: Can animal models of disease reliably inform human studies? (https://www.ncbi.nlm.nih.gov/pubmed/20361020)

The value of animal experiments for predicting the effectiveness of treatment strategies in clinical trials has remained controversial, mainly because of a recurrent failure of interventions apparently promising in animal models to translate to the clinic… [this] failure may be explained in part by methodological flaws in animal studies, leading to systematic bias and thereby to inadequate data and incorrect conclusions… In fact, clinical trials are essential because animal studies do not predict with sufficient certainty what will happen in humans.

2010: Publication bias in reports of animal stroke studies leads to major overstatement of efficacy (https://www.ncbi.nlm.nih.gov/pubmed/20361022)

[P]ublication bias is prevalent in reports of laboratory-based research in animal models of stroke, such that data from as many as one in seven experiments remain unpublished. The result of this bias is that systematic reviews of the published results of interventions in animal models of stroke overstate their efficacy… Nonpublication of data raises ethical concerns…

2012: Ischemic Preconditioning in the Animal Kidney, a Systematic Review and Meta-Analysis (https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3289650/)

Key characteristics of scientific practice, and measures to avoid bias, such as characteristics of the subject population, randomization, blinding and exclusion criteria, were infrequently reported. A number of recent systematic reviews show that this is the case in many fields of animal research.

2012: The effects of long-term omega-3 fatty acid supplementation on cognition and Alzheimer’s pathology in animal models of Alzheimer’s disease: a systematic review and meta-analysis (https://www.ncbi.nlm.nih.gov/pubmed/22002791)

Key characteristics of scientific practice such as randomization, blinding, and description of withdrawals/dropouts are routinely published in most human clinical trials, but are often not mentioned in publications of animal studies.

2012: A call for transparent reporting to optimize the predictive value of preclinical research (https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3511845/) / (https://www.ncbi.nlm.nih.gov/pubmed/23060188)

These deficiencies in the reporting of animal study design, which are clearly widespread, raise the concern that the reviewers of these studies could not adequately identify potential limitations in the experimental design and/or data analysis, limiting the benefit of the findings.

Numerous publications have called attention to the lack of transparency in reporting, yet studies in the life sciences in general, and in animals in particular, still often lack adequate reporting on the design, conduct and analysis of the experiments.

2013: Threats to Validity in the Design and Conduct of Preclinical Efficacy Studies: A Systematic Review of Guidelines for In Vivo Animal Experiments (https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3720257/)

The vast majority of medical interventions introduced into clinical development prove unsafe or ineffective. One prominent explanation for the dismal success rate is flawed preclinical [i.e. animal] research.

2013: Instruments for Assessing Risk of Bias and Other Methodological Criteria of Published Animal Studies: A Systematic Review (https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3764080/)

Our review highlights a number of risk of bias assessment criteria that have been empirically tested for animal research, including randomization, concealment of allocation, blinding, and accounting for all animals.

2013: Systematic review and meta-analysis of temozolomide in animal models of glioma: was clinical efficacy predicted? (https://www.ncbi.nlm.nih.gov/pubmed/23321511)

Overall study quality [of the animal studies] was modest; the median number of study quality checklist items scored was 6 (of a possible 12) and no study scored higher than 8.

2013: Systematic Reviews of Animal Models: Methodology versus Epistemology (https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3558708/)

Approximately 100 vaccines have been shown effective against an HIV-like virus in animal models, however, none have prevented HIV in humans… The success of the animal model in basic research can also be questioned based on the fact that, according to one report, only 0.004% of basic research papers in leading journals led to a new class of drugs.

2014: Alzheimer’s disease drug-development pipeline: few candidates, frequent failures (https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4095696/)

A very high attrition rate was found, with an overall success rate during the 2002 to 2012 period of 0.4% (99.6% failure [rate for Alzheimer’s drugs]).

2015: The Flaws and Human Harms of Animal Experimentation (https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4594046/)

The unreliability of animal experimentation across a wide range of areas undermines scientific arguments in favor of the practice… animal experimentation often significantly harms humans through misleading safety studies, potential abandonment of effective therapeutics, and direction of resources away from more effective testing methods… of every 5,000–10,000 potential drugs investigated [through animal experiments], only about 5 proceed to Phase 1 clinical trials [a 99.9% failure rate].
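The bracketed 99.9% figure follows directly from the review’s own numbers. As a sanity check (a minimal sketch, assuming the quoted "about 5 of every 5,000–10,000" range; the `failure_rate` helper is illustrative, not from the cited paper):

```python
# Sanity check: the failure rate implied by "of every 5,000-10,000
# potential drugs investigated, only about 5 proceed to Phase 1".
def failure_rate(successes: int, candidates: int) -> float:
    """Percentage of candidates that do NOT proceed."""
    return 100.0 * (1 - successes / candidates)

low = failure_rate(5, 5_000)    # ~99.9% at the optimistic end
high = failure_rate(5, 10_000)  # ~99.95% at the pessimistic end
print(f"Implied failure rate: {low:.2f}%-{high:.2f}%")
```

Either end of the range rounds to the "99.9% failure rate" cited in the highlights above.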

2001: Nimodipine in animal model experiments of focal cerebral ischemia: a systematic review (https://www.ncbi.nlm.nih.gov/pubmed/11588338)

The methodological quality of the studies was poor… Surprisingly, we found that animal experiments and clinical studies ran simultaneously.