A striking contrast runs through the last 60 years of biopharmaceutical discovery, research, and development. Huge scientific and technological gains should have increased the quality of academic science and raised industrial R&D efficiency. However, academia faces a "reproducibility crisis"; inflation-adjusted industrial R&D costs per novel drug increased nearly 100-fold between 1950 and 2010; and drugs are more likely to fail in clinical development today than in the 1970s. The contrast is explicable only if powerful headwinds reversed the gains and/or if many "gains" have proved illusory. However, discussions of reproducibility and R&D productivity rarely address this point explicitly. The main objectives of the primary research in this paper are: (a) to provide quantitatively and historically plausible explanations of the contrast; and (b) to identify factors to which R&D efficiency is sensitive. We present a quantitative decision-theoretic model of the R&D process. The model represents therapeutic candidates (e.g., putative drug targets, molecules in a screening library, etc.) within a "measurement space", with candidates' positions determined by their performance on a variety of assays (e.g., binding affinity, toxicity, in vivo efficacy, etc.) whose results correlate to a greater or lesser degree. We apply decision rules to segment the space, and assess the probability of correct R&D decisions. We find that when searching for rare positives (e.g., candidates that will successfully complete clinical development), changes in the predictive validity of screening and disease models that many people working in drug discovery would regard as small and/or unknowable (i.e., a 0.1 absolute change in correlation coefficient between model output and clinical outcomes in man) can offset large (e.g., 10-fold, even 100-fold) changes in models' brute-force efficiency.
We also show how validity and reproducibility correlate across a population of simulated screening and disease models. We hypothesize that screening and disease models with high predictive validity are more likely to yield good answers and good treatments, so tend to render themselves and their diseases academically and commercially redundant. Perhaps there has also been too much enthusiasm for reductionist molecular models which have insufficient predictive validity. Thus we hypothesize that the average predictive validity of the stock of academically and industrially "interesting" screening and disease models has declined over time, with even small falls able to offset large gains in scientific knowledge and brute-force efficiency. The rate of creation of valid screening and disease models may be the major constraint on R&D productivity.

Competing interests: The authors of this manuscript have the following competing interests: JWS is a director and shareholder of JW Scannell Analytics Ltd., which sells consulting services related to biopharmaceuticals. JB is a partner and employee of Clerbos LLC which sells consulting services related to systems biology. These companies did not play a role in the study design, data collection and analysis, decision to publish, or preparation of the manuscript and only provided financial support in the form of authors' salaries, dividends, research materials, and publication costs. This does not alter the authors' adherence to PLOS ONE policies on sharing data and materials.

Funding: The authors received funding from JW Scannell Analytics Ltd (JWS) and from Clerbos LLC (JB). JW Scannell Analytics Ltd provided research materials for JWS and paid the PLOS ONE Publication Fee for the paper. Clerbos LLC provided support in the form of salary for JB. The funders did not have any additional role in the study design, data collection and analysis, decision to publish, or preparation of the manuscript. The specific roles of these authors are articulated in the 'author contributions' section.

Copyright: © 2016 Scannell, Bosley. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

Readers who are less familiar with statistics and decision theory (DT) may prefer to read the Discussion section before returning to the Methods and Results. The Discussion is in three parts. Part 1 frames headwinds to R&D productivity in terms of the progressive exploitation, exhaustion, and abandonment of disease models with high predictive validity (PV). Part 2 considers the reproducibility crisis in similar terms. Part 3 sets out some practical suggestions to improve PV evaluation and raise PV.

We believe that a variety of standard tools from the fields of decision theory and decision analysis (DT) [27] [28] [29] [30] [31] shed light on the headwinds and may help distinguish the kind of gains that are likely to be real. The Methods and Results section of the paper presents a DT-based model of biopharmaceutical R&D and quantitative analyses that explore the factors to which R&D decisions are sensitive. The model is described in terms of commercial R&D, but we think the framework and the results are generalizable to the academic setting, and to "translation" from academia to industry; in fact, to many situations where positives (e.g., good drug targets, good candidate therapeutic mechanisms) are rare and where a large universe of possibilities is filtered via a series of measurements and decisions to a small set of possibilities. In statistical or DT terms, the mechanics of the model are fairly standard. The model is a classifier in the presence of multiple, or multistep, predictors. However, the application is, we think, novel.

The scope, quality and cost efficiency of the scientific and technological tools that are widely believed to be important for progress in biopharmaceutical discovery and research have improved spectacularly. To quote a review from 2012 [1]: "… combinatorial chemistry increased the number of drug-like molecules that could be synthesized per chemist per year by perhaps 800 times through the 1980s and 1990s [2] [3] [4] and greatly increased the size of chemical libraries [5]. DNA sequencing has become over a billion times faster since the first genome sequences were determined in the 1970s [6] [7] aiding the identification of new drug targets. It now takes at least three orders of magnitude fewer man-hours to calculate three-dimensional protein structure via x-ray crystallography than it did 50 years ago [8] [9], and databases of three-dimensional protein structure have 300 times more entries than they did 25 years ago [10] [9], facilitating the identification of improved lead compounds through structure-guided strategies. High throughput screening (HTS) has resulted in a tenfold reduction in the cost of testing compound libraries against protein targets since the mid-1990s [11]. Added to this are new inventions (such as the entire field of biotechnology, computational drug design and screening, and transgenic mice) and advances in scientific knowledge (such as an understanding of disease mechanisms, new drug targets, biomarkers, and surrogate endpoints)."

These kinds of improvements should have allowed larger biological and chemical spaces to be searched for therapeutic conjunctions with ever higher reliability and reproducibility, and at lower unit cost. That is, after all, why many of the improvements were funded in the first place. However, in contrast [12], many results derived with today's powerful tools appear irreproducible [13] [14] [15] [16]; today's drug candidates are more likely to fail in clinical trials than those in the 1970s [17] [18]; R&D costs per drug approved roughly doubled every ~9 years between 1950 and 2010 [19] [20] [1], with costs dominated by the cost of failures [21]; and some now even doubt the economic viability of R&D in much of the drug industry [22] [23].

The contrasts [12] between huge gains in input efficiency and quality, on one hand, and a reproducibility crisis and a trend towards uneconomic industrial R&D on the other, are only explicable if powerful headwinds have outweighed the gains [1], or if many of the "gains" have been illusory [24] [25] [26].

Methods and Results

The Compounding Effects of True and False Positive Rates

Fig 1A shows a series of decisions acting on an initial sample of therapeutic candidates, of which A would be approved if fully developed and then scrutinized by the regulator, and of which U would not. The objective of the subsequent R&D process is to increase the ratio of approvable to unapprovable candidates. The ratios of approvable to unapprovable candidates through the process are given by Eqs 1–4. The equations show the importance of the spread between the TPR and FPR of each decision, and the compounding effect of sequential TPRs and FPRs, in achieving the objective.

Q_start = A / U (1)

Q_D→P = Q_start × (TPR_D / FPR_D) (2)

Q_P→C = Q_D→P × (TPR_P / FPR_P) (3)

Q_C→A = Q_P→C × (TPR_C / FPR_C) (4)

Here, Q_start is the ratio of approvable to unapprovable candidates in the initial starting set; Q_D→P is the ratio among candidates leaving Discovery and entering Preclinical; Q_P→C is the ratio leaving Preclinical and entering Clinical Trials; and Q_C→A is the ratio leaving Clinical Trials. TPR_D and FPR_D are true and false positive rates for classifier D using the gold standard of regulatory approval (the FDA) as the reference (Fig 1A); TPR_P and FPR_P are stepwise true and false positive rates for classifier P using the FDA as the reference; etc. With a series of high TPRs and low FPRs, Q will tend to be high. With a series of low TPRs and high FPRs, Q will tend to be low. While this is clearly apparent in some R&D productivity analyses [61] [49], the importance of the TPR versus FPR spread is not captured by other sets of metrics that have been influential in the drug industry [21] [65]. As Cook et al. [65] point out, management metrics that focus on the quantity of R&D activity, not on decision quality, have sometimes proven counterproductive. Eqs 1–4 also show the importance of starting with the right set of therapeutic candidates (i.e., a sufficiently high A to U ratio).
This topic is already the focus of a large body of literature in, for example, the fields of chemoinformatics, screening library design, and structure-based design, and we do not consider it further in this paper.
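The compounding of stepwise true and false positive rates is easy to check numerically. The sketch below (our own illustrative code; the function name and the TPR/FPR values are invented for the example, not taken from the paper) compounds the approvable-to-unapprovable odds Q through a chain of decision steps.

```python
def odds_through_pipeline(a, u, steps):
    """Compound the odds as in Eqs 1-4: starting with A approvable and
    U unapprovable candidates, each decision step with a given (TPR, FPR)
    multiplies the approvable:unapprovable odds Q by TPR/FPR."""
    q = a / u  # Q_start
    for tpr, fpr in steps:
        q *= tpr / fpr
    return q

# Hypothetical example: 1 approvable candidate per 1000 unapprovable ones,
# then three decision steps, each with TPR = 0.6 and FPR = 0.1.
# Each step multiplies the odds by 6, so three steps multiply them by 216.
q_final = odds_through_pipeline(1, 1000, [(0.6, 0.1)] * 3)
```

With these made-up numbers, Q rises from 0.001 to about 0.216: a large improvement, but still roughly four unapprovable candidates per approvable one, which illustrates why a wide TPR-versus-FPR spread at every step matters when positives are rare.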

Presentation of the Quantitative Decision Model

We have produced a quantitative decision model that can be applied to the process shown in Fig 1. Each decision or reference variable (the random variables X, Y, Z, …, R; Table 1) corresponds to one axis of a multidimensional measurement space. The individual scores of the therapeutic candidates, molecules a, b, c, d, etc., on each variable are coordinates in the space. Thus candidate molecule a occupies position (x_a, y_a, z_a, …), molecule b occupies position (x_b, y_b, z_b, …), etc. One can apply one or more decision thresholds (x_t, y_t, z_t, etc.), or other decision rules, to divide the space and to assess the quantitative relationships between decision performance (e.g., PPV, FDR, or TPR) and a variety of factors, such as the proportion of positives at the start of the process (i.e., A/(A + U) in Fig 1), the throughput or brute-force power of each predictive model (PM), and the degree to which each PM yields decision variables that are correlated with other decision variables and with R, the gold standard reference variable (Fig 1B). For the analyses shown in the body of this paper, the probability density of molecules within the measurement space is a multivariate normal distribution. More formally, we use a random vector of standardized covariates x = [X, Y, Z, …, R] distributed as a multivariate normal distribution, where μ = [0, 0, 0, …, 0] and the covariance matrix, Σ, is equal to the correlation matrix, corr[X, Y, Z, …, R]:

x ~ N(μ, Σ), where μ = [0, 0, 0, …, 0] and Σ = corr[X, Y, Z, …, R] (5)

We have repeated the analysis for other probability density functions, with sometimes identical, often similar, but sometimes predictably different results (S2 File). The model can be applied to multiple decision variables and classification steps (see later), but we start with a single decision step (Fig 2). Here, the random vector x = [Y, R] is distributed as a bivariate normal distribution, and the correlation coefficient between decision variable Y and reference variable R is ρ_Y,R.
The correlation parameter, ρ_Y,R (Fig 2, Eq 8), operationalizes the concept of the predictive validity (PV) of the decision variable. When the correlation between the reference variable and the decision variable is high, the ordering of candidates on the decision variable will tend to match the ordering of candidates on the reference variable. It would, of course, be possible to operationalize the concept of PV in other ways (Table 1).


Fig 2. Quantitative classifier model. Bivariate normal probability density function determined by the correlation, ρ_Y,R, between decision variable, Y, and reference variable, R. Lighter colours indicate high probability density (candidate molecules more likely to lie here), and darker colours indicate low probability density (molecules less likely to lie here). The units on the horizontal and vertical axes are one standard deviation. We apply a decision threshold, y_t (vertical dotted line), to the decision variable and then apply a reference test and a reference threshold, r_t (horizontal dotted line), to molecules that exceed the decision threshold y_t. In the sensitivity analyses (see later), decision and reference thresholds are varied, as is ρ_Y,R. True positives (TP) and false positives (FP) correspond to the probability mass in the upper right and lower right quadrants, respectively. (A) When ρ_Y,R is high, PPV is high. (B) When ρ_Y,R is low, PPV tends to be low. https://doi.org/10.1371/journal.pone.0147215.g002

A molecule will be classified as a yes, and receive further scrutiny, if its score on the decision variable meets or exceeds a threshold y_t (Fig 2). The decision threshold y_t can be regarded both as a measure of the rate of attrition or stringency of the decision and as a measure of throughput. This point may not be obvious, but it is important. As y_t rises, fewer candidate molecules are deemed to be yeses, so one has to screen more therapeutic candidates for each yes. When y_t = 2.32 standard deviation units (horizontal axis, Fig 2), only the top hundredth of molecules will be yeses; one would expect to screen one hundred candidates per yes. When y_t = 3.09 standard deviation units (Fig 2), only the top thousandth of molecules will be yeses; one would expect to screen one thousand molecules per yes.
Thus, higher decision thresholds require higher throughput, and it is higher throughput that makes more stringent decision thresholds feasible. In some parts of the paper we express stringency or throughput in terms of the probability that a randomly selected candidate lies at or above the decision threshold, y_t. This is shown in Eq 6, where Φ is the cumulative distribution function of the standard normal distribution:

P(Y ≥ y_t) = 1 − Φ(y_t) (6)

To be deemed a true positive, a candidate that is a yes on the basis of its score on the decision variable must then meet or exceed a threshold r_t on the gold standard reference variable R. When r_t is high, fewer candidate molecules within the set being searched by the R&D process have the potential to succeed (i.e., A/(A + U) declines as r_t increases). Our definition of r_t is statistical and is not discussed in terms of a specific trial endpoint or experimental outcome. However, r_t is realistic in the sense that it will tend to move up and down with common-sense measures of regulatory stringency, or with a common-sense view of the competitive intensity within a therapy area. In some parts of the paper we express the difficulty of the search process in terms of the probability that a randomly selected candidate lies at or above the reference threshold, r_t:

P(R ≥ r_t) = 1 − Φ(r_t) (7)
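The correspondence between threshold and throughput in Eq 6 can be checked directly with Python's standard library; this is a minimal sketch (the function name is ours) that inverts Eq 6 to recover the decision threshold implied by a given screening throughput.

```python
from statistics import NormalDist

_nd = NormalDist()  # standard normal: mean 0, standard deviation 1

def threshold_for_throughput(n_per_yes):
    """Invert Eq 6: return the decision threshold y_t (in standard
    deviation units) at which only the top 1/n_per_yes of candidates
    are yeses, i.e. P(Y >= y_t) = 1/n_per_yes."""
    return _nd.inv_cdf(1.0 - 1.0 / n_per_yes)
```

Evaluating `threshold_for_throughput(100)` gives roughly 2.33 and `threshold_for_throughput(1000)` gives roughly 3.09, matching the thresholds quoted above for one yes per hundred and per thousand candidates screened.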

Measures of Decision Quality

The proportion of molecules which meets or crosses the decision threshold, y_t, and which receives further scrutiny, corresponding to the probability mass to the right of the vertical dotted line in Fig 2, is:

P(Y ≥ y_t) = ∫_{y_t}^∞ ∫_{−∞}^∞ f(y, r) dr dy (8)

where f(y, r) is the bivariate normal probability density function of Eq 5. The proportion of true positives, corresponding to the probability mass in the upper right quadrant of Fig 2, is given by:

P(Y ≥ y_t, R ≥ r_t) = ∫_{y_t}^∞ ∫_{r_t}^∞ f(y, r) dr dy (9)

The proportion of progression decisions which yield true positives is the positive predictive value, or PPV. The PPV of the classifier is:

PPV = P(Y ≥ y_t, R ≥ r_t) / P(Y ≥ y_t) (10)

PPV is an important measure of decision quality in drug R&D because the unit costs per surviving therapeutic candidate tend to rise through the R&D process [21]. Thus, real-world portfolio management processes often seek to maximize PPV. Furthermore, PPV is equal to (1 − FDR), where FDR is the false discovery rate. Health authorities such as the FDA and the European Medicines Agency (EMA) often seek to minimize the FDR, which is equivalent to maximizing PPV.
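The quantities in Eqs 8–10 can be illustrated with a small Monte Carlo sketch (our own illustrative code, not part of the paper's analysis; the function name, sample size, and threshold values are invented for the example). Candidates are drawn from a standard bivariate normal with correlation ρ_Y,R, and the yes rate, true-positive mass, and PPV are estimated by counting.

```python
import math
import random

def simulate_ppv(rho, y_t, r_t, n=200_000, seed=1):
    """Monte Carlo estimates of the Fig 2 quantities for a standard
    bivariate normal with correlation rho: returns the yes rate
    P(Y >= y_t) (Eq 8), the true-positive mass P(Y >= y_t, R >= r_t)
    (Eq 9), and PPV (Eq 10)."""
    rng = random.Random(seed)
    yes = tp = 0
    for _ in range(n):
        y = rng.gauss(0.0, 1.0)
        # Construct R so that corr(Y, R) = rho and R is standard normal.
        r = rho * y + math.sqrt(1.0 - rho * rho) * rng.gauss(0.0, 1.0)
        if y >= y_t:
            yes += 1
            if r >= r_t:
                tp += 1
    return yes / n, tp / n, (tp / yes if yes else float("nan"))
```

For example, with a common-positive reference threshold (r_t = 0.5, as in Fig 3A) and a decision threshold of y_t = 1.5, the estimated PPV at ρ_Y,R = 0.95 is close to 1, while at ρ_Y,R = 0.4 it is markedly lower, echoing the high-PV versus low-PV contrast in Fig 2.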

A Single Decision Step

Fig 3 illustrates the performance of a single decision step. When PV is high, the classifier can effectively distinguish between positives and negatives. When PV is low, it cannot. Fig 3 also illustrates some other typical classifier properties. There is usually a trade-off between TPR and FPR. When the classifier is stringent (i.e., applies a high decision threshold, which in turn requires a high throughput), the FPR tends to be low, but the TPR tends to be low too.

PowerPoint slide PNG larger image

Fig 3. Predictive validity and classifier performance. (A) The bivariate normal probability density function for decision variable Y (horizontal axis) and reference variable R (vertical axis). The correlation between Y and R is high (ρ_Y,R = 0.95), so the decision variable has high PV. The graph shows only the positive quadrant of the distribution. The reference threshold, expressed here in units of standard deviation, is r_t = 0.5 (dotted line), so positives are common, accounting for P(R ≥ r_t) ≈ 30% of the probability mass. (B) shows TPR (solid line) and FPR (dotted line) as the decision threshold, y_t, varies. At some thresholds, the spread between the TPR and FPR is wide. (C) shows PPV vs. decision threshold, y_t. (D) to (F) repeat the analyses with a decision variable with lower PV (ρ_Y,R = 0.4). PPV declines vs. panel (C), but PPV remains high because positives are common. (G) to (I) repeat the analysis at ρ_Y,R = 0.95 but with a high reference threshold (2.5 standard deviation units) and rare positives (P(R ≥ r_t) ≈ 0.6% of the probability mass). It is possible to achieve a high PPV, but only at a high decision threshold when the TPR is low, which would require screening a large number of items per positive detected. (J) to (L) show the situation with the same high reference threshold (i.e., rare positives) but with a decision variable with low PV. In this case, PPV is low, even with a very high decision threshold and a very low TPR. https://doi.org/10.1371/journal.pone.0147215.g003

Fig 3 shows that stringency tends to raise PPV (and lower FDR), but setting a high decision threshold may not, in practical terms at least, rescue the performance of a classifier if the decision variable has low PV (Fig 3L).
A more effective way to tune the decision process to raise Q, the ratio of approvable to unapprovable candidates at each step (Eqs 1–4), may be to improve the predictive validity of PMs (or to choose therapeutic problems where PV is likely to be high). Fig 3 also shows that decision performance is sensitive to the reference threshold. When r_t increases and positives become rarer, decision performance tends to become worse. Thus, as therapeutic standards within a therapy area rise, a constant set of PMs may appear to perform less well.

Sensitivity Analysis of a Single Decision Step

Fig 4 shows the PPV of the classifier as y_t (stringency or throughput) and ρ_Y,R (predictive validity of the decision variable) vary. It shows two conditions: one where positives are relatively common (P(R ≥ r_t) = 0.01, or one percent of the candidates entering the classifier), and one where positives are rare (P(R ≥ r_t) = 10^-5, or one hundred-thousandth of the candidates entering the classifier).


Fig 4. Decision performance as y_t (throughput) and ρ_Y,R (predictive validity) vary. Shading shows the PPV of the classifier (log10 units, with lighter shades showing better performance). The vertical axis represents both decision threshold and screening throughput. The scale is in log10 units: 7 represents a throughput of 10^7 and a decision threshold that accepts only the top 10^-7 fraction of candidates (P(Y ≥ y_t) = 10^-7, Eq 6); 6 represents a throughput of 10^6 and a decision threshold that accepts only the top 10^-6 fraction of candidates (P(Y ≥ y_t) = 10^-6, Eq 6); etc. The horizontal axis represents PV as the correlation coefficient, ρ_Y,R, between Y and R, with the right-hand end of each axis representing high PV (ρ_Y,R = 0.98) and the left-hand end representing low PV (ρ_Y,R = 0). Our choice of scale for each axis is discussed in the main text. In (A), positives are relatively common: P(R ≥ r_t) = 0.01, or one percent of the candidates entering the classifier. In (B), positives are relatively rare: P(R ≥ r_t) = 10^-5, or one hundred-thousandth of the candidates entering the classifier. The spacing and orientation of the contours show the degree to which PPV changes with throughput and with ρ_Y,R. PPV is relatively sensitive to throughput when ρ_Y,R is high and when positives are very rare (lower right-hand side of panel B). However, PPV is relatively insensitive to throughput when ρ_Y,R is low (left-hand side of both panels). For much of the parameter space illustrated, a 0.1 absolute change in ρ_Y,R (e.g., from 0.4 to 0.5, or 0.5 to 0.6 on the horizontal axis) has a larger effect on PPV than a 10x change in throughput (e.g., from 4 log10 units to 5 log10 units on the vertical axis).
https://doi.org/10.1371/journal.pone.0147215.g004

For the single decision step, one can imagine the decision variable, Y, as representing an aggregate measure derived from the progressive screening, optimisation, and preclinical assessment of a large number of potential drug candidates. We think such aggregation is reasonable for the purposes of illustration, for two reasons. First, the FPR and TPR of a chain of classifiers are the products of the individual stepwise FPRs and TPRs (Eqs 1–4). Second, we find similar results for combinations of decision variables across multiple classification steps (see later). Note also that the results we show use parameters that are relevant for the discovery and preclinical phases of commercial drug R&D, from which few candidates are selected for clinical trials and from which few randomly selected candidates would succeed in trials (i.e., P(R ≥ r_t) ≤ 0.1 and P(Y ≥ y_t) ≤ 0.1). The general model would be applicable to situations where many or even most molecules are positives, in late-stage clinical development, for example. However, the quantitative results and conclusions would be different. Furthermore, there is already a mature literature that applies DT-related ideas to clinical development (see, for example: [35] [37] [36] [49]). The scale and range of the vertical axis in Fig 4 can be regarded as representing the range in brute-force power or efficiency of PMs in drug R&D. One can conceptualize this in several ways, such as the growth over time in the size of compound libraries that can be used in a screening campaign (e.g., from in vivo screening in the 1930s to high throughput screening circa 2015), or as the range in the cost efficiency (1/unit cost per therapeutic candidate tested) of PMs today (e.g., from human trials, via in vivo primate disease models, via in vitro cellular models, to in silico protein-structure-based screening) [1] [74]. Several of the results in Fig 4 are unsurprising.
First, PPV increases as ρ_Y,R, the correlation between Y and R, increases. Second, PPV increases if one applies very high y_t thresholds (very high throughputs). Third, PPV is higher when the reference threshold for positives, r_t, is lower. In other words, and rather obviously, there will be a lot of correct decisions to initiate clinical trials when we have PMs with very high PV, which can reasonably be applied to a very large number of therapeutic candidates, a high proportion of which would have been good enough in the first place to yield successful clinical outcomes. However, there are results which are less obvious but which appear important for the conduct of decision processes such as drug R&D. The first is the strength of the effect of ρ_Y,R on PPV (see the orientation of the PPV contours in Fig 4, and note both the logarithmic vertical axis and the logarithmic colour scale). For much of the parameter space illustrated, a 0.1 absolute change in ρ_Y,R, the correlation coefficient, has a larger effect on PPV than a ten-fold (1 log10 unit) change in throughput (vertical axis). We suggest that for many, perhaps most, people working with PMs in drug discovery, a 0.1 absolute change in the correlation between the output of two PMs, or between the decision variable from a PM and the reference variable, would often, even if it were known or knowable, be viewed as small: a difference that would be lost in the general experimental noise. On the other hand, most people would regard a 10-fold increase in throughput or a 10-fold decrease in the unit cost of a PM as a large change. The second important result is the interaction between y_t and ρ_Y,R on PPV (see how the orientation of the contours changes in Fig 4). Increasing throughput by several orders of magnitude has a minimal positive effect on PPV when ρ_Y,R is very low. Increasing throughput has a large positive effect on PPV only when ρ_Y,R is high.
Modest gains in ρ_Y,R can have a very large positive effect on PPV when throughput is high. In practical terms, there is little point in investing to increase the throughput of a poor PM or the stringency of the classifier based on that PM. It makes more sense to invest to achieve high PV first. Furthermore, increasing the throughput of a good PM, or the speed or stringency of R&D decisions, only makes sense if such changes do not cause a meaningful reduction in PV.
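The sensitivity result can be reproduced with exact tail probabilities rather than simulation. The sketch below (our own illustrative code, not from the paper's supplementary material; function name, step counts, and parameter values are ours) computes the single-step PPV of Eq 10 by numerically integrating the bivariate normal upper tail, which remains accurate even when positives are far too rare for Monte Carlo sampling.

```python
from statistics import NormalDist

_nd = NormalDist()  # standard normal

def upper_tail_ppv(rho, p_y, p_r, steps=20_000, span=12.0):
    """PPV = P(R >= r_t | Y >= y_t) for a standard bivariate normal with
    correlation rho, with thresholds set so that P(Y >= y_t) = p_y
    (throughput, Eq 6) and P(R >= r_t) = p_r (rarity of positives, Eq 7).
    Uses the fact that R | Y = y ~ N(rho*y, 1 - rho^2), so the joint tail
    (Eq 9) is a one-dimensional integral, evaluated here by the
    trapezoidal rule over [y_t, y_t + span]."""
    y_t = _nd.inv_cdf(1.0 - p_y)
    r_t = _nd.inv_cdf(1.0 - p_r)
    s = (1.0 - rho * rho) ** 0.5
    h = span / steps
    total = 0.0
    for i in range(steps + 1):
        y = y_t + i * h
        weight = 0.5 if i in (0, steps) else 1.0
        total += weight * _nd.pdf(y) * (1.0 - _nd.cdf((r_t - rho * y) / s))
    joint = total * h          # P(Y >= y_t, R >= r_t), Eq 9
    return joint / p_y         # Eq 10
```

As a check on the claim above: with rare positives (p_r = 10^-5, as in Fig 4B), starting from ρ_Y,R = 0.4 at a throughput of 10^5, raising ρ_Y,R by 0.1 (to 0.5) at the same throughput yields a higher PPV than keeping ρ_Y,R = 0.4 and raising throughput 10-fold to 10^6.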