Appendix 1: Numeric Method for Calculating VoI Statistics

The numeric example presented here is based around a hypothetical (and much simplified) decision model constructed in Microsoft Excel (ESM, Appendix 3). The layout of the spreadsheet model is broadly consistent with that used in examples elsewhere (e.g. [33]). Some familiarity with Excel and macro programming is required to follow the steps described; many online guides cover these topics.

Suppose a new treatment has been developed for a given disease. Current data suggest it yields a slightly lower response rate, but, owing to its mechanism of action, it avoids the risk of side effects.

The decision problem is structured as a decision tree (Fig. 5). A patient prescribed ‘Old’ has a 20 % probability of experiencing side effects and 80 % probability of responding to treatment. A patient prescribed ‘New’ is not at risk of the side effects, but has only a 75 % probability of responding. A patient responding to treatment is assumed to have a remaining quality-adjusted life expectancy equivalent to eight QALYs. Those who do not respond accrue only four QALYs. Patients experiencing side effects from the Old treatment lose an additional two QALYs. Old costs £500 per patient, plus an additional £1,000 to treat side effects should they occur, whilst New costs £2,500.
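For readers who prefer code to spreadsheets, the decision tree can be sketched in a few lines of Python using the point estimates given above (the function and variable names are ours, not the spreadsheet's):

```python
# A point-estimate sketch of the decision tree, using the values in the text.
LAMBDA = 20_000  # willingness-to-pay threshold, GBP per QALY


def model_old(p_resp=0.80, p_se=0.20):
    # QALYs from responders, non-responders, less the side-effect loss
    qalys = p_resp * 8 + (1 - p_resp) * 4 - p_se * 2
    cost = 500 + p_se * 1_000  # drug cost plus expected side-effect cost
    return cost, qalys


def model_new(p_resp=0.75):
    qalys = p_resp * 8 + (1 - p_resp) * 4  # no side-effect risk
    return 2_500, qalys


def net_benefit(cost, qalys, wtp=LAMBDA):
    return wtp * qalys - cost


cost_old, qalys_old = model_old()
cost_new, qalys_new = model_new()
print(round(net_benefit(cost_old, qalys_old)))  # Old: 20,000 x 6.8 - 700
print(round(net_benefit(cost_new, qalys_new)))  # New: 20,000 x 7.0 - 2,500
```

At the £20,000 threshold used later, the point estimates give a net benefit of £135,300 for Old and £137,500 for New, so New would be chosen. These figures differ from the simulation means reported below, which reflect the full Table 1 distributions rather than single point values.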

Input parameters and associated uncertainty are described in Table 1 as well as the complete worked example in the ESM, Appendix 3. Parameter uncertainty is propagated through the model to characterise decision uncertainty using Monte Carlo simulation, generating an empirical distribution of incremental net benefit. Details of how to do this are available from numerous textbooks (e.g. [24, 34]).
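The propagation step can be sketched as follows. The distributional choices below are illustrative assumptions only (Table 1 is not reproduced here), and for simplicity New's response rate is sampled directly rather than via an odds ratio:

```python
# Monte Carlo PSA sketch: sample inputs, run the model, record net benefit.
# All distributions here are assumed for illustration, not taken from Table 1.
import numpy as np

rng = np.random.default_rng(1)
N_SIM = 1_000
LAMBDA = 20_000

nb = np.empty((N_SIM, 2))                   # columns: Old, New
for i in range(N_SIM):
    p_resp_old = rng.beta(80, 20)           # assumed prior, mean 0.80
    p_se = rng.beta(20, 80)                 # assumed prior, mean 0.20
    p_resp_new = rng.beta(75, 25)           # assumed prior, mean 0.75
    q_resp = rng.normal(8.0, 0.5)           # assumed: QALYs if responding
    q_nonresp = rng.normal(4.0, 0.5)        # assumed: QALYs if not
    c_se = rng.gamma(shape=100, scale=10)   # assumed: mean GBP 1,000

    nb[i, 0] = LAMBDA * (p_resp_old * q_resp + (1 - p_resp_old) * q_nonresp
                         - p_se * 2) - (500 + p_se * c_se)
    nb[i, 1] = LAMBDA * (p_resp_new * q_resp
                         + (1 - p_resp_new) * q_nonresp) - 2_500

print(nb.mean(axis=0))  # expected net benefit per treatment
```

The resulting matrix of net benefits, one row per simulation, is exactly the structure summarised in Table 2 below.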

Table 1 Input parameters to decision model

Expected Value of Perfect Information

The EVPI can be expressed as the expected maximum net benefit with perfect information less the maximum expected net benefit with current information (Eq. 42).

Example

Table 2 illustrates the method using the results of just five simulations (1,000 or more are generally recommended to characterise model uncertainty fully, although the number required depends on several factors, including the complexity of the model and the level of uncertainty in the input parameters). The net benefit for each treatment (New and Old) at a threshold of £20,000 is calculated as per Eq. 43 for each of the five simulations. The final row is the mean (expected) net benefit for each treatment. In this case, New has the highest (i.e., maximum) expected net benefit (£142,430 vs £141,125), so the decision is to choose New, and ‘N’ is entered in the final row of the column ‘Decision’. In three of the five iterations, choosing New would indeed have been correct; however, in iterations 3 and 5, Old yielded the higher net benefit. There is thus an opportunity loss: in iteration 3, the maximum net benefit would have been achieved with Old (£149,245), but as New is chosen, there is a loss of £3,403 (£149,245 − £145,842, shown in column ‘Opp. Loss’). Likewise, in iteration 5, the opportunity loss is £4,837. The expected opportunity loss is therefore £1,647 (final row of column ‘Opp. Loss’), which is by definition the per-patient EVPI. This must be multiplied by the beneficial population to estimate the overall EVPI to society.
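The arithmetic of the table reduces to one line: per-patient EVPI is the mean of the per-iteration maxima less the maximum of the per-treatment means, which equals the expected opportunity loss of the chosen option. A sketch with illustrative numbers (not those of Table 2):

```python
# EVPI from a matrix of simulated net benefits (rows: PSA iterations;
# columns: Old, New). The figures are illustrative only.
import numpy as np

nb = np.array([
    [141_000.0, 143_500.0],
    [138_500.0, 144_000.0],
    [149_000.0, 146_000.0],
    [139_500.0, 142_500.0],
    [147_500.0, 142_500.0],
])

mean_nb = nb.mean(axis=0)       # expected net benefit per treatment
best = mean_nb.argmax()         # decision under current information
evpi = nb.max(axis=1).mean() - mean_nb.max()

# Equivalent view: expected opportunity loss of the chosen treatment.
opp_loss = nb.max(axis=1) - nb[:, best]
print(evpi, opp_loss.mean())    # the two quantities coincide
```

Here New has the higher mean, is chosen, and loses out in two of the five rows, giving a per-patient EVPI of £1,600 by either route.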

Table 2 EVPI via simulation

Worksheet ‘PSA’ in ESM, Appendix 3 illustrates the EVPI calculation with 1,000 Monte Carlo simulations. The macro ‘ew_PSA’ samples from the input distributions, recalculates the model and inserts the resulting cost and QALYs gained from each intervention into columns B to E. The net benefit at the threshold set in cell I1 is calculated in columns I and J, with the maximum shown in column M. The expectations are in cells I3, J3 and M3, respectively, with the EVPI calculated in cell N3. (Note, running the macro ‘ew_CEAC’ is required to update the cost-effectiveness acceptability curve displayed on the worksheet.)

Expected Value of Perfect Parameter Information

The EVPPI is defined in Eq. 44. Note, firstly, the similarity with Eq. 42 and, secondly, the nested expectations; the process for estimating the EVPPI is shown in Fig. 6. The first step is to sample a value from the target parameter or group of parameters, ϕ. This is one possible realisation of the ‘true’ value of the parameter(s). A value is then sampled from the remaining parameters, ψ. The parameter set is then inserted into the model and the net benefit of each treatment calculated. A new set of values for ψ is then drawn and, along with the previously drawn values of ϕ, inserted into the model again and the net benefit recorded. This ‘inner loop’ is repeated a ‘large’ number of times (e.g., 1,000 or 5,000), from which the expected net benefit of each treatment is calculated and kept. The outer loop then iterates: a new (set of) value(s) for ϕ is drawn, and the inner loop repeated. After repeating the outer loop a ‘large’ number of times, there will be many estimates of the (expected) net benefit of each treatment. Taking the expectation of these and choosing the maximum gives the maximum expected net benefit with current information; i.e., the second term inside the brackets of Eq. 44. The expected maximum net benefit (the first term inside the brackets of Eq. 44) is calculated as for the EVPI: the expectation, across outer iterations, of the maximum expected net benefit from each iteration.
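The nested loops can be sketched as follows. The sampling functions and decision model here are hypothetical stand-ins, not the spreadsheet model; for simplicity, New's net benefit is treated as known:

```python
# Two-level Monte Carlo EVPPI sketch for a generic model.
import numpy as np

rng = np.random.default_rng(0)


def sample_phi():
    return rng.beta(80, 20)              # target parameter, e.g. response rate


def sample_psi():
    return rng.normal(8.0, 0.5)          # remaining parameter, e.g. QALYs


def nb_model(phi, psi):
    # hypothetical net benefits (Old, New); New held fixed for simplicity
    return np.array([20_000 * phi * psi - 700, 137_500.0])


N_OUTER, N_INNER = 200, 200
inner_means = np.empty((N_OUTER, 2))
for o in range(N_OUTER):
    phi = sample_phi()                   # one realisation of the 'truth'
    inner = np.empty((N_INNER, 2))
    for i in range(N_INNER):
        inner[i] = nb_model(phi, sample_psi())   # remaining uncertainty
    inner_means[o] = inner.mean(axis=0)          # E_psi[NB | phi]

# mean of per-phi maxima, less maximum of the overall means (per patient)
evppi = inner_means.max(axis=1).mean() - inner_means.mean(axis=0).max()
print(evppi)
```

The outer loop fixes a candidate ‘truth’ for ϕ; the inner loop integrates out ψ; the final line applies the two terms of Eq. 44 to the stored inner-loop means.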

Fig. 6 Process for EVPPI. ϕ set of target parameter(s) of interest, ψ remaining parameters in decision model. EVPI expected value of perfect information, EVPPI expected value of perfect parameter information, NB net benefit

Example

The summary table for calculating the EVPPI has exactly the same format as that for the EVPI (Table 2), where each row represents one iteration of the outer loop and the numbers recorded are the expected net benefits estimated from the inner loop. ESM, Appendix 3 illustrates a worked example with 100 outer and 1,000 inner loops, and a macro ‘ew_EVPPI’ which calculates the EVPPI for three groups of parameters: probabilities, costs and QALYs. The macro first samples a value for the response rate on Old, the odds ratio of response on New and the risk of side effects on Old (sheet ‘Inputs’, cells J5, J7 and J11). The model is then run for 1,000 iterations holding these values constant whilst values for costs and QALYs are sampled, with the results entered into the sheet ‘PSA’ as before. The expectations, calculated in cells I3 and I4 on sheet ‘PSA’, are then copied to cells B5 and B6 on sheet ‘EVPPI’. As described above, the outer loop then reiterates with a new set of values chosen for the probabilities. After 100 outer loops, the expectation of the expected net benefits for each treatment is calculated in sheet ‘EVPPI’, cells B3 and B4, with the expected maximum in cell B5. The EVPPI is then calculated as per the EVPI and appears in cell D2. When repeated for the three groups of parameters, the results can be shown as a chart (Fig. 7). In this case, the EVPPI is concentrated in the probabilities, with very little value in reducing uncertainty in the QALYs and none at all in reducing uncertainty in the costs. Again, this per-patient EVPPI must be multiplied by the beneficial population to estimate the societal EVPPI.

Fig. 7 EVPPI per patient by group of model parameters @ £20,000 per QALY gained

Expected Value of Sample Information

The EVSI can be considered as the expected maximum expected net benefit given the new information yielded by a study of sample size n per arm, less the maximum expected net benefit with current information, with this difference multiplied by the beneficial population less those enrolled in the study (Eq. 45). The second term in the equation is common to Eqs. 42 and 44. The first term is again calculated via simulation with nested inner and outer loops.

The general approach is to repeatedly predict the results of a trial collecting data on the target parameter(s) based on the prior distributions, incorporating these into a predicted posterior, which is then sampled from repeatedly (along with the other input parameters in the model; Fig. 8). This entire process must be repeated for a wide range of values of n. The sampling distribution of the data is chosen so that the posterior remains in the same family as the prior (a relationship known as conjugacy). A detailed discussion of conjugate distributions may be found elsewhere [14, 35], but Ades et al. [36] provide a useful set of algorithms for a number of distributional forms.
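For example, with a beta prior and binomial data the predicted posterior (‘preposterior’) is available in closed form, so predicted trial results can be folded straight into the prior parameters; a minimal sketch with illustrative numbers:

```python
# Beta-binomial conjugacy: a Beta(a, b) prior updated with r responders
# out of n trial patients gives a Beta(a + r, b + n - r) posterior.
def beta_update(a, b, r, n):
    return a + r, b + (n - r)


# prior mean 0.80 from 100 notional observations; a predicted trial in
# which 30 of 40 patients respond (illustrative numbers only)
a1, b1 = beta_update(80, 20, 30, 40)
print(a1, b1)            # -> 110 30
print(a1 / (a1 + b1))    # preposterior mean, roughly 0.786
```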

Fig. 8 Process for EVSI. ϕ set of target parameter(s) of interest, ψ remaining parameters in decision model

Example

ESM, Appendix 3 illustrates an example implementing the EVSI for a beta, normal and gamma distribution, as well as methods for calculating the EVSI for an odds ratio (see worksheet 'EVSI').

The baseline response rate illustrates the method for calculating EVSI with a beta prior and binomial likelihood. The prior information, based on 100 observations, is in cells B5:B6. These are simply taken from cells G5:H5 on worksheet ‘Inputs’ and shown in Table 1. A possible value for the ‘true mean’ is sampled in cell D5. The macro ‘ew_EVSI_BLResp’ inserts a proposed sample size for a new study (ranging between 1 and 2,000). Cell F5 samples a possible trial result from the binomial likelihood using the ‘BINOM.INV’ function, as a possible number of responders out of the total ‘n’. The preposterior distribution is defined in cells G5:H5, simply by adding the number of responders to the prior ‘a’ parameter and the number of non-responders to the ‘b’ parameter. The macro then inserts this preposterior into cells G5:H5 in worksheet ‘Inputs’, runs the probabilistic sensitivity analysis (macro ‘ew_PSA’), and records the expected net benefit from cells I3:J3 in worksheet ‘PSA’ in cells I5:J5 of worksheet ‘EVSI’. Cell K5 then takes the maximum of the two. The macro next copies and pastes cells B5:K5 to row 6 before sampling a new possible value for the ‘true mean’ in cell D5 and repeating a ‘large’ number of times (currently set to 500, but in practice many more than this may be required). The EVSI is then calculated in cell K2, based on the summaries calculated in cells I3:K3. The macro copies the EVSI to cell N5.

At this point the entire process is repeated with a new proposed sample size (in the example a study with n = 10).
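The loop just described can be sketched end-to-end in Python. The decision model here is a hypothetical stand-in evaluated at the preposterior mean (which, for a model linear in the response rate, matches the expectation the inner PSA loop would estimate); the Beta(80, 20) prior reflects the 100 prior observations mentioned above:

```python
# EVSI sketch: beta prior, binomial likelihood, conjugate preposterior.
import numpy as np

rng = np.random.default_rng(2)
A0, B0 = 80, 20          # prior for the baseline response rate
LAMBDA = 20_000


def expected_nb(p_resp):
    # hypothetical stand-in for re-running the PSA: returns (Old, New);
    # New's expected net benefit is held fixed for simplicity
    return np.array([LAMBDA * (p_resp * 8 + (1 - p_resp) * 4 - 0.2 * 2) - 700,
                     137_500.0])


def evsi(n_trial, n_sim=2_000):
    maxima = np.empty(n_sim)
    means = np.zeros(2)
    for s in range(n_sim):
        true_p = rng.beta(A0, B0)             # a possible 'true' value
        r = rng.binomial(n_trial, true_p)     # predicted trial result
        a1, b1 = A0 + r, B0 + n_trial - r     # preposterior parameters
        nb = expected_nb(a1 / (a1 + b1))      # model at preposterior mean
        means += nb / n_sim
        maxima[s] = nb.max()
    return maxima.mean() - means.max()        # per-patient EVSI


for n in (10, 100, 1_000):
    print(n, evsi(n))
```

In this stand-in model a trial of n = 10 can never shift the preposterior mean far enough to overturn the decision, so its EVSI is zero; larger trials yield a positive EVSI that grows towards the corresponding EVPPI.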

Columns P:AF illustrate the same process but for a normally distributed parameter (here the QALYs gained for a responder). The macro programming is identical (macro ‘ew_EVSI_QALYsResp’). The difference is in the calculation of the preposterior distribution. Suppose the prior data are based on a sample size of n = 100 (cell S5). Where the sample size is known, it can be entered directly. However, where the sample size is unknown, or the prior is based on a structured elicitation exercise (e.g., [37]), a notional sample size can be inferred from the square of the ratio of the standard deviation to the standard error, as described in the manuscript for the analytic approach (the standard error being elicited along with the mean, and the standard deviation being estimated from a review of the literature on similar parameters).
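The notional sample size calculation is a one-liner; the figures below are hypothetical:

```python
# Notional prior sample size from an elicited standard error (se) and a
# literature-based standard deviation (sd): since se = sd / sqrt(n),
# the implied n is (sd / se) squared.
def notional_n(sd, se):
    return (sd / se) ** 2


# e.g. elicited se of 0.2 QALYs with a literature sd of 2.0 QALYs
print(notional_n(sd=2.0, se=0.2))  # implies roughly 100 observations
```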

The third example estimates the EVSI of a cost study looking at treatment side effects with a gamma distribution. This is most easily handled by sampling from the distribution of the natural log of prior costs, as this will be approximately normal [38, 39], and this is indeed what the code in cells AH5:AU5 does. The method is then identical to that described for the QALY data above. Where the parameter of interest is count data (e.g., health service contacts), it is possible to program Excel to sample from a Poisson distribution, but the command is not currently inbuilt; however, code to do this is available online (e.g., [40]).
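The log-scale trick can be sketched as follows; the cost mean and standard deviation are illustrative assumptions:

```python
# Working on the log scale: if log(C) is approximately Normal(mu, sigma),
# the normal-normal updating used for the QALY example applies to mu
# directly. Moment-matching mu and sigma from a cost mean and sd:
import math
import random


def lognormal_params(mean, sd):
    sigma2 = math.log(1 + (sd / mean) ** 2)
    return math.log(mean) - sigma2 / 2, math.sqrt(sigma2)


mu, sigma = lognormal_params(mean=1_000.0, sd=300.0)  # illustrative values
random.seed(0)
cost_draw = math.exp(random.gauss(mu, sigma))  # one sampled side-effect cost
```

By construction, exp(mu + sigma²/2) recovers the original mean of £1,000, so draws on the natural scale remain consistent with the prior.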

The final example estimates the EVSI of a trial to predict the odds ratio of response to New versus Old. This is done as a two-stage approach [36] whereby firstly the number of responders with baseline treatment (Old) is predicted from the respective prior, then the number of responders in the patients treated with New is predicted by sampling from the log odds ratio. The prior and the data are combined and the resulting preposterior parameters of the log odds ratio are then calculated as per cells CA5:CB5.
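One pass of the two-stage prediction can be sketched as follows; the priors, arm size, and continuity correction are illustrative assumptions rather than the spreadsheet's values:

```python
# Two-stage prediction of a trial estimating the odds ratio of response.
import numpy as np

rng = np.random.default_rng(3)
n_arm = 200
prior_lor_mean = np.log((0.75 / 0.25) / (0.80 / 0.20))  # assumed prior log OR
prior_lor_sd = 0.3                                      # assumed prior sd

# Stage 1: predict responders on Old from its (assumed) beta prior.
p_old = rng.beta(80, 20)
r_old = rng.binomial(n_arm, p_old)

# Stage 2: sample a log odds ratio, convert to a response rate on New,
# and predict responders on New.
lor = rng.normal(prior_lor_mean, prior_lor_sd)
odds_new = (p_old / (1 - p_old)) * np.exp(lor)
p_new = odds_new / (1 + odds_new)
r_new = rng.binomial(n_arm, p_new)

# Observed log OR and its variance from the predicted 2x2 table
# (adding 0.5 to each cell guards against zero counts).
a, b = r_new + 0.5, n_arm - r_new + 0.5
c, d = r_old + 0.5, n_arm - r_old + 0.5
lor_hat = np.log((a / b) / (c / d))
var_hat = 1 / a + 1 / b + 1 / c + 1 / d

# Precision-weighted preposterior for the log odds ratio.
w0, w1 = 1 / prior_lor_sd ** 2, 1 / var_hat
post_mean = (w0 * prior_lor_mean + w1 * lor_hat) / (w0 + w1)
post_sd = (w0 + w1) ** -0.5
```

The preposterior standard deviation is necessarily smaller than the prior's, reflecting the information the predicted trial would add.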

Expected Net Gain of Sampling

The approach to calculating ENGS is identical to the analytic method described in the manuscript Sect. 3.4.
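In outline, the ENGS at a given n is the population EVSI less the cost of the study, and the optimal design maximises it over n. The EVSI curve and cost figures below are purely hypothetical placeholders for the quantities derived above:

```python
# ENGS sketch: population EVSI less the cost of the study, maximised over
# the proposed sample size n (per arm, so 2n patients are enrolled).
def engs(n, evsi_per_patient, population, fixed_cost, cost_per_patient):
    return ((population - 2 * n) * evsi_per_patient(n)
            - (fixed_cost + 2 * n * cost_per_patient))


def evsi_curve(n):
    return 400 * n / (n + 100)   # assumed saturating EVSI curve (GBP)


n_star = max(range(10, 2_000, 10),
             key=lambda n: engs(n, evsi_curve, population=100_000,
                                fixed_cost=50_000, cost_per_patient=1_000))
print(n_star)  # the candidate sample size with the greatest expected net gain
```

The diminishing EVSI curve against linearly rising study costs is what produces an interior optimum for n.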

$${\text{EVPI}} = N\left[ {E_{\theta } \max_{j} {\text{NB}}\left( {j,\theta } \right) - \max_{j} E_{\theta } {\text{NB}}\left( {j,\theta } \right)} \right]$$ (42)

where: N = beneficial population (manuscript Eq. 26), \(\theta\) = set of input parameters to the decision model, j = intervention/arm, \({\text{NB}}_{j}\) = net benefit from treatment j, derived from Eqs. 1 and 2 as:

$${\text{NB}}_{j} = \lambda E_{j} - C_{j}$$ (43)

$${\text{EVPPI}}_{\varphi } = N\left[ {E_{\varphi } \max_{j} E_{\psi |\varphi } {\text{NB}}\left( {j,\varphi ,\psi } \right) - \max_{j} E_{\theta } {\text{NB}}\left( {j,\theta } \right)} \right]$$ (44)

where: \(\varphi\) is the parameter(s) of interest, \(\psi\) the remaining parameters such that \(\varphi \cup \psi = \theta\); N, j and \({\text{NB}}_{j}\) are as per Eq. 27