9  An Introduction to a Bayesian Method for Meta-Analysis: The Confidence Profile Method*

DAVID M. EDDY, VIC HASSELBLAD, and ROSS SHACHTER

*This paper was previously published in Medical Decision Making 1990;10:15-23.

The Confidence Profile Method is a form of meta-analysis. It is a Bayesian method for interpreting, adjusting, and combining evidence to estimate a probability distribution for a parameter. Examples of parameters are health outcomes, economic outcomes, and variables that might be used in models, such as the sensitivity of a diagnostic test or the prevalence of a risk factor. This paper introduces some of the mathematics, indicates the scope of the method, and gives a few examples of formulas. Additional information can be found in Eddy (1); Eddy, Hasselblad, and Shachter (2); and Shachter, Eddy, and Hasselblad (3).

BASIC FORMULAS

Let ε be the parameter of interest. Designate as X1 the results of a piece of evidence about ε, say, the results of an experiment. Our task is to estimate the distribution for ε, conditional on the results of the experiment, X1. Using the conventional notation for a conditional probability, we denote this distribution as π(ε | X1). By Bayes's formula, this posterior distribution is calculated as the product of a prior distribution for ε [which we denote as π(ε)] and the likelihood function for the experiment.

    \pi(\varepsilon \mid X_1) = k \, L(X_1 \mid \varepsilon) \, \pi(\varepsilon)    (1)

The likelihood function, L(X1 | ε), gives the likelihood of observing the actual results of the experiment (X1), conditional on any possible value of the true effect of the technology (ε). "k" is a normalizing constant.

Equation 1 is quite general. A specific example is the formula for analyzing the effect of a single diagnostic test on the probability that a patient has a disease.

    P(\text{Disease} \mid \text{Test Pos}) = \frac{P(\text{Test Pos} \mid \text{Disease}) \, P(\text{Disease})}{P(\text{Test Pos})}

The "predictive value positive" [P(Disease | Test Positive)] corresponds to the posterior distribution, the sensitivity of the test [P(Test Positive | Disease)] corresponds to the likelihood function, the prior probability of disease [P(Disease)] corresponds to the prior distribution, and the denominator [P(Test Positive)] corresponds to the normalizing constant.

Now suppose a second piece of evidence gives results X2. The updated posterior distribution for ε that incorporates both pieces of evidence can be calculated by inserting its likelihood function in the equation.

    \pi(\varepsilon \mid X_1, X_2) = k \, L_2(X_2 \mid \varepsilon, X_1) \, L_1(X_1 \mid \varepsilon) \, \pi(\varepsilon)    (2)

If the experiments are dependent, the likelihood function for the second experiment is conditional on the results of the first experiment, as shown in Equation 2. If the experiments are independent, which is very frequently the case, then:

    L_2(X_2 \mid \varepsilon, X_1) = L_2(X_2 \mid \varepsilon)

and:

    \pi(\varepsilon \mid X_1, X_2) = k \, L_2(X_2 \mid \varepsilon) \, L_1(X_1 \mid \varepsilon) \, \pi(\varepsilon)    (3)
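Equation 3 can be checked numerically. The short Python sketch below is ours, not part of the original paper, and the counts are hypothetical; it discretizes a single rate parameter on a grid and verifies that updating with two independent binomial experiments one at a time gives the same posterior as multiplying both likelihoods into the prior at once.

```python
import numpy as np
from scipy.stats import beta, binom

# Grid over a generic rate parameter in (0, 1)
theta = np.linspace(0.001, 0.999, 999)

# Jeffreys noninformative prior, Beta(1/2, 1/2)
prior = beta.pdf(theta, 0.5, 0.5)

# Two hypothetical, independent experiments: x successes out of n trials
L1 = binom.pmf(7, 10, theta)     # experiment 1 observed 7 of 10
L2 = binom.pmf(56, 100, theta)   # experiment 2 observed 56 of 100

def normalize(density, grid):
    """Rescale an unnormalized density so it integrates to 1 over the grid."""
    return density / np.trapz(density, grid)

# Sequential updating: the posterior after experiment 1 becomes the prior for experiment 2
post_seq = normalize(L2 * normalize(L1 * prior, theta), theta)

# Joint updating, as in Equation 3: k * L2 * L1 * prior
post_joint = normalize(L2 * L1 * prior, theta)

assert np.allclose(post_seq, post_joint)   # the two routes give the same posterior
```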

Biases

An important problem in the evaluation of evidence is the presence of biases. An important difference between the Confidence Profile Method and other meta-analysis techniques is the explicit modeling of biases and their incorporation in the distribution for the parameter of interest.

Again designate ε as the parameter of interest. For a variety of reasons, a particular experiment might estimate a related but slightly different parameter. Call this ε′ or the "study parameter." If the study parameter is not identical to the parameter of interest (i.e., if ε ≠ ε′), the evidence is biased.

A wide variety of factors can bias an experiment. For example, biases to internal validity of a two-arm prospective controlled trial include:

· Inaccurate measurement of outcomes
· Incorrect determination of who actually received a technology
· Crossover: some patients who are offered a technology might not receive it ("dilution") and some patients in the control group might receive it anyway ("contamination")
· Differences in the patients in the two groups ("patient-selection bias")
· Loss of patients to follow-up, and
· Uncertainty about the actual number of cases or outcomes.

Biases to external validity include:

· Differences between the population involved in the experiment and the population of interest
· Differences between the technology used in the experiment and the technology of interest (e.g., type of equipment, dose of a drug, skill of practitioners)
· Differences in follow-up times across experiments, and
· Differences in effect measures across experiments.

If biases exist, indiscriminate use of meta-analytic methods that fail to adjust for them will be incorrect. In the case of the Bayesian approach, if an experiment contains biases to internal validity, the likelihood function will apply to ε′ rather than ε. That is,

    \pi(\varepsilon \mid X) \neq k \, L(X \mid \varepsilon') \, \pi(\varepsilon)

The Confidence Profile Method can correct for this by defining a function that relates the study parameter (ε′) to the parameter of interest (ε). Call this function f(ε). This function can be substituted for ε′ in the likelihood function, restoring the correctness of Bayes's formula.

    \pi(\varepsilon \mid X) = k \, L[X \mid f(\varepsilon)] \, \pi(\varepsilon)

This last formula illustrates the three basic ingredients of the Confidence Profile Method. The method requires prior distributions, likelihood functions, and functions that describe biases. It also requires functions that define the measures of effect (which will be introduced below).

Prior Distributions

The most conservative and widely used approach uses noninformative prior distributions. The choice of a prior distribution then has a minimal effect on the posterior distribution. Berger (4) has described methods for determining noninformative prior distributions, depending on the interval over which the parameter of interest is defined. For parameters defined on the entire real line, ε ∈ (−∞, ∞), the appropriate prior distribution is π(ε) = 1. For parameters defined on the positive real line, ε ∈ (0, ∞), the appropriate prior is π(ε) = 1/ε. For probabilities defined on the interval (0, 1), the method of Jeffreys (5) gives a beta distribution with parameters 1/2, 1/2. For the multinomial model, the comparable prior for the θi ∈ (0, 1) is a Dirichlet distribution with parameters 1/2, 1/2, . . ., 1/2.
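For a single binomial rate, the Jeffreys Beta(1/2, 1/2) prior is conjugate, so the posterior is available in closed form. A minimal sketch of ours (the counts are hypothetical):

```python
from scipy.stats import beta

x, n = 53, 100                          # hypothetical: 53 successes in 100 trials
posterior = beta(x + 0.5, n - x + 0.5)  # Beta(1/2, 1/2) prior updated by a binomial likelihood

print(posterior.mean())           # posterior mean, about 0.53
print(posterior.interval(0.95))   # central 95% posterior probability interval
```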

Likelihood Functions

At the heart of the Confidence Profile Method are likelihood functions. A different likelihood function is needed for each type of experiment, each type of outcome, and each type of effect measure. The possible combinations are shown in Table 9.1.

TABLE 9.1 Likelihood functions for various types of experimental designs, outcomes, and effect measures

                          Outcomes
Designs                   Dichotomous                 Categorical                Count                      Continuous

One-Arm Prospective       Rate                        Rate, Score                Mean Count                 Mean Score, Median Score

Two-Arm Prospective       Difference, Ratio,          Difference, Ratio          Difference, Ratio          Difference, Ratio
                          Odds Ratio, % Difference

n-Arm Prospective         Coefficients of Logistic    Coefficients of Linear     Coefficients of Linear     Coefficients of Linear
                          Regression Equation, pi     Regression Equation, pi    Regression Equation, pi    Regression Equation, pi

2 x 2 Case Control        Odds Ratio                  NA                         NA                         NA

2 x n Case Control        Coefficients of Logistic    NA                         NA                         NA
                          Regression Equation, pi

Matched Case Control      Odds Ratio                  NA                         NA                         NA

Cross Sectional           Coefficients of Logistic    Coefficients of Linear     Coefficients of Linear     Coefficients of Linear
                          Regression Equation, pi     Regression Equation, pi    Regression Equation, pi    Regression Equation, pi

NA, not applicable.

There are four basic outcomes: dichotomous, categorical, counts, and continuous. There is also a large number of experimental designs, including one-arm prospective trials (e.g., clinical series), two-arm prospective trials (e.g., randomized and non-randomized controlled trials), multi-arm prospective trials (e.g., multi-dose drug trials), 2 x 2 case control studies, 2 x n case control studies, matched case control studies, and cross-sectional studies. Finally, there are a variety of measures of effect. For example, in a two-arm controlled trial involving dichotomous outcomes, the effect of the intervention can be measured as the difference in rates of the outcomes in the two groups, the ratio of rates, the odds ratio, and the percent difference. For case control studies, the measure of effect usually is the odds ratio. For multi-arm prospective studies, 2 x n case control studies, and cross-sectional studies, the parameters of interest might be the coefficients of a logistic regression equation, and so forth. The Confidence Profile Method includes likelihood functions for each type of outcome, experimental design, and effect measure (2).

ILLUSTRATION

Imagine a randomized controlled trial with 100 patients in the control group and 104 patients in the group offered treatment (see Table 9.2). Imagine that 53 of the patients in the control group survive five years, compared with 72 patients in the treatment group.

TABLE 9.2 Results of a hypothetical randomized controlled clinical trial

Study                      Controls               Treated
No.    Design              No.     Survive        No.     Survive
1      RCT                 100     53             104     72

Suppose we are interested in the probability that the difference in survival resulted from the treatment. That is, let ε be the difference in survival rates in the two groups.

To derive the appropriate likelihood function for the difference in survival, we begin by looking at the outcomes in each group. Let θc be the true survival rate in the control group, let θt be the true survival rate in the treated group, and let ε be the difference in rates caused by treatment, ε = θt − θc. A joint likelihood function for θc and θt based on observing 53 survivors of 100 patients in the control group and 72 survivors of 104 patients in the treated group can be derived from the binomial distribution.

    L(53 \text{ of } 100,\ 72 \text{ of } 104 \mid \theta_c, \theta_t) \propto \theta_c^{53} (1-\theta_c)^{47} \, \theta_t^{72} (1-\theta_t)^{32}

The probability of success in the control group (θc) is raised to the power of the observed number of successes in the control group (53), and so forth.

Using the definition of ε = θt − θc, we can solve for θt in terms of ε and θc, and substitute to obtain a joint likelihood for θc and ε.

    L(53 \text{ of } 100,\ 72 \text{ of } 104 \mid \theta_c, \varepsilon) \propto \theta_c^{53} (1-\theta_c)^{47} (\varepsilon+\theta_c)^{72} (1-\varepsilon-\theta_c)^{32}

The likelihood function for ε can be obtained by integrating over θc (6, 7), using a beta distribution with parameters α = 1/2, β = 1/2 as a noninformative prior for θc.

    L(53 \text{ of } 100,\ 72 \text{ of } 104 \mid \varepsilon) \propto \int \theta_c^{53} (1-\theta_c)^{47} (\varepsilon+\theta_c)^{72} (1-\varepsilon-\theta_c)^{32} \, \beta_{1/2,1/2}(\theta_c) \, d\theta_c    (4)

This likelihood function can be used in Bayes's formula to calculate a posterior distribution for ε. The result is illustrated in Figure 9.1. The horizontal axis shows the range of possible values for ε. Because θc and θt can each range from 0 to 1, the range of ε, which is θt − θc, is from −1 to +1. In this case the distribution for ε is centered approximately over 0.16, indicating that treatment increases the probability of survival by approximately 16 percent. The uncertainty about that estimate is indicated by the shape of the distribution.

From this distribution it is easy to calculate the probability that the true effect, ε, lies between any set of limits the assessor cares to specify. The distribution itself can be used directly in any additional calculations the assessor cares to perform (e.g., decision trees, mathematical models).

FIGURE 9.1 Probability distribution A for an increase in five-year survival as a result of treatment. Based on a randomized controlled trial of 204 patients. [Figure legend: Experiment 1, Face Value.]
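The integral in Equation 4 has no convenient closed form, but it is easy to evaluate numerically. The Python sketch below is ours, not part of the paper: it computes the likelihood for ε on a grid by integrating out θc against the Beta(1/2, 1/2) prior, then normalizes under a flat prior for ε on (−1, 1) to obtain the posterior shown in Figure 9.1.

```python
import numpy as np
from scipy.stats import beta

xc, nc = 53, 100   # control group: survivors, total (Table 9.2)
xt, nt = 72, 104   # treated group: survivors, total

eps = np.linspace(-0.999, 0.999, 999)        # grid for the effect eps = theta_t - theta_c
theta_c = np.linspace(0.0005, 0.9995, 1000)  # integration grid for the nuisance rate

def likelihood_eps(e):
    """Equation 4: integrate the joint likelihood over theta_c with a Beta(1/2, 1/2) prior."""
    theta_t = e + theta_c
    ok = (theta_t > 0.0) & (theta_t < 1.0)   # both survival rates must lie in (0, 1)
    tc, tt = theta_c[ok], theta_t[ok]
    if tc.size < 2:
        return 0.0
    integrand = (tc**xc * (1 - tc)**(nc - xc)
                 * tt**xt * (1 - tt)**(nt - xt)
                 * beta.pdf(tc, 0.5, 0.5))
    return np.trapz(integrand, tc)

like1 = np.array([likelihood_eps(e) for e in eps])

# Flat (noninformative) prior for eps on (-1, 1): the posterior is the normalized likelihood
posterior1 = like1 / np.trapz(like1, eps)

print(eps[np.argmax(posterior1)])   # roughly 0.16, as in Figure 9.1
```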

Now, suppose there is a bias in this trial. Suppose the best available information indicates that 20 percent of the patients offered the treatment did not get it. That is, there is a dilution bias of approximately 20 percent (see Table 9.3).

TABLE 9.3 Results of a hypothetical randomized controlled clinical trial with dilution

Study                      Controls               Treated
No.    Design              No.     Survive        No.     Survive        Biases
1      RCT                 100     53             104     72             Dilution 20%

If that is true, the likelihood function just derived (Equation 4) no longer estimates the parameter of interest, i.e., the effect of treatment in people who actually receive treatment. Rather, the trial estimates a different parameter, ε′, which is the effectiveness of offering treatment in the setting of the trial.

    L(53 \text{ of } 100,\ 72 \text{ of } 104 \mid \varepsilon') = \int \theta_c^{53} (1-\theta_c)^{47} (\varepsilon'+\theta_c)^{72} (1-\varepsilon'-\theta_c)^{32} \, \beta_{1/2,1/2}(\theta_c) \, d\theta_c    (5)

This likelihood function cannot be used for ε in Equation 1 without further work. To adjust for this dilution, we need a model for how dilution affects the results of the trial. As before, let θt be the true probability of survival in people who actually receive treatment. Let θt′ be the true probability of survival in the people who are offered treatment in the trial. Finally, let a be the fraction of people who are offered treatment but do not receive it. In that case, the probability of survival in patients offered treatment, θt′, is the probability of survival in people who actually receive treatment, θt, multiplied by the proportion who do receive treatment, (1 − a), plus the probability of survival in people who do not receive treatment, θc, multiplied by the proportion who do not receive it, a.

    \theta_t' = (1-a) \, \theta_t + a \, \theta_c

If the dilution is thought to be 20 percent, set a to 0.2 to obtain a formula for θt′ in terms of θt and θc.

    \theta_t' = 0.8 \, \theta_t + 0.2 \, \theta_c

Substituting for θt′ in the formula for the effect measured by the experiment, ε′ = θt′ − θc, implies that the dilution causes ε′ to be equal to 0.8ε. The formula for ε′ can then be substituted in the right side of Equation 5 to obtain a likelihood function in terms of ε, the parameter of interest.

    L(53 \text{ of } 100,\ 72 \text{ of } 104 \mid \varepsilon) = \int \theta_c^{53} (1-\theta_c)^{47} (0.8\varepsilon+\theta_c)^{72} (1-0.8\varepsilon-\theta_c)^{32} \, \beta_{1/2,1/2}(\theta_c) \, d\theta_c    (6)
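In code, the adjustment in Equation 6 only requires evaluating the face-value likelihood at the diluted effect ε′ = 0.8ε; everything else is unchanged. A continuation of the sketch above (ours):

```python
def likelihood_eps_diluted(e, a=0.2):
    """Equation 6: with a fixed dilution fraction a, the trial measures eps' = (1 - a) * eps."""
    return likelihood_eps((1.0 - a) * e)

like_fixed = np.array([likelihood_eps_diluted(e) for e in eps])
posterior_fixed = like_fixed / np.trapz(like_fixed, eps)

print(eps[np.argmax(posterior_fixed)])   # roughly 0.20 (0.16 / 0.8), as in Figure 9.2
```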

Use of this "adjusted" likelihood function in Bayes's formula results in a posterior distribution that corrects for the bias (see Figure 9.2). The result is shown as the solid line in the figure, which includes for comparison the original distribution that took the experiment at face value, without adjusting for dilution. The presence of dilution caused the experiment to underestimate the true effect of the treatment in patients who actually receive treatment; the best estimate is now a 20 percent increase in survival for people who receive treatment.

FIGURE 9.2 Probability distribution B for an increase in five-year survival as a result of treatment. Based on a randomized controlled trial of 204 patients in which 20 percent of the patients offered treatment did not actually receive treatment (dilution bias of 20 percent). [Figure legend: Experiment 1, Face Value; Adjusted for Fixed Dilution.]

Now suppose we are uncertain about the magnitude of dilution. Suppose all we can say is that we are 95 percent confident that the proportion of patients offered treatment who did not receive it (a) is between 6 percent and 42 percent (see Table 9.4).

TABLE 9.4 Results of a hypothetical randomized controlled clinical trial with dilution and uncertainty

Study                      Controls               Treated
No.    Design              No.     Survive        No.     Survive        Biases
1      RCT                 100     53             104     72             Dilution 20% (6-42%)

This uncertainty can be incorporated in the likelihood function by using a distribution for a (say, a beta distribution) and integrating over that distribution.

    L(53 \text{ of } 100,\ 72 \text{ of } 104 \mid \varepsilon) = \iint \theta_c^{53} (1-\theta_c)^{47} \left[(1-a)\varepsilon+\theta_c\right]^{72} \left[1-(1-a)\varepsilon-\theta_c\right]^{32} \, \beta_{1/2,1/2}(\theta_c) \, \beta_{a_0,b_0}(a) \, d\theta_c \, da

where β_{a0,b0}(a) is the beta distribution describing the uncertainty about the dilution fraction a.

The result is shown in Figure 9.3. The dotted line represents the posterior distribution if the study is taken at face value; the dashed line takes into account a dilution factor of 0.2; the solid line incorporates uncertainty about the magnitude of that dilution.

FIGURE 9.3 Probability distribution C for an increase in five-year survival as a result of treatment. Based on a randomized controlled trial of 204 patients assuming (1) no biases (dotted line), (2) dilution bias of 20 percent (dashed line), and (3) dilution bias of uncertain magnitude (solid line). [Figure legend: Experiment 1, Face Value; Adjusted for Fixed Dilution; Adjusted for Uncertain Dilution.]
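Numerically, the extra integration over a is one more averaging step. The sketch below continues the previous one and is ours; the Beta(4, 16) distribution for a is an illustrative stand-in, chosen only because its mean is 0.20 and its central 95 percent interval is roughly 6 to 40 percent, close to the range in Table 9.4, and is not a distribution specified by the authors.

```python
# A plausible distribution for the dilution fraction a (illustrative stand-in,
# not the authors' choice): Beta(4, 16), mean 0.20, 95% interval roughly 0.06-0.40.
a_grid = np.linspace(0.001, 0.999, 200)
a_weights = beta.pdf(a_grid, 4, 16)
a_weights /= np.trapz(a_weights, a_grid)

def likelihood_eps_uncertain(e):
    """Average the dilution-adjusted likelihood over the distribution for a (slow but transparent)."""
    vals = np.array([likelihood_eps((1.0 - a) * e) for a in a_grid])
    return np.trapz(vals * a_weights, a_grid)

like_unc = np.array([likelihood_eps_uncertain(e) for e in eps])
posterior_unc = like_unc / np.trapz(like_unc, eps)   # the solid line in Figure 9.3
```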

Additional biases and nested biases can be incorporated in the analysis. For example, in addition to dilution, there might be errors in measurement of outcomes (e.g., there might be a 5 percent probability that a patient in the control group labeled as dead from the disease actually died of other causes). Or we might suspect that patients who dilute from the group offered treatment have an inherently lower risk of the outcome. Or some who dilute might have gotten a modified treatment that was, say, halfway between the treatment offered the "treated" and control groups. As in the illustration, it is possible to incorporate uncertainty about any parameter used to define a bias.

Now consider a second experiment that has 50 patients in the control group with 23 survivors, and 50 patients in the group offered treatment with 38 survivors (see Table 9.5).

TABLE 9.5 Results of two hypothetical randomized controlled clinical trials

Study                      Controls               Treated
No.    Design              No.     Survive        No.     Survive        Biases
1      RCT                 100     53             104     72             Dilution 20% (6-42%)
2      RCT                 50      23             50      38             None

Suppose there are no biases in this experiment. The likelihood function for this experiment [L2(X2 | ε)] is also based on the binomial distribution and is derived in the same fashion as for the first experiment (Equation 4). The results are indicated in Figure 9.4, which includes for comparison the first study, after adjustment for dilution (the dotted line).

FIGURE 9.4 Probability distribution D for an increase in five-year survival as a result of treatment. Based on a randomized controlled trial of 100 patients (solid line), compared with a randomized controlled trial of 204 patients adjusted for dilution bias (dotted line). [Figure legend: Experiment 1, Adjusted for Uncertain Dilution; Experiment 2, Face Value.]

Bayes's formula can be used to combine the information in the two experiments to derive a new posterior distribution (Equation 3). This distribution is shown as the solid line in Figure 9.5, with the distributions for the two individual studies shown as the dashed lines.
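Combining the dilution-adjusted first trial with the second, unbiased trial is a direct application of Equation 3: the likelihood functions for ε are simply multiplied together with the prior. A continuation of the sketch (ours), with the second trial's likelihood built exactly as in Equation 4 but from the counts in Table 9.5:

```python
def likelihood_eps_exp2(e):
    """Face-value likelihood for the second trial (Table 9.5): 23 of 50 vs. 38 of 50."""
    theta_t = e + theta_c
    ok = (theta_t > 0.0) & (theta_t < 1.0)
    tc, tt = theta_c[ok], theta_t[ok]
    if tc.size < 2:
        return 0.0
    integrand = (tc**23 * (1 - tc)**27 * tt**38 * (1 - tt)**12
                 * beta.pdf(tc, 0.5, 0.5))
    return np.trapz(integrand, tc)

like2 = np.array([likelihood_eps_exp2(e) for e in eps])

# Equation 3: multiply the adjusted likelihood for trial 1 by the likelihood for trial 2
combined = like_unc * like2
combined /= np.trapz(combined, eps)   # posterior shown as the solid line in Figure 9.5
```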

FIGURE 9.5 Probability distribution E for an increase in five-year survival as a result of treatment. Based on the combined results of two randomized controlled trials (solid line). The probability distributions based on the results of the individual randomized controlled trials are shown as dashed lines. [Figure legend: Experiment 1, Adjusted for Uncertain Dilution; Experiment 2, Face Value; Combined Evidence.]

BASIC FORMULAS IN THE CONFIDENCE PROFILE METHOD

The Confidence Profile Method contains likelihood functions for all the experimental designs, outcome measures, and effect measures shown in Table 9.1 (2). There is no requirement that all the studies to be combined have the same design. In general, likelihood functions for studies with dichotomous outcomes are based on the binomial distribution; those with categorical outcomes are based on the multinomial distribution; those with counts are based on the Poisson distribution; and those with continuous outcomes are based on the normal distribution. This paper illustrated one likelihood function: a two-arm prospective study with dichotomous outcomes, whose effect is measured as the difference in rates of outcomes. The Confidence Profile Method also contains models for all the biases listed previously (1, 2), one of which (dilution) was illustrated in this paper. It also incorporates models for compound or nested biases (2).

ADDITIONAL FORMULAS

The Confidence Profile Method contains a number of formulas for handling problems that are more complex than the ones just described. These include a hierarchical Bayes method, formulas for analyzing indirect evidence, and formulas for analyzing technology families.

Hierarchical Bayes

The hierarchical Bayes method addresses the following problem. Again, let ε be the true effect in which we are interested. However, it is possible that Mother Nature does not have a single particular value for this effect. For example, the success rate of a surgical procedure might be slightly different in New York than in Chicago, due to factors that we cannot identify or adjust for explicitly. In such cases, it is reasonable to act as though Mother Nature has a distribution for the true effect; our task is to estimate the distribution. The hierarchical Bayes method accomplishes that (8). An analogous approach using classical statistical techniques (called the "random effects model") has been described by DerSimonian and Laird (9).
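The hierarchical Bayes formulas themselves are beyond the scope of this introduction, but the classical analogue cited above, the DerSimonian and Laird random-effects model (9), is compact enough to sketch. The code below is ours, and the study estimates and variances are hypothetical.

```python
import numpy as np

def dersimonian_laird(y, v):
    """DerSimonian-Laird random-effects estimate from study effects y and
    within-study variances v (the classical analogue, reference 9)."""
    y, v = np.asarray(y, float), np.asarray(v, float)
    w = 1.0 / v                                   # fixed-effect weights
    y_fixed = np.sum(w * y) / np.sum(w)
    Q = np.sum(w * (y - y_fixed) ** 2)            # heterogeneity statistic
    k = len(y)
    tau2 = max(0.0, (Q - (k - 1)) / (np.sum(w) - np.sum(w**2) / np.sum(w)))
    w_star = 1.0 / (v + tau2)                     # random-effects weights
    mu = np.sum(w_star * y) / np.sum(w_star)
    return mu, tau2, 1.0 / np.sum(w_star)         # pooled effect, tau^2, variance of mu

# Hypothetical risk-difference estimates and variances from three trials
mu, tau2, var_mu = dersimonian_laird([0.16, 0.30, 0.10], [0.004, 0.009, 0.006])
```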

Indirect Evidence

The problem posed by indirect evidence is that experiments frequently relate a technology (e.g., exercise), not to the health outcomes in which we are really interested (e.g., a heart attack), but to an intermediate outcome (e.g., blood pressure, obesity, or serum cholesterol). Another body of evidence must then be used to relate the intermediate outcomes to health outcomes.

    Technology → Intermediate Outcomes → Health Outcomes

The Confidence Profile Method includes formulas for combining the two bodies of evidence, including the possibility that the intermediate outcome is not a perfect indicator of the health outcome (1). For example, exercise might have an independent effect on the chance of a heart attack not mediated through a change in serum cholesterol.

Technology Families

The formulas for analyzing technology families address another common problem of technology assessment. Frequently, there are a variety of technologies for the same health problem. For example, breast cancer can be treated with many different combinations of surgery, radiation, chemotherapy, and hormonal therapy. A review of the literature might uncover studies that relate many pairs of technologies, represented as the solid lines in Figure 9.6, but not all. For example, suppose we are interested in comparing technology B with technology E, as indicated by the dashed line in Figure 9.6. Even though there is no direct evidence for this comparison, it is possible to compare these two technologies using information about other technologies that have been compared. The Confidence Profile Method contains formulas for accomplishing that (1).

FIGURE 9.6 Diagram of technology families. Solid lines indicate the existence of trials relating two technologies; dashed line indicates the two technologies to be compared. [Figure: network of technology nodes connected by solid and dashed lines.]

Research Planning

The posterior distribution for the parameter of interest, estimated from existing information, can be used as a prior distribution for calculating the probability that future experiments of various types (e.g., different designs, different sample sizes) will yield certain results. The simplest example arises when calculating the power of an experiment. Power calculations require postulation of a particular magnitude of effect; the formulas calculate the probability of a statistically significant result at a specified level of significance, conditional on the assumed magnitude of the effect. The distribution for the effect calculated by the Confidence Profile Method can be used in these calculations to obtain a power conditional on the existing evidence for the effect, rather than a hypothesized effect. Because the Confidence Profile Method delivers a distribution, it can also calculate the probability an experiment will yield results within a specified range (rather than simply a statistically significant result, as in a power calculation). For example, the Confidence Profile Method can be used to estimate the probability that a third randomized controlled trial with 100 patients in each group will show that treatment increases survival between 15 percent and 25 percent, taking into account the evidence from the first two trials.
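This kind of predictive calculation is easy to approximate by simulation once a posterior is in hand. The sketch below, a continuation of our earlier code and not the authors' method, draws effects from the combined posterior computed above, pairs them with control-group survival rates drawn from the pooled control data of the two trials (an illustrative assumption, not something specified in the paper), simulates a hypothetical third trial with 100 patients per arm, and estimates the probability that the observed difference in survival falls between 15 and 25 percent.

```python
rng = np.random.default_rng(0)
n_sim, n_arm = 20000, 100

# Draw effects from the combined posterior by inverse-CDF sampling on the grid
cdf = np.cumsum(combined)
cdf /= cdf[-1]
eps_draws = np.interp(rng.random(n_sim), cdf, eps)

# Illustrative assumption: control-group survival drawn from the pooled control
# groups of the two trials (76 survivors of 150), with a Jeffreys prior
theta_c_draws = rng.beta(76 + 0.5, 74 + 0.5, n_sim)
theta_t_draws = np.clip(theta_c_draws + eps_draws, 0.0, 1.0)

# Simulate the observed survival difference in a future 100-vs-100 trial
diff = (rng.binomial(n_arm, theta_t_draws) - rng.binomial(n_arm, theta_c_draws)) / n_arm
print(np.mean((diff >= 0.15) & (diff <= 0.25)))   # predictive probability of that result
```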

Additional techniques in the Confidence Profile Method enable calculation of the covariance matrix for all parameters incorporated in the analysis. For example, the covariance matrix indicates how a change in the variance of the distribution for a, the dilution in the first experiment, affects the posterior distribution for the parameter of interest. This feature enables calculation of the sensitivity of the result to the magnitude and range of uncertainty about any parameters used in the calculations.

IMPLEMENTATION

To apply the method, a problem must be formulated in a way that uses these ingredients accurately and efficiently, and a solution must be calculated. There are two basic approaches, which we call the stepwise approach and the integrated approach. The stepwise approach, described in this paper, basically consists of evaluating one experiment at a time, adjusting each to ensure that it estimates the parameter of interest, and combining them according to Bayes's formula. This approach works well for problems that are relatively straightforward. For more complex assessment problems, the Confidence Profile Method uses an integrated approach that takes into account the multivariate nature of many assessment problems, with dependencies between parameters, biases, and pieces of evidence. The integrated approach is extremely powerful, although more difficult to conceptualize (3).

Both approaches involve considerable mathematics. We are producing a number of aids to help make the Confidence Profile Method available. These include a book that pulls all the information together, with examples; software that implements the stepwise approach; and a computer-based, interactive tutorial that will lead a novice through a complete exposition of the method.

RELATIONSHIP TO OTHER META-ANALYSIS TECHNIQUES

The Confidence Profile Method differs from meta-analysis techniques based on classical statistics in several important ways. First, because it is based on Bayesian statistics, the Confidence Profile Method gives marginal probability distributions for the parameters of interest and, if the integrated approach is used, a joint probability distribution for all the parameters. Other meta-analysis techniques calculate a point estimate for a single effect measure and confidence intervals for the estimate under an assumption of large sample sizes. The value of probability distributions is that they can be used to calculate the probability that the "true value" of a parameter lies within any specified range. Probability distributions also can be used in models of varying complexity, including simple transformations (e.g., logs, powers), simple operations (e.g., addition, subtraction by convolution), decision trees, and stochastic models (e.g., Markov chains).

A second distinguishing feature is that the Confidence Profile Method allows the assessor to derive probability distributions for parameters that are functions of other parameters.

Classical meta-analysis, as currently formulated, enables one to combine evidence about a single parameter. For example, the production of probability distributions enables the Confidence Profile Method to analyze indirect evidence and technology families, neither of which can be analyzed by other meta-analysis techniques.

A third distinguishing feature of the Confidence Profile Method, again enabled by the use of Bayesian statistics, is the explicit modeling of biases to internal and external validity. Other meta-analysis techniques take biases into account either by a "take it or leave it" approach, or by assigning weights. In the latter approach, the assessor assigns each study a weight designed to decrease its influence compared with the other studies being synthesized. The main problem with this approach is that weights do not accurately correct for the effects of biases. Biases cause a piece of evidence to misestimate the magnitude and range of uncertainty of a parameter. The use of weights assumes the study is correctly estimating the magnitude of the parameter; the effect of the weight is only to modify the variance of the estimate. A second problem with weights is largely due to the first: there is no theoretical basis for estimating the appropriate weights to adjust for a specific bias or collection of biases. In the "take it or leave it" approach, the assessor decides whether to accept a study for inclusion in a synthesis, which is tantamount to assuming it has no biases, or decides to reject it, which is tantamount to assuming its biases invalidate its results. This is equivalent to assigning a weight of either 1 or 0.

In contrast, the Confidence Profile Method models biases explicitly and incorporates the models in the formulas that synthesize the evidence. These models allow the assessor to think about each bias individually, in natural units. For example, an assessor who wants to adjust a randomized controlled trial for dilution describes the proportion of people who "dilute," that is, who are offered treatment but do not receive it. To estimate the effect of possible errors in measurement of outcomes (e.g., errors in claims data, chart notes, or patient recall), the assessor can describe the applicable error rates. The estimates of the magnitudes of biases can be based on records, separate experiments, or if necessary, subjective judgments. The Confidence Profile Method also allows for the nesting of biases and dependencies between biases. Finally, the method enables the assessor to describe uncertainty about the magnitude of any bias. Uncertainty can be present if a bias is estimated empirically, due to the inherent imprecision of the experiment (e.g., sample size), or if a bias must be estimated subjectively. The ability of the Confidence Profile Method to incorporate subjective judgments about biases is one example of its third important feature, which is to provide a formal, axiomatically based method for incorporating subjective judgments in a meta-analysis.

The fourth main difference that distinguishes the Confidence Profile Method from other meta-analysis methods is that it is a unified set of techniques. The assessor can describe a system of equations that incorporates simultaneously all the basic parameters (e.g., population parameters), functional parameters (parameters that are functions of other parameters), experimental evidence, and subjective judgments. This enables the assessor to represent the multivariate nature of the assessment problem, taking into account dependencies between variables and pieces of evidence, and functional relationships as complicated as the assessor cares to define. The solution of the system of equations yields a joint probability distribution for all the parameters.

SUMMARY

To summarize, the Confidence Profile Method can be used to assess technologies when the available evidence involves a variety of experimental designs, types of outcomes, and effect measures; a variety of biases; combinations of biases and nested biases; uncertainty about biases; an underlying variability in the parameter of interest; indirect evidence; and technology families. The result of an analysis with the Confidence Profile Method is a posterior distribution for the parameter of interest, posterior distributions for other parameters, and a covariance matrix for all the parameters in the model. The posterior distributions incorporate all the uncertainty the assessor chooses to describe about any parameter used in the analysis.

REFERENCES

1. Eddy DM. The Confidence Profile Method: A Bayesian method for assessing health technologies. Operations Research 1989;37:210-228.
2. Eddy DM, Hasselblad V, Shachter R. A Bayesian method for synthesizing evidence: The Confidence Profile Method. International Journal of Technology Assessment in Health Care, in press.
3. Shachter R, Eddy DM, Hasselblad V. An influence diagram approach to the Confidence Profile Method for health technology assessment. Technical Report, Center for Health Policy Research and Education, Duke University, Durham, N.C., 1988.
4. Berger JO. Statistical Decision Theory and Bayesian Analysis. New York: Springer-Verlag, 1985.
5. Jeffreys H. Theory of Probability. London: Oxford University Press, 1961.
6. Basu D. On the elimination of nuisance parameters. Journal of the American Statistical Association 1977;72:355-366.
7. Berger J, Wolpert R. The Likelihood Principle (2nd edition). Hayward, Calif.: Institute of Mathematical Statistics, 1988.
8. Wolpert RL, Hasselblad V, Eddy DM. Hierarchical Bayes methods for confidence profiles. Technical Report, Center for Health Policy Research and Education, Duke University, Durham, N.C., 1987.
9. DerSimonian R, Laird NM. Meta-analysis in clinical trials. Controlled Clinical Trials 1986;7:177-188.
