

9
Uncertainty

The need to confront uncertainty in risk assessment has changed little since the 1983 NRC report Risk Assessment in the Federal Government. That report found that:

The dominant analytic difficulty [in decision-making based on risk assessments] is pervasive uncertainty. … there is often great uncertainty in estimates of the types, probability, and magnitude of health effects associated with a chemical agent, of the economic effects of a proposed regulatory action, and of the extent of current and possible future human exposures. These problems have no immediate solutions, given the many gaps in our understanding of the causal mechanisms of carcinogenesis and other health effects and in our ability to ascertain the nature or extent of the effects associated with specific exposures.

Those gaps in our knowledge remain, and yield only with difficulty to new scientific findings. But a powerful solution exists to some of the difficulties caused by the gaps: the systematic analysis of the sources, nature, and implications of the uncertainties they create.

Context Of Uncertainty Analysis

EPA decision-makers have long recognized the usefulness of uncertainty analysis. As indicated by former EPA Administrator William Ruckelshaus (1984):

First, we must insist on risk calculations being expressed as distributions of estimates and not as magic numbers that can be manipulated without regard to what they really mean. We must try to display more realistic estimates of risk to show a range of probabilities. To help do this, we need new tools for quantifying and ordering sources of uncertainty and for putting them into perspective.

Ten years later, however, EPA has made little headway in replacing a risk-assessment "culture" based on "magic numbers" with one based on information about the range of risk values consistent with our current knowledge and the lack thereof. As we discuss in more depth in Chapter 5, EPA has been skeptical about the usefulness of uncertainty analysis. For example, in its guidance to those conducting risk assessments for Superfund sites (EPA, 1991f), the agency concludes that quantitative uncertainty assessment is usually not practical or necessary for site risk assessments. The same guidance questions the value and accuracy of such assessments, suggesting that they are too data-intensive and "can lead one into a false sense of certainty." In direct contrast, the committee believes that uncertainty analysis is the only way to combat the "false sense of certainty" that is caused by a refusal to acknowledge and (attempt to) quantify the uncertainty in risk predictions.

This chapter first discusses some of the tools that can be used to quantify uncertainty. The remaining sections discuss specific concerns about EPA's current practices, suggest alternatives, and present the committee's recommendations about how EPA should handle uncertainty analysis in the future.

Nature Of Uncertainty

Uncertainty can be defined as a lack of precise knowledge, whether qualitative or quantitative, as to what the truth is. That lack of knowledge creates an intellectual problem (we do not know what the "scientific truth" is) and a practical problem (we need to determine how to assess and deal with risk in light of that uncertainty). This chapter focuses on the practical problem, which the 1983 report did not shed much light on and which EPA has only recently begun to address in any specific way.

This chapter takes the view that uncertainty is always with us and that it is crucial to learn how to conduct risk assessment in the face of it. Scientific truth is always somewhat uncertain and is subject to revision as new understanding develops, but the uncertainty in quantitative health risk assessment might be uniquely large relative to other science-policy areas, and it requires special attention by risk analysts. These analysts need to ask questions such as: What should we do in the face of uncertainty? How should it be identified and managed in a risk assessment? How should an understanding of uncertainty be conveyed to risk managers and to the public?

EPA has recognized the need for more and better uncertainty assessment (see the EPA memorandum in Appendix B), and other investigators have begun to make substantial progress with the difficult computations that are often required (Monte Carlo methods, etc.).

However, it appears that these changes have not yet affected the day-to-day work of EPA.

Some scientists, mirroring the concerns expressed by EPA, are reluctant to quantify uncertainty, out of concern that uncertainty analysis could reduce confidence in a risk assessment. That attitude may be misguided. The very heart of risk assessment is the responsibility to use whatever information is at hand or can be generated to produce a number, a range, a probability distribution—whatever best expresses the present state of knowledge about the effects of some hazard in some specified setting. Simply to ignore the uncertainty in any process is almost sure to leave critical parts of the process incompletely examined, and hence to increase the probability of generating a risk estimate that is incorrect, incomplete, or misleading.

For example, past analyses of the uncertainty about the carcinogenic potency of saccharin showed that potency estimates could vary by a factor as large as 10¹⁰. However, this example is not representative of the ranges in potency estimates when appropriate models are compared. Potency estimates can vary by a factor of 10¹⁰ only if one allows the choice of some models that are generally recognized as having no biological plausibility, and only if one uses those models for a very large extrapolation from high to low doses. The judicious application of concepts of plausibility and parsimony can eliminate some clearly inappropriate models and leave a large but perhaps less daunting range of uncertainties. What is important, in this context of enormous uncertainty, is not the best estimate or even the ends of this 10¹⁰-fold range, but the best-informed estimate of the likelihood that the true value is in a region where one rather than another remedial action (or none) is appropriate. Is there a small chance that the true risk is as large as 10⁻², and what would be the risk-management implications of this very small probability of very large harm? Questions such as these are what uncertainty analysis is largely about. Improvements in the understanding of methods for uncertainty analysis—as well as advances in toxicology, pharmacokinetics, and exposure assessment—now allow uncertainty analysis to provide a much more accurate, and perhaps less daunting, picture of what we know and do not know than in the past.
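The tail-probability question in the preceding paragraph can be made concrete in a few lines. The sketch below is our own illustration, not from the committee's analyses: it assumes the uncertainty in a risk estimate can be summarized as a lognormal distribution, with an assumed median of 10⁻⁵ and an assumed geometric standard deviation of 10.

```python
# A minimal sketch, assuming a lognormal uncertainty distribution for risk.
# The median and geometric standard deviation below are illustrative
# assumptions, not values from this report.
import numpy as np
from scipy.stats import lognorm

median_risk = 1e-5        # assumed median of the uncertainty distribution
sigma = np.log(10.0)      # assumed geometric standard deviation of 10

risk = lognorm(s=sigma, scale=median_risk)

# The decision-relevant quantity: how likely is the "very large harm" region?
print(f"P(true risk >= 1e-2) = {risk.sf(1e-2):.1e}")   # ~1e-3
# How much probability lies within a factor of 10 of the median?
print(f"P(1e-6 <= risk <= 1e-4) = {risk.cdf(1e-4) - risk.cdf(1e-6):.2f}")  # ~0.68
```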

Taxonomies

Before discussing the practical applications of uncertainty analysis, it may be best to step back and discuss it as an intellectual endeavor. The problem of uncertainty in risk assessment is large, complex, and nearly intractable unless it is divided into smaller, more manageable topics. One way to do so, as seen in Table 9-1 (Bogen, 1990a), is to classify sources of uncertainty according to the step of the risk-assessment process in which they occur. A more abstract and generalized approach, preferred by some scientists, is to partition all uncertainties into the three categories of bias, randomness, and true variability.
TABLE 9-1 Some Generic Sources of Uncertainty in Risk Assessment

I. HAZARD IDENTIFICATION
Unidentified hazards
Definition of incidence of an outcome in a given study (positive-negative association of incidence with exposure)
Different study results
Different study qualities
  — conduct
  — definition of control population
  — physical-chemical similarity of chemical studied to that of concern
Different study types
  — prospective, case-control, bioassay, in vivo screen, in vitro screen
  — test species, strain, sex, system
  — exposure route, duration
Extrapolation of available evidence to target human population

II. DOSE-RESPONSE ASSESSMENT
Extrapolation of tested doses to human doses
Definition of "positive responses" in a given study
  — independent vs. joint events
  — continuous vs. dichotomous input response data
Parameter estimation
Different dose-response sets
  — results
  — qualities
  — types
Model selection for low-dose risk extrapolation
  — low-dose functional behavior of dose-response relationship (threshold, sublinear, linear, supralinear, flexible)
  — role of time (dose frequency, rate, duration; age at exposure; fraction of lifetime exposed)
  — pharmacokinetic model of effective dose as a function of applied dose
  — impact of competing risks

III. EXPOSURE ASSESSMENT
Contamination-scenario characterization (production, distribution, domestic and industrial storage and use, disposal, environmental transport, transformation and decay, geographic bounds, temporal bounds)
  — environmental-fate model selection (structural error)
  — parameter estimation error
  — field measurement error
Exposure-scenario characterization
  — exposure-route identification (dermal, respiratory, dietary)
  — exposure-dynamics model (absorption, intake processes)
Target-population identification
  — potentially exposed populations
  — population stability over time
Integrated exposure profile

IV. RISK CHARACTERIZATION
Component uncertainties
  — hazard identification
  — dose-response assessment
  — exposure assessment

SOURCE: Adapted from Bogen, 1990a.
This method of classifying uncertainty is used by some research methodologists because it provides a complete partition of the types of uncertainty, and it might be more productive intellectually: bias is almost entirely a product of study design and performance; randomness, a problem of sample size and measurement imprecision; and variability, a matter for study by risk assessors but for resolution in risk management (see Chapter 10). However, a third approach to categorizing uncertainty may be more practical than this scheme, and yet less peculiar to environmental risk assessment than the taxonomy in Table 9-1. This third approach, a version of which can be found in EPA's new exposure guidelines (EPA, 1992a) and in the general literature on risk-assessment uncertainty (Finkel, 1990; Morgan and Henrion, 1990), is adopted here to facilitate communication and understanding in light of present EPA practice. Although the committee makes no formal recommendation on which taxonomy to use, EPA staff might want to consider the alternative classification above (bias, randomness, and variability) to supplement their current approach in future documents.
Our preferred taxonomy consists of:

• Parameter uncertainty. Uncertainties in parameter estimates stem from a variety of sources. Some arise from measurement errors; these in turn can involve random errors in analytic devices (e.g., the imprecision of continuous monitors that measure stack emissions) or systematic biases (e.g., measuring inhalation from indoor ambient air without considering the effect of volatilization of contaminants from hot water used in showering). A second type of parameter uncertainty arises when generic or surrogate data are used instead of analyzing the desired parameter directly (e.g., the use of standard emission factors for industrial processes). Other potential sources of error in parameter estimates are misclassification (e.g., incorrect assignment of exposures of subjects in historical epidemiologic studies because of faulty or ambiguous information), random sampling error (e.g., estimation of risk to laboratory animals or exposed workers from outcomes observed in only a small sample), and nonrepresentativeness (e.g., developing emission factors for dry cleaners from a sample that included predominantly "dirty" plants because of some quirk in the study design).¹

• Model uncertainty. These uncertainties arise from gaps in the scientific theory that is required to make predictions on the basis of causal inferences. For example, the central controversy over the validity of the linear, no-threshold model for carcinogen dose-response is an argument over model uncertainty. Common types of model uncertainty include relationship errors (e.g., incorrectly inferring the basis for correlations between chemical structure and biologic activity) and errors introduced by oversimplified representations of reality (e.g., representing a three-dimensional aquifer with a two-dimensional mathematical model). Moreover, any model can be incomplete if it excludes one or more relevant variables (e.g., relating asbestos to lung cancer without considering the effect of smoking on both those exposed to asbestos and those unexposed), uses surrogate variables for ones that cannot be measured (e.g., using wind speed at the nearest airport as a proxy for wind speed at the facility site), or fails to account for correlations that cause seemingly unrelated events to occur much more frequently than would be expected by chance (e.g., two separate components of a nuclear plant are both missing a particular washer because the same newly hired assembler put both of them together). Another aspect of model uncertainty concerns the extent of aggregation used in the model. For example, to fit data on the exhalation of volatile compounds adequately in physiologically based pharmacokinetic (PBPK) models, it is sometimes necessary to break the fat compartment into separate compartments reflecting subcutaneous and abdominal fat (Fiserova-Bergerova, 1992). In the absence of enough data to indicate the inadequacy of using a single aggregated variable (total body fat), the modeler might construct an unreliable model. The uncertainty in risk that results from uncertainty about models might be as high as a factor of 1,000 or even greater, even if the same data are used to determine the results from each model. This can occur, for example, when the analyst must choose between a linear multistage model and a threshold model for cancer dose-response relations, as the sketch below illustrates.
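A toy calculation (our own illustration, with hypothetical dose-response numbers chosen only to make the arithmetic visible) shows how model choice alone can open a 1,000-fold gap between low-dose risk estimates fitted to the same data:

```python
# Illustrative sketch of model uncertainty (all numbers hypothetical): two
# dose-response models calibrated to the same bioassay observation -- 10%
# excess risk at 1.0 mg/kg-day -- diverge sharply when extrapolated far
# below the tested dose.
observed_dose, observed_risk = 1.0, 0.10

def linear_no_threshold(d):
    # risk proportional to dose; slope fixed by the bioassay point
    return observed_risk * (d / observed_dose)

def quadratic(d):
    # a sublinear (dose-squared) alternative fitted to the same point
    return observed_risk * (d / observed_dose) ** 2

low_dose = 1e-3  # human exposure 1,000-fold below the tested dose
r_lin, r_quad = linear_no_threshold(low_dose), quadratic(low_dose)
print(f"linear:    {r_lin:.1e}")            # 1.0e-04
print(f"quadratic: {r_quad:.1e}")           # 1.0e-07
print(f"ratio:     {r_lin / r_quad:.0f}x")  # 1000x from model choice alone
```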
Problems With EPA's Current Approach To Uncertainty

EPA's current practice on uncertainty is described elsewhere in this report, especially in Chapter 5, as part of the risk-characterization process. Overall, EPA tends at best to take a qualitative approach to uncertainty analysis, and one that emphasizes model uncertainty rather than parameter uncertainty. The uncertainties in the models and the assumptions made are listed (or perhaps described in a narrative way) at each step of the process; these are then presented in a nonquantitative statement to the decision-maker.

Quantitative uncertainty analysis is not well explored at EPA, and there is little internal guidance for EPA staff about how to evaluate and express uncertainty. One useful exception is the analysis conducted for the National Emission Standards for Hazardous Air Pollutants (NESHAPS) radionuclides document (described in Chapter 5), which provides a good initial example of how uncertainty analysis could be conducted for the exposure portion of a risk assessment. Other EPA efforts, however, have been primarily qualitative rather than quantitative. When uncertainty is analyzed at EPA, the analysis tends to be piecemeal and highly focused on the sensitivity of the assessment to the accuracy of a few specified assumptions, rather than a full exploration of the process from data collection to final risk assessment, and the results are not used in a systematic fashion to help decision-makers.

The major difficulty with EPA's current approach is that it does not supplant or supplement artificially precise single estimates of risk ("point estimates") with ranges of values or quantitative descriptions of uncertainty, and it often lacks even qualitative statements of uncertainty. This obscures the uncertainties inherent in risk estimation (Paustenbach, 1989; Finkel, 1990), although the uncertainties themselves do not go away. Risk assessments that do not include sufficient attention to uncertainty are vulnerable to four common and potentially serious pitfalls (adapted from Finkel, 1990):

1. They do not allow for optimal weighing of the probabilities and consequences of error, so that policy-makers can make informed risk-management decisions. An adequate risk characterization will clarify the extent of uncertainty in the estimates so that better-informed choices can be made.

2. They do not permit a reliable comparison of alternative decisions, so that appropriate priorities can be established by policy-makers comparing several different risks.
3. They fail to communicate to decision-makers and the public the range of control options that would be compatible with different assessments of the true state of nature. This makes informed dialogue between assessors and stakeholders less likely, and it can erode credibility as stakeholders react to the overconfidence inherent in risk assessments that produce only point estimates.

4. They preclude the opportunity to identify research initiatives that might reduce uncertainty and thereby reduce the probability or the impact of being caught by surprise.

Perhaps most fundamentally, without uncertainty analysis it can be quite difficult to determine how conservative an estimate is. In an ideal risk assessment, a complete uncertainty analysis would provide a risk manager with the ability to estimate risk for each person in a given population under both actual and projected scenarios of exposure; it would also estimate the uncertainty in each prediction in quantitative, probabilistic terms. But even a less exhaustive treatment of uncertainty will serve a very important purpose: it can reveal whether the point estimate used to summarize the uncertain risk is "conservative," and if so, to what extent. Although the choice of the "level of conservatism" is a risk-management prerogative, managers might be operating in the dark about how "conservative" their choices are if the uncertainty (and hence the degree to which the risk estimate used may fall above or below the true value) is ignored or assumed, rather than calculated.

Some Alternatives To EPA's Approach

A useful alternative to EPA's current approach is to set as a goal a quantitative assessment of uncertainty. Table 9-2, from Resources for the Future's Center for Risk Management, suggests a sequence of steps that the agency could follow to generate a quantitative uncertainty estimate (a brief sketch of steps 2-5 follows the table). Determining the uncertainty in the estimate of risk associated with a source probably requires an understanding of the uncertainty in each of the elements shown in Table 9-3. The following pages describe more fully the development of probabilities and the method of using probabilities as inputs into uncertainty-analysis models.
TABLE 9-2 Steps That Could Improve a Quantitative Uncertainty Estimate

1. Determine the desired measure of risk (e.g., mortality, life-years lost, risk to the individual who is maximally exposed, number of persons at more than an arbitrary "unacceptable" risk). More than one measure will often be desired, but the remaining steps will need to be followed de novo for each measure.

2. Specify one or more "risk equations," mathematical relationships that express the risk measure in terms of its components. For example, R = C × I × P (risk equals concentration times intake times potency) is a simple "risk equation" with three independent variables. Care must be taken to avoid both an excess and an insufficiency of detail.

3. Generate an uncertainty distribution for each component. This will generally involve the use of analogy, statistical inference, expert opinion, or a combination of these.

4. Combine the individual distributions into a composite uncertainty distribution. This step will often require Monte Carlo simulation (described later).

5. "Recalibrate" the uncertainty distributions. At this point, inferential analysis should enter or re-enter the process to corroborate or correct the outputs of step 4. In practice, it might involve altering the range of the distribution to account for dependence among the variables or truncating the distributions to exclude extreme values that are physically or logically impossible. Repeat steps 3, 4, and 5 as needed.

6. Summarize the output, highlighting important implications for risk management. Here the decision-maker and the uncertainty analyst need to work together (or at least to understand each other's needs and limitations). In all written and oral presentations, the analyst should strive to ensure that the manager understands the following four aspects of the results:

  • Their implications for supplanting any point estimate that might have been produced without consideration of uncertainty. In particular, presentations of uncertainty will help in advancing the debate over whether the standardized procedures used to generate point estimates of risk are too "conservative" in general or in particular cases.

  • Their insights regarding the balance between the costs of overestimating and underestimating risk (i.e., the shape and breadth of the uncertainty distribution informs the manager about how prudent various risk estimates might be).

  • Their sensitivity to fundamentally unresolved scientific controversies.

  • Their implications for research, identifying which uncertainties are most important and which are amenable to reduction by directed research efforts. As part of this process, the analyst should attempt to quantify in absolute terms how much total effort might be put into reducing uncertainty before a control action is implemented (i.e., estimate the value of information using standard techniques).

SOURCE: Adapted from Finkel, 1990.
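The sketch below walks through steps 2-5 of Table 9-2 for the simple risk equation R = C × I × P. All of the component distributions are assumptions invented for illustration; a real application would derive them by analogy, statistical inference, or expert elicitation, as step 3 directs.

```python
# A sketch of steps 2-5 of Table 9-2 under assumed inputs. The risk equation
# is R = C x I x P (step 2); the lognormal medians and spreads are invented
# for illustration (step 3).
import numpy as np

rng = np.random.default_rng(seed=1)
n = 100_000

conc = rng.lognormal(np.log(2.0), np.log(1.5), n)      # C: concentration, mg/m^3
intake = rng.lognormal(np.log(20.0), np.log(1.3), n)   # I: intake, m^3/day
potency = rng.lognormal(np.log(1e-6), np.log(3.0), n)  # P: potency, per mg/day

risk = conc * intake * potency        # step 4: Monte Carlo combination

# Step 5: "recalibrate" by truncating logically impossible values
# (a probability of harm cannot exceed 1).
risk = np.clip(risk, 0.0, 1.0)

# Step 6: summarize for the risk manager.
for q in (5, 50, 95):
    print(f"{q:2d}th percentile: {np.percentile(risk, q):.2e}")
print(f"mean:            {np.mean(risk):.2e}")
```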
Probability Distributions

A probability density function (PDF) describes the uncertainty, encompassing objective or subjective probability, or both, over all possible values of risk. When the PDF is presented as a smooth curve, the area under the curve between any two points is the probability that the true value lies between those points. A cumulative distribution function (CDF), which is the integral or sum of the PDF up to each point, shows the probability that a variable is equal to or less than each of the possible values it can take on. These distributions can sometimes be estimated empirically with statistical techniques that can adequately analyze large sets of data. Sometimes, especially when data are sparse, a normal or lognormal distribution is assumed and its mean and variance (or standard deviation) are estimated from the available data. When data are in fact normally distributed over the whole range of possible values, the mean and variance completely characterize the distribution, including the PDF and CDF. Thus, with certain assumptions (such as normality), only a few points might be needed to estimate the whole distribution for a given variable, although more points will both improve the representation of the uncertainty and allow examination of the normality assumption itself. However, the problem remains that apparently minor deviations in the extreme tails may have major implications for risk assessment (Finkel, 1990), and the assumption of normality itself may be inappropriate.
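The PDF/CDF relationship described above can be checked directly. In the sketch below (the lognormal parameters are assumed for illustration), numerically integrating the PDF between two points and differencing the CDF at those points give the same probability.

```python
# A small check of the PDF/CDF relationship, assuming an illustrative
# lognormal uncertainty distribution (median 1e-5, sigma of ln x = 1).
import numpy as np
from scipy.integrate import quad
from scipy.stats import lognorm

dist = lognorm(s=1.0, scale=1e-5)

a, b = 1e-6, 1e-4
area, _ = quad(dist.pdf, a, b)        # area under the PDF between a and b
delta = dist.cdf(b) - dist.cdf(a)     # the same probability from the CDF
print(f"integral of PDF over [a, b]: {area:.4f}")
print(f"CDF(b) - CDF(a):             {delta:.4f}")   # the two agree

# Under the lognormality assumption, two numbers (the mean and variance of
# ln x) characterize the entire distribution, including its PDF and CDF.
```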

TABLE 9-3 Some Key Variables in Risk Assessment for Which Probability Distributions Might Be Needed

Transport
  Output variable: air concentration
  Parameters: chemical emission rate; stack exit temperature; stack exit velocity; mixing heights

Deposition
  Output variable: deposition rate
  Parameters: dry-deposition velocity; wet-deposition velocity; fraction of time with rain

Overland
  Output variable: surface-water load
  Parameters: fraction of chemical in overland runoff

Water
  Output variable: surface-water concentration
  Parameters: river discharge; chemical decay coefficient in river

Soil
  Output variable: surface-soil concentration
  Parameters: surface-soil depth; exposure duration; exposure period; cation-exchange capacity; decay coefficient in soil

Food chain
  Output variables: plant concentration (plant interception fraction; weathering elimination rate; crop density; soil-to-plant bioconcentration factor); fish concentration (water-to-fish bioconcentration factor)

Dose
  Output variables: inhalation dose (inhalation rate; body weight); ingestion dose (plant ingestion rate; soil ingestion rate; body weight); dermal-absorption dose (exposed skin surface area; soil absorption factor; exposure frequency; body weight)

Risk
  Output variable: total carcinogenic risk
  Parameters: inhalation, ingestion, and dermal-absorption carcinogenic potency factors

SOURCE: Adapted from Seigneur et al., 1992.

When data are flawed or unavailable, or when the scientific base is not understood well enough to quantify the probability distributions of all input variables, a surrogate estimate of one or more distributions can be based on analysis of the uncertainty in similar variables in similar situations. For example, one can approximate the uncertainty in the carcinogenic potency of an untested chemical by using the existing frequency distribution of potencies for chemicals already tested (Fiering et al., 1984).

Subjective Probability Distributions

A different method of probability assessment is based on expert opinion. In this method, the beliefs of selected experts are elicited and combined to provide a subjective probability distribution. This procedure can be used to estimate the uncertainty in a parameter (cf. the subjective assessment of the slope of the dose-response relationship for lead in Whitfield and Wallsten, 1989). However, subjective assessments are more often used for a risk-assessment component for which the available inference options are logically or reasonably limited to a finite set of identifiable, plausible, and often mutually exclusive alternatives (i.e., for model uncertainty). In such an analysis, alternative scenarios or models are assigned subjective probability weights according to the best available data and scientific judgment; equal weights might be used in the absence of reliable data or theoretical justifications supporting any option over any other. For example, this approach could be used to determine how much the risk assessor should rely on relative surface area vs. body weight in conducting a dose-response assessment. The application of particular sets of subjective probability weights in particular inference contexts could be standardized, codified, and updated as part of EPA's implementation of uncertainty-analysis guidelines (see below).
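A minimal sketch of the mechanics of such weighting follows. The two model alternatives, their equal subjective weights, and the conditional potency distributions are all hypothetical; the point is only how one samples a model according to its weight and then samples a potency conditional on that model.

```python
# A hedged sketch of subjective probability weights over discrete model
# alternatives. All weights and distributions below are hypothetical.
import numpy as np

rng = np.random.default_rng(seed=2)
n = 100_000

weights = [0.5, 0.5]               # equal subjective weights, absent better evidence
medians = np.array([1e-6, 7e-6])   # assumed conditional medians of potency
sigma = np.log(2.0)                # assumed common spread (geometric SD of 2)

model = rng.choice(2, size=n, p=weights)                # pick a model per its weight
potency = rng.lognormal(np.log(medians[model]), sigma)  # then sample potency

print(f"mixture mean potency:    {potency.mean():.2e}")
print(f"mixture 95th percentile: {np.percentile(potency, 95):.2e}")
```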

Objective probabilities might seem inherently more accurate than subjective probabilities, but this is not always true. Formal methods (Bayesian statistics)² exist to incorporate objective information into a subjective probability distribution that reflects other matters that might be relevant but difficult to quantify, such as knowledge about chemical structure, expectations of the effects of concurrent exposure (synergy), or the scope of plausible variations in exposure. The chief advantage of an objective probability distribution is, of course, its objectivity: right or wrong, it is less likely to be susceptible to major and perhaps undetectable bias on the part of the analyst, and this has palpable benefits in defending a risk assessment and the decisions that follow. A second advantage is that objec…

[Pages 171-176 of the chapter are not included in this extract.]
…degree of potency if it is not zero. In application, that might result in one of the following three decisions:

— If the data are sufficient to use the BM model, specify its parameters, and conclude scientifically (using whatever principles and evidentiary standards EPA sets forth in response to the committee's recommendation that it develop such principles) that this model is appropriate, the BM model could be used. Such occurrences are likely to be uncommon in the near term because of the need for extensive data of special types.

— If the data lead to a scientific conclusion that there is a substantial possibility that the low-dose potency is zero, the potency distributions from the BM and LMS models could be presented separately, perhaps with a narrative or quantitative statement of the probability weights to be assigned to each model.

— If the data do not suggest a substantial possibility of zero risk at low doses, the LMS model would continue to be used exclusively.

Statistical Analysis of Generated Probabilities

Once the needed subjective and objective probability distributions are estimated for each variable in the risk assessment, the estimates can be combined to determine their impact on the ultimate risk characterization. Joint distributions of input variables are often mathematically intractable, so an analyst must use approximating methods, such as numerical integration or Monte Carlo simulation. Such approximating methods can be made arbitrarily precise by appropriate computational choices. Numerical integration replaces the familiar operations of integral calculus by summing the values of the dependent variable(s) on a very fine (multivariate) grid of the independent variables. Monte Carlo methods are similar, but sum the variables calculated at random points on the grid; this is especially advantageous when the number or complexity of the input variables is so large that the cost of evaluating all points on a sufficiently fine grid would be prohibitive. (For example, if each of three variables is examined at 100 points in all possible combinations, the grid would require evaluation at 100³ = 1,000,000 points, whereas a Monte Carlo simulation might provide results that are almost as accurate with only 1,000-10,000 randomly selected points.)
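The parenthetical comparison can be sketched directly; the three component distributions below are assumed, and the point is only the relative cost of the two approximating methods.

```python
# A sketch of the grid-vs-Monte-Carlo comparison: a 100-point grid in three
# variables needs 100^3 = 1,000,000 evaluations, while 10,000 random draws
# recover nearly the same percentiles. Component distributions are assumed.
import numpy as np
from scipy.stats import lognorm

c_dist = lognorm(s=0.4, scale=2.0)    # assumed concentration uncertainty
i_dist = lognorm(s=0.3, scale=20.0)   # assumed intake uncertainty
p_dist = lognorm(s=1.0, scale=1e-6)   # assumed potency uncertainty

# Numerical integration on a grid: 100 equal-probability points per variable.
q = (np.arange(100) + 0.5) / 100
c, i, p = (d.ppf(q) for d in (c_dist, i_dist, p_dist))
grid = (c[:, None, None] * i[None, :, None] * p[None, None, :]).ravel()

# Monte Carlo: 10,000 random points instead of 1,000,000 grid points.
rng = np.random.default_rng(seed=3)
mc = (c_dist.rvs(10_000, random_state=rng)
      * i_dist.rvs(10_000, random_state=rng)
      * p_dist.rvs(10_000, random_state=rng))

for label, sample in (("grid (10^6)", grid), ("Monte Carlo (10^4)", mc)):
    print(f"{label:18s} median {np.median(sample):.2e}  "
          f"95th {np.percentile(sample, 95):.2e}")
```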

Barriers to Quantitative Uncertainty Analysis

The primary barriers to determining objective probabilities are lack of adequate scientific understanding and lack of needed data. Subjective probabilities are also not always available: for example, if the fundamental molecular-biologic bases of some hazards are not well understood, the associated scientific uncertainties cannot be reasonably characterized.
In such a situation, it would be prudent public-health policy to adopt inference options from the conservative end of the spectrum of scientifically plausible options. Quantitative dose-response assessment, with characterization of the uncertainty in the assessment, could then be conducted conditional on this set of inference options. Such a "conditional risk assessment" could then routinely be combined with an uncertainty analysis for exposure (which might not be subject to fundamental model uncertainty) to yield an estimate of risk and its associated uncertainty.

The committee recognizes the difficulties of using subjective probabilities in regulation. One is that someone would have to provide the probabilities to be used in a regulatory context. A "neutral" expert from within EPA or at a university or research center might not have the knowledge needed to provide a well-informed subjective probability distribution, whereas those who have the most expertise might have, or be perceived to have, a conflict of interest, such as persons who work for the regulated source or for a public-interest group that has taken a stand on the matter. Allegations of conflict of interest or of lack of knowledge regarding a chemical or issue might damage the credibility of the ultimate product of a subjective assessment. We note, however, that most of the same problems of real or perceived bias pervade EPA's current point-estimation approach.

At bottom, what matters is how risk managers and other end-users of risk assessments interpret the uncertainty in a risk analysis. Correct interpretation is often difficult. For example, risks expressed on a logarithmic scale are commonly misinterpreted by assuming that an error of, say, a factor of 10 in one direction balances an error of a factor of 10 in the other. In fact, if a risk is expressed as 10⁻⁵ within a factor of 100 of uncertainty in either direction, the average risk is approximately 1/2,000, rather than 1/100,000. In some senses, this is a problem of risk communication within the risk-assessment profession, rather than with the public.
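The arithmetic behind the 1/2,000 figure can be reproduced under one simple reading (our assumption, not spelled out in the text): treat the two factor-of-100 extremes around the 10⁻⁵ estimate as equally likely, so that the upper extreme dominates the mean.

```python
# A worked check of the "1/2,000 vs. 1/100,000" point, assuming the two
# factor-of-100 extremes around the 1e-5 estimate are equally likely:
# errors symmetric on a log scale are not symmetric in their effect on
# the mean.
low, stated, high = 1e-7, 1e-5, 1e-3   # 1e-5, within a factor of 100 either way

mean_risk = (low + high) / 2
print(f"mean of the two extremes: {mean_risk:.2e}")  # ~5e-04, about 1/2,000
print(f"stated estimate:          {stated:.0e}")     # 1e-05, i.e., 1/100,000
```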

Uncertainty Guidelines

Contrary to EPA's statement that the quantitative techniques suggested in this chapter "require definition of the distribution of all input parameters and knowledge of the degree of dependence (e.g., covariance) among parameters" (EPA, 1991f), complete knowledge is not necessary for a Monte Carlo or similar approach to uncertainty analysis. In fact, such a statement is circular: it is the uncertainty analysis that tells scientists how their lack of "complete knowledge" affects the confidence they can have in their estimates. Although it is always better to be able to be precise about how uncertain one is, an imprecise statement of uncertainty simply reflects how uncertain the situation is—it is far better to acknowledge this than to respond to the "lack of complete knowledge" by holding fast to a "magic number" that one knows to be wildly overconfident. Uncertainty analysis simply estimates the logical implications of the assumed model and whatever assumed or empirical inputs the analyst chooses to use.
The difficulty of documenting uncertainty can be reduced by the use of uncertainty guidelines that provide a structure for determining the uncertainty in each parameter and in each plausible model. In some cases, objective probabilities are available for use. In others, a subjective consensus about the uncertainty may be based on whatever data are available. Once these decisions are documented, many of the difficulties in determining uncertainty can be alleviated. It is important to note, however, that consensus might not be achieved. If a "first-cut" characterization of uncertainty in a specific case is deemed inappropriate or is superseded by new information, it can be changed by means of such procedures as those outlined in Chapter 12.

The development of uncertainty guidelines is important because a lack of clear statements about how to address uncertainty in risk assessment might otherwise lead to continuing inconsistency in the extent to which uncertainty is explicitly considered in assessments done by EPA and other parties, as well as to inconsistencies in how uncertainty is quantified. Developing guidelines to promote consistency in efforts to understand the uncertainty in risk assessment should improve regulatory and public confidence in risk assessment, because guidelines would reduce inappropriate inconsistencies in approach; and where inconsistencies remain, guidelines could help to explain why different federal or state agencies come to different conclusions when they analyze the same data.

Risk Management And Uncertainty Analysis

The most important goal of uncertainty analysis is to improve risk management. Although the process of characterizing the uncertainty in a risk analysis is itself subject to debate, it can at a minimum make clear to decision-makers and the public the ramifications of the risk analysis in the context of other public decisions. Uncertainty analysis also allows society to evaluate the judgments made by experts when they disagree, an especially important attribute in a democratic society. Furthermore, because problems are not always resolved and analyses often need to be repeated, identification and characterization of the uncertainties can make the repetition easier.

Single Estimates of Risk

Once EPA succeeds in supplanting single point estimates with quantitative descriptions of uncertainty, its risk assessors will still need to summarize these distributions for risk managers (who will continue to use numerical estimates of risk as inputs to decision-making and risk communication). It is therefore crucial to understand that uncertainty analysis is not about replacing "risk numbers" with risk distributions or any other less transparent device; it is about consciously selecting the appropriate numerical estimate(s) from an understanding of the uncertainty.
Regardless of whether the applicable statute requires the manager to balance uncertain benefits and costs or to determine what level of risk is "acceptable," a bottom-line summary of the risk is a very important input, as it is critical to judging how confident the decision-maker can be that benefits exceed costs, that the residual risk is indeed "acceptable," or whatever other judgments must be made. Such summaries should include at least three types of information: (1) a fractile-based summary statistic, such as the median (the 50th percentile) or a 95th-percentile upper confidence limit, which denotes the probability that the uncertain quantity will fall an unspecified distance above or below some associated value; (2) an estimate of the mean and variance of the distribution, which along with the fractile-based statistic provides crucial information about how the probabilities and the absolute magnitudes of errors interrelate; and (3) a statement of the potential for errors and biases in these estimates of fractiles, mean, and variance, which can stem from ambiguity about the underlying models, from approximations introduced to fit the distribution to a standard mathematical form, or from both.

One important issue related to uncertainty is the extent to which a risk assessment that generates a point estimate, rather than a range of plausible values, is likely to be too "conservative" (that is, to exaggerate excessively the plausible magnitude of harm that might result from specified environmental exposures). As the two case studies that include uncertainty analysis (Appendixes F and G) illustrate, such investigations can show whether "conservatism" is in fact a problem, and if so, to what extent. Interestingly, the two studies reach opposite conclusions about "conservatism" in their specific risk-assessment situations; perhaps this suggests that facile conclusions about the "conservatism" of risk assessment in general might be off the mark. On the one hand, the study in Appendix G claims that EPA's estimate of MEI risk (approximately 10⁻¹) is in fact quite "conservative," given that the study calculates a "reasonable worst-case risk" to be only about 0.0015.⁶ However, we note that this study essentially compared different and incompatible models for the cancer potency of butadiene, so it is impossible to discern what percentile of this unconditional uncertainty distribution any estimate might be assigned (see the discussion of model uncertainty above). On the other hand, the Monte Carlo analysis of parameter uncertainty in exposure and potency in Appendix F claims that EPA's point estimate of risk from the coal-fired power plant was only at the 83rd percentile of the relevant uncertainty distribution. In other words, a standard "conservative" estimate of risk (the 95th percentile) exceeds EPA's value, in this case by a factor of 2.5. It also appears from Figure 5-7 in Appendix F that there is about a 1% chance that EPA's estimate is too low by more than a factor of 10. Note that both case studies (Appendixes F and G) fail to distinguish sources of uncertainty from sources of interindividual variability, so the corresponding "uncertainty" distributions obtained cannot be used to properly characterize uncertainty either in predicted incidence or in predicted risk to some particular (e.g., average, highly exposed, or high-risk) individual (see Chapter 11 and Appendix I-3).
As discussed above, access to the entire PDF allows the decision-maker to assess the amount of "conservatism" implicit in any estimate chosen from the distribution. In cases in which the risk manager asks the analyst to summarize the PDF with one or more summary statistics, the committee suggests that EPA might consider a particular kind of point estimate to summarize uncertain risks, in light of the two distinct kinds of "conservatism" discussed in Appendix N-1 (the "level of conservatism," the relative percentile at which the point estimate of risk is located, and the "amount of conservatism," the absolute difference between the point estimate and the mean). Although the specific choice of this estimate should be left to EPA risk managers, and may also need to be flexible enough to accommodate case-specific circumstances, estimates do exist that can account for both the percentile and the relationship to the mean in a single number. For example, EPA could choose to summarize uncertain risks by reporting the mean of the upper 5 percent of the distribution. It is a mathematical truism that (for the right-skewed distributions commonly encountered in risk assessment) the larger the uncertainty, the greater the chance that the mean may exceed any arbitrary percentile of the distribution (see Table 9-4 and the sketch that follows it). Thus, the mean of the upper 5 percent is by definition "conservative" both with respect to the overall mean of the distribution and with respect to its 95th percentile, whereas the 95th percentile may not be a "conservative" estimate of the mean. In most situations, the amount of "conservatism" inherent in this new estimator will not be as extreme as it would be if a very high percentile (e.g., the 99.9th) were chosen without reference to the mean.

Thus, the issue of uncertainty subsumes the issue of conservatism in point estimates. Point estimates chosen without regard to uncertainty provide only the barest beginnings of the story in risk assessment. Excessive or insufficient conservatism can arise out of inattention to uncertainty, rather than out of a particular way of responding to uncertainty. Actions taken solely to reduce or eliminate potential conservatism will not reduce, and might increase, the problem of excessive reliance on point estimates.

In summary, EPA's position on the issue of uncertainty analysis (as represented in the Superfund document) seems plausible at first glance, but it might be somewhat muddled. If we know that "all risk numbers are only good to within a factor of 10," why do any analyses? The reason is that both the variance and the conservatism (if any) are case-specific and can rarely be estimated with adequate precision until an honest attempt at uncertainty analysis is made.
TABLE 9-4 Calculation Showing How the Mean of the Upper 5% of a Lognormal Distribution (M95) Relates to Other Statistics of the Distribution

NOTE: Entries are for a lognormal distribution with median 1; σ(ln x) is the standard deviation of ln x, the uncertainty factor is exp[σ(ln x)], and X95 is the 95th percentile.

σ(ln x) | Uncertainty factor | M95 | Mean | M95/Mean | X95 | M95/X95 | Percentile location of mean | Percentile location of M95
0.25  | 1.3  | 1.75   | 1.03  | 1.7  | 1.51   | 1.16  | 54   | 98.8
0.5   | 1.6  | 2.95   | 1.13  | 2.6  | 2.28   | 1.29  | 60   | 99.2
0.75  | 2.1  | 5.03   | 1.32  | 3.8  | 3.43   | 1.46  | 65   | 98.5
1     | 2.7  | 8.57   | 1.65  | 5.2  | 5.18   | 1.65  | 69   | 98.4
1.5   | 4.5  | 27.72  | 3.08  | 9    | 11.79  | 2.35  | 77   | 98.6
1.645 | 5.2  | 38.70  | 3.87  | 10   | 14.97  | 2.59  | 79   | 98.7
1.75  | 5.8  | 49.94  | 4.62  | 10.8 | 17.79  | 2.81  | 81   | 98.7
2     | 7.4  | 94.6   | 7.39  | 12.8 | 26.84  | 3.52  | 84   | 98.8
2.5   | 12.2 | 364.16 | 22.76 | 16   | 61.10  | 5.9   | 89   | 99
3     | 20.1 | 1647.3 | 90.02 | 18.3 | 139.07 | 11.84 | 93.3 | 99.2
4     | 54.6 | 59023  | 2981  | 19.8 | 720.54 | 81.92 | 97.7 | 99.7
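The entries in Table 9-4 follow from the closed-form conditional mean of a lognormal distribution. The sketch below recomputes a few rows; small differences from the table reflect rounding in the source.

```python
# A sketch reproducing rows of Table 9-4 for a lognormal with median 1.
# M95, the mean of the top 5%, has the closed form
#   M95 = E[X | X > X95] = exp(sigma^2 / 2) * Phi(sigma - z95) / 0.05,
# where z95 = 1.645 and Phi is the standard normal CDF.
import numpy as np
from scipy.stats import norm

z95 = norm.ppf(0.95)  # 1.645

for sigma in (0.25, 1.0, 2.0):
    mean = np.exp(sigma**2 / 2)                    # mean of the lognormal
    x95 = np.exp(z95 * sigma)                      # 95th percentile
    m95 = mean * norm.cdf(sigma - z95) / 0.05      # mean of the upper 5%
    pct_mean = 100 * norm.cdf(sigma / 2)           # percentile location of mean
    pct_m95 = 100 * norm.cdf(np.log(m95) / sigma)  # percentile location of M95
    print(f"sigma={sigma:4.2f}  M95={m95:6.2f}  mean={mean:5.2f}  "
          f"X95={x95:6.2f}  %ile(mean)={pct_mean:4.1f}  %ile(M95)={pct_m95:4.1f}")
```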
Risk Communication

Inadequate scientific and technical communication about risk is itself sometimes a source of error and uncertainty, and guidance to risk assessors about what to include in a risk analysis should therefore include guidance about how to present it. The risk assessor must strive to be understood (as well as to be accurate and complete), just as risk managers and other users must make themselves understood when they apply concepts that are sometimes difficult. This source of uncertainty in interprofessional communication seems to be almost untouched by EPA or any other official body (AIHC, 1992).

Comparison, Ranking, And Harmonization Of Risk Assessments

As discussed in Chapter 6, EPA makes no attempt to apply a single set of methods to assess and compare default and alternative risk estimates with respect to parameter uncertainty. The same deficiency occurs in the comparison of risk estimates: when EPA ranks risks, it usually compares point estimates without considering the different uncertainties in each estimate. Even for less important regulatory decisions (when the financial and public-health impacts are deemed to be small), EPA should at least make sure that the point estimates of risk being compared are of the same type (e.g., that a 95% upper confidence bound for one risk is not compared with a median value for some other risk) and that each assessment has an informative (although perhaps sometimes brief) analysis of the uncertainty. For more important regulatory decisions, EPA should estimate the uncertainty in the ratio of the two risks and explicitly consider the probabilities and consequences of setting incorrect priorities. For any decisions involving risk-trading or priority-setting (e.g., for resource allocation or "offsets"), EPA should take into account information on the uncertainty in the quantities being ranked, so as to ensure that such trades do not increase expected risk and that such priorities are directed at minimizing expected risk. When one or both risks are highly uncertain, EPA should also consider the probability and consequences of greatly erring in trading one risk for another, because in such cases one can lower the risk on average and yet introduce a small chance of greatly increasing risk (see the sketch below).

Finally, EPA sometimes attempts to "harmonize" risk-assessment procedures between itself and other agencies, or among its own programs, by agreeing on a single common model assumption, even though the assumption chosen might have little more scientific plausibility than the alternatives (e.g., replacing FDA's body-weight assumption and EPA's surface-area assumption with body weight to the 0.75 power). Such actions do not clarify or reduce the uncertainties in risk assessment. Rather than "harmonizing" risk assessments by picking one assumption over others when several assumptions are plausible and none is clearly preferable, EPA should use the preferred models for risk calculation and characterization, but present the results of the alternative models (with their associated parameter uncertainties) to further inform decision-makers and the public. However, "harmonization" does serve an important purpose in the context of uncertainty analysis—it will help, rather than hinder, risk assessment if agencies cooperate to choose and validate a common set of uncertainty distributions (e.g., a standard PDF for the uncertain exponent in the "body weight to the X power" equation or a standard method for developing a PDF from a set of bioassay data).
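The risk-trading caveat can be illustrated with a small simulation; every number below is an assumption chosen for illustration. The hypothetical offset has the lower expected risk, but its large uncertainty leaves a small probability of a big increase.

```python
# A simulation sketch of the risk-trading caveat (all medians and spreads
# are assumed): trading a well-characterized risk for a highly uncertain
# one can lower risk on average yet leave a small chance of greatly
# increasing it.
import numpy as np

rng = np.random.default_rng(seed=4)
n = 100_000

current = rng.lognormal(np.log(1e-4), np.log(1.2), n)  # well-characterized risk
offset = rng.lognormal(np.log(5e-6), np.log(8.0), n)   # highly uncertain trade

change = offset - current
print(f"expected change in risk:  {change.mean():+.1e}")      # negative on average
print(f"P(trade increases risk):  {(change > 0).mean():.2f}") # ~0.08
print(f"P(risk rises 10-fold+):   {(offset > 10 * current).mean():.3f}")  # small, nonzero
```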
Findings And Recommendations

The committee strongly supports the inclusion of uncertainty analysis in risk assessments, despite the potential difficulties and costs involved. Even for lower-tier risk assessments, the inherent problems of uncertainty need to be made explicit through an analysis (although perhaps brief) of whatever data are available, perhaps with a statement about whether further uncertainty analysis is justified. The committee believes that a more explicit treatment of uncertainty is critical to the credibility of risk assessments and to their utility in risk management. The committee's findings and recommendations are summarized briefly below.

Single Point Estimates and Uncertainty

EPA often reports only a single point estimate of risk as a final output. In the past, EPA has only qualitatively acknowledged the uncertainty in its estimates, generally by referring to its risk estimates as "plausible upper bounds," with a plausible lower bound implied by the boilerplate statement that "the number could be as low as zero." In light of the inability to discern how "conservative" an estimate might be unless one does an uncertainty analysis, both statements might be misleading or untrue in particular cases.

• Use of a single point estimate suppresses information about the sources of error that result from choices of model, data sets, and techniques for estimating values of parameters from data. EPA should not necessarily abandon the use of single point estimates for decision-making, but such numbers must be the product of a consideration of both the estimate of risk and its uncertainties; they must not appear out of nowhere from a formulaic process. In other words, EPA should be free to choose a particular point estimate of risk to summarize the risk in light of its knowledge, its uncertainty, and its desire to balance errors of overestimation and underestimation, but it should first derive that number from an uncertainty analysis of the risk estimate (e.g., using a summary statistic such as the "mean of the upper 5% of the distribution"). EPA should not simply state that its generic procedures yield the desired percentile. For example (although this is an analogous procedure that deals with variability, not uncertainty), EPA's current way of calculating the "high-end exposure estimate" (see Chapter 10) is ad hoc, rather than systematic, and should be changed.
• EPA should make uncertainties explicit and present them as accurately and fully as is feasible and needed for risk-management decision-making. To the greatest extent feasible, EPA should present quantitative, as opposed to qualitative, representations of uncertainty. However, EPA should not necessarily quantify model uncertainty (via subjective weights or any other technique), but should try to quantify the parameter and other uncertainty that exists for each plausible choice of scientific model. In this way, EPA can give its default models the primacy they are due under its guidelines, while presenting useful but distinct alternative estimates of risk and uncertainty. In the quantitative portions of their risk characterizations (which will serve as one important input to standard-setting and residual-risk decisions under the Act), EPA risk assessors should consider only the uncertainty conditional on the choice of the preferred models for dose-response relationships, exposure, uptake, etc.

• In addition, uncertainty analyses should be refined only so far as improvements in the understanding of risk and the implications for risk management justify the expenditure of the professional time and other resources required.

Uncertainty Guidelines

EPA committed itself in a 1992 internal memorandum (see Appendix B) to doing some kind of uncertainty analysis in the future, but the memorandum does not define when or how such analysis might be done. In addition, it does not distinguish among the different types of uncertainty or provide specific examples. Thus, it provides only the first, critical step toward uncertainty analysis.

• EPA should develop uncertainty-analysis guidelines—both a general set and specific language added to its existing guidelines for each step in risk assessment (e.g., the exposure-assessment guidance). The guidelines should consider in some depth all the types of uncertainty (model, parameter, etc.) at all the stages of risk assessment. The uncertainty guidelines should require that the uncertainties in models, data sets, and parameters, and their relative contributions to total uncertainty in a risk assessment, be reported in a written risk-assessment document.

Comparison of Risk Estimates

EPA makes no attempt to apply a consistent method to assess and compare default and alternative risk estimates with respect to parameter uncertainty. Presentations of numerical values in an incomplete form lead to inappropriate and possibly misleading comparisons among risk estimates.
• When an alternative model is plausible enough to be considered for use in risk communication, or for potentially supplanting the default model when sufficient evidence becomes available, EPA should analyze parameter uncertainty at a similar level of detail for the default and alternative models. For example, in comparing risk estimates derived from delivered-dose versus PBPK models, EPA should quantify the uncertainty in the interspecies scaling factor (in the former case) and in the parameters used to optimize the PBPK equations (in the latter case). Such comparisons may reveal that, given current parameter uncertainties, the risk estimate chosen would not be particularly sensitive to the judgment about which model is correct.

Harmonization of Risk Assessment Methods

EPA sometimes attempts to "harmonize" risk-assessment procedures between itself and other agencies or among its own programs by agreeing on a single common model assumption, even though the assumption chosen might have little more scientific plausibility than the alternatives (e.g., replacing FDA's body-weight assumption and EPA's surface-area assumption with body weight to the 0.75 power). Such actions do not clarify or reduce the uncertainties in risk assessment.

• Rather than "harmonizing" risk assessments by picking one assumption over others when several assumptions are plausible and none is clearly preferable, EPA should maintain its own default assumption for regulatory decisions but indicate that any of the methods might be accurate, and it should present the results as an uncertainty in the risk estimate or present multiple estimates and state the uncertainty in each. However, "harmonization" does serve an important purpose in the context of uncertainty analysis—it will help, rather than hinder, risk assessment if agencies cooperate to choose and validate a common set of uncertainty distributions (e.g., a standard PDF for the uncertain exponent in the "body weight to the X power" equation or a standard method for developing a PDF from a set of bioassay data).

Ranking of Risk

When EPA ranks risks, it usually compares point estimates without considering the different uncertainties in each estimate.

• For any decisions involving risk-trading or priority-setting (e.g., for resource allocation or "offsets"), EPA should take into account information on the uncertainty in the quantities being ranked, so as to ensure that such trades do not increase expected risk and that such priorities are directed at minimizing expected risk. When one or both risks are highly uncertain, EPA should also consider the probability and consequences of greatly erring in trading one risk for another, because in such cases one can lower the risk on average and yet introduce a small chance of greatly increasing risk.
Notes

1. Although variability in a risk-assessment parameter across different individuals is itself a type of uncertainty and is the subject of the following chapter, it is possible that new parameters might be incorporated into a risk assessment to model that variability (e.g., a parameter for the standard deviation of the amount of air that a random person breathes each day) and that those parameters themselves might be uncertain (see the "uncertainty and variability" section in Chapter 11).

2. It is important to note that the distributions resulting from Bayesian models include various subjective judgments about models, data sets, etc. These are expressed as probability distributions, but the probabilities should not be interpreted as probabilities of adverse effect; rather, they are expressions of strengths of conviction as to what models, data sets, etc. might be relevant to assessing risks of adverse effect. This is an important distinction that should be kept in mind when interpreting and using such distributions in risk management as a quantitative way of expressing uncertainty.

3. Assume that to convert from risk to the test animals to the predicted number of deaths in the human population, one must multiply by 10,000. Perhaps the laboratory dose is 10,000 times larger than the dose to humans, but 100 million humans are exposed. Thus, for example, an excess risk of 10⁻⁴ in the test animals would correspond, under linear scaling, to an individual human risk of 10⁻⁸ and hence to 10⁻⁸ × 10⁸ = 1 predicted death in the exposed population.

4. Note that characterizing risks considering only the parameter uncertainty under the preferred set of models might not be as restrictive as it appears at first glance, in that some of the model choices can safely be recast as parameter uncertainties. For example, the choice of a scaling factor between rodents and humans need not be classified as a model choice between body weight and surface area that calls for two separate "conditional PDFs"; instead, it can be treated as an uncertain parameter in the equation R_human = R_rodent × BW^a, where BW is the ratio of body weights and a might plausibly vary between 0.5 and 1.0 (see our discussion in Chapter 11). The only constraint in this case is that the scaling model is some power function of BW.

5. It is not always clear what percent of the distribution someone is referring to by "correct to within a factor of X." If instead of assuming that the person means 100% confidence we assume 98% confidence, then the factor of X would cover two standard deviations on either side of the median, so one geometric standard deviation would be equal to √X.

6. We arrive at this figure of 0.0015, or 1.5 × 10⁻³, by noting that the "base case" for fenceline risk (Table 3-1 in Appendix G) is 5 × 10⁻⁴ and that "worst case estimates were two to three times higher than base case estimates."