Science and Decisions: Advancing Risk Assessment

4 Uncertainty and Variability: The Recurring and Recalcitrant Elements of Risk Assessment

INTRODUCTION TO THE ISSUES AND TERMINOLOGY

Characterizing uncertainty and variability is key to the human health risk-assessment process, which must engage the best available science in the presence of uncertainties and difficult-to-characterize variability to inform risk-management decisions.

Many of the topics in the committee's statement of task (Appendix B) address in some way the treatment of uncertainty or variability in risk analysis. Some of those topics have existed since the early days of environmental risk assessment. For example, Risk Assessment in the Federal Government: Managing the Process (NRC 1983), referred to as the Red Book, addressed the use of inference guidelines or default assumptions. Science and Judgment in Risk Assessment (NRC 1994) provided recommendations on defaults, use of quantitative methods for uncertainty propagation, and variability in exposure and susceptibility. The role of expert elicitation in uncertainty analysis has been considered in other fields for decades, although it has been examined and used by the Environmental Protection Agency (EPA) only in select recent cases.

Other topics identified in the committee's charge whose improvement requires new consideration of the best approaches for addressing uncertainty and variability include cumulative exposures to contaminant mixtures involving multiple sources, exposure pathways, and routes; biologically relevant modes of action for estimating dose-response relationships; models of environmental transport and fate, exposure, physiologically based pharmacokinetics, and dose-response relationships; and the linking of ecologic risk-analysis methods to human health risk analysis.

Much has been written that addresses the taxonomy of uncertainty and variability and the need and options for addressing them separately (Finkel 1990; Morgan et al.
1990; EPA 1997a,b; Cullen and Frey 1999; Krupnick et al. 2006). There are also several useful guidelines on the mechanics of uncertainty analysis. However, there is an absence of guidelines on the appropriate degree of detail, rigor, and sophistication needed in an uncertainty or variability analysis for a given risk assessment. The committee finds this to be a critical issue. In presentations to the committee (Kavlock 2006; Zenick 2006) and recent evaluations of emerging scientific advances (NRC 2006a, 2007a,b), there is the promise of improved capacity for assessing risks posed by new chemicals and risks to sensitive populations that are left unaddressed by current methods. The reach and depth of risk assessment are sure to improve with expanding computer tools, additional biomonitoring data, and new toxicology techniques. But such advances will bring new challenges and an increased need for wisdom and creativity in addressing uncertainty and variability. New guidelines on uncertainty analysis (NRC 2007c) can help enormously in the transition, facilitating the introduction of the new knowledge and techniques into agency assessments.

Characterizing each stage in the risk-assessment process—from environmental release to exposure to health effect (Figure 4-1)—poses analytic challenges and includes dimensions of uncertainty and variability. Consider trying to understand the possible dose received by individuals and, on average, by a population from the application of a pesticide. The extent of release during pesticide application may not be well characterized. Once the pesticide is released, the exposure pathways leading to an individual's exposure are complex and difficult to understand and model. Some of the released substance may be transformed in the environment into a more or less toxic substance. The resulting overall exposure of the community near where the pesticide is released can vary substantially among individuals by age, geographic location, activity patterns, eating habits, and socioeconomic status. Thus, there can be considerable uncertainty and variability in how much pesticide is received.
Those factors make it difficult to establish reliable exposure estimates for use in a risk assessment, and they illustrate how the characterization of exposure with a single number can be misleading. Understanding the dose-response relationship—the relationship between the dose and risk boxes in Figure 4-1—is as complex and similarly involves issues of uncertainty and variability. Quantifying the relationship between chemical exposure and the probability of an adverse health effect is often complicated by the need to extrapolate results from high doses to the lower doses relevant to the population of interest and from animal studies to humans. Finally, there are interindividual differences in susceptibility that are often difficult to portray with confidence. Those issues can delay the completion of a risk assessment (for decades in the case of dioxin) or undermine the confidence of the public and of those who use risk assessments to inform and support their decisions.

Discussions of uncertainty and variability involve specific terminology. To avoid confusion, the committee defines key terms in Box 4-1 as it has used them.

The importance of evaluating uncertainty and variability in risk assessments has long been acknowledged in EPA documents (EPA 1989a, 1992, 1997a,b, 2002a, 2004a, 2006a) and National Research Council reports (NRC 1983, 1994). From the Red Book framework and the committee's emphasis on the need to consider risk-management options in the design of risk assessments (Chapters 3 and 8), it is evident that risk assessors must establish procedures that build confidence in the risk assessment and its results. EPA builds confidence in its risk assessments by ensuring that the assessment process handles uncertainty and variability in ways that are predictable, scientifically defensible, consistent with the agency's statutory mission, and responsive to the needs of decision-makers (NRC 1994).
For example, several environmental statutes speak directly to the issue of protecting susceptible and highly exposed people (EPA 2002a, 2005c, 2006a). EPA has accordingly developed risk-assessment practices for implementing these statutes, although, as noted below and in Chapter 5, the overall treatment of uncertainty and variability in risk assessments can be insufficient. Box 4-2 provides examples of why uncertainty and variability are important to risk assessment.
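The earlier pesticide example, in which exposure varies substantially among individuals, can be made concrete with a small simulation. The sketch below is purely illustrative: the lognormal form, the median dose, and the geometric standard deviation are assumptions chosen for the example, not values from any EPA assessment. It shows how a single "representative" number conceals the spread of doses across a population.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical inter-individual variability in pesticide dose (mg/kg-day),
# modeled lognormally; every number here is illustrative, not an EPA value.
median_dose = 0.002   # assumed population median dose
gsd = 3.0             # assumed geometric standard deviation across people

doses = rng.lognormal(np.log(median_dose), np.log(gsd), 100_000)

mean_dose = doses.mean()
p95_dose = np.percentile(doses, 95)

# A single "representative" number hides the spread: the 95th-percentile
# individual receives several times the population mean dose.
print(f"mean {mean_dose:.4f}, 95th percentile {p95_dose:.4f}, "
      f"ratio {p95_dose / mean_dose:.1f}x")
```

With these assumed parameters, the highly exposed tail of the population receives several times the mean dose, so reporting only the mean would understate the risk to that subgroup.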
FIGURE 4-1 Illustration of key components evaluated in human health risk assessment, tracking pollutants from environmental release to health effects.

In the sections below, the committee first reviews approaches to addressing uncertainty and variability and comments on whether and how the approaches have been applied to EPA risk assessments. The committee then focuses on uncertainty and variability as applied to each of the stages of the risk-assessment process (as illustrated in Figure 4-1, which expands beyond the four steps of the Red Book to consider subcomponents of risk assessment). The chapter concludes by articulating principles for uncertainty and variability analysis, leaving detailed recommendations on specific aspects of the risk-assessment process to Chapters 5 through 7. The committee notes that elements of exposure assessment are not addressed extensively in later chapters, as compared with other steps in the risk-assessment process, given our judgment that previous reports had sufficiently addressed many key elements of exposure assessment and that the exposure-assessment methods that EPA has developed and used in recent risk assessments generally reflect good technical practice, other than the overarching issues related to uncertainty and variability analysis and decisions about the appropriate analytic scope for the decision context.

BOX 4-1 Terminology Related to Uncertainty and Variability(a)

Accuracy: Closeness of a measured or computed value to its "true" value, where the "true" value is obtained with perfect information. Owing to the natural heterogeneity and stochastic nature of many biologic and environmental systems, the "true" value may exist as a distribution rather than a discrete value.

Analytic model: A mathematical model that can be solved in closed form. For example, some model algorithms that are based on relatively simple differential equations can be solved analytically to provide a single solution.

Bias: A systematic distortion of a model result or value due to measurement technique or model structure or assumptions.

Computational model: A model that is expressed in formal mathematics with equations, statistical relationships, or a combination of the two and that may or may not have a closed-form representation. Values, judgment, and tacit knowledge are inevitably embedded in the structure, assumptions, and default parameters, but computational models are inherently quantitative, relating phenomena through mathematical relationships and producing numerical results.

Deterministic model: A model that provides a single solution for the stated variables. This type of model does not explicitly simulate the effects of uncertainty or variability; changes in model outputs are due solely to changes in model components.

Domain (spatial and temporal): The limits of space and time that are specified in a risk assessment or risk-assessment component.

Empirical model: A model whose structure is based on experience or experimentation and is not necessarily informed by a causal theory of the modeled process. This type of model can be used to develop relationships that are useful for forecasting and describing trends in behavior but may not necessarily be mechanistically relevant. Empirical dose-response models can be derived from experimental or epidemiologic observations.

Expert elicitation: A process for obtaining expert opinions about uncertain quantities and probabilities. Typically, structured interviews and questionnaires are used in such elicitation. Expert elicitation may include "coaching" techniques to help the expert conceptualize, visualize, and quantify the quantity or understanding being sought.

Model: A simplification of reality that is constructed to gain insights into select attributes of a particular physical, biologic, economic, or social system. Mathematical models express the simplification in quantitative terms.

(a) Compiled or adapted from NRC (2007d) and IPCS (2004).
Parameters: Terms in a model that determine the specific model form. For computational models, these terms are fixed during a model run or simulation, and they define the model output. They can be changed in different runs as a method of conducting sensitivity analysis or to achieve calibration goals.

Precision: The quality of a measurement that is reproducible in amount or performance. Measurements can be precise in that they are reproducible but can be inaccurate and differ from "true" values when biases exist. In risk-assessment outcomes and other forms of quantitative information, precision refers specifically to variation among a set of quantitative estimates of outcomes.

Reliability: The confidence that (potential) users should have in a quantitative assessment and in the information derived from it. Reliability is related to both precision and accuracy.

Sensitivity: The degree to which the outputs of a quantitative assessment are affected by changes in selected input parameters or assumptions.

Stochastic model: A model that involves random variables (see the definition of variable below).

Susceptibility: The capacity to be affected. Variation in risk reflects susceptibility. A person can be at greater or lesser risk than the person in the population who is at median risk because of such characteristics as age, sex, genetic attributes, socioeconomic status, prior exposure to harmful agents, and stress.

Variable: In mathematics, a variable represents a quantity that has the potential to change. In the physical sciences and engineering, a variable is a quantity whose value may vary over the course of an experiment (including simulations), across samples, or during the operation of a system. In statistics, a random variable is one whose observed outcomes may be considered outcomes of a stochastic or random experiment; their probability distributions can be estimated from observations. Generally, when a variable is fixed to take on a particular value for a computation, it is referred to as a parameter.

Variability: True differences in attributes due to heterogeneity or diversity. Variability is usually not reducible by further measurement or study, although it can be better characterized.

Vulnerability: The intrinsic predisposition of an exposed element (person, community, population, or ecologic entity) to suffer harm from external stresses and perturbations; it is based on variations in disease susceptibility, psychologic and social factors, exposures, and adaptive measures to anticipate and reduce future harm and to recover from an insult.

Uncertainty: Lack or incompleteness of information. Quantitative uncertainty analysis attempts to analyze and describe the degree to which a calculated value may differ from the true value; it sometimes uses probability distributions. Uncertainty depends on the quality, quantity, and relevance of data and on the reliability and relevance of models and assumptions.

UNCERTAINTY IN RISK ASSESSMENT

Uncertainty is foremost among the recurring themes in risk assessment. In quantitative assessments, uncertainty refers to lack of information, incomplete information, or incorrect information. Uncertainty in a risk assessment depends on the quantity, quality, and relevance of data and on the reliability and relevance of the models and inferences used to fill data gaps.
BOX 4-2 Some Reasons Why It Is Important to Quantify Uncertainty and Variability

Uncertainty

- Characterizing uncertainty in risk informs the affected public about the range of possible risks from an exposure that they may be experiencing. Risk estimates sometimes diverge widely.
- Characterizing the uncertainty in risk associated with a given decision informs the decision-maker about the range of potential risks that result from the decision. That helps in evaluating any decision alternative on the basis of the possible risks, including the most likely and the worst ones; it also informs the public.
- Mathematically, it is often not possible to understand what may occur on average without understanding what the possibilities are and how probable they are.
- The value of new research or of alternative research strategies can be assessed by considering how much the research is expected to reduce the overall uncertainty in the risk estimate and how the reduction in uncertainty leads to different decision options.
- Although the committee is not aware of research to prove it, there is a strong sense among risk assessors that acknowledging uncertainty adds to the credibility and transparency of the decision-making process.

Variability

- Assessing variability in risk enables the development of risk-management options that focus on the people at greatest risk rather than on population averages. For example, the risk from exposure to particular vehicle emissions varies in a population and can be much higher in those close to roadways than the population average. That has implications for zoning and school-siting decisions.
- Understanding how the population may vary in risk can facilitate understanding of the shape of the dose-response curve (see Chapter 5). Greater use of genetic markers for factors contributing to variability can support this effort.
- It is often not possible to estimate an average population risk without knowing how risk varies among individuals in the population.
- On the basis of an understanding of how different exposures may affect risk, people might alter their own level of risk, for example, by filtering their drinking water or eating fewer helpings of swordfish (which is high in methyl mercury).
- The aims of environmental justice are furthered when it becomes clear that some community groups are at greater risk than the overall group and policy initiatives are undertaken to rectify the imbalance.

For example, the quantity, quality, and relevance of data on dietary habits and a pesticide's fate and transport will affect the uncertainty of parameter values used to assess population variability in the consumption of the pesticide in food and drinking water. The assumptions and scenarios applied to address a lack of data on how frequently a person eats a particular food affect the mean and variance of the intake and the resulting risk distribution. It is the risk assessor's job to communicate not only the nature and likelihood of possible harm but also the uncertainty in the assessment.

One of the more significant types of uncertainty in EPA risk assessments can be characterized as "unknown unknowns"—factors that the assessor is not aware of. These uncertainties cannot be captured by standard quantitative uncertainty analyses but can be addressed only with an interactive approach that allows timely and effective detection, analysis, and correction.

EPA's practices in uncertainty analysis are reviewed below. The discussion of practice begins by considering EPA's use of defaults. An expanded treatment of uncertainty beyond
defaults requires additional techniques. Specific analytic techniques that EPA has used or could use in these contexts are discussed below, including Monte Carlo analysis for quantitative uncertainty analysis, expert elicitation, methods for addressing model uncertainty, and approaches to addressing uncertainty in risk comparisons. In parallel, the conduct of assessments (including uncertainty analyses) that are appropriate in complexity for risk-management decisions is discussed, with considerations for uncertainty analyses used to support risk-risk, risk-benefit, and cost-benefit comparisons and tradeoffs.

The Environmental Protection Agency's Use of Available Methods for Addressing Uncertainty

EPA's treatment of uncertainty is evident both in its guidance documents and in a review of important risk assessments that it has conducted (EPA 1986, 1989a,b, 1997a,b,c, 2001, 2004a, 2005b). The agency's guidance follows in large part from recommendations in the Red Book (NRC 1983) and other National Research Council reports (for example, NRC 1994, 1996).

Use of Defaults

As described in the Red Book, because of large inherent uncertainties, human health risk assessment "requires judgments to be made when the available information is incomplete" (NRC 1983, p. 48). To ensure that the judgments are consistent, explicit, and not unduly influenced by risk-management considerations, the Red Book recommended that so-called inference guidelines, commonly referred to as defaults, be developed independently of any particular risk assessment (p. 51). Science and Judgment in Risk Assessment (NRC 1994) reaffirmed the use of defaults as a means of facilitating the completion of risk assessments. EPA often relies on default assumptions when "the chemical- and/or site-specific data are unavailable (i.e., when there are data gaps) or insufficient to estimate parameters or resolve paradigms … to continue with the risk assessment" (EPA 2004a, p. 51).
The defaults that are the focus of controversy and debate are often those needed to complete cancer-hazard identification and dose-response assessment. Because of their importance and the need to address some of the above concerns, the committee devotes Chapter 6 to default assumptions. Consideration is given there to how risk assessments can use emerging methods to characterize uncertainties more explicitly while still conveying the information needed to inform near-term risk-management decisions.

Some approaches based on defaults lead to confusion about levels of uncertainty. For example, EPA estimates cancer risk from the results of animal studies on the basis of default assumptions: it applies likelihood methods to fit models to tumor data and characterizes the dose-response relationship with a lower 95% confidence bound, typically on the dose that causes a 10% tumor response beyond background (see Chapter 5). In the past, it estimated the upper 95% confidence bound on the linear term in the multistage polynomial, that is, the "cancer potency." It usually does not show the opposite bound or other points in the distribution. EPA's approach is reasonable, but it can lead to misunderstanding when the bounds on the final risk calculations are overinterpreted, for example, when the bounds are discussed as characterizing the full range of uncertainty in the assessment. When a new study shows a higher upper bound on the potency or a lower bound on the risk-specific dose, it may appear that uncertainty has increased with further study. From a strictly Bayesian perspective, additional information can never increase uncertainty if the underlying distributional structure of uncertainty is correctly specified. However, when mischaracterized and misunderstood, the framework for defaults used by EPA can make it appear that uncertainty is increasing. For example, suppose that there was an epidemiologic study of the effects of an environmental contaminant and that the degree of overall uncertainty was incorrectly characterized by the parameter uncertainty in fitting a dose-response slope to the results of that single study. If a second study caused EPA to select an alternative value for the dose-response slope, the risk estimate would change; the uncertainty conditional on one or the other causal model may or may not change. Chapters 5 and 6 suggest approaches to the establishment of defaults and to uncertainty characterization that may encourage research that could reduce key uncertainties.

Quantitative Uncertainty Analysis

In a quantitative uncertainty analysis (QUA), both uncertainty and variability in the different components of the assessment (emissions, transport, exposure, pharmacokinetics, and the dose-response relationship) are combined by using an uncertainty-propagation method, such as Monte Carlo simulation, with two-stage Monte Carlo analysis used to separate uncertainty from variability to the extent possible. This approach has been referred to as probabilistic risk assessment, but the committee prefers to avoid that term because of its association with fault-tree analysis in engineering. The use of the term QUA to encompass variability as well as uncertainty is awkward, but we use it going forward to be consistent with its usage elsewhere.

In the federal government, an early user of QUA was the Nuclear Regulatory Commission, which in the mid-1970s used QUA involving considerable expert judgment to characterize the likelihood of nuclear-reactor failure (USNRC 1975). QUA became more commonly used in EPA in the late 1980s.
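The two-stage Monte Carlo approach mentioned above can be sketched in a few lines. Everything here is a hypothetical stand-in (the distributions, the parameter values, and the choice of the population 95th-percentile risk as the output statistic); the point is only the nesting, with an outer loop over uncertain quantities and an inner loop over inter-individual variability.

```python
import numpy as np

rng = np.random.default_rng(7)

N_UNCERTAINTY = 500    # outer loop: uncertain "states of the world"
N_VARIABILITY = 2000   # inner loop: individuals in the population

pop_p95 = np.empty(N_UNCERTAINTY)
for i in range(N_UNCERTAINTY):
    # Outer stage: draw uncertain quantities -- here the true population
    # geometric-mean intake and the true potency (values are illustrative).
    gm_intake = rng.lognormal(np.log(1.0), 0.3)   # uncertain population GM
    potency = rng.lognormal(np.log(0.01), 0.5)    # uncertain risk per unit dose

    # Inner stage: draw inter-individual variability given those values.
    intakes = rng.lognormal(np.log(gm_intake), 0.6, size=N_VARIABILITY)
    risks = potency * intakes

    # Record a variability statistic for this state of the world.
    pop_p95[i] = np.percentile(risks, 95)

# Uncertainty about the variability statistic: a 90% interval on the
# population 95th-percentile risk.
lo, hi = np.percentile(pop_p95, [5, 95])
print(f"population P95 risk: 90% uncertainty interval [{lo:.3g}, {hi:.3g}]")
```

Keeping the two loops separate is what lets the analyst report uncertainty about a variability statistic (for example, a confidence interval on the risk to the 95th-percentile individual) rather than a single blended distribution.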
EPA has since been encouraging the use of QUA in many programs, and the computational methods required have become more readily available and practicable. An example of the evolution of the use of QUA in EPA is its risk-assessment guidance for Superfund. The 1989 Risk Assessment Guidance for Superfund (RAGS), Volume 1 (EPA 1989a) and supporting guidance describe a point-estimate (single-value) approach to risk assessment. The output of the risk equation is a point estimate that could be a central-tendency estimate of risk (for example, the mean or median risk) or a reasonable-maximum-exposure (RME) estimate of risk (for example, the risk expected if the RME occurred), depending on the input values used in the risk equation. RAGS, Volume 3, Part A (EPA 2001), in contrast, describes a probabilistic approach that uses probability distributions for one or more variables in a risk equation to characterize variability and uncertainty quantitatively. The common practice of choosing high-percentile values (ensuring one-sided confidence) for multiple uncertain variables yields results that are probably above the median but at an unknown percentile of the risk distribution (EPA 2002a).

QUA techniques, such as those in RAGS, Volume 3, can address this issue in part, but a few major concerns regarding their use in EPA remain. First, they require training to be used appropriately. Second, even if they are used appropriately, their outputs may not be easily understood by decision-makers; training is therefore recommended not only for risk assessors but for risk managers (see recommendations in Chapter 2). Third, and perhaps most important, in many contexts the data may not be available to characterize all input distributions fully, in which case the assessment either involves subjective judgments or systematically omits key uncertainties.
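The percentile-compounding problem noted above is easy to demonstrate. In this sketch, a risk equation multiplies three independent lognormal inputs (all distribution parameters are illustrative, not from any guidance); building a point estimate from the 95th percentile of each input yields a result far above the 95th percentile of the actual risk distribution, at a percentile the analyst would not know without the simulation.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 100_000

# Three independent lognormal inputs to a multiplicative risk equation;
# all distribution parameters are illustrative.
conc    = rng.lognormal(0.0, 0.8, N)   # concentration
intake  = rng.lognormal(0.0, 0.5, N)   # intake rate
potency = rng.lognormal(0.0, 1.0, N)   # potency

risk = conc * intake * potency

# "Conservative" point estimate built from the 95th percentile of each input
point = (np.percentile(conc, 95)
         * np.percentile(intake, 95)
         * np.percentile(potency, 95))

# Where does that point estimate actually fall in the risk distribution?
pct = (risk < point).mean() * 100
print(f"compounded point estimate sits near the {pct:.1f}th percentile of risk")
```

With these inputs the stacked point estimate lands beyond the 99th percentile of the simulated risk distribution, illustrating why the committee describes such estimates as being at an "unknown percentile."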
For formal QUA to be most informative, the treatment of uncertainty should, to the extent feasible, be homologous among components of the risk assessment (exposure, dose, and dose-response relationship).
The differential treatment of uncertainty among the components of a risk assessment makes the communication of overall uncertainty difficult and sometimes misleading. For example, in EPA's regulatory impact analysis for the Clean Air Interstate Rule (EPA 2005c), a formal probabilistic uncertainty analysis was conducted with the Monte Carlo method, but it considered only sampling variability in the epidemiologic studies used for dose-response functions and in valuation studies. EPA used expert elicitation for a more comprehensive characterization of dose-response uncertainty, but this was not integrated into a single output distribution. Within the quantitative uncertainty analysis, emissions and fate-and-transport modeling outputs were assumed to be known with no uncertainty. Although EPA explicitly acknowledged the omitted uncertainty in a qualitative discussion, it was not addressed quantitatively; the 95% confidence intervals reported therefore did not reflect the actual confidence level, because important uncertainties in other components were not included.

The training mentioned above thus should not be related only to the mechanical aspects of software packages but should also address issues of interpretability and the goal of treating uncertainty consistently among all components of a risk assessment. An earlier National Research Council committee (NRC 2002) and the EPA SAB (2004) also raised concerns about the inconsistent approach to uncertainty characterization. However, it is important to recognize that some uncertainties in environmental and health risk assessments defy quantification (even by expert elicitation) (IPCS 2006; NRC 2007d) and that inconsistency in approach will be an issue to grapple with in risk characterization for some time to come.
The call for homologous treatment of uncertainty should not be read as a call for "least-common-denominator" uncertainty analysis, in which the difficulty of characterizing uncertainty in one dimension of the analysis leads to the omission of formal uncertainty analysis in other components.

Use of Expert Judgment(1)

It often happens in practice that the empirical evidence on some components of a risk assessment is insufficient to establish uncertainty bounds and that the evidence on other components captures only a fraction of the total uncertainty. When large uncertainties result from a combination of lack of data and lack of conceptual understanding (for example, of a mechanism of action at low dose), some regulatory agencies have relied on expert judgment to fill the gaps or establish default assumptions. Expert judgment involves asking a set of carefully selected experts a series of questions related to a specific array of potential outcomes, usually providing them with extensive briefing material, training activities, and calibration exercises to help in the determination of confidence intervals. Formal expert judgment has been used in risk analysis since the 1975 Reactor Safety Study (USNRC 1975), and there are multiple examples in the academic literature (Spetzler and von Holstein 1975; Evans et al. 1994; Budnitz et al. 1998; IEc 2006). EPA applications have been more limited, perhaps in part because of institutional and statutory constraints, but interest is growing in the agency. The 2005 Guidelines for Carcinogen Risk Assessment (EPA 2005b, p. 3-32) state that "these cancer guidelines are flexible enough to accommodate the use of expert elicitation to characterize cancer risks, as a complement to the methods presented in the cancer guidelines." A recent study of the health effects of particulate matter used expert elicitation to characterize uncertainties in the concentration-response function for mortality from fine particulate matter (IEc 2006).
Expert elicitation can provide interesting and potentially valuable information, but some critical issues remain to be addressed. It is unclear precisely how EPA can use this information in its risk assessments. For example, in its regulatory impact analysis of the National Ambient Air Quality Standard for PM2.5 (particulate matter no larger than 2.5 µm in aerodynamic diameter), EPA did not use the outputs of the expert elicitation to determine the confidence interval for the concentration-response function for uncertainty propagation but instead calculated alternative risk estimates corresponding to each individual expert's judgment, with no weighting or combining of judgments (EPA 2006b). It is unclear how that type of information can be used productively by a risk manager, inasmuch as it does not convey any sense of the likelihood of various values, although seeing the range and commonality of the judgments of individual experts may be enlightening. Formally combining the judgments can obscure the degree of their heterogeneity, and there are important methodologic debates about the merits of weighting expert opinions on the basis of their performance in calibration exercises (Evans et al. 1994; Budnitz et al. 1998). Two other problems are the need to combine incompatible judgments or models and the technical issue of training and calibration when there is a fundamental lack of knowledge and no opportunity for direct observation of the phenomenon being estimated (for example, the risk of a particular disease at an environmental dose). Although methods have been developed to address various biases in expert elicitation, expert mischaracterization is still expected (NRC 1996; Cullen and Small 2004). Some findings about judgment in the face of uncertainty that can apply to experts are provided in Box 4-3.

(1) Expert judgment is analogous to the term expert elicitation.
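The contrast between individual judgments and an equal-weight pooling can be sketched as follows. The elicited quantiles are invented for illustration (they are not from IEc 2006 or any actual elicitation), and fitting a lognormal to two quantiles is only one of several ways to encode a judgment; the point is that the pooled interval spans the experts' range while hiding how much they disagree.

```python
import numpy as np

rng = np.random.default_rng(11)

# Invented elicited judgments: each expert's (median, 95th percentile) for a
# concentration-response slope. These numbers are illustrative only.
experts = {
    "A": (0.5, 1.5),
    "B": (1.0, 2.0),
    "C": (0.2, 3.0),   # a far more uncertain expert
}

N = 50_000
samples = []
for median, p95 in experts.values():
    mu = np.log(median)
    sigma = (np.log(p95) - mu) / 1.645   # lognormal fitted to the two quantiles
    samples.append(rng.lognormal(mu, sigma, N))

# Equal-weight pooling: an equally weighted mixture of the expert distributions
pooled = np.concatenate(samples)

for name, s in zip(experts, samples):
    print(f"expert {name}: 5th-95th = "
          f"{np.percentile(s, 5):.2f} to {np.percentile(s, 95):.2f}")
print(f"pooled:    5th-95th = "
      f"{np.percentile(pooled, 5):.2f} to {np.percentile(pooled, 95):.2f}")
```

Reporting both the individual intervals and the pooled one, as in the printout above, preserves the heterogeneity that a single combined distribution would mask.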
Other practical issues are the cost of and time required for expert elicitation, the management of conflicts of interest, and the need for a substantial evidence base on which the experts can draw if expert elicitation is to be useful. Given all of those limitations, there are few settings in which expert elicitation is likely to provide the information necessary for discriminating among risk-management options.

BOX 4-3 Cognitive Tendencies That Affect Expert Judgment

Availability: The tendency to assign greater probability to commonly encountered or frequently mentioned events.
Anchoring and adjustment: The tendency to be over-influenced by the first information seen or provided in an initial problem formulation.
Representativeness: The tendency to judge an event by reference to another that in the eye of the expert resembles it, even in the absence of relevant information.
Disqualification: The tendency to ignore data or strongly discount evidence that contradicts strongly held convictions.
Belief in the "law of small numbers": The tendency of scientists to believe small samples from a population to be more representative than is justified.
Overconfidence: The tendency of experts to overestimate the probability that their answers are correct.
Source: Adapted from NRC 1996; Cullen and Small 2004.

The committee suggests that expert elicitation be kept in the portfolio of uncertainty-characterization
OCR for page 103
Science and Decisions: Advancing Risk Assessment options available to EPA but that it be used only when necessary for decision-making and when evidence to support its use is available. The general concept of determining the level of sophistication in uncertainty analysis (which could include expert elicitation or complex QUA) based on decision-making needs is outlined in more detail below. Level of Uncertainty Analysis Needed The discussion of the variety of ways in which EPA has dealt with uncertainty—from defaults to standard QUA to expert elicitation—raises the question of the level of analysis that is needed in any given problem. A careful assessment of when a detailed assessment of uncertainty is needed may avoid putting additional analytic burdens on EPA staff or limiting the ability of EPA staff to complete timely assessments. Formal QUA is not necessary and not recommended for all risk assessments. For example, for a risk assessment conducted to inform a choice among various control strategies, if a simple (but informative and comprehensive) evaluation of uncertainties reveals that the choice is robust with respect to key uncertainties, there is no need for a more formal treatment of uncertainty. More complex characterization of uncertainty is necessary only to the extent that it is needed to inform specific risk-management decisions. It is important to address the extent and nature of uncertainty analysis needed in the planning and scoping phase of a risk assessment (see Chapter 3). For many problems, an initial sensitivity analysis can help determine those parameters whose uncertainty might most impact a decision and thus require a more detailed uncertainty analysis. One valuable approach involves utilizing tornado diagrams, in which individual parameters are permitted to vary while all other uncertain parameters are held fixed. 
The output of this exercise is a ranked graphical display of the parameters that have the largest influence on the final risk calculation. It both provides a visual representation of the sensitivity analysis that is helpful for communication to risk managers and other stakeholders and identifies the subset of parameters that could be carried forward into more sophisticated QUA. "Tiers" or "levels" of sophistication in QUA in risk assessment have been discussed. Paté-Cornell (1996) proposed six levels ranging from level 0 (hazard detection and failure-mode identification) to level 5 (QUA with multiple risk curves reflecting variability at different levels of uncertainty). Similarly, in its draft report on the treatment of uncertainty in exposure assessment, the International Programme on Chemical Safety (IPCS 2006) proposed four tiers for addressing uncertainty and variability in exposure assessment, ranging from the use of default assumptions to sophisticated QUA. The IPCS tiers are shown in Box 4-4.

BOX 4-4 Levels of Uncertainty Analysis

Tier 0: Default assumptions—single value of result.

Tier 1: Qualitative but systematic identification and characterization of uncertainty.

Tier 2: Quantitative evaluation of uncertainty making use of bounding values, interval analysis, and sensitivity analysis.

Tier 3: Probabilistic assessment with single or multiple outcome distributions reflecting uncertainty and variability.

Source: IPCS 2006.
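The one-parameter-at-a-time screening behind a tornado diagram can be sketched in a few lines. The intake-and-slope-factor model and all parameter values and ranges below are hypothetical, chosen only to illustrate the mechanics of ranking parameters by their "swing" on the risk estimate:

```python
# One-at-a-time sensitivity screen of the kind summarized by a tornado diagram.
# The risk model and all parameter values/ranges are hypothetical.

def risk(p):
    # Lifetime excess risk = chronic daily intake * cancer slope factor
    return p["C"] * p["IR"] * p["EF"] * p["ED"] / (p["BW"] * p["AT"]) * p["SF"]

base = {"C": 0.005, "IR": 2.0, "EF": 350, "ED": 30, "BW": 70, "AT": 25550, "SF": 1.5}
ranges = {  # plausible low/high bounds for the uncertain parameters
    "C": (0.001, 0.02), "IR": (1.0, 3.0), "BW": (50, 90), "SF": (0.5, 5.0),
}

swings = {}
for name, (lo, hi) in ranges.items():
    # Vary one parameter across its range; hold all others at base values.
    swings[name] = abs(risk({**base, name: hi}) - risk({**base, name: lo}))

# Sorting by swing gives the ordered bars of the tornado diagram.
for name, swing in sorted(swings.items(), key=lambda kv: -kv[1]):
    print(f"{name}: swing = {swing:.2e}")
```

Parameters with the largest swings are the candidates for more sophisticated probabilistic treatment; the rest can often be left at point estimates.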
Some models may be considered to have greater fidelity than others, given the degree to which they capture theoretical constructs and have been evaluated against field measurements, but this does not necessarily imply that the more detailed model should be used under all circumstances. A model with lower resolution (and more uncertainty) but more timely outputs may have greater utility in some decision contexts, especially if the uncertainty can be reasonably characterized to determine its influence on the decision process. Similarly, a model that is highly uncertain with respect to maximum individual exposure but characterizes population-average exposures well may be suitable if the risk-management decision is driven by the latter. That reinforces a recurring theme of this report regarding the selection of risk-assessment methods in light of the competing demands and constraints described in Chapter 3. With respect to human exposure modeling, EPA has placed increasing emphasis over the last 25 years on quantitative characterization of uncertainty and variability in its exposure assessments. Exposure assessments and exposure models have evolved from simple assessments that addressed only conditions of maximum exposure to assessments that focus explicitly on exposure variation in a population with a quantitative uncertainty analysis. For example, EPA guidelines for exposure assessment issued in 1992 (EPA 1992) called for both high-end and central-tendency estimates for the population. The high end represented exposures that could occur at or above the 90th percentile of exposed people, and the central tendency an exposure near the median or mean of the distribution of exposed people. Through the 1990s, there was increasing emphasis on an explicit and quantitative characterization of the distinction between interindividual variability and uncertainty in exposure assessments.
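The distinction between central-tendency and high-end estimates can be illustrated with a small simulated population. The lognormal exposure distribution and its parameters below are illustrative assumptions, not EPA data:

```python
# Central-tendency vs. high-end exposure estimates from a simulated population.
# The lognormal distribution and its parameters are illustrative assumptions.
import random
import statistics

random.seed(0)
# Inter-individual variability in some daily intake metric (arbitrary units).
exposures = sorted(random.lognormvariate(0.0, 0.8) for _ in range(10_000))

central = statistics.median(exposures)             # central-tendency estimate
high_end = exposures[int(0.90 * len(exposures))]   # "high end": ~90th percentile

print(f"central tendency (median): {central:.2f}")
print(f"high end (90th percentile): {high_end:.2f}")
```

For a right-skewed distribution like this, the high-end estimate sits several-fold above the median, which is why the 1992 guidelines asked for both numbers rather than a single point estimate.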
There was also growing interest in and use of probabilistic simulation methods, such as those based on Monte Carlo or closely related methods, as the basis of estimation of differences in exposure among individuals or, in some cases, of the uncertainty associated with any particular exposure estimate. That effort has been aided by a number of comprehensive studies in the United States and Europe that have used individual personal monitoring in conjunction with ambient and indoor measurements (Wallace et al. 1987; Özkaynak et al. 1996; Kousa et al. 2001, 2002a,b). Expanded use of biomonitoring will provide an opportunity both to evaluate and to expand the characterization of exposure variability in human populations. The committee anticipates expanded efforts by EPA to quantify uncertainty in exposure estimates and to separate uncertainty and population variability in these estimates. Decisions about controlling exposures are typically based on protecting a particular group of people, such as a population or a highly exposed subpopulation (for example, children), because different individuals have different exposures (NRC 1994). The transparency afforded by probabilistic characterization and separation of uncertainty and variability in exposure assessment offers potential benefits for increasing common understanding as a basis of greater convergence in methodology (IPCS 2006). To date, however, probabilistic exposure assessments have focused on the uncertainty and variability associated with variables in an exposure-assessment model. Missing from the EPA process are guidelines for addressing how model uncertainty and data limitations affect overall uncertainty in exposure assessment. In particular, probabilistic methods have provided estimates of exposure to a compound at the 99th percentile of variability in the population, for example, but have often not considered how model uncertainty affects the reliability of the estimated percentiles.
That is an important subject for improvement in future efforts. EPA should also strive for continual enhancement of databases used in exposure modeling, focusing attention on evaluation (that is, personal exposure measurements vs predicted exposures) and applicability to subpopulations of interest. Such documents as
the Exposure Factors Handbook (EPA 1997d) provide crucial data for such analyses and should be regularly revised to reflect recommended improvements.

Dose Assessment

Assessment of doses of chemicals in the human population relies on a wide array of tools and techniques with varied applications in risk assessment. Monitoring and modeling approaches are used for dose assessment, and important uncertainties and variability are linked to them. Many of the above conclusions for exposure assessment are applicable to dose assessment, but with the recognition that there will be greater variability in doses than in exposures across the population as well as greater uncertainty in characterizing those doses. For monitoring, there have been limited but important efforts in recent years to develop comprehensive databases of tissue burdens of chemicals in representative samples of the human population (for example, the National Health and Nutrition Examination Survey [NHANES], the Center for Health Assessment of Mothers and Children of Salinas, the National Children's Study). There are also efforts to conduct systematic biomonitoring programs in the European Union and in California. Biomonitoring data can provide valuable insight into the degree of variability in internal doses in the population, and analyses of these data can help to determine factors that contribute to dose variability or that modify the exposure-dose relationship. But there are limits to how much variability can be assessed from these data. For example, NHANES is a database of representative samples for the entire U.S. population but does not capture geographic subgroups. A discussion of the limitations of NHANES can be found in NRC (2006a).
Even with these emerging biomonitoring data, it is still a challenge to assess the contribution of a single source or set of sources to measures of internal dose, which can limit the risk-management applicability of these data. In addition, there is the challenge of interpreting what the biomonitoring data mean in terms of potential risk to human health (NRC 2006a). Issues related to the value of data obtained through biomonitoring programs are considered in more detail in Chapter 7 in the context of cumulative risk assessment.

Dose modeling is commonly based on physiologically based pharmacokinetic (PBPK) models. PBPK models are used as a means of addressing species, route, and dose-dependent differences in the ratio of tissue-specific dose to applied dose and thus serve as an alternative to default assumptions for extrapolation that link dose to outcome. PBPK models may address some of the uncertainty associated with extrapolating dose-response data from an animal model to humans, but they often fail to fully capture the variability of pharmacokinetics and dose in human populations. Toxicologic research can be used to suggest the structure of PBPK models. And sensitive subpopulations or differing sensitivities within the population might be described in terms of some attributes through pharmacokinetic modeling (see Chapter 5, 4-aminobiphenyl case study). A number of issues related to uncertainty and variability in pharmacokinetic models were addressed in a 2006 workshop (EPA 2006a; Barton et al. 2007). Because the present committee considered that workshop a timely and comprehensive review of the issues, key findings of the workshop are summarized here. The 2006 workshop considered both short-term and long-term goals for incorporating uncertainty and variability into PBPK models. In particular, Barton et al.
(2007) reported the following short-term goals: multidisciplinary teams to integrate deterministic and nondeterministic statistical models; broader use of sensitivity analyses, including those of structural and global (rather than local) parameter changes; and enhanced transparency and reproducibility through more complete documentation of
model structures and parameter values, the results of sensitivity and other analyses, and supporting, discrepant, or excluded data. The longer-term needs reported by Barton et al. (2007) included theoretical and practical methodologic improvements for nondeterministic and statistical modeling; better methods for evaluating alternative model structures; peer-reviewed databases of parameters and covariates and their distributions; expanded coverage of PBPK models for chemicals with different properties; and training and reference materials, such as case studies, tutorials, bibliographies and glossaries, model repositories, and enhanced software. Many recent examples of PBPK models applied in toxicology have been for volatile organic chemicals and have used similar structures. PBPK models are needed for a broader array of chemical species (for example, from low to high volatility and low to high log Kow). Methods for comparing alternative model structures rapidly with available data would facilitate testing of new structural ideas, provide perspective on model uncertainty, and help to address chemicals on which data are sparse. Ultimately, the recognition that models of various degrees of complexity may all describe the available data reasonably well will encourage the acquisition of data to differentiate among competing models.

Mode of Action and Dose-Response Models

Many of the most substantial issues related to both uncertainty and variability can be seen in the realm of dose-response assessment for both cancer and noncancer end points. Historically, risk assessments for carcinogenic end points have been conducted very differently from noncancer risk assessments. In reviewing the issue of mode of action, the committee recognized a clear and important need for a consistent and unified approach to dose-response modeling.
For carcinogens, it has generally been assumed that there is no threshold of effect, and risk assessments have focused on quantifying their potency, which is the low-dose slope of the dose-response relationship. For noncancer risk assessment, the prevailing assumption has been that homeostatic and other repair mechanisms in the body result in a population threshold or low-dose nonlinearity that leads to inconsequential risk at low doses, and risk assessments have focused on defining the reference dose or concentration that is sufficiently below the threshold or threshold-like dose to be deemed safe (“likely to be without an appreciable risk of deleterious effects”) (EPA 2002b, p. 4-4). Noncancer risk assessments simply compare observed or predicted doses with the reference dose to yield a qualitative conclusion about the likelihood of harm. The committee finds substantial deficiencies in both approaches with respect to core concepts and the treatment of uncertainty and variability. Cancer risk assessments often provide estimates of the population burden of disease or fraction of the population likely to be above a defined risk level. But there is no explicit treatment of uncertainty associated with such factors as interspecies extrapolation, high-dose to low-dose extrapolation, and the limitations of dose-response studies to capture all relevant information. Moreover, there is essentially no consideration of variations in the population in susceptibility and vulnerability other than consideration of the increased susceptibility of infants and children. The noncancer risk-assessment paradigm remains one of defining a reference value with no formal quantification of how disease incidence varies with exposure. Human heterogeneity is accommodated with a “default” factor, and it is often unclear when the evidence is sufficient to deviate from such defaults. 
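The arithmetic behind the two traditional approaches can be made concrete with a small numeric sketch; every value below is hypothetical, chosen only to show the form of each calculation:

```python
# Illustrative contrast of the two traditional dose-response approaches.
# All numbers are hypothetical.

# Cancer: linear, no-threshold extrapolation from a point of departure (POD).
pod_dose, pod_risk = 1.0, 0.10      # dose (mg/kg-day) giving 10% extra risk
slope_factor = pod_risk / pod_dose  # potency, per mg/kg-day
env_dose = 0.001                    # environmental dose, mg/kg-day
cancer_risk = slope_factor * env_dose
print(f"low-dose cancer risk estimate: {cancer_risk:.1e}")

# Noncancer: reference dose (RfD) = POD divided by uncertainty factors,
# yielding a bright line rather than a risk-vs-dose estimate.
noael = 5.0                          # no-observed-adverse-effect level, mg/kg-day
uf_interspecies, uf_intraspecies = 10, 10
rfd = noael / (uf_interspecies * uf_intraspecies)
print(f"RfD = {rfd} mg/kg-day; environmental dose is "
      f"{'below' if env_dose < rfd else 'above'} the RfD")
```

Note the asymmetry the committee criticizes: the cancer path yields a quantitative (if uncertain) risk number at any dose, while the noncancer path yields only a yes/no comparison against the RfD.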
The structure of the reference dose also omits any formal quantification of uncertainty. And the current approach does not address compounds for which thresholds are not apparent (for example, fine particulate matter and lead) or not expected (for example, in the case of background additivity). To address the issue of improving dose-response modeling, both from the perspective of uncertainty and variability characterization and in the context of new information on mode of action, the committee has developed a unified and consistent approach to dose-response modeling (Chapter 5). Beyond toxicologic studies of chemicals, there are multiple examples in which uncertainty and variability have been more explicitly treated. For example, two National Research Council reports prepared by the Committee on Biological Effects of Ionizing Radiation (NRC 1999, 2006b) have provided examples for addressing dose-response uncertainty for ionizing radiation. Both the BEIR VI report dealing with radon (NRC 1999) and the BEIR VII report dealing with low linear energy transfer (LET) ionizing radiation (NRC 2006b) provided a quantitative analysis of the uncertainties associated with estimates of radiation cancer risks. More generally, epidemiologic studies provide enhanced mechanisms for characterizing uncertainty and variability, sometimes providing information that is more relevant for human health risk assessment than dose-response relationships derived by extrapolating laboratory-animal data to humans. Emerging disciplines such as health tracking, molecular epidemiology, and social epidemiology provide opportunities to improve resolution in linking exposure to disease, which may enhance the ability of epidemiologists to uncover both main effects and effect modifiers, providing greater insight about human heterogeneity in response. A more detailed discussion of the role of these emerging epidemiologic disciplines from the perspective of cumulative risk assessment is provided in Chapter 7.

(Kow is the octanol-water partition coefficient: the ratio of the concentration of a chemical in octanol to that in water at equilibrium at a specified temperature.)
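One concrete way in which epidemiologic evidence can sharpen uncertainty characterization is pooling effect estimates across studies. A minimal fixed-effect (inverse-variance) pooling sketch follows; the study coefficients and standard errors are hypothetical:

```python
# Fixed-effect (inverse-variance) pooling of effect estimates across studies.
# Study betas and standard errors are hypothetical.
import math

studies = [  # (beta, standard error), e.g., excess risk per unit exposure
    (0.6, 0.25),
    (0.9, 0.40),
    (0.4, 0.15),
]

weights = [1.0 / se**2 for _, se in studies]  # precision weights
pooled = sum(w * b for (b, _), w in zip(studies, weights)) / sum(weights)
pooled_se = math.sqrt(1.0 / sum(weights))
ci = (pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se)
print(f"pooled estimate: {pooled:.2f} (95% CI {ci[0]:.2f} to {ci[1]:.2f})")
```

The pooled standard error is smaller than that of any single study, which is the sense in which pooling reduces the uncertainty attached to relying on one study alone; random-effects variants additionally quantify between-study heterogeneity.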
An additional consideration in the treatment of uncertainty and variability in dose-response modeling is how to combine information across multiple publications, especially in the context of epidemiologic evidence. Various meta-analytic techniques have been employed both to provide pooled central estimates with uncertainty bounds and to evaluate factors that could explain variability in findings across studies (Bell et al. 2005; Ito et al. 2005; Levy et al. 2005). Although these approaches will not be applicable in most contexts, because they require a sufficiently large body of epidemiologic literature to allow pooled analyses, they can be used to reduce the uncertainty associated with selection of a single epidemiologic study for a dose-response function, to characterize the uncertainty associated with application of a pooled estimate to a specific setting, and to determine factors that contribute to variability in dose-response functions. EPA should consider these and other meta-analytic techniques, especially for risk-management applications tied to specific geographic areas.

PRINCIPLES FOR ADDRESSING UNCERTAINTY AND VARIABILITY

EPA and policy analysts are not constrained by a lack of methods for conducting uncertainty analysis but can be paralyzed by the absence of guidance on what levels of detail and rigor are needed for a particular risk assessment. That creates situations that splinter the parties involved into those who favor application of the most sophisticated methods to all cases and those who would rather ignore uncertainty completely and simply rely on point estimates of parameters and defaults for all models. But risk assessment often requires something in between. To confront the issue, EPA should develop guidance for conducting uncertainty and variability analyses and for establishing the level of detail required for various risk assessments.
To foster optimal treatment of variability in its assessments, the agency could develop general guidelines or further supplemental guidance to its health-effects (for example, EPA 2005a) and exposure guidance used in its various programs. To support the effort, the committee offers the principles presented in Box 4-7.

BOX 4-7 Recommended Principles for Uncertainty and Variability Analysis

Risk assessments should provide a quantitative, or at least qualitative, description of uncertainty and variability consistent with available data. The information required to conduct detailed uncertainty analyses may not be available in many situations.

In addition to characterizing the full population at risk, attention should be directed to vulnerable individuals and subpopulations that may be particularly susceptible or more highly exposed.

The depth, extent, and detail of the uncertainty and variability analyses should be commensurate with the importance and nature of the decision to be informed by the risk assessment and with what is valued in a decision. This may best be achieved by early engagement of assessors, managers, and stakeholders in the nature and objectives of the risk assessment and terms of reference (which must be clearly defined).

The risk assessment should compile or otherwise characterize the types, sources, extent, and magnitude of variability and substantial uncertainties associated with the assessment.

To the extent feasible, there should be homologous treatment of uncertainties among the different components of a risk assessment and among different policy options being compared.

To maximize public understanding of and participation in risk-related decision-making, a risk assessment should explain the basis and results of the uncertainty analysis with sufficient clarity to be understood by the public and decision-makers. The uncertainty assessment should not be a significant source of delay in the release of an assessment.

Uncertainty and variability should be kept conceptually separate in the risk characterization.
The principles in Box 4-7 are consistent with and expand on the "Principles for Risk Analysis" originally established in 1995, noted as useful by the National Research Council (NRC 2007c), and recently re-released by the Office of Management and Budget and the Office of Science and Technology Policy (OMB/OSTP 2007). They are derived from the more detailed discussions above. In particular, they are based on the following considerations:

Qualitative thinking about uncertainty can reveal that, despite the uncertainty, one can have confidence in which risk-management option to pick without quantifying further.

Uncertainty and variability need to be addressed in a way that ensures that the risk is not underestimated.

A variety of risks and their corresponding confidence intervals should be characterized.

Depending on the risk-management options, a quantitative treatment of uncertainty and variability may be needed to differentiate among the options for making an informed decision. Uncertainty analysis is important for both data-rich and data-poor situations, but confidence in the analysis will vary according to the amount of information available. Because resources are limited in EPA, it is important to match the level of effort to the extent to which a more detailed analysis may influence an important decision. If an uncertainty analysis will not substantially influence outcomes of importance to the decision-maker, resources should not be expended on a detailed uncertainty analysis (for example, two-dimensional Monte Carlo analysis). In developing guidance for uncertainty analysis, EPA first should develop guidelines that "screen out" risk assessments that focus on risks that do not warrant the use of substantial analytic resources. Second, the guidelines should
describe the level of detail that is warranted for "important" risk assessments. Third, the analysis should be tailored to the decision-rule outcome by addressing what is valued in a decision; for example, if the decision-maker is interested only in the 5% most-exposed or most at-risk members of a population, there is little value in structuring an uncertainty analysis that focuses on uncertainty and variability in the full population. The risk assessor should consider the uncertainties and variabilities that accrue in all stages of the risk assessment—in emissions or environmental concentration data, fate and exposure assessment, dose and mechanism of action, and dose-response relationship. It is important to identify the largest sources of uncertainty and variability and to determine the extent to which there is value in focusing on other components. This approach should be based on a value-of-information (VOI) strategy even when resources for a fully quantitative VOI analysis are limited (see discussion in Chapter 3). For example, when uncertainty gives rise to risk estimates that are spread across one or more key decision points, such as a range that includes acceptable and unacceptable levels of risk, there is value in addressing uncertainty in other components when this information provides more insight on whether one choice of action for reducing risk is better than another. When the goal of a risk assessment is to discriminate among various options, the uncertainty analysis supporting the evaluation should be tailored to provide sufficient resolution to make the discriminations (to the extent that it can).
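The robustness screen described earlier, in which no deeper analysis is needed if the choice of option does not change across the plausible range of a key uncertain parameter, can be sketched as follows. The net-benefit functions and parameter bounds are hypothetical:

```python
# Screening for decision robustness under uncertainty: if one option dominates
# across the plausible range of the key uncertain parameter, a deeper
# uncertainty analysis adds nothing to the choice. Functions are hypothetical.

def net_benefit_a(q):  # inexpensive control; benefit erodes if potency q is high
    return 100 - 40 * q

def net_benefit_b(q):  # costlier control; benefit insensitive to q
    return 70 - 10 * q

q_low, q_high = 0.5, 2.0  # plausible bounds on the uncertain potency parameter

# Both functions are linear in q, so checking the endpoints settles dominance.
a_dominates = all(net_benefit_a(q) > net_benefit_b(q) for q in (q_low, q_high))
print("choice robust to uncertainty in q:", a_dominates)
```

Here the preferred option flips within the plausible range, so the screen reports that the choice is not robust and a more refined uncertainty analysis (or VOI analysis) is warranted; had one option dominated at both bounds, the analysis could stop there.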
It is important to distinguish when and how to engage an uncertainty analysis to characterize one-sided confidence (confidence that the risk does not exceed X, or confidence that all or most individuals are protected from harm) or richer descriptions of the uncertainty (for example, two-sided confidence bounds or the full distribution). Depending on the options being considered, a fuller description may be needed to understand tradeoffs. When a "safe" level of risk is being established without consideration of costs or countervailing risks, a single-sided (bounding) risk estimate or lower-bound acceptable dose may be sufficient.

RECOMMENDATIONS

This chapter addressed the need to consider uncertainty and variability in an interpretable and consistent manner among all components of a risk assessment and to communicate them in the overall risk characterization. The committee focused on more detailed and transparent methods for addressing uncertainty and variability, on specific aspects of uncertainty and variability in key computational steps of risk assessment, and on approaches to help EPA decide what level of detail to use in characterizing uncertainty and variability to support risk-management decisions and public involvement in the process. The committee recognizes that EPA has the technical capability to perform two-stage Monte Carlo and other very detailed and computationally intensive analyses of uncertainty and variability. But such analyses are not necessary in all decision contexts, given that transparency and timeliness are also desirable attributes of a risk assessment and that some decisions can be made with less complex analyses. The question often is not about better ways to do these analyses but about developing a better understanding of when to do them.
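A two-stage (two-dimensional) Monte Carlo analysis of the kind mentioned above can be sketched in a few lines: the outer loop samples uncertainty about the parameters of the population exposure distribution, and the inner loop samples inter-individual variability given those parameters. All distributions and values below are purely illustrative:

```python
# Two-stage ("two-dimensional") Monte Carlo: the outer loop samples parameter
# *uncertainty*; the inner loop samples inter-individual *variability* given
# those parameters. All distributions and values here are illustrative.
import random

random.seed(1)
N_UNCERTAINTY, N_VARIABILITY = 200, 1000

p99_estimates = []
for _ in range(N_UNCERTAINTY):
    mu = random.gauss(1.0, 0.2)       # uncertain log-mean of exposure
    sigma = random.uniform(0.3, 0.6)  # uncertain log-sd of exposure
    # Variability: a simulated population of individual exposures.
    people = sorted(random.lognormvariate(mu, sigma) for _ in range(N_VARIABILITY))
    p99_estimates.append(people[int(0.99 * N_VARIABILITY)])

p99_estimates.sort()
# Uncertainty distribution *of* the 99th-percentile (variability) exposure.
lo, med, hi = (p99_estimates[int(q * N_UNCERTAINTY)] for q in (0.05, 0.5, 0.95))
print(f"99th-percentile exposure: median {med:.2f}, 90% interval ({lo:.2f}, {hi:.2f})")
```

The output separates the two concepts cleanly: the 99th percentile summarizes variability across individuals, while the interval around it summarizes uncertainty about that percentile, which is exactly the separation the committee recommends be kept conceptually distinct in risk characterization.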
To address those issues, the committee provides the following recommendations:

EPA should develop a process to address and communicate the uncertainty and variability that are parts of any risk assessment. In particular, this process should encourage risk assessments to characterize and communicate uncertainty and variability in all key
computational steps of risk assessment—emissions, fate-and-transport modeling, exposure assessment, dose assessment, dose-response assessment, and risk characterization.

EPA should develop guidance to help analysts determine the appropriate level of detail needed in uncertainty and variability analyses to support decision-making. The principles of uncertainty and variability analysis above provide a starting point for development of this guidance, which should include approaches both for analysis and for communication.

In the short term, EPA should adopt a "tiered" approach for selecting the level of detail used in uncertainty and variability assessment. A discussion of the level of detail used for uncertainty analysis and variability assessment should be an explicit part of the problem formulation and planning and scoping.

In the short term, EPA should develop guidelines that define key terms of reference used in the presentation of uncertainty and variability, such as central tendency, average, expected, upper bound, and plausible upper bound. In addition, because risk-risk and benefit-cost comparisons pose unique analytic challenges, guidelines could provide insight into and advice on uncertainty characterizations to support risk decision-making in these contexts.

Improving characterization of uncertainty and variability in risk assessment comes at a cost, and additional resources and training of risk assessors and risk managers will be required. In the short term, EPA should build the capacity to provide guidance to address and implement the principles of uncertainty and variability analysis.

REFERENCES

ATSDR (Agency for Toxic Substances and Disease Registry). 1992. Case Studies in Environmental Medicine: Radon Toxicity. U.S. Department of Health and Human Services, Public Health Service, Agency for Toxic Substances and Disease Registry, Atlanta, GA.

Barton, H.A., W.A. Chiu, R. Woodrow Setzer, M.E. Andersen, A.J. Bailer, F.Y. Bois, R.S. Dewoskin, S. Hays, G.
Johanson, N. Jones, G. Loizou, R.C. MacPhail, C.J. Portier, M. Spendiff, and Y.M. Tan. 2007. Characterizing uncertainty and variability in physiologically based pharmacokinetic models: State of the science and needs for research and implementation. Toxicol. Sci. 99(2):395-402.

Bell, M.L., F. Dominici, and J.M. Samet. 2005. A meta-analysis of time-series studies of ozone and mortality with comparison to the National Morbidity, Mortality and Air Pollution Study. Epidemiology 16(4):436-445.

Bhatia, S., L.L. Robison, O. Oberlin, M. Greenberg, G. Bunin, F. Fossati-Bellani, and A.T. Meadows. 1996. Breast cancer and other second neoplasms after childhood Hodgkin's disease. N. Engl. J. Med. 334(12):745-751.

Blount, B.C., J.L. Pirkle, J.D. Osterloh, L. Valentin-Blasini, and K.L. Caldwell. 2006. Urinary perchlorate and thyroid hormone levels in adolescent and adult men and women living in the United States. Environ. Health Perspect. 114(12):1865-1871.

Bois, F.Y., G. Krowech, and L. Zeise. 1995. Modeling human interindividual variability in metabolism and risk: The example of 4-aminobiphenyl. Risk Anal. 15(2):205-213.

Budnitz, R.J., G. Apostolakis, D.M. Boore, L.S. Cluff, K.J. Coppersmith, C.A. Cornell, and P.A. Morris. 1998. Use of technical expert panels: Applications to probabilistic seismic hazard analysis. Risk Anal. 18(4):463-469.

CDHS (California Department of Health Services). 1990. Report to the Air Resources Board on Inorganic Arsenic. Part B. Health Effects of Inorganic Arsenic. Air Toxicology and Epidemiology Section, Hazard Identification and Risk Assessment Branch, Department of Health Services, Berkeley, CA.

Cowan, C.E., D. Mackay, T.C.J. Feijtel, D. Van De Meent, A. Di Guardo, J. Davies, and N. Mackay, eds. 1995. The Multi-Media Fate Model: A Vital Tool for Predicting the Fate of Chemicals. Pensacola, FL: Society of Environmental Toxicology and Chemistry.

Cullen, A.C., and H.C. Frey. 1999.
The Use of Probabilistic Techniques in Exposure Assessment: A Handbook for Dealing with Variability and Uncertainty in Models and Inputs. New York: Plenum Press.

Cullen, A.C., and M.J. Small. 2004. Uncertain risk: The role and limits of quantitative analysis. Pp. 163-212 in Risk Analysis and Society: An Interdisciplinary Characterization of the Field, T. McDaniels and M.J. Small, eds. Cambridge, UK: Cambridge University Press.

Dubois, D., and H. Prade. 2001. Possibility theory, probability theory and multiple-valued logics: A clarification. Ann. Math. Artif. Intell. 32(1-4):35-66.
EPA (U.S. Environmental Protection Agency). 1986. Guidelines for Carcinogen Risk Assessment. EPA/630/R-00/004. Risk Assessment Forum, U.S. Environmental Protection Agency, Washington, DC. September 1986 [online]. Available: http://www.epa.gov/ncea/raf/car2sab/guidelines_1986.pdf [accessed Jan. 7, 2008].

EPA (U.S. Environmental Protection Agency). 1989a. Risk Assessment Guidance for Superfund, Vol. 1. Human Health Evaluation Manual Part A. EPA/540/1-89/002. Office of Emergency and Remedial Response, U.S. Environmental Protection Agency, Washington, DC. December 1989 [online]. Available: http://rais.ornl.gov/homepage/HHEMA.pdf [accessed Jan. 11, 2008].

EPA (U.S. Environmental Protection Agency). 1989b. Interim Procedures for Estimating Risks Associated with Exposures to Mixtures of Chlorinated Dibenzo-p-Dioxins and Dibenzofurans (CDDs and CDFs): 1989 Update. EPA/625/3-89/016. Risk Assessment Forum, U.S. Environmental Protection Agency, Washington, DC.

EPA (U.S. Environmental Protection Agency). 1992. Guidelines for Exposure Assessment. EPA/600/Z-92/001. Risk Assessment Forum, U.S. Environmental Protection Agency, Washington, DC [online]. Available: http://cfpub.epa.gov/ncea/raf/recordisplay.cfm?deid=15263 [accessed Jan. 14, 2008].

EPA (U.S. Environmental Protection Agency). 1996. Guidelines for Reproductive Toxicity Risk Assessment. EPA/630/R-96/009. Risk Assessment Forum, U.S. Environmental Protection Agency, Washington, DC. October 1996 [online]. Available: http://www.epa.gov/ncea/raf/pdfs/repro51.pdf [accessed Jan. 10, 2008].

EPA (U.S. Environmental Protection Agency). 1997a. Guiding Principles for Monte Carlo Analysis. EPA/630/R-97/001. Risk Assessment Forum, U.S. Environmental Protection Agency, Washington, DC. March 1997 [online]. Available: http://www.epa.gov/ncea/raf/montecar.pdf [accessed Jan. 7, 2008].

EPA (U.S. Environmental Protection Agency). 1997b.
Policy for Use of Probabilistic Analysis in Risk Assessment at the U.S. Environmental Protection Agency. Science Policy Council, U.S. Environmental Protection Agency, Washington, DC. May 15, 1997 [online]. Available: http://www.epa.gov/osp/spc/probpol.htm [accessed Jan. 15, 2008]. EPA (U.S. Environmental Protection Agency). 1997c. Guidance on Cumulative Risk Assessment, Part 1. Planning and Scoping. Science Policy Council, U.S. Environmental Protection Agency, Washington, DC. July 3, 1997 [online]. Available: http://www.epa.gov/brownfields/html-doc/cumrisk2.htm [accessed Jan. 14, 2008]. EPA (U.S. Environmental Protection Agency). 1997d. Exposure Factors Handbook. National Center for Environmental Assessment, Office of Research and Development, U.S. Environmental Protection Agency, Washington, DC. August 1997 [online]. Available: http://www.epa.gov/ncea/efh/report.html [accessed Aug. 5, 2008]. EPA (U.S. Environmental Protection Agency). 1998. Guidelines for Neurotoxicity Risk Assessment. EPA/630/R-95/001F. Risk Assessment Forum, U.S. Environmental Protection Agency, Washington, DC. April 1998 [online]. Available: http://www.epa.gov/ncea/raf/pdfs/neurotox.pdf [accessed Jan. 10, 2008]. EPA (U.S. Environmental Protection Agency). 2001. Risk Assessment Guidance for Superfund (RAGS): Vol. 3 -Part A: Process for Conducting Probabilistic Risk Assessment. EPA 540-R-02-002. Office of Emergency and Remedial Response, U.S. Environmental Protection Agency, Washington, DC. December 2001. http://www.epa.gov/oswer/riskassessment/rags3a/ [accessed Jan. 14, 2008]. EPA (U.S. Environmental Protection Agency). 2002a. Calculating Upper Confidence Limits for Exposure Point Concentrations at Hazardous Waste Sites. OSWER 9285.6-10. Office of Emergency and Remedial Response, U.S. Environmental Protection Agency, Washington, DC. December 2002 [online]. Available: http://www.hanford.gov/dqo/training/ucl.pdf [accessed Jan. 14, 2008]. EPA (U.S. Environmental Protection Agency). 2002b. 
A Review of the Reference Dose and Reference Concentration Processes. Final report. EPA/630/P-02/002F. Risk Assessment Forum, U.S. Environmental Protection Agency, Washington, DC. December 2002 [online]. Available: http://www.epa.gov/iris/RFD_FINAL%5B1%5D.pdf [accessed Jan. 14, 2008]. EPA (U.S. Environmental Protection Agency). 2004a. Risk Assessment Principles and Practices: Staff Paper. EPA/100/B-04/001. Office of the Science Advisor, U.S. Environmental Protection Agency, Washington, DC. March 2004 [online]. Available: http://www.epa.gov/osa/pdfs/ratf-final.pdf [accessed Jan. 9, 2008]. EPA (U.S. Environmental Protection Agency). 2004b. Final Regulatory Analysis: Control of Emissions from Nonroad Diesel Engines. EPA420-R-04-007. Office of Transportation and Air Quality, U.S. Environmental Protection Agency. May 2004 [online]. Available: http://www.epa.gov/nonroad-diesel/2004fr/420r04007a.pdf [accessed Jan. 14, 2008]. EPA (U.S. Environmental Protection Agency). 2005a. Supplemental Guidance for Assessing Susceptibility from Early-Life Exposures to Carcinogens. EPA/630/R-03/003F. Risk Assessment Forum, U.S. Environmental Protection Agency, Washington, DC. March 2005 [online]. Available: http://cfpub.epa.gov/ncea/cfm/recordisplay.cfm?deid=160003 [accessed Jan. 4, 2008]. EPA (U.S. Environmental Protection Agency). 2005b. Guidelines for Carcinogen Risk Assessment. EPA/630/P-03/001F. Risk Assessment Forum, U.S. Environmental Protection Agency, Washington, DC. March 2005 [online]. Available: http://cfpub.epa.gov/ncea/cfm/recordisplay.cfm?deid=116283 [accessed Jan. 15, 2008].
EPA (U.S. Environmental Protection Agency). 2005c. Regulatory Impact Analysis for the Final Clean Air Interstate Rule. EPA-452/R-05-002. Air Quality Strategies and Standards Division, Emission, Monitoring, and Analysis Division and Clean Air Markets Division, Office of Air and Radiation, U.S. Environmental Protection Agency. March 2005 [online]. Available: http://www.epa.gov/CAIR/pdfs/finaltech08.pdf [accessed Jan. 14, 2008].
EPA (U.S. Environmental Protection Agency). 2006a. International Workshop on Uncertainty and Variability in Physiologically Based Pharmacokinetic (PBPK) Models, October 31-November 2, 2006, Research Triangle Park, NC [online]. Available: http://www.epa.gov/ncct/uvpkm/ [accessed Jan. 15, 2008].
EPA (U.S. Environmental Protection Agency). 2006b. Regulatory Impact Analysis (RIA) of the 2006 National Ambient Air Quality Standards for Fine Particle Pollution. Air Quality Strategies and Standards Division, Office of Air and Radiation, U.S. Environmental Protection Agency. October 6, 2006 [online]. Available: http://www.epa.gov/ttn/ecas/regdata/RIAs/Executive%20Summary.pdf [accessed Nov. 17, 2008].
EPA (U.S. Environmental Protection Agency). 2007a. Integrated Risk Information System (IRIS). Office of Research and Development, U.S. Environmental Protection Agency, Washington, DC [online]. Available: http://www.epa.gov/iris/ [accessed Jan. 15, 2008].
EPA (U.S. Environmental Protection Agency). 2007b. Emissions Factors & AP 42. Clearinghouse for Inventories and Emissions Factors, Technology Transfer Network, U.S. Environmental Protection Agency [online]. Available: http://www.epa.gov/ttn/chief/ap42/index.html [accessed Jan. 15, 2008].
EPA (U.S. Environmental Protection Agency). 2008. EPA’s Council for Regulatory Environmental Modeling (CREM). Office of the Science Advisor, U.S. Environmental Protection Agency. October 23, 2008 [online]. Available: http://www.epa.gov/crem/ [accessed Nov. 20, 2008].
EPA SAB (U.S. Environmental Protection Agency Science Advisory Board). 2004. EPA’s Multimedia, Multipathway, and Multireceptor Risk Assessment (3MRA) Modeling System: A Review by the 3MRA Review Panel of the EPA Science Advisory Board. EPA-SAB-05-003. U.S. Environmental Protection Agency, Science Advisory Board, Washington, DC. October 22, 2004 [online]. Available: http://yosemite.epa.gov/sab/sabproduct.nsf/99390EFBFC255AE885256FFE00579745/$File/SAB-05-003_unsigned.pdf [accessed Sept. 9, 2008].
Evans, J.S., G.M. Gray, R.L. Sielken, A.E. Smith, C. Valdez-Flores, and J.D. Graham. 1994. Use of probabilistic expert judgment in uncertainty analysis of carcinogenic potency. Regul. Toxicol. Pharmacol. 20(1):15-36.
Fenner, K., M. Scheringer, M. MacLeod, M. Matthies, T. McKone, M. Stroebe, A. Beyer, M. Bonnell, A.C. Le Gall, J. Klasmeier, D. Mackay, D. van de Meent, D. Pennington, B. Scharenberg, N. Suzuki, and F. Wania. 2005. Comparing estimates of persistence and long-range transport potential among multimedia models. Environ. Sci. Technol. 39(7):1932-1942.
Finkel, A.M. 1990. Confronting Uncertainty in Risk Management: A Guide for Decision Makers. Washington, DC: Resources for the Future.
Finkel, A.M. 1995a. A quantitative estimate of the variations in human susceptibility to cancer and its implications for risk management. Pp. 297-328 in Low-Dose Extrapolation of Cancer Risks: Issues and Perspectives, S.S. Olin, W. Farland, C. Park, L. Rhomberg, R. Scheuplein, and T. Starr, eds. Washington, DC: ILSI Press.
Finkel, A.M. 1995b. Toward less misleading comparisons of uncertain risks: The example of aflatoxin and alar. Environ. Health Perspect. 103(4):376-385.
Finkel, A.M. 2002. The joy before cooking: Preparing ourselves to write a risk research recipe. Hum. Ecol. Risk Assess. 8(6):1203-1221.
Greer, M.A., G. Goodman, R.C. Pleus, and S.E. Greer. 2002. Health effects assessment for environmental perchlorate contamination: The dose response for inhibition of thyroidal radioiodine uptake in humans. Environ. Health Perspect. 110(9):927-937.
Grossman, L. 1997. Epidemiology of ultraviolet-DNA repair capacity and human cancer. Environ. Health Perspect. 105(Suppl. 4):927-930.
Hattis, D., P. Banati, and R. Goble. 1999. Distributions of individual susceptibility among humans for toxic effects: How much protection does the traditional tenfold factor provide for what fraction of which kinds of chemicals and effects? Ann. NY Acad. Sci. 895:286-316.
Hattis, D., R. Goble, A. Russ, M. Chu, and J. Ericson. 2004. Age-related differences in susceptibility to carcinogenesis: A quantitative analysis of empirical animal bioassay data. Environ. Health Perspect. 112(11):1152-1158.
Heidenreich, W.F. 2005. Heterogeneity of cancer risk due to stochastic effects. Risk Anal. 25(6):1589-1594.
ICRP (International Commission on Radiological Protection). 1998. Genetic Susceptibility to Cancer. ICRP Publication 79. Annals of the ICRP 28(1-2). New York: Pergamon.
IEc (Industrial Economics, Inc.). 2006. Expanded Expert Judgment Assessment of the Concentration-Response Relationship Between PM2.5 Exposure and Mortality. Prepared for the Office of Air Quality Planning and Standards, U.S. Environmental Protection Agency, Research Triangle Park, NC, by Industrial Economics Inc., Cambridge, MA. September 2006 [online]. Available: http://www.epa.gov/ttn/ecas/regdata/Uncertainty/pm_ee_report.pdf [accessed Jan. 14, 2008].
Ingelman-Sundberg, M., I. Johannson, H. Yin, Y. Terelius, E. Eliasson, P. Clot, and E. Albano. 1993. Ethanol-inducible cytochrome P4502E1: Genetic polymorphism, regulation, and possible role in the etiology of alcohol-induced liver disease. Alcohol 10(6):447-452.
Ingelman-Sundberg, M., M.J. Ronis, K.O. Lindros, E. Eliasson, and A. Zhukov. 1994. Ethanol-inducible cytochrome P4502E1: Regulation, enzymology and molecular biology. Alcohol Suppl. 2:131-139.
IPCS (International Programme on Chemical Safety). 2000. Human exposure and dose modeling. Part 6 in Human Exposure Assessment. Environmental Health Criteria 214. Geneva: World Health Organization [online]. Available: http://www.inchem.org/documents/ehc/ehc/ehc214.htm#PartNumber:6 [accessed Jan. 15, 2008].
IPCS (International Programme on Chemical Safety). 2004. IPCS Risk Assessment Terminology Part 1: IPCS/OECD Key Generic Terms used in Chemical Hazard/Risk Assessment and Part 2: IPCS Glossary of Key Exposure Assessment Terminology. Geneva: World Health Organization [online]. Available: http://www.who.int/ipcs/methods/harmonization/areas/ipcsterminologyparts1and2.pdf [accessed Jan. 15, 2008].
IPCS (International Programme on Chemical Safety). 2006. Draft Guidance Document on Characterizing and Communicating Uncertainty of Exposure Assessment, Draft for Public Review. IPCS Project on the Harmonization of Approaches to the Assessment of Risk from Exposure to Chemicals. Geneva: World Health Organization [online]. Available: http://www.who.int/ipcs/methods/harmonization/areas/draftundertainty.pdf [accessed Jan. 15, 2008].
Ito, K., S.F. DeLeon, and M. Lippmann. 2005. Associations between ozone and daily mortality: Analysis and meta-analysis. Epidemiology 16(4):446-457.
Kavlock, R. 2006. Computational Toxicology: New Approaches to Improve Environmental Health Protection. Presentation at the 1st Meeting on Improving Risk Analysis Approaches Used by the U.S. EPA, November 20, 2006, Washington, DC.
Kousa, A., C. Monn, T. Totko, S. Alm, L. Oglesby, and M.J. Jantunen. 2001. Personal exposures to NO2 in the EXPOLIS study: Relation to residential indoor, outdoor, and workplace concentrations in Basel, Helsinki, and Prague. Atmos. Environ. 35(20):3405-3412.
Kousa, A., J. Kukkonen, A. Karppinen, P. Aarnio, and T. Koskentalo. 2002a. A model for evaluating the population exposure to ambient air pollution in an urban area. Atmos. Environ. 36(13):2109-2119.
Kousa, A., L. Oglesby, K. Koistinen, N. Kunzli, and M. Jantunen. 2002b. Exposure chain of urban air PM2.5: Associations between ambient fixed site, residential outdoor, indoor, workplace, and personal exposures in four European cities in the EXPOLIS study. Atmos. Environ. 36(18):3031-3039.
Krupnick, A., R. Morgenstern, M. Batz, P. Nelsen, D. Burtraw, J.S. Shih, and M. McWilliams. 2006. Not a Sure Thing: Making Regulatory Choices under Uncertainty. Washington, DC: Resources for the Future. February 2006 [online]. Available: http://www.rff.org/rff/Documents/RFF-Rpt-RegulatoryChoices.pdf [accessed Nov. 22, 2006].
Landi, M.T., A. Baccarelli, R.E. Tarone, A. Pesatori, M.A. Tucker, M. Hedayati, and L. Grossman. 2002. DNA repair, dysplastic nevi, and sunlight sensitivity in the development of cutaneous malignant melanoma. J. Natl. Cancer Inst. 94(2):94-101.
Levy, J.I., S.M. Chemerynski, and J.A. Sarnat. 2005. Ozone exposure and mortality: An empiric Bayes meta-regression analysis. Epidemiology 16(4):458-468.
Mackay, D. 2001. Multimedia Environmental Models: The Fugacity Approach, 2nd Ed. Boca Raton: Lewis.
McKone, T.E., and M. MacLeod. 2004. Tracking multiple pathways of human exposure to persistent multimedia pollutants: Regional, continental, and global scale models. Annu. Rev. Environ. Resour. 28:463-492.
Micu, A.L., S. Miksys, E.M. Sellers, D.R. Koop, and R.F. Tyndale. 2003. Rat hepatic CYP2E1 is induced by very low nicotine doses: An investigation of induction, time course, dose response, and mechanism. J. Pharmacol. Exp. Ther. 306(3):941-947.
Morgan, M.G., M. Henrion, and M. Small. 1990. Uncertainty: A Guide to Dealing with Uncertainty in Quantitative Risk and Policy Analysis. Cambridge: Cambridge University Press.
NRC (National Research Council). 1983. Risk Assessment in the Federal Government: Managing the Process. Washington, DC: National Academy Press.
NRC (National Research Council). 1994. Science and Judgment in Risk Assessment. Washington, DC: National Academy Press.
NRC (National Research Council). 1996. Understanding Risk: Informing Decisions in a Democratic Society. Washington, DC: National Academy Press.
NRC (National Research Council). 1999. Health Effects of Exposure to Radon: BEIR VI. Washington, DC: National Academy Press.
NRC (National Research Council). 2000. Copper in Drinking Water. Washington, DC: National Academy Press.
NRC (National Research Council). 2002. Estimating the Public Health Benefits of Proposed Air Pollution Regulations. Washington, DC: The National Academies Press.
NRC (National Research Council). 2006a. Human Biomonitoring of Environmental Chemicals. Washington, DC: The National Academies Press.
NRC (National Research Council). 2006b. Health Risks from Exposures to Low Levels of Ionizing Radiation: BEIR VII. Washington, DC: The National Academies Press.
NRC (National Research Council). 2007a. Applications of Toxicogenomic Technologies to Predictive Toxicology and Risk Assessment. Washington, DC: The National Academies Press.
NRC (National Research Council). 2007b. Toxicity Testing in the Twenty-First Century: A Vision and a Strategy. Washington, DC: The National Academies Press.
NRC (National Research Council). 2007c. Scientific Review of the Proposed Risk Assessment Bulletin from the Office of Management and Budget. Washington, DC: The National Academies Press.
NRC (National Research Council). 2007d. Models in Environmental Regulatory Decision Making. Washington, DC: The National Academies Press.
OMB/OSTP (Office of Management and Budget/Office of Science and Technology Policy). 2007. Updated Principles for Risk Analysis. Memorandum for the Heads of Executive Departments and Agencies, from Susan E. Dudley, Administrator, Office of Information and Regulatory Affairs, Office of Management and Budget, and Sharon L. Hays, Associate Director and Deputy Director for Science, Office of Science and Technology Policy, Washington, DC. September 19, 2007 [online]. Available: http://www.whitehouse.gov/omb/memoranda/fy2007/m07-24.pdf [accessed Jan. 4, 2008].
Özkaynak, H., J. Xue, J. Spengler, L. Wallace, E. Pellizzari, and P. Jenkins. 1996. Personal exposure to airborne particles and metals: Results from the particle TEAM study in Riverside, California. J. Expo. Anal. Environ. Epidemiol. 6(1):57-78.
Paté-Cornell, M.E. 1996. Uncertainties in risk analysis: Six levels of treatment. Reliab. Eng. Syst. Safe. 54(2):95-111.
Sexton, K., and D. Hattis. 2007. Assessing cumulative health risks from exposure to environmental mixtures: Three fundamental questions. Environ. Health Perspect. 115(5):825-832.
Spetzler, C.S., and C.S. von Holstein. 1975. Probability encoding in decision analysis. Manage. Sci. 22(3):340-358.
Tawn, E.J. 2000. Book Reviews: Genetic Susceptibility to Cancer (1998) and Genetic Heterogeneity in the Population and its Implications for Radiation Risk (1999). J. Radiol. Prot. 20:89-92.
USNRC (U.S. Nuclear Regulatory Commission). 1975. The Reactor Safety Study: An Assessment of Accident Risk in U.S. Commercial Nuclear Power Plants. WASH-1400. NUREG-75/014. U.S. Nuclear Regulatory Commission, Washington, DC. October 1975 [online]. Available: http://www.osti.gov/energycitations/servlets/purl/7134131-wKhXcG/7134131.PDF [accessed Jan. 15, 2008].
Wallace, L.A., E.D. Pellizzari, T.D. Hartwell, C. Sparacino, and R. Whitmore. 1987. TEAM (Total Exposure Assessment Methodology) study: Personal exposures to toxic substances in air, drinking water, and breath of 400 residents of New Jersey, North Carolina, and North Dakota. Environ. Res. 43(2):290-307.
Wu-Williams, A.H., L. Zeise, and D. Thomas. 1992. Risk assessment for aflatoxin B1: A modeling approach. Risk Anal. 12(4):559-567.
Zadeh, L.A. 1965. Fuzzy sets. Inform. Control 8(3):338-353.
Zartarian, V., T. Bahadori, and T. McKone. 2005. Adoption of an official ISEA glossary. J. Expo. Anal. Environ. Epidemiol. 15(1):1-5.
Zenick, H. 2006. Maturation of Risk Assessment: Attributable Risk as a More Holistic Approach. Presentation at the 1st Meeting on Improving Risk Analysis Approaches Used by the U.S. EPA, November 20, 2006, Washington, DC.