Science and Decisions: Advancing Risk Assessment

6 Selection and Use of Defaults

As described in Chapter 2, the authors of the National Research Council report Risk Assessment in the Federal Government: Managing the Process (NRC 1983), known as the Red Book, recommended that federal agencies develop uniform inference guidelines for risk assessment. The guidelines were to be developed to justify and select, from among available options, the assumptions to be used for agency risk assessments. The Red Book committee recognized that distinguishing the available options on purely scientific grounds would not be possible and that an element of what the committee referred to as risk-assessment policy—often referred to later as science policy (NRC 1994)1—was needed to select the options for general use. The need for agencies to specify the options for general use was seen by the committee as necessary to avoid manipulation of risk-assessment outcomes and to ensure a high degree of consistency in the risk-assessment process.

The specific inference options that now appear in EPA’s risk-assessment guidelines, and that permeate risk assessments performed under those guidelines, have come to be called default options, or more simply defaults. The Red Book committee defined a default option as the inference option “chosen on the basis of risk assessment policy that appears to be the best choice in the absence of data to the contrary.” As the authors of Science and Judgment in Risk Assessment (NRC 1994) observed, many of the key inference options selected as defaults by EPA are based on relatively strong scientific foundations, although none can be demonstrated to be “correct” for every toxic substance. Because generally applicable defaults are necessary, the ultimate choice of defaults involves an element of policy.
Since 1983, EPA has updated its set of defaults and has made strides in providing more detailed explanations for the choice of defaults that emphasize their theoretical and evidentiary foundations and the policy and administrative considerations that may have influenced the choices (EPA 2004a).

1 The Red Book committee did not use the phrase risk-assessment policy in the usual sense in which science policy is used but far more narrowly to describe the policy elements of risk assessments. The committee distinguished between the policy considerations in risk assessment and those pertaining to risk management.
The Red Book emphasized both the need for generically applicable defaults and the need for flexibility in their application. Thus, the Red Book and Science and Judgment pointed out that scientific data could shed light, in the case of specific substances, on one or more of the information gaps in a risk assessment for which a generally applicable default had been applied. The substance-specific data might reveal that a given default is inapplicable because it is inconsistent with the data. The substance-specific data might not show that the default had been ill chosen in the general sense but could show its inapplicability in the specific circumstance. Thus arose the notion of substance-specific departures from defaults based on substance-specific data. Much discourse and debate have attended the question of how many data, and of what type, are necessary to justify such departures, and the committee addresses the matter in this chapter. EPA recently altered its view on the question of “departures from defaults,” and this chapter begins by examining that view in relation to the chapter’s central theme.

CURRENT ENVIRONMENTAL PROTECTION AGENCY POLICY ON DEFAULTS

The committee recognizes that defaults are among the most controversial aspects of risk assessments. Because the committee considers that defaults will always be a necessary part of the risk-assessment process, it examined EPA’s current policy on defaults with an eye toward understanding its applications, its strengths and weaknesses, and how the current system of defaults might be improved. EPA began articulating a shift toward its current policy on defaults in the Risk Characterization Handbook (EPA 2000a) when it stated,

For some common and important data gaps, Agency or program-specific risk assessment guidance provides default assumptions or values. Risk assessors should carefully consider all available data before deciding to rely on default assumptions. If defaults are used, the risk assessment should reference the Agency guidance that explains the default assumptions or values (p. 41).

EPA’s staff paper titled Risk Assessment Principles and Practices (EPA 2004a) reflected a further shift in the agency’s practices on defaults:

EPA’s current practice is to examine all relevant and available data first when performing a risk assessment. When the chemical- and/or site-specific data are unavailable (that is, when there are data gaps) or insufficient to estimate parameters or resolve paradigms, EPA uses a default assumption in order to continue with the risk assessment. Under this practice EPA invokes defaults only after the data are determined to be not usable at that point in the assessment—this is a different approach from choosing defaults first and then using data to depart from them (p. 51).

EPA’s revised cancer guidelines (EPA 2005a) emphasize that the policy is consistent with EPA’s mission and make clear that the general policy applies to cancer risk assessments:

As an increasing understanding of carcinogenesis is becoming available, these cancer guidelines adopt a view of default options that is consistent with EPA’s mission to protect human health while adhering to the tenets of sound science. Rather than viewing default options as the starting point from which departures may be justified by new scientific information, these cancer guidelines view a critical analysis of all of the available information that is relevant to assessing the carcinogenic risk as the starting point from which a default option may be invoked if needed to address uncertainty or the absence of critical information (p. 1-7).

Those statements may reflect the agency’s current perspective on the primacy of scientific data and analysis in its risk assessments; the agency commits to examining all relevant
and available data before selecting defaults. The committee struggled with what the current policy means in terms of both literal interpretation and application to the risk-assessment process. The lack of clarity has the potential to lead to multiple interpretations and raised questions regarding the implications of the policy for risk decision-making. It is difficult to argue with a more robust examination of available science, which the committee strongly supports; however, the committee expressed concern that without clear guidelines on the extent to which science should be evaluated, the open-ended approach could lead to delays and undermine the credibility of defaults and the ultimate decision process. The committee notes that the risk-characterization handbook (EPA 2000a) provides some statements regarding the need to identify key data gaps and avoid delays in the risk-assessment process in the planning and scoping phase, but it is concerned that such statements may not be adequate to address complications resulting from the current policy:

Another discussion during the planning and scoping process concerns the identification of key data gaps and thoughts about how to fill the information needs. For example, can you fill the information needs in the near-term using existing data, in the mid-term by conducting tests with currently available test methods to provide data on the agent(s) of interest, and over the long-term to develop better, more realistic understandings of exposure and effects, and to construct more realistic test methods to evaluate agents of concern? In keeping with [transparency, clarity, consistency, and reasonableness] TCCR, care must be taken not to set the risk assessment up for failure by delaying environmental decisions until more research is done (p. 29).
The policy may be appealing at first glance: it creates a two-phase process that obligates the agency to give full attention to all available and relevant scientific information and, in the absence of some needed information, to use defaults rather than allow uncertainties to force an end to an assessment and to related regulatory decision-making. On closer examination, however, the current policy carries a number of disadvantages.

Concerns with EPA’s Current Policy on Defaults

Depending on implementation, the position articulated in the 2004 staff paper (EPA 2004a) and the 2005 cancer guidelines (EPA 2005a) could represent a radical departure from previous policies. Rather than starting with a default that represents the culmination of a thorough examination of “all the relevant and available scientific information,” the policy has the potential to promote with each assessment a full ad hoc examination of data and the spectrum of inferences they may support without being selective or contrasting them with the default to reflect on their plausibility. There are then no real defaults, and every inference is subject to ready replacement. By definition, a full evaluation of the evidence identifies the best available assumption, whether it is based on chemical-specific information or more general information. Thus, EPA takes on, even more than before, the burden of establishing that existing science does not warrant use of an inference different from the default.

There is also the commitment “to examine all relevant and available data” first. Pushed to the extreme for some chemicals, that can mean retrieving, cataloging, and demonstrating full consideration of thousands of references, many of little utility but nonetheless “relevant.” It also could lead to the reopening of the basis of some of the generic defaults on an ad hoc basis, as discussed below.
Those possibilities create further vulnerability to challenge and delay that could affect environmental protection and public health. From a practical management perspective, the mandate to consider “all relevant and available data” may be unworkable for an overburdened and underresourced EPA (EPA SAB 2006, 2007) that is struggling to keep up with demands for analysis of hazard and dose-response
information (Gilman 2006; Mills 2006). It may also have profound ripple effects on regulatory and risk-management efforts by other agencies at both the federal and state levels. And there is a lack of clarity as to what the policy means in cases in which the database supports a different inference from the default and does not merely replace a default with data.2

2 One member of the committee concluded that the new EPA policy is not unclear but instead represents a definitive and troubling shift away from a decades-old system that appropriately valued sound scientific information and avoided the paralysis of having to re-examine generic information with every new risk assessment. During its deliberations, the member heard two things clearly from EPA that make the intent of its language unambiguous: (1) that EPA regards “data” and inferences as two concepts that can be compared with each other, and that the former should trump the latter (the member heard, for example, that the new policy is intended to repudiate the historical use of “risk assessment without data—just defaults”); and (2) that the goal of the policy shift is to “reduce reliance on defaults” (EPA SAB 2004a; EPA 2007d). This member of the committee questioned both of these premises. First, the member concluded that there are two problems with the notion of pitting “data” against defaults. The logical problem, in this member’s opinion, is that the actual choice EPA faces is a choice among models (inferences, assumptions), which are not themselves “data” but which are ways of making sense of data. For example, reams of data may exist on some biochemical reaction that might suggest that a particular rodent tumor was caused via a mechanism that does not operate in humans. EPA’s task, however, is to decide whether or not to make the assumption that the rodent tumors are relevant, in the absence of a well-posed, data-supported theory to the contrary. Without the alternative assumption being articulated, EPA has nothing coherent to do with the data. The more important practical problem with EPA’s new formulation, in this member’s opinion, is that a policy of “retreating to the default” if the chemical- or site-specific data are “not usable” ignores the vast quantities of data (interpretable via inferences with a sound theoretical basis) that already support most of the defaults EPA has chosen over the past 30 years. For a decision not to “invoke” a default to be made fairly, data supporting the inference that a rodent tumor response was irrelevant would have to be weighed against the data supporting the default inference that such responses are generally relevant (see, for example, Allen et al. 1988), data supporting a possible nonlinearity in cancer dose-response would have to be weighed against the data supporting linearity as a general rule (Crawford and Wilson 1996), data on pharmacokinetic parameters would have to be weighed against the data and theory supporting allometric interspecies scaling (see, for example, Clewell et al. 2002), and so on. In other words, having no chemical-specific data other than bioassay data does not imply that there is a “data gap,” as EPA now claims—it may well mean that vast amounts of data support a time-tested inference on how to interpret the bioassay and that no data to the contrary exist because no plausible inference to the contrary exists in this case. In short, this committee member sees most of the common risk-assessment defaults not as “inferences retreated to because of the absence of information” but rather as “inferences generally endorsed on account of the information.” Therefore, this committee member concluded that EPA’s stated goal of “reducing reliance on defaults” per se is problematic; it begs the question of why a scientific-regulatory agency would ever want to reduce its reliance on the inferences that are supported by the most substantial theory and evidence. Worse yet, the committee member concluded, it seems to prejudice the comparison between default and alternative models before it starts—if EPA accomplishes part of its mission by ruling against a default model, the “critical analysis of all available information” may be preordained by a distaste for the conclusion that the default is in fact proper. This committee member certainly endorses the idea of reducing EPA’s reliance on defaults that are found to be outmoded, erroneous, or correct in the general case but not in a specific case—but identifying those inferior assumptions is exactly what a system of departures from defaults, as recommended in the Red Book, in Science and Judgment, and in this report, is designed to do. EPA should modify its language to make clear that across-the-board skepticism about defaults is not scientifically appropriate. Thus, the committee member concludes that the recommendations in this chapter apply whether or not EPA believes it has “evolved beyond defaults.” A system that evaluates every inference for every risk assessment still needs ground rules, of the kind recommended in this chapter, to show interested parties how EPA will decide what data are “usable” or which inference is proper. This committee member urges EPA to delineate what evidence will determine how it makes these judgments and how that evidence will be interpreted and questioned—and EPA’s current policy sidesteps these tasks.

What Is Needed for an Effective Default Policy?

Both the current and previous EPA policies on defaults raise a crucial question: How should the agency determine that the available data are or are not “usable,” that is, that they do or do not support an inference alternative to the default? The question underscores the need for guidance to implement a default policy and evaluate its effect on risk decisions and efforts to protect the environment and public health. The committee did not conduct a detailed evaluation, but a cursory examination of some recent assessments shows detailed presentations and analyses of the available data bearing on each assessment and explicit determinations that the identified data do not support an inference alternative to such defaults as low-dose linearity and the cross-species scaling of risk, but thus far no wholesale reconsideration of generic defaults.

No matter how one interprets EPA’s current policy on defaults, an effective policy requires criteria to guide risk assessors on the factors that would render data “not usable” (and therefore require that a default be invoked) or that would support an inference alternative to a default. Therefore, it remains the case that

• Defaults need to be maintained for the steps in risk assessment that require inferences beyond those that can be clearly drawn from the available data or to otherwise fill common data gaps.
• Criteria should be available for judging whether, in specific cases, data are adequate for direct use or to support an inference in place of a default.

The “data” that may be usable in place of a default will depend on the role of the particular default in question. For example, some defaults regarding exposure may be readily inferred from observations and in this sense are “measurable,” but many defaults for biologic end points will continue to be based on science and policy judgments. The latter type of default is the focus of this report.
Readily observable and measurable defaults, such as the amount of air breathed each day or the number of liters of water consumed, may be chosen to make assessments manageable or consistent with one another but not to support inferences beyond the available data or what can be readily observed, and they are therefore generally less difficult to justify. Decisions about replacing them with distributions (for variability analysis) or with specific values based on survey data tend to be less controversial. In contrast, the defaults involving science and policy judgments, such as the relevance of a rodent cancer finding in predicting low-dose human risk, are used to draw inferences “beyond the data,” that is, beyond what may be directly observable through scientific study. The next section gives examples of important defaults of that kind related to the hazard-identification and dose-response assessment steps.

Inferences are needed when underlying biologic knowledge is uncertain or absent. Indeed, fundamental lack of understanding of key biologic phenomena can remain after many years of research. In some cases, however, research “data”—typically on pharmacokinetic (PK) behavior and modes of toxic action—support an inference different from that implicit in the default. Determining whether such “data” are adequate to support a different inference is often difficult and controversial. Much of the emphasis of this chapter is on the defaults chosen as “inferences” in the presence of considerable uncertainty, not on those chosen to represent observed parameters or to fill gaps in data on readily observable phenomena. In the discussions in this chapter, simply for ease of presentation, the committee uses the term departures in offering its views regarding the use of inferences based on substance-specific data rather than defaults.
Departures in the sense used in this report refers to the decision in specific cases as to whether data are adequate to support an inference different from the default and to make it unnecessary to adopt the default. Recognizing the challenge
of interpreting EPA’s policy, the committee, to be consistent with its charge, offers its discussions and recommendations in the context of current EPA policy.

THE ENVIRONMENTAL PROTECTION AGENCY’S SYSTEM OF DEFAULTS

Explicit Defaults

The system of inferences used in EPA risk assessments is contained in the agency’s reports, staff papers, procedural manuals, and guidance documents. These materials provide some advice and information on interpreting the strengths and limitations of various types of scientific datasets, on data synthesis (including whether a body of data supports a default or an alternative inference), and on risk-assessment methods. Guidance is given on assessment of risks of cancer (EPA 2005a), neurotoxicity (EPA 1998a), developmental toxicity (EPA 1991a), and reproductive toxicity (EPA 1996); on Monte Carlo analysis (EPA 1997); on assessment of chemical mixtures (EPA 1986, 2000b); on reference-dose (RfD) and reference-concentration (RfC) processes (EPA 1994, 2002a,b); and on how to judge data on whether, for example, male rat kidney tumors (EPA 1991b) or rodent thyroid tumors (EPA 1998b) are relevant to humans (see, for example, Box 2-1 and Table D-1). The toxicity guidance documents also identify some defaults commonly used in assessments covered by the guidance. Tables 6-1 and 6-2 list some of the important defaults for carcinogen and noncarcinogen risk assessments.

Missing Defaults

In addition to explicitly recognized defaults, EPA relies on a series of implicit or “missing” defaults3—assumptions that may sometimes exert great influence on risk characterization. For a risk assessment to be completed, every “inference gap” must have been “bridged” with some assumption, whether explicitly stated or not. Assumptions analogous to missing defaults are made in every field. For example, it is common to treat a pair of variables as independent when no information exists about any relationship between them.
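The quantitative force of such an independence assumption can be sketched with a short simulation. The example below is purely illustrative and is not any EPA model: the distributions, parameter values, and function name are our assumptions. It propagates two lognormal inputs through a product-form calculation, once treating them as independent (the "missing default" of a correlation of exactly 0.0) and once with a positive correlation between the underlying normal variables:

```python
import math
import random

def simulate_product(n=100_000, rho=0.0, seed=1):
    """Propagate two lognormal inputs (say, an intake rate and a potency
    term, both illustrative) through a product-form risk calculation.
    rho is the correlation between the underlying normals; rho=0.0
    reproduces the implicit default of independence."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        z1 = rng.gauss(0.0, 1.0)
        # Build a second normal with correlation rho to the first.
        z2 = rho * z1 + math.sqrt(1.0 - rho**2) * rng.gauss(0.0, 1.0)
        intake = math.exp(0.0 + 0.5 * z1)    # illustrative lognormal input
        potency = math.exp(-3.0 + 0.8 * z2)  # illustrative lognormal input
        total += intake * potency
    return total / n

mean_indep = simulate_product(rho=0.0)
mean_corr = simulate_product(rho=0.6)
print(mean_indep, mean_corr)
```

With these illustrative parameters, the positively correlated case yields a noticeably larger mean risk than the independence default, which is the sense in which assuming a correlation of exactly 0.0 is a substantive modeling choice rather than a neutral one.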
That assumption may well be reasonable, but it imposes a powerful condition on the analysis: that the correlation coefficient between the variables is exactly 0.0 rather than any other value between −1 and 1. Use of missing defaults has become so ingrained in EPA risk-assessment practice that it is as though EPA has chosen the same assumptions explicitly. The committee recommends that EPA systematically examine the risk-assessment process, identify key instances of the bridging of an inference gap with a missing default, examine the basis of each, and consider alternatives if such a default is not sufficiently justified.

This committee is concerned particularly about two missing defaults. First, agents that have not been examined sufficiently in epidemiologic or toxicologic studies are insufficiently included in or even excluded from risk assessments. Typically, there is no description of the risks potentially posed by these agents in the risk characterization, so their presence often carries no weight in decision-making. With few notable exceptions (for example, dioxin-like compounds), they are treated as though they pose no risk that should be subject to regulation in EPA’s air, drinking-water, and hazardous-waste site programs.

3 Science and Judgment in Risk Assessment (NRC 1994) coined the term missing default to describe the use of de facto assumptions by EPA without explicit explanation. These de facto assumptions may also be thought of as “implicit defaults.”

Also with very few
exceptions, EPA treats all adults as equally susceptible to carcinogens that act via a linear mode of action (MOA) (see Chapter 5 and, for a recent example, EPA 2007a). Table 6-3 lists those and several other apparently missing EPA defaults.

Both explicit and missing defaults used by EPA are a cornerstone of the agency’s approach to facilitating human health risk assessment in the face of inherent scientific limitations that may prevent verification of any particular causal model. Understanding the complications introduced by EPA’s policy and practice regarding defaults is central to evaluating EPA’s management of uncertainty.

TABLE 6-1 Examples of Explicit EPA Default Carcinogen Risk-Assessment Assumptions

Issue: Extrapolation across human populations
EPA Default Approach: “When cancer effects in exposed humans are attributed to exposure to an agent, the default option is that the resulting data are predictive of cancer in any other exposed human population.” (EPA 2005a, p. A-2) “When cancer effects are not found in an exposed human population, this information by itself is not generally sufficient to conclude that the agent poses no carcinogenic hazard to this or other populations of potentially exposed humans, including susceptible subpopulations or lifestages.” (EPA 2005a, p. A-2)

Issue: Extrapolation of results from animals to humans
EPA Default Approach: “Positive effects in animal cancer studies indicate that the agent under study can have carcinogenic potential in humans.” (EPA 2005a, p. A-3) “When cancer effects are not found in well-conducted animal cancer studies in two or more appropriate species and other information does not support the carcinogenic potential of the agent, these data provide a basis for concluding that the agent is not likely to possess human carcinogenic potential, in the absence of human data to the contrary.” (EPA 2005a, p. A-4)

Issue: Extrapolation of metabolic pathways across species, age groups, and sexes
EPA Default Approach: “There is a similarity of the basic pathways of metabolism and the occurrence of metabolites in tissues in regard to the species-to-species extrapolation of cancer hazard and risk.” (EPA 2005a, p. A-6)

Issue: Extrapolation of toxicokinetics across species, age groups, and sexes
EPA Default Approach: “As a default for oral exposure, a human equivalent dose for adults is estimated from data on another species by an adjustment of animal applied oral dose by a scaling factor based on body weight to the 3/4 power. The same factor is used for children because it is slightly more protective than using children’s body weight.” (EPA 2005a, p. A-7)

Issue: Shape of dose-response relationship
EPA Default Approach: “When the weight of evidence evaluation of all available data are insufficient to establish the mode of action for a tumor site and when scientifically plausible based on the available data, linear extrapolation is used as a default approach, because linear extrapolation generally is considered to be a health-protective approach. Nonlinear approaches generally should not be used in cases where the mode of action has not been ascertained. Where alternative approaches with significant biological support are available for the same tumor response and no scientific consensus favors a single approach, an assessment may present results based on more than one approach.” (EPA 2005a, p. 3-21)
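The 3/4-power scaling default quoted in Table 6-1 reduces to a one-line calculation: if total dose is taken to be proportional to body weight to the 3/4 power, then a dose expressed per kilogram scales by the ratio of body weights to the 1/4 power. The sketch below is illustrative only; the function name and the body weights are our assumptions, not values prescribed by the guidelines:

```python
def human_equivalent_dose(animal_dose_mg_per_kg, bw_animal_kg, bw_human_kg=70.0):
    """Scale an oral dose (mg/kg-day) across species under the default
    that total dose scales with body weight to the 3/4 power, so the
    per-kilogram dose scales by (BW_animal / BW_human) ** (1/4)."""
    return animal_dose_mg_per_kg * (bw_animal_kg / bw_human_kg) ** 0.25

# Illustrative use: a 10 mg/kg-day dose in a 0.025-kg mouse corresponds
# to a considerably smaller human-equivalent dose per kilogram.
hed = human_equivalent_dose(10.0, bw_animal_kg=0.025)
print(hed)
```

The direction of the adjustment is the point: the smaller the test species relative to the human, the more the per-kilogram animal dose is scaled down in deriving the human equivalent dose.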
TABLE 6-2 Examples of Explicit EPA Default Noncarcinogen Risk-Assessment Assumptions

Issue: Relevant human health end point and extrapolation from animals to humans
EPA Default Approach: “The effect used for determining the NOAEL, LOAEL,a or benchmark dose in deriving the RfD or RfC is the most sensitive adverse reproductive end point (that is, the critical effect) from the most appropriate or, in the absence of such information, the most sensitive mammalian species.” (EPA 1996, p. 77)

Issue: Adjustment to account for differences between humans and animal test species
EPA Default Approach: Factor of 1, 3, or 10. (EPA 2002a, p. 2-12)

Issue: Heterogeneity among humans
EPA Default Approach: Factor of 1, 3, or 10. (EPA 2002a, p. 2-12)

Issue: Shape of dose-response relationship
EPA Default Approach: “In quantitative dose-response assessment, a nonlinear dose-response is assumed for noncancer health effects unless mode of action or pharmacodynamic information indicates otherwise.” (EPA 1996, p. 75)

Issue: Human risk estimate
EPA Default Approach: Division of the point of departure (for example, NOAEL, LOAEL, or benchmark dose) by the appropriate uncertainty factors to take into account, for example, the magnitude of the LOAEL compared with the NOAEL, interspecies differences, or heterogeneity among members of the human population produces “an estimate (with uncertainty spanning perhaps an order of magnitude) of a daily exposure to the human population (including sensitive subgroups) that is likely to be without an appreciable risk of deleterious effects during a lifetime.” (EPA 1998a, p. 57)

a NOAEL = no-observed-adverse-effect level; LOAEL = lowest-observed-adverse-effect level.

COMPLICATIONS INTRODUCED BY USE OF DEFAULTS

The National Research Council (NRC 1994) noted that although EPA had justified the selection of some of its defaults, many had received incomplete scrutiny by the agency. In its Guidelines for Carcinogen Risk Assessment (EPA 2005a), the agency elucidated more fully the bases of many of its defaults.
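Before turning to those complications, note that the “human risk estimate” row of Table 6-2 describes straightforward arithmetic: the point of departure is divided by the product of the selected uncertainty factors. The sketch below is a hedged illustration, not an actual derivation; the NOAEL, the factor choices, and the helper name are hypothetical:

```python
def reference_dose(point_of_departure_mg_per_kg, uncertainty_factors):
    """Divide a point of departure (for example, a NOAEL, LOAEL, or
    benchmark dose, in mg/kg-day) by the product of the applicable
    uncertainty factors to obtain an RfD-style estimate."""
    product = 1.0
    for uf in uncertainty_factors:
        product *= uf
    return point_of_departure_mg_per_kg / product

# Hypothetical derivation: a NOAEL of 5 mg/kg-day, a factor of 10 for
# animal-to-human extrapolation, and a factor of 10 for human heterogeneity.
rfd = reference_dose(5.0, [10, 10])
print(rfd)  # 0.05 mg/kg-day
```

Because the factors multiply, each additional factor of 10 lowers the resulting reference dose by an order of magnitude, which is why the choice among factors of 1, 3, and 10 in Table 6-2 carries so much weight.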
Selection of defaults by EPA has been controversial, and the controversies were described in Science and Judgment in Risk Assessment (NRC 1994, Chapter 6 and Appendices N-1 and N-2). Because the choice of defaults involves a blend of science and risk-assessment policy, controversy is inevitable. Some have argued that EPA has selected defaults at each opportunity that are needlessly “conservative” and result in large overestimates of human risk (OMB 1990; Breyer 1992; Perhac 1996). Others have argued—given the large scientific uncertainties surrounding risk assessment, human variability in both exposure to and response to toxic substances, and various missing defaults with “nonconservative” biases—that risk overestimation might not be common in EPA’s practices and that risk underestimation may occur (Finkel 1997; EPA SAB 1997, 1999). EPA (2004a, p. 20) states that the sum of conservative risk estimates for a chemical mixture overstates risk to a relatively modest extent (a factor of 2-5). Estimates based on animal extrapolations have been found to be generally concordant with those based on epidemiologic studies (Allen et al. 1988; Kaldor et al. 1988; Zeise 1994), and in several cases human data have indicated that animal-based estimates were not conservative for the population as a whole (see discussion in Chapter 4).

TABLE 6-3 Examples of “Missing” Defaults in EPA “Default” Dose-Response Assessments

• For low-dose linear agents, all humans are equally susceptible during the same life stage (when estimates are based on animal bioassay data) (EPA 2005a). The agency assumes that the linear extrapolation procedure accounts for human variation (explained in Chapter 5) but does not formally account for human variation in predicting risk. For low-dose nonlinear agents, an RfD is derived with an uncertainty factor for interhuman variability of 1-10 (EPA 2004a, p. 44; EPA 2005a, p. 3-24).
• Tumor incidence from conventional chronic rodent studies is treated as representative of the effect of lifetime human exposures after species dose-equivalence adjustments (EPA 2005a). For chemicals established as operating by a mutagenic mode of action, that holds after adjustment for early-life sensitivity (EPA 2005b). This assumes (1) that humans and rodents have the same “biologic clock,” that is, that rodents and humans exposed for a lifetime to the same (species-corrected) dose will have the same cancer risk, and (2) that a chronic rodent bioassay, which doses only in adulthood and misses late old age (EPA 2002a, p. 41), is representative of a lifetime of rodent exposure.
• Agents have no in utero carcinogenic activity. Although the agency notes that in utero activity is a concern, default approaches do not take carcinogenic activity from in utero exposure into account, and risks from in utero exposure are not calculated (EPA 2005b; EPA 2006a, p. 29).
• For known or likely carcinogens not established as mutagens, there is no difference in susceptibility at different ages (EPA 2005b).
• Nonlinear carcinogens and noncarcinogens act independently of background exposures and host susceptibility (see Chapter 5 for full discussion).
• Chemicals that lack both adequate epidemiologic and animal bioassay data are treated as though they pose no risk of cancer worthy of regulatory attention, with few exceptions. They are typically classified as having “inadequate information to assess carcinogenic potential” (EPA 2005a, Section 2.5); consequently, no cancer dose-response assessment is performed (EPA 2005a, p. 3-2). Integrated Risk Information System and provisional peer-reviewed toxicity values are then based on noncancer end points, and cancer risk estimates are not presented.

In any event, the committee observes that any set of defaults will impose value judgments on balancing potential errors of overestimation and underestimation of risk even if the judgments dictate that the balance be exactly indifferent between the two. Thus, the issue is not whether to accept a value-laden system of model choice but which value judgments EPA’s assessments will reflect. Some members of the Science and Judgment in Risk Assessment committee endorsed the view that risk-assessment policy should seek a “plausible conservatism”4 in the choice of default options rather than seeking to impose the alternative value judgment that models should strive to balance errors of underestimation and overestimation exactly (Finkel 1994); others took the view that relative scientific plausibility alone should govern the choice of defaults and the motivation for departing from them (McClellan and North 1994). EPA (2004a, pp. 11-12) acknowledged the debate:

EPA seeks to adequately protect public and environmental health by ensuring that risk is not likely to be underestimated.
However, because there are many views on what "adequate" protection is, some may consider the risk assessment that supports a particular protection level to be "too conservative" (that is, it overestimates risk), while others may feel it is "not conservative enough" (that is, it underestimates risk)…. Even with an optimal cost-benefit solution, in a heterogeneous society, some members of the population will bear a disproportionate fraction of the costs while others will enjoy a disproportionate fraction of the benefits (Pacala et al. 2003). Thus, inevitably, different segments of our society will view EPA's approach to public health and environmental protection with different perspectives.

[4] This use of conservatism is intended to describe the situation in which the assumptions and defaults used in risk assessment are likely to overstate the true but unknowable risk. It is derived from the public-health dictum that when science is uncertain, judgments based on it should err on the side of public-health protection.

In addition to the debate over how "conservative" default assumptions should be, there is tension between the use of defaults and the complete characterization of uncertainty. For example, it is possible to imagine eliminating defaults and using ranges of plausible assumptions in their place. Doing so, however, could produce such a broad range of risk estimates, with no clear way to distinguish their relative scientific merits, that the result could be useless for choosing among risk-management options for decision-making (see Chapter 8). As explained above, using defaults ameliorates that problem, but at the cost of reporting only a portion of the complete range of risk estimates that is consistent with available scientific knowledge. In some cases, use of defaults overstates the central tendency of the complete range; in other cases, it understates it. As discussed below, that pitfall is important because of the ubiquity of the tradeoffs that surround most risk-management decisions.

How EPA has responded to suggestions to improve its system of defaults reveals three related issues.
First, the agency has not published clear, general guidance on what level of evidence is needed to justify the use of chemical-specific evidence in place of a default, although EPA has provided specific guidance for a small number of particular defaults (see below).

Second, as part of its current practice of using defaults, EPA often does not quantify the portion of the total uncertainty in the resulting risk estimate or RfD that is due to the presence of competing plausible causal models. EPA in its various guidance documents and reviews has provided a scientific justification for many of its defaults (for example, EPA 1991a, 2002b, 2004a, 2005a,b). In some cases, it has demonstrated that the defaults are plausible but not the extent to which a default may produce an estimate of the risk or RfD different from that produced by a plausible alternative model. Tables 6-1 and 6-2 list explicit defaults used by EPA. A notable example is the use of the linear no-threshold dose-response relationship for extrapolation of cancer risk below the point of departure when there is no evidence of a mode of action (MOA) that would introduce nonlinearity. That assumption is based on both mechanistic hypotheses and empirical evidence. "Low-dose nonlinear" carcinogens and chemicals without established carcinogenic properties are assumed to follow threshold-like dose-response relationships[5] even when, as in the case of chloroform, it is acknowledged that multiple modes of action, including genotoxicity, cannot be ruled out (EPA SAB 2000, p. 1; EPA 2001, p. 42). The nonlinear effects are also presumed to act independently of background processes, although for many mechanisms (such as receptor-mediated ones) endogenous and exogenous agents present in the population can contribute to the same disease process as the toxicant under study (see Chapter 5). EPA risk-assessment guidance acknowledges that defaults are uncertain (EPA 2002a, 2005a).
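The default linear extrapolation just described can be illustrated with a brief sketch. The benchmark response, point of departure, and dose values below are hypothetical, chosen only to show the arithmetic, and are not taken from any EPA assessment:

```python
# Sketch of the default linear no-threshold extrapolation below a point of
# departure (POD). All numeric values are hypothetical.

def linear_slope_factor(bmr, pod):
    """Slope of the straight line from the origin through the POD at the
    benchmark response (BMR), e.g., 10% extra risk at the BMDL."""
    return bmr / pod

def extra_risk(dose, slope):
    """Under the linear default, extra risk is proportional to dose."""
    return slope * dose

# Hypothetical chemical: 10% extra risk observed at a BMDL of 5 mg/kg-day.
slope = linear_slope_factor(bmr=0.10, pod=5.0)  # about 0.02 per mg/kg-day
print(extra_risk(0.001, slope))  # extra risk at 1 microgram/kg-day
```

Under this default, halving the environmental dose halves the estimated extra risk; a threshold-like ("low-dose nonlinear") model would instead predict no extra risk below some dose.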
In practice, the agency addresses the uncertainty by discussing it qualitatively. EPA has recently been criticized, however, for not quantitatively describing the range of risk estimates associated with alternative assumptions (NRC 2006a), and it has been encouraged in various forums to begin to develop the methods and data needed to describe the uncertainty in dose-response modeling quantitatively (EPA SAB 2004b; NRC 2007a).

[5] The agency's most recent cancer and noncancer guidelines do not strictly assume biologic thresholds, because of "the difficulty of empirically distinguishing a true threshold from a dose-response curve that is nonlinear at low doses"; instead, the agency refers to the dose-response relationships as low-dose nonlinear (EPA 2005a).

Third, EPA has not established a clear set of standards for deciding when evidence supporting an alternative assumption is sufficiently robust that a default need not be invoked. EPA (2005a, p. 1-9) states that "with a multitude of types of data, analyses, and risk assessments, as well as the diversity of needs of decision makers, it is neither possible nor desirable to specify step-by-step criteria for decisions to invoke a default option." The committee agrees that it is neither possible nor desirable to reduce the evaluation of defaults to a checklist. However, failure to establish clear guidelines detailing the issues that must be addressed to depart from a default and the type of evidence that would be compelling can have a number of adverse consequences. The lack of clear standards may reduce the incentive for further research (Finkel 2003). With no guidance on criteria for using an alternative assumption, it is difficult for an interested party to understand the type of scientific information that the agency might require, and a lack of clear standards can make the decision on whether new research data (instead of a default) are usable appear arbitrary. The committee considers that clear evidence standards for deciding whether to retain or depart from defaults can make the process more transparent, consistent, and fair for all stakeholders and enhance their trust in the process. Examples from EPA (discussed below) demonstrate that it is possible to specify criteria for departure from defaults.
Risk estimates developed with defaults focus on a portion of the scientifically plausible risk-estimate range. However, because some defaults may lead to overstatement of the risk posed by a chemical and others to understatement, EPA needs to be mindful of the influence of defaults on risk estimates when the estimates will inform risk-management decisions. Intervention options often involve tradeoffs, and the tradeoffs being considered (such as replacement of one chemical with another in a production process) might yield risk estimates whose health protectiveness depends on the defaults used in estimation. An example is the tradeoff between the risks of exposure to mercury and PCBs in fish and the nutritional benefit of fish consumption (Cohen et al. 2005). When chemical risks are being compared, the agency can minimize the differential effects of defaults by ensuring that they are applied consistently. When chemical risks are being compared with other considerations whose estimated effects are not influenced by defaults, EPA should emphasize the quantitative characterization of the contribution of the defaults to uncertainty (as discussed below).

ENHANCEMENTS OF THE ENVIRONMENTAL PROTECTION AGENCY'S DEFAULT APPROACH

This section describes the committee's recommendations for improving how defaults are chosen, used, and modified.
These recommendations include

• continued and expanded use of the best, most current science to choose, justify, and, when appropriate, revise EPA's default assumptions;

• development of a clear standard to determine when evidence supporting an alternative assumption is robust enough that the default need not be invoked, and development of sets of scientific criteria for identifying when an alternative has met that standard;

• making explicit the existing assumptions, or developing new defaults, to address the missing defaults, such as the treatment of chemicals with limited information as though they pose risks that do not require regulatory action; and

• quantifying the risk estimates emerging …
… "renal proximal tubule cells of treated male rats," "(2) Accumulating protein in the hyaline droplets is α2μ-g[lobulin]," and "(3) Additional aspects of the pathological sequence of lesions associated with α2μ-g[lobulin] nephropathy are present." If the first condition is satisfied, EPA states that the extent to which α2μ-globulin is responsible for renal tumors must be established. Establishing that it is largely responsible for the observed renal tumors is grounds for setting aside the default assumption of their relevance to humans. EPA states (p. 86) that this step "requires a substantial database, and not just a limited set of information confined to the male rat. For example, cancer bioassay data are needed from the mouse and the female rat to be able to demonstrate that the renal tumors are male-rat specific." EPA lists the types of data that are helpful: for example, data showing that the chemical in question does not cause renal tumors in the NBR rat (which does not produce substantial quantities of α2μ-globulin), evidence that the substance's binding to α2μ-globulin is reversible, sustained cell division of the P2 renal tubule segment that is typical of the α2μ-globulin renal-cancer mode of action, structure-activity relationship data similar to those on other known α2μ-globulin MOA substances, evidence of an absence of genotoxicity, and the presence of positive renal-carcinogenicity findings only in male rats with negative findings in mice and female rats (EPA 1991b).

Applicability of the safety factor[8] of 10 under the Food Quality Protection Act. EPA's treatment of the safety factor of 10 to protect infants and children when setting pesticide exposure limits is an example of how the agency could establish a process to determine regularly whether data are sufficient to depart from what is, in effect, a default.
The 1996 Food Quality Protection Act (FQPA) mandates the use of a safety factor of 10 unless EPA has sufficient evidence to determine that a different value is more appropriate [§ 408(b)(2)(C)]. The EPA Office of Pesticide Programs (EPA 2002b) has developed a systematic weight-of-evidence approach that addresses a series of considerations, including prenatal and postnatal toxicity, the nature of the dose-response relationship, pharmacokinetics (PK), and MOA. On the basis of that framework, EPA found it unnecessary to apply the safety factor of 10 in 48 of 59 cases (reviewed in NRC 2006b).

Committee's Evaluation

Those examples provide a starting point for the agency's development of a standardized approach to departures from defaults. One improvement on these examples would be greater specificity regarding the type of evidence that is sufficient to justify a departure. Consider, for example, EPA's guidance for chemicals that cause follicular tumors. Section 2.2.4 of EPA 1998b (p. 21) requires that "enough information on a chemical should be given to be able to identify the sites that contribute the major effect on thyroid-pituitary function," but EPA does not indicate what quantity and quality of information are "enough" for a researcher to make such a determination. In addition, the key statement that "where thyroid-pituitary homeostasis is maintained, the steps leading to tumor formation are not expected to develop, and the chances of tumor development are negligible" refers throughout the document to humans in general and does not address interindividual variability in homeostasis. EPA has presented guidance (EPA 2002b) for departing from the use of the safety factor of 10 provided for in the FQPA. The guidance includes a list of issues to consider and the types of evidence to evaluate.
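The quantitative effect of retaining or removing the FQPA factor can be sketched briefly. The "population-adjusted dose" construct below follows the Office of Pesticide Programs' general usage (the reference dose divided by the FQPA factor), but the numeric values are hypothetical:

```python
# Hypothetical sketch: how the FQPA safety factor of 10 enters a pesticide
# exposure limit. EPA's Office of Pesticide Programs divides the reference
# dose by the FQPA factor to obtain a "population-adjusted dose" (PAD).
# The RfD value below is invented for illustration.

def population_adjusted_dose(rfd, fqpa_factor=10.0):
    """PAD = RfD / FQPA factor; the factor may be reduced (e.g., to 3 or 1)
    when the weight of evidence supports a departure."""
    return rfd / fqpa_factor

rfd = 0.05  # hypothetical chronic RfD, mg/kg-day

# Default: the factor of 10 is retained.
print(population_adjusted_dose(rfd, 10.0))  # 0.005 mg/kg-day
# Departure supported by the weight of evidence: factor reduced to 1.
print(population_adjusted_dose(rfd, 1.0))   # 0.05 mg/kg-day
```

The tenfold difference between the two results is the practical stake in each retain-or-depart decision.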
Some of the guidelines provide sufficient specificity as to evaluation of departures. For example, a finding of effects in humans or in more than one species militates against departure, as does a finding that the young do not recover from the adverse effects of a chemical as quickly as adults do. In contrast, some of the guidelines lack specificity. In particular, an MOA supporting the human relevance of effects observed in animals militates against departure from the default; this guideline would be more useful if it spelled out the specific MOA findings that support relevance to humans.

[8] In Chapter 5, the committee takes exception to the term safety factor, but it uses the term here to avoid confusion with EPA terminology.

The committee recommends that EPA review those and other cases in which it has used substance-specific data and not invoked defaults and that it catalog the principles characterizing those departures. The principles can be used in developing more general guidance for deciding when data clearly support an inference that can be used in place of a default.

Crafting Defaults That Replace (or Make Explicit) Missing Assumptions: The Case of Chemicals with Inadequate Toxicity Data

EPA should work toward developing explicit defaults to use in place of missing defaults. To the extent possible, the new, explicit defaults should characterize the uncertainty associated with their use. Although there appear to be a number of missing defaults, this section focuses on the "untested-chemical assumption" and outlines an approach for characterizing the toxicity of untested or inadequately tested chemicals.[9] The approach attempts to strike a balance between gathering enough information to reduce uncertainty sufficiently to make the resulting estimate useful and keeping the approach applicable to characterizing a large number of chemicals. In the absence of data to derive a quantitative, chemical-specific estimate of toxicity, EPA treats such chemicals as though they pose risks that do not require regulatory action in its air, drinking-water, and hazardous-waste programs.
In the case of carcinogens, EPA assigns no potency factor to a chemical and thus implicitly treats it as though it poses no cancer risk; examples are chemicals whose evidence meets the standard of "inadequate information to assess carcinogenic potential" in the carcinogen guidelines (EPA 2005a, p. 1-12). For noncancer end points, EPA practice limits the product of the uncertainty factors applied to no more than 3,000. When a larger value would be required to address the uncertainty (for example, when "there is uncertainty in more than four areas of extrapolation" [EPA 2002a, p. xvii]), EPA does not derive an RfD or RfC. The vast majority of chemicals now produced lack a cancer slope factor, an RfD, an RfC, or some combination of these.

The effective assumption that many chemicals pose no risk that should be subject to regulation can compromise decision-making in a variety of contexts: it is not possible to meaningfully evaluate the net health risks and benefits of substituting one chemical for another in a production process or to interpret risk estimates in settings (for example, a Superfund site) where many of the chemicals present have not been examined sufficiently in epidemiologic or toxicologic studies.

To develop a distribution of dose-response estimates for chemicals on which agent-specific information is lacking, a tiered series of default distributions could be constructed. The approach is based on the notion that for virtually all chemicals it is possible to say something about the uncertainty distribution of their dose-response relationships.
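The noncancer practice described above, under which no RfD is derived when the combined uncertainty factors would exceed 3,000, can be sketched as follows. The point of departure and the particular factors are hypothetical:

```python
# Sketch of the noncancer practice described above: an RfD is the point of
# departure (POD) divided by the product of uncertainty factors (UFs), and
# no RfD is derived when that product would exceed 3,000 (EPA 2002a).
# The POD and UF values below are hypothetical.
from math import prod

MAX_UF_PRODUCT = 3000

def derive_rfd(pod, uncertainty_factors):
    """Return POD / (product of UFs), or None when the combined
    uncertainty is judged too large for a credible RfD."""
    total_uf = prod(uncertainty_factors)
    if total_uf > MAX_UF_PRODUCT:
        return None  # uncertainty too great; no RfD or RfC is derived
    return pod / total_uf

# Hypothetical NOAEL of 10 mg/kg-day with tenfold factors for interspecies,
# interhuman, and subchronic-to-chronic extrapolation:
print(derive_rfd(10.0, [10, 10, 10]))      # 0.01 mg/kg-day
# A fourth tenfold factor pushes the product to 10,000, so no RfD results:
print(derive_rfd(10.0, [10, 10, 10, 10]))  # None
```

The `None` branch is the "untested-chemical assumption" in miniature: the chemical exits the system with no quantitative toxicity value at all.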
The process begins by selecting a set of cancer and noncancer end points and applying the full distribution of chemical potencies (including a data-driven probability of zero potency) to the unknown chemical in question. That initial distribution can then be narrowed by using various types and levels of intermediate toxicity information.

[9] Chapter 5 addresses other missing defaults, including that, in the absence of chemical-specific data, EPA treats all members of the human population as though they are de facto equally susceptible to carcinogens that act via a linear MOA.

At the simplest level, information on chemical structure can be used to bin chemicals in much the way that EPA uses chemical structures and physicochemical properties to perform quantitative structure-activity relationship (QSAR) analyses for premanufacture notices and to develop distributions of toxicity-parameter values derived from data on representative data-rich chemicals (the Toxic Substances Control Act [TSCA] Section 5 New Chemicals Program; EPA 2007b).

At the next level, the distributions can be further refined by including toxicologic tests and other model or experimental data to create chemical categories. That has been done to fill data gaps in the U.S. and Organisation for Economic Co-operation and Development high-production-volume chemical programs (OECD 2007). Chemical categories in those programs have been created to help estimate values for the programs' short-term toxicity tests, but the underlying concepts could be applied to the development of distributions of cancer potencies or dose-response parameters for other chronic-toxicity end points. In the future, the results of intermediate mechanistic tests, in the context of a growing understanding of toxicity networks and pathways, are likely to assist in selecting end points and estimating potency distributions. There are descriptions of how to make use of the observed correlation between carcinogenic potency and short-term toxicity values, such as the maximum tolerated dose (Crouch et al. 1982; Gold et al. 1984; Bernstein et al. 1985) and the acute LD50 (Zeise et al. 1984, 1986; Crouch et al. 1987). The approach can be updated and expanded to include other data on toxicity from structure-activity analyses and short-term tests.
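The updating idea can be sketched as simple Bayesian narrowing of a default potency distribution. The bins, the prior (including the probability of zero potency), and the assay likelihoods below are all invented for illustration; a real implementation would estimate them from databases of tested chemicals:

```python
# Hypothetical sketch of the tiered approach: a default distribution over
# cancer potency (with a data-driven probability of zero potency) that is
# narrowed by Bayesian updating as intermediate test results arrive.
# All bins, priors, and likelihoods below are invented for illustration.

potency_bins = ["zero", "low", "medium", "high"]

# Tier 1: default prior for an untested chemical, e.g., derived from the
# observed distribution of potencies among previously tested chemicals.
prior = {"zero": 0.40, "low": 0.35, "medium": 0.20, "high": 0.05}

# Hypothetical likelihood of a positive short-term assay in each bin.
p_positive = {"zero": 0.05, "low": 0.30, "medium": 0.70, "high": 0.95}

def update(dist, likelihood):
    """Bayes' rule over the discrete potency bins."""
    unnormalized = {b: dist[b] * likelihood[b] for b in dist}
    total = sum(unnormalized.values())
    return {b: p / total for b, p in unnormalized.items()}

# Tier 2: a positive assay result shifts weight away from zero potency.
posterior = update(prior, p_positive)
for b in potency_bins:
    print(f"{b}: {prior[b]:.2f} -> {posterior[b]:.2f}")
```

Each further tier (chemical categories, mechanistic assays, read-across to well-studied analogues) would supply another likelihood term, narrowing the distribution as the text describes.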
EPA is building databases that could facilitate such development (EPA 2007c; Dix et al. 2007), and the National Research Council (NRC 2007b) advocates eventually relying on high- and medium-throughput assays for risk assessment. Finally, the most sophisticated level can involve development of toxic-potency distributions for chemicals whose structures are clearly similar to those of well-studied substances, such as polycyclic aromatic hydrocarbons and dioxin-like compounds, in a manner similar to current extrapolation methods (for example, see Boström et al. 2002; EPA 2003; van den Berg et al. 2006). In that way, the agency can take advantage of the wealth of intermediate toxicity data being generated in multiple settings at a stage when their precise implications for traditional dose-response estimation are not fully understood. Over the long term, EPA can develop probability distributions based on the results of the intermediate assays, and the potency distribution for a chemical can become narrower as more data become available.

Those approaches have a number of limitations. For now, they would be based on results with chemicals that have already been tested in long-term bioassays. If selection for long-term bioassay testing is already associated with indications of toxicity, generalization of the results to untested chemicals could lead to overestimation of the toxicity of the untested chemicals. The creation of potency distributions for unknown chemicals will therefore have to include a database-derived estimate of the probability of zero potency to reduce the possibility of systematic overestimation. Characterization of the uncertainty surrounding the potency estimates will be necessary, but it should be facilitated by the probabilistic nature of the approach. The lack of sufficient data to estimate potency distributions for a wide variety of end points poses a serious challenge.
Creation of such a database may be feasible now for cancer and a small number of noncancer end points but not for many of the end points of great concern, such as developmental neurotoxicity, immune toxicity, and reproductive toxicity. Full implementation of such a system will require about 10-20 years of data and method development. The committee urges EPA to begin to develop the methods for such a system by using existing data and the wealth of intermediate toxicity data now being generated by U.S. and international chemical priority-setting programs (EC 1993, 1994, 1998, 2003; 65 Fed. Reg. 81686; NRC 2006b). When necessary, EPA can prioritize efforts to establish missing default information on the basis of the potential effect of that information on the estimated benefits of regulatory action. The effect is most likely to be substantial for chemicals whose exposure levels could change substantially in response to regulation (for example, chemicals that might be substituted for other chemicals that come under more stringent control) and for chemicals whose physical and chemical properties increase the likelihood of toxicity.

PERFORMING MULTIPLE RISK CHARACTERIZATIONS FOR ALTERNATIVE MODELS

The current management of defaults resembles an all-or-none approach in that EPA often quantifies the dose-response relationship for one set of assumptions—either the default or whatever alternative to the default the agency adopts. Model uncertainty is discussed qualitatively; EPA discusses the scientific merits of competing assumptions. In the long term, the committee envisions research leading to improved descriptions of model uncertainty (see Chapter 4). In the near term, sensitivity analysis could be performed when risk estimates for alternative hypotheses that are sufficiently supported by evidence are reported. This approach would require development of a framework with criteria for judging when such an analysis should be performed. The goal is not to present the multitude of possible risk estimates exhaustively but to present a small number of exemplar, plausible cases to give the risk manager a context for understanding the additional uncertainty contributed by assumptions other than the default.
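The near-term sensitivity analysis described above can be sketched as a side-by-side report of the default estimate and one exemplar alternative. The two model forms and all parameter values below are hypothetical, chosen only to show the shape of such a report:

```python
# Hypothetical sensitivity sketch: extra risk at an environmental dose under
# the default (linear no-threshold) model and one exemplar alternative (a
# threshold-like, "low-dose nonlinear" model). All values are invented.

def linear_model(dose, slope=0.02):
    """Default: extra risk proportional to dose."""
    return slope * dose

def threshold_like_model(dose, threshold=0.5, slope=0.04):
    """Alternative: no extra risk below a hypothetical threshold dose."""
    return max(0.0, slope * (dose - threshold))

dose = 0.25  # mg/kg-day, below the alternative model's threshold
estimates = {
    "default (linear no-threshold)": linear_model(dose),
    "alternative (threshold-like)": threshold_like_model(dose),
}
for model, risk in estimates.items():
    print(f"{model}: {risk:.4f}")
```

Reporting both numbers, with the default designated as primary, gives the risk manager the model-uncertainty context without an exhaustive enumeration of possibilities.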
The committee acknowledges the difficulty of assigning probabilities to alternative estimates in the face of limited scientific understanding related to the defaults and acknowledges that much work is needed to move toward a more probabilistic treatment of model uncertainty (see Chapter 4).

The standard for reporting alternative risk estimates should be less stringent than the "clearly superior" standard recommended for adopting alternatives in place of the default. The committee finds that alternative risk estimates should be reported if they are "comparably" plausible relative to the risk estimate based on the default. The comparability standard should not be interpreted to mean that the alternative must be at least as plausible as the default; that makes sense given that alternative risk estimates provide information on the implications of tradeoffs associated with the interventions or options to address a given risk and that a risk manager might be interested in possible outcomes even if they are less than 50% probable. The comparability standard does, however, rule out risk estimates that are possibly valid but based on assumptions that are substantially less plausible than the default. The purposes are to help to ensure that the set of risk estimates considered by the risk manager remains manageable and to prevent distraction by risk estimates that are unlikely to be valid. In the final analysis, making the term comparable operational will depend on how large a probability EPA is willing to accept that its risk assessment has omitted the true risk.

EPA should consider developing guidance that explicitly directs risk assessors to present a broader array of risk estimates in "high-stakes" risk-assessment situations, that is, situations in which there are potentially important countervailing risks or economic costs associated with mitigation of a target risk.
The guidance should take into account the analytic cost of developing more extensive information, including the potential for additional delay (see discussion of value of information in Chapter 3). As in the case of the "clearly superior" standard for replacing a default, the agency should establish guidance for the evaluation of plausibility and should issue specific criteria for demonstrating that an alternative is "comparably plausible." EPA should exclude from consideration alternative risk estimates that fail to satisfy the "comparably plausible" criterion, because they can distract attention from the possibilities that have a reasonable level of scientific support. Specifically, the committee discourages EPA from regular (pro forma) reporting that the risk posed by an evaluated chemical "may be as small as zero" unless there is scientific evidence that raises this possibility to the requisite level of plausibility.

Under the proposed approach, the risk assessor would describe, to the extent possible, the relative scientific merits of alternative assumptions, the factors that make the assumptions "comparably plausible" relative to the default, and the factors that cause them to fall short of the "clearly superior" standard. Such a characterization would identify the risk estimate associated with the default assumptions and designate it as the appropriate basis of risk management. Nonetheless, the risk assessment would also report a small number of other plausible exemplar assessments to convey the uncertainty associated with the preferred risk estimate. That recommendation is consistent with the National Research Council recommendation (NRC 2006a) that encouraged EPA to report risk estimates corresponding to alternative assumptions in its risk assessments.

The level of detail in and scientific support for the alternative risk estimates should be tailored to the types of questions that the risk assessment is addressing (see Chapter 3). If the potential tradeoffs associated with the intervention options under evaluation are modest, less detail is needed to discriminate among the options.
For example, while maintaining the designation of the risk calculated with the default assumptions as the primary estimate, it may be sufficient to provide a range of risk estimates without detailed information about the relative plausibility of alternative values within the range; the information can then be used in screening assessments to identify options whose desirability can be established robustly in the face of uncertainty. Because it is not always possible to know what options will be evaluated, simple characterizations of uncertainty can serve as a starting point for later assessments of alternative options. In all cases, refinement of the uncertainty characterization can proceed iteratively as needed to address either more serious tradeoffs or the evaluation of options and tradeoffs that were not initially contemplated. The key point is that the options to be evaluated drive the level of detail needed in the assessment (see Chapter 3).

Advantages of Multiple Risk Characterizations

Presenting a full risk characterization for models other than the default confers several benefits on the risk-assessment process. Retaining alternative risk estimates in the final risk-assessment results gives the risk manager wider latitude to understand the tradeoffs among the risk-management options. However, it is important that any evaluation of the range of risk-assessment outcomes take into account EPA's mandate to protect public health and the environment. The committee recommends that EPA quantify the implications of using an alternative assumption when it elects to depart from a default assumption. In particular, EPA should describe how use of the default and the selected alternative influences the risk estimates for the risk-management options under consideration.
For example, if a risk assessment that departs from default assumptions identifies chemical A rather than chemical B as the lower-risk chemical to use in a production process, it should also describe which chemical would pose the lower risk if the default assumptions were used. It is important for EPA to emphasize that only one set of assumptions deserves primary consideration for risk characterization and risk management: if alternative assumptions are presented as "comparably plausible," the default must be highlighted and given deference.
The proposed approach more completely characterizes the uncertainty in the resulting risk estimate. As explained in Chapter 3, identifying the most appropriate course of action may depend on the degree of uncertainty associated with a risk estimate. Under the framework (Chapter 8), when there are multiple control options and multiple causal models, highlighting the model uncertainty can facilitate finding the optimal choices. Clear standards for departure from defaults can provide incentives for third parties to produce research, in that they will know what data could influence the risk-assessment process. Finally, the approach facilitates the setting of priorities among research needs as a necessary component of value-of-information analysis (see Chapter 3).

CONCLUSIONS AND RECOMMENDATIONS

EPA's current policy on defaults calls for evaluating all relevant and available data first and invoking defaults only when data are determined to be unavailable or unusable. It is not known to what extent that is practiced, in contrast with judging the adequacy of available data to depart from a default. Whatever the case, defaults need to be maintained for the steps in risk assessment that require inferences or that fill common data gaps. Criteria are needed for judging whether, in specific cases, data are adequate to support an inference different from the default (that is, whether data are sufficient to justify departure from a default). The committee urges EPA to delineate what evidence will determine how it makes these judgments and how that evidence will be interpreted and questioned. Providing a credible and consistent approach to defaults is essential to a risk-assessment process that can support regulatory decision-making.
The committee provides the following recommendations to strengthen the use of defaults in EPA:

• EPA should continue and expand its use of the best, most current science to support or revise its default assumptions. The committee is reluctant to specify a schedule for revising the default assumptions. Factors that EPA should consider in setting priorities for such revisions include (1) the extent to which the current default is inconsistent with available science, (2) the extent to which a revised default would alter risk estimates, and (3) the public-health (or ecologic) importance of the risk estimates that would be influenced by a revision of the default.

• EPA should work toward the development of explicitly stated defaults to take the place of implicit or missing defaults. Key priorities should be the development of default approaches to support risk estimation for chemicals that lack chemical-specific information for characterizing individual susceptibility to cancer (see Chapter 5) and for developing a dose-response relationship. With respect to chemicals that have inadequate data for developing a dose-response relationship, information is currently available to make progress on cancer and on a limited number of noncancer end points. EPA should also begin developing methods that take advantage of information already available from U.S. or international prioritization programs, with the goal of creating a comprehensive system over the next 10-20 years. When necessary, EPA can prioritize its efforts to target chemicals for which such information is most likely to influence the estimated benefits of regulatory action.

• In the next 2-5 years, EPA should develop clear criteria for the level of evidence needed to justify the use of alternative assumptions in place of defaults. The committee recommends that departure occur only when the evidence supporting the plausibility of the alternative is clearly superior to the evidence supporting the default.
In addition to a general standard for the level of evidence needed for use of alternative assumptions, EPA should
describe specific criteria that must be addressed before an alternative to each particular default may be used.

• When none of the alternative risk estimates achieves a level of plausibility sufficient to justify use in place of a default, EPA should characterize the effect of the uncertainty associated with use of the default assumptions. To the extent feasible, the characterization should be quantitative.

• In the next 2-5 years, EPA should develop criteria for listing alternative values, limiting attention to assumptions whose plausibility is at least comparable with that of the default. The goal is not to present the multitude of possible risk estimates exhaustively but to present a small number of exemplary, plausible cases that provide a context for understanding the uncertainty in the assessment. The committee acknowledges the difficulty of assigning probabilities to alternative estimates given the limits of scientific understanding related to the defaults and acknowledges that much work is needed to move toward a more probabilistic approach to model uncertainty.

• When EPA elects to depart from a default assumption, it should quantify the implications of using the alternative, including describing how use of the default and of the selected alternative influences the risk estimates for the risk-management options under consideration.

• EPA needs to elucidate more clearly a policy on defaults and to provide guidance on its implementation and on evaluation of its impact on risk decisions and on efforts to protect the environment and public health.

REFERENCES

Allen, B.C., K.S. Crump, and A.M. Shipp. 1988. Correlations between carcinogenic potency of chemicals in animals and humans. Risk Anal. 8(4):531-544.
Bernstein, L., L.S. Gold, B.N. Ames, M.C. Pike, and D.G. Hoel. 1985. Some tautologous aspects of the comparison of carcinogenic potency in rats and mice. Fundam. Appl. Toxicol. 5(1):79-86.
Boström, C.E., P. Gerde, A. Hanberg, B. Jernström, C. Johansson, T. Kyrklund, A. Rannug, M. Törnqvist, K. Victorin, and R. Westerholm. 2002. Cancer risk assessment, indicators, and guidelines for polycyclic aromatic hydrocarbons in the ambient air. Environ. Health Perspect. 110(Suppl. 3):451-488.
Breyer, S. 1992. Breaking the Vicious Circle: Toward Effective Risk Regulation. Cambridge, MA: Harvard University Press.
Clewell, H.J. III, M.E. Andersen, and H.A. Barton. 2002. A consistent approach for the application of pharmacokinetic modeling in cancer and noncancer risk assessment. Environ. Health Perspect. 110(1):85-93.
Cohen, J., D. Bellinger, W. Connor, P. Kris-Etherton, R. Lawrence, D. Savitz, B. Shaywitz, S. Teutsch, and G. Gray. 2005. A quantitative risk-benefit analysis of changes in population fish consumption. Am. J. Prev. Med. 29(4):325-334.
Crawford, M., and R. Wilson. 1996. Low-dose linearity: The rule or the exception? Hum. Ecol. Risk Assess. 2(2):305-330.
Crouch, E.A.C., J. Feller, M.B. Fiering, E. Hakanoglu, R. Wilson, and L. Zeise. 1982. Health and Environmental Effects Document: Non-Regulatory and Cost Effective Control of Carcinogenic Hazard. Prepared for the Department of Energy, Health and Assessment Division, Office of Energy Research, by Energy and Environmental Policy Center, Harvard University, Cambridge, MA. September 1982.
Crouch, E., R. Wilson, and L. Zeise. 1987. Tautology or not tautology? J. Toxicol. Environ. Health 20(1-2):1-10.
DeWoskin, R.S., J.C. Lipscomb, C. Thompson, W.A. Chiu, P. Schlosser, C. Smallwood, J. Swartout, L. Teuschler, and A. Marcus. 2007. Pharmacokinetic/physiologically based pharmacokinetic models in integrated risk information system assessments. Pp. 301-348 in Toxicokinetics and Risk Assessment, J.C. Lipscomb and E.V. Ohanian, eds. New York: Informa Healthcare.
Dix, D.J., K.A. Houck, M.T. Martin, A.M. Richard, R.W. Setzer, and R.J. Kavlock. 2007. The ToxCast program for prioritizing toxicity testing of environmental chemicals. Toxicol. Sci. 95(1):5-12.
EC (European Commission). 1993. Commission Directive 93/67/EEC of 20 July 1993, Laying down the Principles for the Assessment of Risks to Man and the Environment of Substances Notified in Accordance with Council Directive 67/548/EEC. Official Journal of the European Communities L227:9-18.
EC (European Commission). 1994. Commission Regulation (EC) No. 1488/94 of 28 June 1994, Laying down the Principles for the Assessment of Risks to Man and the Environment of Existing Substances in Accordance with Council Regulation (EEC) No. 793/93. Official Journal of the European Communities L161:3-11 [online]. Available: http://www.unitar.org/cwm/publications/cbl/ghs/Documents_2ed/C_Regional_Documents/85_EU_Regulation148894EC.pdf [accessed Jan. 25, 2008].
EC (European Commission). 1998. Directive 98/8/EC of the European Parliament and of the Council of 16 February 1998 Concerning the Placing of Biocidal Products on the Market. Official Journal of the European Communities L123/1-L123/63 [online]. Available: http://ecb.jrc.it/legislation/1998L0008EC.pdf [accessed Jan. 28, 2008].
EC (European Commission). 2003. Technical Guidance Document in Support of Commission Directive 93/67/EEC on Risk Assessment for New Notified Substances and Commission Regulation (EC) 1488/94 on Risk Assessment for Existing Substances, and Directive 98/8/EC of the European Parliament and the Council Concerning the Placing of Biocidal Products on the Market, 2nd Ed. European Chemicals Bureau, Joint Research Centre, Ispra, Italy [online]. Available: http://ecb.jrc.it/home.php?CONTENU=/DOCUMENTS/TECHNICAL_GUIDANCE_DOCUMENT/EDITION_2/ [accessed Jan. 28, 2008].
EPA (U.S. Environmental Protection Agency). 1986. Guidelines for the Health Risk Assessment of Chemical Mixtures. EPA/630/R-98/002. Office of Research and Development, U.S. Environmental Protection Agency, Washington, DC. September 1986 [online]. Available: http://www.epa.gov/ncea/raf/pdfs/chem_mix/chemmix_1986.pdf [accessed Jan. 24, 2008].
EPA (U.S. Environmental Protection Agency). 1991a. Guidelines for Developmental Toxicity Risk Assessment. EPA/600/FR-91/001. Risk Assessment Forum, U.S. Environmental Protection Agency, Washington, DC. December 1991 [online]. Available: http://www.epa.gov/NCEA/raf/pdfs/devtox.pdf [accessed Jan. 10, 2008].
EPA (U.S. Environmental Protection Agency). 1991b. Alpha-2μ-Globulin: Association with Chemically-Induced Renal Toxicity and Neoplasia in the Male Rat. EPA/625/3-91/019F. Prepared for Risk Assessment Forum, U.S. Environmental Protection Agency, Washington, DC. February 1991.
EPA (U.S. Environmental Protection Agency). 1994. Methods for Derivation of Inhalation Reference Concentrations and Application of Inhalation Dosimetry. EPA/600/8-90/066F. Environmental Criteria and Assessment Office, Office of Health and Environmental Assessment, Office of Research and Development, U.S. Environmental Protection Agency, Research Triangle Park, NC. October 1994 [online]. Available: http://cfpub.epa.gov/ncea/cfm/recordisplay.cfm?deid=71993 [accessed Jan. 24, 2008].
EPA (U.S. Environmental Protection Agency). 1996. Guidelines for Reproductive Toxicity Risk Assessment. EPA/630/R-96/009. Risk Assessment Forum, U.S. Environmental Protection Agency, Washington, DC. October 1996 [online]. Available: http://www.epa.gov/ncea/raf/pdfs/repro51.pdf [accessed Jan. 10, 2008].
EPA (U.S. Environmental Protection Agency). 1997. Guiding Principles for Monte Carlo Analysis. EPA/630/R-97/001. Risk Assessment Forum, U.S. Environmental Protection Agency, Washington, DC. March 1997 [online]. Available: http://www.epa.gov/ncea/raf/montecar.pdf [accessed Jan. 7, 2008].
EPA (U.S. Environmental Protection Agency). 1998a. Guidelines for Neurotoxicity Risk Assessment. EPA/630/R-95/001F. Risk Assessment Forum, U.S. Environmental Protection Agency, Washington, DC. April 1998 [online]. Available: http://www.epa.gov/NCEA/raf/pdfs/neurotox.pdf [accessed Jan. 24, 2008].
EPA (U.S. Environmental Protection Agency). 1998b. Assessment of Thyroid Follicular Cell Tumors. EPA/630/R-97-002. Risk Assessment Forum, U.S. Environmental Protection Agency, Washington, DC. March 1998 [online]. Available: http://www.epa.gov/ncea/pdfs/thyroid.pdf [accessed Jan. 25, 2008].
EPA (U.S. Environmental Protection Agency). 2000a. Risk Characterization Handbook. EPA-100-B-00-002. Office of Science Policy, Office of Research and Development, U.S. Environmental Protection Agency, Washington, DC. December 2000 [online]. Available: http://www.epa.gov/OSA/spc/pdfs/rchandbk.pdf [accessed Feb. 6, 2008].
EPA (U.S. Environmental Protection Agency). 2000b. Supplementary Guidance for Conducting Health Risk Assessment of Chemical Mixtures. EPA/630/R-00/002. Risk Assessment Forum, U.S. Environmental Protection Agency, Washington, DC. August 2000 [online]. Available: http://www.epa.gov/ncea/raf/pdfs/chem_mix/chem_mix_08_2001.pdf [accessed Jan. 7, 2008].
EPA (U.S. Environmental Protection Agency). 2001. Toxicological Review of Chloroform (CAS No. 67-66-3) In Support of Summary Information on the Integrated Risk Information System (IRIS). EPA/635/R-01/001. U.S. Environmental Protection Agency, Washington, DC. October 2001 [online]. Available: http://www.epa.gov/iris/toxreviews/0025-tr.pdf [accessed Jan. 25, 2008].
EPA (U.S. Environmental Protection Agency). 2002a. A Review of the Reference Dose and Reference Concentration Processes. Final report. EPA/630/P-02/002F. Risk Assessment Forum, U.S. Environmental Protection Agency, Washington, DC. December 2002 [online]. Available: http://www.epa.gov/iris/RFD_FINAL%5B1%5D.pdf [accessed Jan. 14, 2008].
EPA (U.S. Environmental Protection Agency). 2002b. Determination of the Appropriate FQPA Safety Factor(s) in Tolerance Assessment. Office of Pesticide Programs, U.S. Environmental Protection Agency, Washington, DC. February 28, 2002 [online]. Available: http://www.epa.gov/oppfead1/trac/science/determ.pdf [accessed Jan. 25, 2008].
EPA (U.S. Environmental Protection Agency). 2003. Exposure and Human Health Reassessment of 2,3,7,8-Tetrachlorodibenzo-p-Dioxin (TCDD) and Related Compounds. NAS Review Draft. National Center for Environmental Assessment, Office of Research and Development, U.S. Environmental Protection Agency, Washington, DC. December 2003 [online]. Available: http://www.epa.gov/NCEA/pdfs/dioxin/nas-review/ [accessed Jan. 9, 2008].
EPA (U.S. Environmental Protection Agency). 2004a. Risk Assessment Principles and Practices: Staff Paper. EPA/100/B-04/001. Office of the Science Advisor, U.S. Environmental Protection Agency, Washington, DC. March 2004 [online]. Available: http://www.epa.gov/osa/pdfs/ratf-final.pdf [accessed Jan. 9, 2008].
EPA (U.S. Environmental Protection Agency). 2004b. Toxicological Review of Boron and Compounds (CAS No. 7440-42-8) In Support of Summary Information on the Integrated Risk Information System (IRIS). EPA 635/04/052. U.S. Environmental Protection Agency, Washington, DC. June 2004 [online]. Available: http://www.epa.gov/iris/toxreviews/0410-tr.pdf [accessed Jan. 25, 2008].
EPA (U.S. Environmental Protection Agency). 2005a. Guidelines for Carcinogen Risk Assessment. EPA/630/P-03/001F. Risk Assessment Forum, U.S. Environmental Protection Agency, Washington, DC. March 2005 [online]. Available: http://cfpub.epa.gov/ncea/cfm/recordisplay.cfm?deid=116283 [accessed Feb. 7, 2007].
EPA (U.S. Environmental Protection Agency). 2005b. Supplemental Guidance for Assessing Susceptibility for Early-Life Exposures to Carcinogens. EPA/630/R-03/003F. Risk Assessment Forum, U.S. Environmental Protection Agency, Washington, DC. March 2005 [online]. Available: http://cfpub.epa.gov/ncea/cfm/recordisplay.cfm?deid=160003 [accessed Jan. 4, 2008].
EPA (U.S. Environmental Protection Agency). 2006. Modifying EPA Radiation Risk Models Based on BEIR VII. Draft White Paper. Office of Radiation and Indoor Air, U.S. Environmental Protection Agency. August 1, 2006 [online]. Available: http://www.epa.gov/rpdweb00/docs/assessment/white-paper8106.pdf [accessed Jan. 25, 2008].
EPA (U.S. Environmental Protection Agency). 2007a. Toxicological Review of 1,1,1-Trichloroethane (CAS No. 71-55-6) In Support of Summary Information on the Integrated Risk Information System (IRIS). EPA/635/R-03/013. U.S. Environmental Protection Agency, Washington, DC. August 2007 [online]. Available: http://www.epa.gov/IRIS/toxreviews/0197-tr.pdf [accessed Jan. 25, 2008].
EPA (U.S. Environmental Protection Agency). 2007b. Chemical Categories Report. New Chemicals Program, Office of Pollution Prevention and Toxics, U.S. Environmental Protection Agency [online]. Available: http://www.epa.gov/opptintr/newchems/pubs/chemcat.htm [accessed Jan. 25, 2008].
EPA (U.S. Environmental Protection Agency). 2007c. Distributed Structure-Searchable Toxicity (DSSTox) Database Network. Computational Toxicology Program, U.S. Environmental Protection Agency [online]. Available: http://www.epa.gov/comptox/dsstox/ [accessed Jan. 25, 2008].
EPA (U.S. Environmental Protection Agency). 2007d. Human Health Research Program: Research Progress to Benefit Public Health. EPA/600/F-07/001. Office of Research and Development, U.S. Environmental Protection Agency, Washington, DC. April 2007 [online]. Available: http://www.epa.gov/hhrp/files/g29888-gpi-gpo-epa-brochure.pdf [accessed Oct. 21, 2008].
EPA SAB (U.S. Environmental Protection Agency, Science Advisory Board). 1997. An SAB Report: Guidelines for Cancer Risk Assessment. Review of the Office of Research and Development’s Draft Guidelines for Cancer Risk Assessment. EPA-SAB-EHC-97-010. Science Advisory Board, U.S. Environmental Protection Agency, Washington, DC. September 1997 [online]. Available: http://yosemite.epa.gov/sab/sabproduct.nsf/6A6D30CFB1812384852571930066278B/$File/ehc9710.pdf [accessed Jan. 25, 2008].
EPA SAB (U.S. Environmental Protection Agency, Science Advisory Board). 1999. Review of Revised Sections of the Proposed Guidelines for Carcinogen Risk Assessment. Review of the Draft Revised Cancer Risk Assessment Guidelines. EPA-SAB-EC-99-015. Science Advisory Board, U.S. Environmental Protection Agency, Washington, DC. July 1999 [online]. Available: http://yosemite.epa.gov/sab/sabproduct.nsf/857F46C5C8B4BE4985257193004CF904/$File/ec15.pdf [accessed Jan. 25, 2008].
EPA SAB (U.S. Environmental Protection Agency, Science Advisory Board). 2000. Review of EPA’s Draft Chloroform Risk Assessment. EPA-SAB-EC-00-009. Science Advisory Board, U.S. Environmental Protection Agency, Washington, DC. April 2000 [online]. Available: http://yosemite.epa.gov/sab/sabproduct.nsf/D0E41CF58569B1618525719B0064BC3A/$File/ec0009.pdf [accessed Jan. 25, 2008].
EPA SAB (U.S. Environmental Protection Agency, Science Advisory Board). 2004a. Commentary on EPA’s Initiatives to Improve Human Health Risk Assessment. Letter from Rebecca Parkin, Chair of the SAB Integrated Human Exposure, and William Glaze, Chair of the Science Advisory Board, to Michael O. Leavitt, Administrator, U.S. Environmental Protection Agency, Washington, DC. EPA-SAB-COM-05-001. October 24, 2004 [online]. Available: http://yosemite.epa.gov/sab/sabproduct.nsf/36a1ca3f683ae57a85256ce9006a32d0/733E51AAE52223F18525718D00587997/$File/sab_com_05_001.pdf [accessed Oct. 21, 2008].
EPA SAB (U.S. Environmental Protection Agency, Science Advisory Board). 2004b. EPA’s Multimedia, Multipathway, and Multireceptor Risk Assessment (3MRA) Modeling System. EPA-SAB-05-003. Science Advisory Board, U.S. Environmental Protection Agency, Washington, DC [online]. Available: http://yosemite.epa.gov/sab/sabproduct.nsf/99390EFBFC255AE885256FFE00579745/$File/SAB-05-003_unsigned.pdf [accessed Jan. 25, 2008].
EPA SAB (U.S. Environmental Protection Agency, Science Advisory Board). 2006. Science and Research Budgets for the U.S. Environmental Protection Agency for Fiscal Year 2007. EPA-SAB-ADV-06-003. Science Advisory Board, Office of the Administrator, U.S. Environmental Protection Agency, Washington, DC. March 30, 2006 [online]. Available: http://www.epa.gov/science1/pdf/sab-adv-06-003.pdf [accessed Dec. 5, 2007].
EPA SAB (U.S. Environmental Protection Agency, Science Advisory Board). 2007. Comments on EPA’s Strategic Research Directions and Research Budget for FY 2008. EPA-SAB-07-004. Science Advisory Board, Office of the Administrator, U.S. Environmental Protection Agency, Washington, DC. March 13, 2007 [online]. Available: http://www.epa.gov/science1/pdf/sab-07-004.pdf [accessed Dec. 5, 2007].
Finkel, A.M. 1994. The case for “plausible conservatism” in choosing and altering defaults. Appendix N-1 in Science and Judgment in Risk Assessment. Washington, DC: National Academy Press.
Finkel, A.M. 1997. Disconnect brain and repeat after me: “Risk assessment is too conservative.” Ann. N.Y. Acad. Sci. 837:397-417.
Finkel, A.M. 2003. Too much of the “Red Book” is still (!) ahead of its time. Hum. Ecol. Risk Assess. 9(5):1253-1271.
Gilman, P. 2006. Response to “IRIS from the Inside.” Risk Anal. 26(6):1413.
Gold, L.S., C.B. Sawyer, R. Magaw, G.M. Backman, M. de Veciana, R. Levinson, N.K. Hooper, W.R. Havender, L. Bernstein, R. Peto, M.C. Pike, and B.N. Ames. 1984. A carcinogenic potency database of the standardized results of animal bioassays. Environ. Health Perspect. 58:9-319.
Hattis, D., and M.K. Lynch. 2007. Empirically observed distributions of pharmacokinetic and pharmacodynamic variability in humans: Implications for the derivation of single point component uncertainty factors providing equivalent protection as existing RfDs. Pp. 69-93 in Toxicokinetics and Risk Assessment, J.C. Lipscomb and E.V. Ohanian, eds. New York: Informa Healthcare.
Kaldor, J.M., N.E. Day, and K. Hemminki. 1988. Quantifying the carcinogenicity of antineoplastic drugs. Eur. J. Cancer Clin. Oncol. 24(4):703-711.
McClellan, R.O., and D.W. North. 1994. Making full use of scientific information in risk assessment. Appendix N-2 in Science and Judgment in Risk Assessment. Washington, DC: National Academy Press.
Mills, A. 2006. IRIS from the Inside. Risk Anal. 26(6):1409-1410.
NRC (National Research Council). 1983. Risk Assessment in the Federal Government: Managing the Process. Washington, DC: National Academy Press.
NRC (National Research Council). 1994. Science and Judgment in Risk Assessment. Washington, DC: National Academy Press.
NRC (National Research Council). 2006a. Health Risks from Dioxin and Related Compounds: Evaluation of the EPA Reassessment. Washington, DC: The National Academies Press.
NRC (National Research Council). 2006b. Toxicity Testing for Assessment of Environmental Agents: Interim Report. Washington, DC: The National Academies Press.
NRC (National Research Council). 2007a. Quantitative Approaches to Characterizing Uncertainty in Human Cancer Risk Assessment Based on Bioassay Results. Second Workshop of the Standing Committee on Risk Analysis Issues and Reviews, June 5, 2007, Washington, DC [online]. Available: http://dels.nas.edu/best/risk_analysis/workshops.shtml [accessed Nov. 27, 2007].
NRC (National Research Council). 2007b. Toxicity Testing in the Twenty-first Century: A Vision and a Strategy. Washington, DC: The National Academies Press.
OECD (Organisation for Economic Co-operation and Development). 2007. Guidance on Grouping Chemicals. Series on Testing and Assessment No. 80. ENV/JM/MONO(2007)28. Environment Directorate, Joint Meeting of the Chemicals Committee and the Working Party on Chemicals, Pesticides and Biotechnology, Organisation for Economic Co-operation and Development. September 28, 2007 [online]. Available: http://appli1.oecd.org/olis/2007doc.nsf/linkto/env-jm-mono(2007)28 [accessed Jan. 25, 2008].
OMB (Office of Management and Budget). 1990. Current Regulatory Issues in Risk Assessment and Risk Management in Regulatory Program of the United States, April 1, 1990-March 31, 1991. Office of Management and Budget, Washington, DC.
Pacala, S.W., E. Bulte, J.A. List, and S.A. Levin. 2003. False alarm over environmental false alarms. Science 301(5637):1187-1188.
Perhac, R.M. 1996. Does risk aversion make a case for conservatism? Risk Health Saf. Environ. 7:297.
Risk Policy Report. 2004. EPA boron review reflects revised process to boost scientific certainty. Inside EPA’s Risk Policy Report 11(8):3.
van den Berg, M., L.S. Birnbaum, M. Denison, M. De Vito, W. Farland, M. Feeley, H. Fiedler, H. Hakansson, A. Hanberg, L. Haws, M. Rose, S. Safe, D. Schrenk, C. Tohyama, A. Tritscher, J. Tuomisto, M. Tysklind, N. Walker, and R.E. Peterson. 2006. The 2005 World Health Organization reevaluation of human and mammalian toxic equivalency factors for dioxins and dioxin-like compounds. Toxicol. Sci. 93(2):223-241.
Zeise, L. 1994. Assessment of carcinogenic risks in the workplace. Pp. 113-122 in Chemical Risk Assessment and Occupational Health: Current Applications, Limitations and Future Prospects, C.M. Smith, D.C. Christiani, and K.T. Kelsey, eds. Westport, CT: Auburn House.
Zeise, L., R. Wilson, and E.A.C. Crouch. 1984. Use of acute toxicity to estimate carcinogenic risk. Risk Anal. 4(3):187-199.
Zeise, L., E.A.C. Crouch, and R. Wilson. 1986. A possible relationship between toxicity and carcinogenicity. J. Am. Coll. Toxicol. 5(2):137-151.
Zhao, Q., J. Unrine, and M. Dourson. 1999. Replacing the default values of 10 with data-derived values: A comparison of two different data-derived uncertainty factors for boron. Hum. Ecol. Risk Assess. 5(5):973-983.