Part II
Strategies for Improving Risk Assessment

Previous chapters have examined the various steps of the health risk-assessment process in the sequence developed by the 1983 Red Book committee. In considering those steps, the committee observed that several common themes cut across the stages of risk assessment and arise in criticisms of each individual step. These themes are as follows:

• Default options. Is there a set of clear and consistent principles for choosing and departing from default options?
• Validation. Has the Environmental Protection Agency (EPA) made a sufficient case that its methods and models for carrying out risk assessments are consistent with current scientific information?
• Data needs. Is enough information available to EPA to generate risk assessments that are protective of public health and scientifically plausible? What types of information should EPA obtain, and how should that information best be used?
• Uncertainty. Has EPA taken sufficient account of the need to consider, describe, and make decisions in light of the inevitable uncertainty in risk assessment?
• Variability. Has EPA sufficiently considered the extensive variation among individuals in their exposures to toxic substances and in their susceptibilities to cancer and other health effects?
• Aggregation. Is EPA appropriately addressing the possibility of interactions among pollutants in their effects on human health, and the consideration of multiple exposure pathways and multiple adverse health effects?
The "Red Book" paradigm should be supplemented by a cross-cutting approach that uses those themes. Such an approach could ameliorate the following problems in risk assessment as it is currently practiced within the agency:

• The differing opinions in the scientific community on the merits of particular scientific evidence, and the resulting loss of credibility caused by periodic revisions of particular "risk numbers" (e.g., those for dioxin).
• The reluctance to incorporate new scientific information into risk assessments when it might (erroneously) appear to increase uncertainty.
• The incompatibility of various inputs to risk characterization, e.g., dose estimates in units that cannot be combined with more sophisticated dose-response evaluations, or hazard-identification evidence that cannot readily be integrated into potency assessment.
• The emphasis on theoretical modeling over measurement.
• The production of risk assessments that are either insufficiently informative or too detailed for the needs of risk managers, and the related lack of clear signals to guide risk-assessment research.

Considering the six cross-cutting themes in the planning and analysis of risk assessment will not by itself solve the problems of risk assessment. Indeed, too much emphasis on a cross-cutting vision of risk assessment might create unanticipated problems. On balance, however, the view of risk assessment proposed in Chapters 6-11 will serve two important purposes: it will give the individual cross-cutting themes a more prominent place in the risk-assessment process, and it will encourage attempts to improve risk assessment to evolve gradually from their current, somewhat piecemeal orientation to a more holistic one, with the goal of improving the precision, comprehensibility, and usefulness for regulatory decision-making of the entire risk-assessment process.
Whatever conceptual framework is used, the committee believes that EPA must develop principles for choosing default options and for judging when and how to depart from them. This controversial issue is described in the next section.

The Need For Risk-Assessment Principles

Our scientific knowledge of hazardous air pollutants has numerous gaps. Hence, there are many uncertainties in the health risk assessments of those pollutants. Some of these can be referred to as model uncertainties: for example, uncertainties regarding the choice of dose-response model due to a lack of knowledge about the mechanisms by which hazardous air pollutants elicit toxicity. As discussed more fully in Chapter 6, EPA has developed "default options" to use when such uncertainties arise. These options are used in the absence of convincing scientific information on which of several competing models and theories is correct. The options are not rules that bind the agency; rather, they constitute
guidelines from which the agency may depart when evaluating the risks posed by a specific substance. The agency may also change the guidelines as scientific knowledge accumulates. The committee, as discussed in Chapter 6, believes that EPA has acted reasonably in electing to issue default options. Without uniform guidelines, there is a danger that the models used in risk assessment will be selected on an ad hoc basis, according to whether regulating a substance is thought to be politically feasible or according to other parochial concerns. In addition, guidelines can provide a predictable and consistent structure for risk assessment.

The committee believes that merely describing the default options in a risk assessment is not adequate. We believe that EPA should have principles for choosing default options and for judging when and how to depart from them. Without such principles, departures from defaults could be ad hoc, thereby undercutting the purpose of the default options. Neither the agency nor interested parties would have any guidance about the quality or quantity of evidence necessary to persuade the agency to depart from the default options, or about the point(s) in the process at which to present that evidence. Moreover, without an underlying set of principles, EPA and the public will have no way to judge the wisdom of the default options themselves. The individual default options inevitably vary in their scientific basis, foundation in empirical data, degree of conservatism, plausibility, simplicity, transparency, and other attributes. If defaults were chosen without conscious reference to these or other attributes, EPA would be unable to judge the extent to which they fulfill the desired attributes. Nor could the agency make intelligent and consistent judgments about when and how to add new default options when "missing defaults" are identified.
In addition, the policies that underlie EPA's choice of risk-assessment methods would not be clear to the public and Congress: for example, it would be unclear whether EPA places the highest value on protecting public health, on generating scientifically accurate estimates, or on other concerns. The committee has identified a number of objectives that should be taken into account when considering principles for choosing and departing from default options: protecting the public health, ensuring scientific validity, minimizing serious errors in estimating risks, maximizing incentives for research, creating an orderly and predictable process, and fostering openness and trustworthiness. There might be additional relevant criteria. The choice of principles inevitably involves choosing how to balance such objectives. For instance, the most open process might not be the one that yields the result most likely to be scientifically valid. Similarly, the goal of minimizing errors in estimation might conflict with that of protecting the public health, inasmuch as (given the pervasiveness of uncertainty) achievement of the latter objective might involve accepting the possibility that a given risk assessment will overestimate the risk.
The committee therefore found it difficult to agree on what principles EPA should adopt. For example, the committee debated whether EPA should base its practices on "plausible conservatism": that is, on attempting to use models that have support in the scientific community and that tend to minimize the possibility that the risk estimates they generate will significantly underestimate true risks. The committee also discussed whether EPA should instead attempt, as much as possible, to base its practices on calculating the risk estimate most likely to be true in the light of current scientific knowledge. After extensive discussion, no consensus was reached on this issue. The committee also concluded that the choice of principles to guide risk assessment, although it requires scientific knowledge and scientific judgment, ultimately depends on policy judgments and thus is not an issue for specific consideration by the committee, even if the committee could agree on the substance of specific recommendations. The choice reflects decisions about how scientific data and inferences should be used in the risk-assessment process, not about which data are correct or what inferences should be drawn from those data. Thus, the selection of principles inevitably involves choices among competing values and among competing judgments about how best to respond to uncertainty. Many members contended that the committee ought not attempt to recommend principles, but should leave their formulation to the policy process. They concluded that weighing societal values is properly left to those who have been chosen, directly or indirectly, to represent the public.
Indeed, in the view of these members, any recommendation by the committee would give the false impression that the choice of principles is ultimately an issue of science. These members noted the sharp differentiation that Congress made between the tasks of this committee and those of the Risk Assessment and Management Commission established by Section 303 of the Clean Air Act Amendments of 1990; that commission, rather than this committee, appears to have been intended to address issues of policy. Other members contended that the committee should attempt to recommend principles. They urged that the choice of risk-assessment principles is one of the most important decisions to be made in risk assessment, and one on which risk-assessment experts, because of their expertise on the scientific issues related to the choice, ought to make themselves heard. They believe that the choice of principles is no more policy-laden than many other issues addressed by the committee, and that the decision not to recommend principles is itself a policy choice. They also note that the scientific elements involved in making the choice distinguish the selection of principles from other purely "policy" issues that the committee agreed not to address, such as the use of cost-benefit methods or the implications of the psychosocial dimensions of risk perception. The committee has decided not to recommend principles in its report. Instead, it has included in Appendix N papers by three of its members that offer various perspectives on the issue. One paper, by Adam Finkel, urges that EPA
should strive to advance scientific consensus while minimizing serious errors of risk underestimation, by adopting an approach of "plausible conservatism." The other, by Roger McClellan and Warner North, argues that EPA should promote risk assessments that reflect current scientific understanding. Those perspectives are not intended to reflect the full range of opinion among committee members on the subject, but are presented to illustrate the issues involved.

Reporting Risk Assessments

As already mentioned, uncertainties are pervasive in risk assessment. When uncertainty concerns the magnitude of a physical quantity that can be measured or inferred from assumptions (e.g., an ambient concentration), it can often be quantified, as Chapter 9 suggests. Model uncertainties result from an inability to determine which scientific theory is correct or which assumptions should be used to derive risk estimates. Such uncertainties cannot be quantified on the basis of data. Any expression of probability, whether qualitative (e.g., a scientist's statement that a threshold is likely) or quantitative (e.g., a scientist's statement that there is a 90% probability of a threshold), is likely to be subjective. Subjective quantitative probabilities could be useful in conveying the judgments of individual scientists to risk managers and to the public, but the process of assessing subjective probabilities is difficult and essentially untried in a regulatory context. Substantial disagreement and misunderstanding about the reliability of quantitative probabilities could occur, especially if their basis is not set forth clearly and in detail. In the face of important model uncertainties, it may be undesirable to reduce a risk characterization to a single number, or even to a range of numbers intended to portray uncertainty. Instead, EPA should consider giving risk managers risk characterizations that are both qualitative and quantitative, and both verbal and mathematical.
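One way to make a subjective quantitative probability concrete for a risk manager is to show its arithmetic consequences. The sketch below is purely illustrative and is not an EPA procedure: the two dose-response models, their slope and threshold values, and the subjective probability of 0.9 that a threshold exists are all invented for this example.

```python
# Illustrative sketch (not an EPA procedure) of reporting competing
# dose-response models alongside a subjective-probability-weighted summary.
# All numbers are hypothetical.

def linear_no_threshold_risk(dose, slope=1e-3):
    """No-threshold default: risk proportional to dose at low dose."""
    return slope * dose

def threshold_risk(dose, threshold=5.0, slope=1e-3):
    """Alternative model: no excess risk below a threshold dose."""
    return 0.0 if dose <= threshold else slope * (dose - threshold)

def weighted_risk(dose, p_threshold):
    """Average the two models, weighted by a subjective probability."""
    return (p_threshold * threshold_risk(dose)
            + (1 - p_threshold) * linear_no_threshold_risk(dose))

dose = 1.0  # below the hypothetical threshold
print(linear_no_threshold_risk(dose))        # 0.001
print(threshold_risk(dose))                  # 0.0
print(weighted_risk(dose, p_threshold=0.9))  # approximately 1e-4
```

Note that the weighted summary collapses the model disagreement into a single number; the text above suggests that reporting the individual model results, with their rationales, may serve risk managers better.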
If EPA takes this route, quantitative assessments provided to risk managers should be based on the principles selected by EPA. EPA might choose to require that a risk assessment be accompanied by a statement describing alternative assumptions presented to the agency that, although they do not meet the principles selected by EPA for use in the risk characterization, satisfy some lesser test (e.g., plausibility). For example, EPA generally assumes that no threshold exists for carcinogenicity and calculates cancer potency using the linearized multistage model as the default. Commenters to the agency on a specific substance might attempt to show that there is a threshold for that substance on the basis of what is known about its mechanism of action. If the threshold can be demonstrated in a manner that is satisfactory under the agency's risk-assessment principles, the risk characterization would be based on the threshold assumption. If such a demonstration cannot be made, then the risk characterization would be based on the no-threshold assumption; but if the threshold assumption were found to be
plausible, the risk manager might be informed of the assumption's existence, its rationale, and its effect on the risk estimate. In this way, risk managers would receive both qualitative and quantitative information relevant to characterizing the uncertainty associated with the risk estimate.

The Iterative Approach

One strategy component that deserves emphasis is the need for iteration. Neither the resources nor the necessary scientific data exist to perform a full-scale risk assessment on each of the 189 chemicals listed as hazardous air pollutants by Section 112 of the Clean Air Act. Nor, in many cases, is such an assessment needed. Some of the chemicals are unlikely to pose more than a de minimis (trivial) risk once the maximum available control technology is applied to their sources as required by Section 112. Moreover, most sources of Section 112 pollutants emit more than one such pollutant, and control technology for Section 112 pollutants is rarely pollutant-specific. Therefore, there might not be much incentive for industry to petition EPA to remove substances from Section 112's list (or much need for EPA to devote its resources to carrying out risk assessments in response to such petitions). An iterative approach to risk assessment would start with relatively inexpensive screening techniques and move to more resource-intensive levels of data-gathering, model construction, and model application as the particular situation warranted. To guard against the possibility of underestimating risk, screening techniques must be constructed to err on the side of caution when there is uncertainty. (As discussed in Chapter 12, the committee has some doubts about whether EPA's current screening techniques are so constructed.) The results of such screening should be used to set priorities for gathering further data and applying successively more complex techniques. These techniques should then be used to the extent necessary to make a judgment.
Chapter 7 describes the kinds of data that should be obtained at each stage of such an iterative process. The result would be a process that yields the risk-management decisions required by the Clean Air Act and provides incentives for further research, without the need for costly case-by-case evaluations of individual chemicals. An iterative approach can also improve the scientific basis of risk-assessment decisions while accounting for risk-management concerns, such as the desired level of protection and resource constraints.
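The tiered logic described above can be sketched in outline. Everything in this example is hypothetical: the function names, the de minimis cutoff of 10^-6 lifetime risk, and the unit-risk and emissions numbers are invented for illustration and are not EPA values.

```python
# Hypothetical sketch of an iterative (tiered) screening approach.
# Thresholds and numbers are invented; they are not EPA values.

def screening_estimate(emissions_tpy, conservative_unit_risk):
    """Tier 1: cheap, deliberately conservative upper-bound risk estimate.

    Worst-case assumptions (e.g., a maximally exposed individual, no
    credit for dispersion or decay) are folded into the unit risk, so
    a true risk below this estimate is very unlikely.
    """
    return emissions_tpy * conservative_unit_risk

def needs_refined_assessment(screen_risk, de_minimis=1e-6):
    """If even the conservative screen is below de minimis, stop here."""
    return screen_risk >= de_minimis

# Tier 1 screen for a hypothetical pollutant
screen = screening_estimate(emissions_tpy=10.0, conservative_unit_risk=1e-8)
if needs_refined_assessment(screen):
    # Tier 2 and beyond: gather monitoring data, run dispersion models,
    # apply more realistic dose-response models, and so on.
    print("refine: collect data and apply more realistic models")
else:
    print("de minimis: no further assessment needed")
# prints: de minimis: no further assessment needed
```

The design point is the one the text makes: because the screen is conservative, a pollutant dismissed at Tier 1 can be dismissed with confidence, while anything that passes the screen receives progressively fuller scrutiny.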
6
Default Options

EPA's risk-assessment practices rest heavily on "inference guidelines" or, as they are often called, "default options." These options are generic approaches, based on general scientific knowledge and policy judgment, that are applied to various elements of the risk-assessment process when the correct scientific model is unknown or uncertain. The 1983 NRC report Risk Assessment in the Federal Government: Managing the Process defined a default option as "the option chosen on the basis of risk assessment policy that appears to be the best choice in the absence of data to the contrary" (NRC, 1983a, p. 63). Default options are not rules that bind the agency; rather, as the alternative term inference guidelines implies, the agency may depart from them in evaluating the risks posed by a specific substance when it believes a departure to be appropriate. In this chapter, we discuss EPA's practice of adopting guidelines containing default options and departing from them in specific cases.

Adoption Of Guidelines

As our discussion of risk assessment has made clear, current knowledge of carcinogenesis, although rapidly advancing, still contains many important gaps. For instance, for most carcinogens, we do not know the complete relationship between the dose of a carcinogen and the risk it poses. Thus, when there is evidence of a carcinogenic effect at a high concentration (for instance, in the workplace or in animal testing), we do not know for certain how strong the effect (if any) would be at the lower concentrations typically found in the environment. Similarly, we do not know how much importance to attach to experiments that
show that exposure to a substance causes only benign tumors in animals or how to adjust for metabolic differences between animals and humans in calculating the carcinogenic potency of a chemical. Other uncertainties are not peculiar to carcinogenesis, but are characteristic of many aspects of risk assessment. For example, calculating the doses received by individuals might require knowledge of the relationship between emission of a substance by a source and the ambient concentration of that substance at a particular place and time. It is impossible to install a monitor at every place where people might be exposed; moreover, monitoring results are subject to error. Thus, regulators attempt to use air-quality models to predict ambient concentrations. But because our knowledge of atmospheric processes is imperfect and the data needed to use the models cannot always be obtained, the predictions from atmospheric-transport models can differ substantially from measured ambient concentrations (NRC, 1991a).

In time, we hope, our knowledge and data will improve. Indeed, we believe that EPA and other government agencies must engage in scientific research and be receptive to the results of sound scientific research conducted by others. In the meantime, decisions about regulating hazardous air pollutants must be made under conditions of uncertainty. It is vital that the risk-assessment process handle uncertainties in a predictable way that is scientifically defensible, consistent with the agency's statutory mission, and responsive to the needs of decisionmakers.

These uncertainties, as we explain further in Chapter 9, are of two major types. One type, which we call parameter uncertainty, is caused by our inability to determine accurately the values of key inputs to scientific models, such as emissions, ambient concentrations, and rates of metabolic action.
The second type, model uncertainty, is caused by gaps in our knowledge of mechanisms of exposure and toxicity, gaps that make it impossible to know for certain which of several competing models is correct. For instance, as mentioned above, we often do not know whether a threshold may exist below which a dose of a carcinogen will not result in an adverse effect. As we discuss in Chapter 9, model uncertainties, unlike parameter uncertainties, are often difficult to quantify. The Red Book recommended that model uncertainties be handled through the development of uniform inference guidelines for the use of federal regulatory agencies in the risk-assessment process. Such guidelines would structure the interpretation of scientific and technical information relevant to the assessment of health risks. The guidelines, the report urged, should not be rigid, but instead should allow flexibility to consider unique scientific evidence in particular instances. The Red Book described the advantages of such guidelines as follows (pp. 7-8):
The use of uniform guidelines would promote clarity, completeness, and consistency in risk assessment; would clarify the relative roles of scientific and other factors in risk assessment policy; would help to ensure that assessments reflect the latest scientific understanding; and would enable regulated parties to anticipate government decisions. In addition, adherence to inference guidelines will aid in maintaining the distinction between risk assessment and risk management.

This committee believes that those considerations continue to be valid. In particular, we stress the importance of inference guidelines as a way of keeping risk assessment and risk management from unduly influencing each other. Without uniform guidelines, risk assessments might be manipulated on an ad hoc basis according to whether regulating a substance is thought to be politically feasible. In addition, we believe that inference guidelines can provide a predictable and consistent structure for risk assessment and that a statement of guidelines forces an agency to articulate publicly its approach to model uncertainty.

Like the committee that produced the 1983 NRC report, we recognize that there is an inevitable interplay between risk assessment and risk management. As the 1983 report states (pp. 76, 81), "risk assessment must always include policy, as well as science," and "guidelines must include both scientific knowledge and policy judgments." Any choice of defaults, or the decision not to have defaults at all, therefore amounts to a policy decision. Indeed, without a policy decision, the report stated, risk-assessment guidelines could do no more than "state the scientifically plausible inference options for each risk assessment component without attempting to select or even suggest a preferred inference option" (NRC, 1983a, p. 77). Such guidelines would be virtually useless.
The report urged that risk-assessment guidelines include risk-assessment policy and explicitly distinguish between scientific knowledge and risk-assessment policy, to keep policy decisions from being disguised as scientific conclusions (NRC, 1983a, p. 7). It also urged that, for consistency, policy judgments related to risk assessment be based on a common principle or principles.

We believe that EPA acted reasonably in electing to issue Guidelines for Carcinogen Risk Assessment (EPA, 1986a). Those guidelines set out policy judgments about the accommodation of model uncertainties that are used to assess risk in the absence of a clear demonstration that a particular theory or model should be used. For instance, the default options indicate that, in assessing the magnitude of risk to humans associated with low doses of a substance, "in the absence of adequate information to the contrary, the linearized multistage procedure will be employed" (EPA, 1986a, p. 33997). The linearized multistage procedure implies low-dose linearity: at low doses, if the dose is reduced by, say, a factor of 1,000, the risk is also reduced by a factor of 1,000; dose is linearly related to risk. Departure from this default option is allowed, under EPA's current guidelines, if there is "adequate evidence" that the mechanism through which the substance is carcinogenic is more consistent with a different model; for instance, that there is a threshold below which exposure is not associated with a risk. Thus, in the absence of evidence to the contrary, the default option guides the decision-maker and assigns the burden of persuasion to those who wish to show that the linearized multistage procedure should not be used.

Similar default options cover such important issues as the calculation of effective dose, the treatment of benign tumors, and the procedure for scaling animal-test results to estimates of potency in humans. Some default options are concerned with issues of extrapolation: from laboratory animals to humans, from large to small exposures (or doses), from intermittent to chronic lifetime exposures, and from route to route (as from ingestion to inhalation). That is because few chemicals have been shown in epidemiologic studies to cause measurable numbers of human cancers directly, and epidemiologic data on only a few of these are sufficient to support quantitative estimates of human cancer risk. In the absence of adequate human data, it is necessary to use laboratory animals as surrogates for humans.

One advantage of guidelines, as already noted, is that they can articulate both the agency's choice of individual default options and its rationale for choosing all of the options. EPA's guidelines set out individual options, but not with ideal clarity; nor has the agency explicitly articulated the scientific and policy bases for its options. Hence, there might be disagreement about precisely what the agency's default options are and what the rationales for those options might be.
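The low-dose linearity of the linearized multistage default can be checked numerically. The multistage model gives lifetime risk P(d) = 1 - exp(-(q0 + q1·d + q2·d^2)), so the extra risk over background at low dose is approximately q1·d. The coefficients below are invented for illustration; they are not the potency values for any real chemical.

```python
import math

# Illustrative multistage dose-response model with hypothetical
# coefficients. The linearized procedure rests on the fact that, at
# low dose, extra risk is approximately q1 * dose.

q0, q1, q2 = 1e-4, 2e-3, 5e-4  # hypothetical values

def lifetime_risk(dose):
    """Multistage model: P(d) = 1 - exp(-(q0 + q1*d + q2*d^2))."""
    return 1.0 - math.exp(-(q0 + q1 * dose + q2 * dose**2))

def extra_risk(dose):
    """Risk above background, P(d) - P(0)."""
    return lifetime_risk(dose) - lifetime_risk(0.0)

# Low-dose linearity: cutting dose 1,000-fold cuts extra risk
# by very nearly 1,000-fold, and extra risk tracks q1 * dose.
for dose in (1e-2, 1e-5):
    print(dose, extra_risk(dose), q1 * dose)
```

At high doses the quadratic term matters and linearity breaks down; the linearized procedure is a statement about the low-dose regime only.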
We attempt here to identify the most important of the options (numbered points in the 1986 guidelines are cited):

• Laboratory animals are a surrogate for humans in assessing cancer risks; positive cancer-bioassay results in laboratory animals are taken as evidence of a chemical's cancer-causing potential in humans (IV).
• Humans are as sensitive as the most sensitive animal species, strain, or sex evaluated in a bioassay with appropriate study-design characteristics (III.A.1).
• Agents that are positive in long-term animal experiments and also show evidence of promoting or cocarcinogenic activity should be considered complete carcinogens (II.B.6).
• Benign tumors are surrogates for malignant tumors, so benign and malignant tumors are added in evaluating whether a chemical is carcinogenic and in assessing its potency (III.A.1 and IV.B.1).
• Chemicals act like radiation at low exposures (doses) in inducing cancer; i.e., intake of even one molecule of a chemical has an associated probability of cancer induction that can be calculated, so the appropriate model for exposure-response relationships is the linearized multistage model (III.A.2).
• Important biological parameters, including the rate of metabolism of
chemicals, in humans and laboratory animals are related to body surface area. When extrapolating metabolic data from laboratory animals to humans, one may use the relationship of surface area in the test species to that in humans in modifying the laboratory-animal data (III.A.3).
• A given unit of intake of a chemical has the same effect regardless of the time of its intake; chemical intake is integrated over time, irrespective of intake rate and duration (III.B).
• Individual chemicals act independently of other chemicals in inducing cancer when multiple chemicals are taken into the body; when assessing the risks associated with exposure to mixtures of chemicals, one treats the risks additively (III.C.2).

EPA has never articulated the policy basis for those options. As we discussed in the introduction to Part II, the agency should choose and explain the principles underlying its choices to avoid the dangers of ad hoc decision-making. The agency's choices are for the most part intended to be conservative; that is, they represent an implicit choice by the agency, in dealing with competing plausible assumptions, to use as default options the assumptions that lead to risk estimates that, although plausible, are believed to be more likely to overestimate than to underestimate the risk to human health and the environment. EPA's risk estimates thus are intended to reflect the upper region of the range of risks suggested by current scientific knowledge. EPA appears to use conservative assumptions to implement Congress's authorization in several statutes, including the Clean Air Act, for the agency to undertake preventive action in the face of scientific uncertainty (see, e.g., Ethyl v. EPA, 541 F.2d 1 (D.C. Cir.) (en banc), certiorari denied 426 U.S.
941 (1976), ratified by Section 401 of the Clean Air Act Amendments of 1977) and to set standards that include a precautionary margin of safety against unknown effects and errors in calculating risks (see Environmental Defense Fund v. EPA, 598 F.2d 62, 70 (D.C. Cir. 1978) and Natural Resources Defense Council v. EPA, 824 F.2d 1146, 1165 (en banc) (D.C. Cir. 1987)).

EPA's choice of defaults has been controversial. We note, though, that some of the arguments about EPA's practices are directed less at conservatism itself than at the means of implementation that the agency has adopted. We believe that the iterative approach recommended in the previous chapter, combined with quantitative uncertainty analysis, will improve the agency's practices regardless of the degree of conservatism chosen by the agency. We also note that with an iterative approach, the agency must use relatively conservative models in performing screening estimates designed to indicate whether a pollutant warrants further analysis and comprehensive risk assessment. Such estimates are intended to obviate the detailed assessment of risks that can, with a high degree of confidence, be deemed acceptable or de minimis (trivial). By definition, therefore, screening analyses must be sufficiently conservative to make sure that a pollutant that could pose dangers to health or welfare will receive full scrutiny.
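The surface-area default listed earlier (option III.A.3) has a simple arithmetic core: if metabolic rates scale with body surface area, taken to vary as body weight to the 2/3 power, then a dose expressed in mg/kg/day converts between species by the cube root of the body-weight ratio. The body weights and dose below are illustrative, and other scaling powers (such as body weight to the 3/4 power) have also been proposed.

```python
# Sketch of the surface-area interspecies scaling default (III.A.3).
# Equating dose per unit surface area (surface area taken to scale as
# body weight^(2/3)) gives D_a * BW_a^(1/3) = D_h * BW_h^(1/3).
# Body weights below are illustrative round numbers.

MOUSE_BW_KG = 0.03
HUMAN_BW_KG = 70.0

def human_equivalent_dose(animal_dose_mg_kg_day, animal_bw_kg,
                          human_bw_kg=HUMAN_BW_KG):
    """Human-equivalent mg/kg/day dose under surface-area scaling."""
    return animal_dose_mg_kg_day * (animal_bw_kg / human_bw_kg) ** (1.0 / 3.0)

d_h = human_equivalent_dose(100.0, MOUSE_BW_KG)
print(round(d_h, 2))  # 7.54
```

Because the cube-root factor is well below 1 for small test animals, a given mg/kg/day mouse dose maps to a much smaller human-equivalent dose; humans are in effect assumed to be more sensitive per unit body weight, which is the conservative direction.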
TABLE 6-1 Cancer Incidence in B6C3F1 Female Mice Exposed to Methylene Chloride and Human Cancer Risk Estimates Derived from Animal Data

Animal Data

Administered     Transformed Animal   Human Equivalent   Incidence of   Incidence of
Concentration    Dose, mg/kg/day      Dose, mg/kg/day    Liver Tumors   Lung Tumors
4000             3162                 712                40/46          41/46
2000             1582                 356                16/46          16/46
0                0                    0                  3/45           3/45

Human Risk Estimates

Extrapolation Model                        Cancer Risk^b for 1 µg/m³
LMS^a, surface area                        4.1 × 10^-6
LMS, PB-PK^c                               3.7 × 10^-8
Logit                                      2.1 × 10^-13
Weibull                                    9.8 × 10^-8
Probit                                     <10^-15
LMS, PB-PK with scaling for sensitivity    4.7 × 10^-7

^a LMS = linearized multistage model.
^b Upper 95% confidence limit.
^c PB-PK = physiologically based pharmacokinetic.

SOURCE: Modified from Reitz et al., 1989.

A correct calculation of the risk posed by methylene chloride therefore rests on understanding the human body's processes for metabolizing this chemical. Research with the animal species used in the bioassays and with human tissue has shed light on the metabolism of methylene chloride. Much of the research was conducted with the goal of providing input for physiologically based pharmacokinetic (PBPK) models (Andersen et al., 1987, 1991; Reitz et al., 1989). The data were modeled in various ways, including consideration of two metabolic pathways. One involves oxidation by mixed-function oxidase (MFO) enzymes, and the other involves a glutathione-S-transferase (GST). Both pathways involve the formation of potentially reactive intermediates: formyl chloride in the MFO pathway and chloromethyl glutathione in the GST-mediated pathway. The MFO pathway was modeled as having saturable, or Michaelis-Menten, kinetics, and the GST pathway as a first-order reaction, i.e., one whose rate is proportional to concentration. The analyses suggested that a reactive metabolite formed in the GST pathway
was responsible for tumor formation. This pathway, according to the analyses, contributes importantly to the disposition of methylene chloride only at exposures that saturate the primary MFO pathway. The analyses further indicated that the GST pathway is less active in human tissues than in mice. This suggests that the default option of scaling for surface area yields a human risk estimate that is too high to be plausible. EPA incorporated the data on pharmacokinetics and metabolism into its most recent risk assessment for methylene chloride, although it retained a surface-area correction factor, now identifying it as a correction for interspecies differences in sensitivity. The new risk estimate is 4.7 × 10⁻⁷ for continuous exposure at 1 µg/m³ (Table 6-1).

The process by which EPA arrived at the current risk estimate for methylene chloride with PBPK modeling involved use of peer-review groups and Science Advisory Board (SAB) review to achieve a scientifically acceptable consensus position on the validity of the alternative model. After EPA's re-evaluation, however, articles in the peer-reviewed literature began to focus attention on parameter uncertainties in PBPK modeling, which neither EPA nor the original researchers in the methylene chloride case had considered. In the specific case of methylene chloride, at least one of the analyses (Portier and Kaplan, 1989) suggested that, according to the new PBPK information, EPA should have raised, rather than lowered, its original unit risk estimate if it wanted to continue to take a conservative stance. The more general point, which we discuss in Chapter 9, is that EPA must simultaneously consider both the evidence for departing from default models and the need to generate or modify the parameters that drive both the alternative and default models.

Formaldehyde

The toxicity and carcinogenicity of formaldehyde, a widely used commodity chemical, have been intensely studied and recently reviewed (Heck et al., 1990; EPA, 1991e).
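The two-pathway kinetics described above for methylene chloride, a saturable (Michaelis-Menten) MFO pathway competing with a first-order GST pathway, can be sketched in a few lines. All rate parameters below are hypothetical, chosen only to show the qualitative behavior; they are not values from the published PBPK models.

```python
# Illustrative sketch of two competing metabolic pathways:
# a saturable MFO pathway and a first-order GST pathway.
# Parameter values are hypothetical, for illustration only.

def mfo_rate(c, vmax=10.0, km=5.0):
    """Saturable mixed-function oxidase flux (Michaelis-Menten kinetics)."""
    return vmax * c / (km + c)

def gst_rate(c, k=0.05):
    """First-order glutathione-S-transferase flux, proportional to concentration."""
    return k * c

def gst_fraction(c):
    """Fraction of total metabolism carried by the GST pathway at concentration c."""
    total = mfo_rate(c) + gst_rate(c)
    return gst_rate(c) / total

# At low concentration the MFO pathway dominates; as it saturates,
# an increasing share of metabolism shifts to the GST pathway.
for c in (1.0, 10.0, 100.0, 1000.0):
    print(f"concentration {c:7.1f}: GST fraction = {gst_fraction(c):.3f}")
```

This shows why, under the modeled kinetics, the GST-derived metabolite implicated in tumor formation contributes importantly only at exposures high enough to saturate the MFO pathway.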
Concern for the potential human carcinogenicity of formaldehyde was heightened by the observation that exposure of rats at high concentrations (14.3 ppm) resulted in a very large increase in the incidence of nasal cancer. That observation gave impetus to the conduct and interpretation of epidemiologic studies of formaldehyde-exposed human populations. In the aggregate, the 28 studies that have been reported provide limited evidence of human carcinogenicity (EPA, 1991e). The "limited" classification is used primarily because the incidence of cancers of the upper respiratory tract has been confounded by exposure to other agents known to increase the rate of cancer, such as cigarette smoke and wood dusts. The effects of chronic inhalation of formaldehyde have been investigated in rats, mice, hamsters, and monkeys. The principal evidence of carcinogenicity comes from studies in both sexes and two strains of rats and the males of one strain of mice, all showing squamous cell carcinomas of the nasal cavity.
The results of the rat bioassay have been used to derive quantitative risk estimates for cancer induction in humans (Kerns et al., 1983). Table 6-2 shows these animal data and the estimates of human cancer risk based on different exposure-dose models. (The table uses the inhalation cancer unit risk: the lifetime risk of developing cancer from continuous exposure at 1 ppm.)

TABLE 6-2 Incidence of Nasal Tumors in F344 Rats Exposed to Formaldehyde and Comparison of EPA Estimates of Human Cancer Risk Associated with Continuous Exposure to Formaldehyde

Exposure Rate (ppm)(a)    Incidence of Rat Nasal Tumors
14.3                      94/140
5.6                       2/153
2.0                       0/159
0                         0/156

Exposure                1987 Risk        1991 Risk Estimates(c)
Concentration (ppm)     Estimates(b)     Monkey-Based     Rat-Based

Upper 95% Confidence Limit Estimates
1.0                     2 × 10⁻²         7 × 10⁻⁴         1 × 10⁻²
0.5                     8 × 10⁻³         2 × 10⁻⁴         3 × 10⁻³
0.1                     2 × 10⁻³         3 × 10⁻⁵         3 × 10⁻⁴

Maximum Likelihood Estimates
1.0                     1 × 10⁻²         1 × 10⁻⁴         1 × 10⁻²
0.5                     5 × 10⁻⁴         1 × 10⁻⁵         1 × 10⁻³
0.1                     5 × 10⁻⁷         4 × 10⁻⁷         3 × 10⁻⁵

a Exposed 6 hr/day, 5 days/week, for 2 years.
b Estimated with the 1987 inhalation cancer unit risk of 1.6 × 10⁻² per ppm, which used airborne concentration as the measure of exposure.
c Estimated with the 1991 inhalation cancer unit risks of 2.8 × 10⁻³ per ppm (rat) and 3.3 × 10⁻⁴ per ppm (monkey), which used DNA-protein cross-links as the measure of exposure.
SOURCE: Adapted from EPA, 1991b.

The 1987 EPA risk estimate (EPA, 1987c) measured exposure as the airborne concentration of formaldehyde. The rat bioassay shows a steep nonlinear exposure-response relationship for nasal-tumor induction. For example, two tumors were observed at 5.6 ppm, whereas 37 would have been expected from linear extrapolation from 14.3 ppm. Similarly, no tumors were observed at 2 ppm, whereas linear extrapolation from 14.3 ppm would have predicted 15. The key issue became whether the same exposure-response relationship exists in people as in rats. To determine the answer, researchers directed substantial effort toward investigating the mechanisms by which formaldehyde exerted a carcinogenic effect. One avenue of investigation was directed toward characterizing DNA-protein cross-links as a measure of internal dose of formaldehyde (Heck et al., 1990). That work, initially conducted in rats, demonstrated a steep nonlinear relationship between formaldehyde concentration and formation of DNA-protein cross-links in nasal tissue, where most inhaled formaldehyde is deposited in rats. This suggested a correlation between such cross-links and tumors. When the studies were extended to monkeys, a similar nonlinear relationship was observed between exposure concentration and DNA-protein cross-links in nasal tissue, but the concentration of DNA-protein cross-links per unit of exposure concentration was substantially lower than in the rat. Because the breathing patterns of humans more closely resemble those of monkeys than those of rats, the results of these studies suggested that using rats as a surrogate for humans might overestimate doses to humans, and hence the risk presented to humans by formaldehyde.

EPA's most recent risk assessment (EPA, 1991e) used DNA-protein cross-links as the exposure indicator and estimated the human cancer risk accordingly (Table 6-2). EPA noted that the cross-links were being used only as a measure of delivered dose and that present knowledge was insufficient to ascribe a mechanistic role to the DNA-protein cross-links in the carcinogenic process.

The EPA risk estimates for formaldehyde have been the subject of extensive peer review and review by the SAB. The 1992 update was reviewed by the SAB Environmental Health Committee and Executive Committee. The SAB recommended that the agency attempt to develop an additional risk estimate using the epidemiological data and prepare a revised document reporting all the risk estimates developed by the alternative approaches, with their associated uncertainties.

The two examples just discussed used mechanistic data and modeling to improve the characterization of the exposure-dose link.
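The steepness of the nonlinearity in the rat bioassay can be checked with a back-of-the-envelope calculation that scales the top-dose incidence (94/140 at 14.3 ppm, Table 6-2) linearly down to the lower exposure groups. This naive calculation gives about 40 expected tumors at 5.6 ppm (the text cites 37, presumably reflecting adjustments not reproduced here) and about 15 at 2.0 ppm, against 2 and 0 actually observed.

```python
# Back-of-the-envelope linear extrapolation from the top dose of the
# formaldehyde rat bioassay (Table 6-2). Under a linear-in-exposure
# model, expected tumor counts scale with exposure and group size.

def linear_expected(top_ppm, top_tumors, top_n, ppm, n):
    """Expected tumor count at exposure `ppm` in a group of `n` animals,
    extrapolating linearly from the incidence observed at `top_ppm`."""
    incidence_per_ppm = (top_tumors / top_n) / top_ppm
    return incidence_per_ppm * ppm * n

top = (14.3, 94, 140)                       # 94/140 tumors at 14.3 ppm

exp_5_6 = linear_expected(*top, 5.6, 153)   # observed: 2/153
exp_2_0 = linear_expected(*top, 2.0, 159)   # observed: 0/159

print(f"expected at 5.6 ppm: {exp_5_6:.0f} (observed 2)")
print(f"expected at 2.0 ppm: {exp_2_0:.0f} (observed 0)")
```

The large gap between the linearly expected and observed counts at the lower exposures is what motivated the search for a nonlinear internal-dose metric such as DNA-protein cross-links.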
It is possible that, as knowledge increases, models can be developed that link dose to response; the possibility is discussed further in Chapter 7. The same is true of the linearized multistage model, which, as noted earlier, assumes that risk is linear in dose. Rats exposed to formaldehyde, however, show a steep nonlinear exposure-response relationship, which raises the possibility that the linearized multistage model is inappropriate for at least some chemicals. Advances in knowledge of the molecular and cellular mechanisms of carcinogenesis might show a need to use other models, either case by case or generically; more discussion of this matter can be found in Chapter 7.

The strategy advocated for formaldehyde would build on multistage models of the carcinogenic process that describe the accumulation of procarcinogenic mutations in target cells and the consequent malignant conversion of these cells (Figure 6-1). The Moolgavkar-Venzon-Knudson model substantially oversimplifies the carcinogenic process but provides a structural framework for integrating and examining data on the role of DNA-protein cross-links, cell replication, and other biologic phenomena in formaldehyde-induced carcinogenesis (Moolgavkar and Venzon, 1979; Moolgavkar and Knudson, 1981; Moolgavkar et al., 1988; NRC, 1993b).

FIGURE 6-1 Model of chemical carcinogenesis built on the multistage carcinogenesis model of Moolgavkar-Venzon-Knudson. SOURCE: Conolly et al., 1992. Reprinted with permission; copyright 1992 by Gordon & Breach, London.

Key features of this model are the definition of the relationship of target-tissue dose to exposure and the use of that dose as a determinant of three outcomes: reactivity with DNA, mitogenic alterations, and cytolethality. These, in turn, cause further biologic effects: DNA reactivity leads to mutations, the mitogenic stimuli increase the rate of cell division, and cells die (cell death stimulates compensatory cell proliferation). Models like that shown in Figure 6-1 provide a structured approach for integrating data on a toxicant, such as formaldehyde. It is anticipated that modeling will provide insight into the relative importance, at various exposure concentrations, of the two mechanisms that appear to have a dominant role in formaldehyde carcinogenesis: mutation and cell proliferation. Improved insight into their roles could provide a mechanistic basis for selecting between the linearized multistage mathematical model now used for extrapolation from high to low doses and alternative models that might have more biologic plausibility.

Trichloroethylene

Trichloroethylene (TCE) is a chlorinated solvent that has been widely used in the industrial degreasing of metals. TCE is a concern to EPA as an air pollutant, a water pollutant, and a substance frequently present in ground water at Superfund sites. EPA carried out a risk assessment for TCE documented in a health assessment document (HAD) (EPA, 1985d) and a draft addendum incorporating additional inhalation-bioassay data (EPA, 1987e). Both documents were reviewed by the SAB (EPA, 1984a; EPA, 1988j,k). The second document has not been issued in final form, and no further revision of EPA's risk assessment on TCE has been made since 1987.

The carcinogenic potency of TCE is based on the liver-tumor response in B6C3F1 mice, a strain particularly prone to liver tumors. The carcinogenicity of TCE might result from trichloroacetic acid (TCA), a metabolite of TCE that is itself known to cause liver tumors in mice. TCA is one of a number of chemicals that cause proliferation of peroxisomes, an intracellular organelle, in liver cells. Peroxisome proliferation has been proposed as a causal mechanism for the liver tumors, and proponents have asserted that such tumors should be treated in risk assessments differently from the way they are evaluated under EPA's default assumptions. In particular, human liver cells might be much less sensitive than mouse liver cells to tumor formation from this mechanism, and the dose-response relationship might be nonlinear at low doses.

The SAB held a workshop in 1987 on peroxisome proliferation as part of its reviews of risk assessments for TCE and other chlorinated solvents. While endorsing a departure from the default on the alpha-2u-globulin mechanism described in example 1 above, the SAB declined to endorse such a departure for peroxisome proliferation, noting that a causal relationship for this mechanism was "plausible but unproven." The SAB strongly encouraged further research, describing this mechanism for mouse liver tumors as "most promising for immediate application to risk assessment" (EPA, 1988k). The SAB also criticized the draft addendum on TCE (EPA, 1987e) for not adequately presenting uncertainties and for not seriously evaluating recent studies on the role of peroxisome proliferation (EPA, 1988l).
In the TCE case, departure from the defaults was rejected after an SAB review that recognized the peroxisome proliferation mechanism as plausible. Controversy over the interpretation of liver tumors in B6C3F1 mice continues. Some scientists assert that EPA's use of the tumor-response data from this particularly sensitive strain has been inappropriate (Abelson, 1993; ILSI, 1992). In the TCE example, departure from the defaults might become appropriate, on the basis of improved understanding of mouse liver tumors and their implications for human cancer. Although the SAB declined to endorse such a departure in 1987, it strongly encouraged further research as appropriate for supporting improved risk assessment. Cadmium Cadmium compounds are naturally present at trace levels in most environmental media, including air, water, soil, and food. Substantial additional amounts might result from human activities, including mining, electroplating, and disposal of municipal wastes. EPA produced an HAD on cadmium (EPA, 1981b) and
later an updated mutagenicity and carcinogenicity assessment (EPA, 1985e). The latter went through SAB review (EPA, 1984b), which pointed out many weaknesses and research needs for improving the risk assessment. No revision of the risk assessment on cadmium has occurred since 1985.

EPA used epidemiological data to develop a single unit risk estimate for all cadmium compounds. Use of the estimate from the best available bioassay would have given a unit risk for cadmium compounds higher by a factor of 50. The SAB and EPA in its response to SAB comments (EPA, 1985f) agreed that the solubility and bioavailability of different cadmium compounds were important in determining the risk associated with different cadmium compounds and that such differences might explain the discrepancy between the epidemiological data and the bioassay data. No implementation of the principle that cadmium compounds should be evaluated on the basis of bioavailability has yet been devised, although its importance to risk assessment for some air pollutants that contain cadmium is clearly set forth in EPA's response to the SAB (EPA, 1985f).

EPA's existing risk assessment for cadmium might be judged adequate for screening purposes. But the SAB review and the EPA response to it suggest that the carcinogenic risk associated with a specific cadmium compound could be overestimated or underestimated, because bioavailability has not been included in the risk assessment. A refined version of the risk assessment that includes bioavailability might be appropriate, especially if residual risks for cadmium compounds appear to be important under the Clean Air Act Amendments of 1990.

Nickel

Nickel compounds are found at detectable levels in air, water, food, and soil. Increased concentrations of airborne nickel result from mining and smelting and from combustion of fuel that contains nickel as a trace element.
Nickel compounds present in smelters that use the pyrometallurgical refining process are clearly implicated as human carcinogens. EPA's HAD on nickel (EPA, 1986b) lists dust from such refineries and nickel subsulfide as category A (known human) carcinogens. A rare nickel compound, nickel carbonyl, is listed, on the basis of sufficient evidence in animals, as category B2. Other nickel compounds are not listed as carcinogens, although EPA states (EPA, 1986b, p. 2-11): The carcinogenic potential of other nickel compounds remains an important area for further investigation. Some biochemical and in vitro toxicological studies seem to indicate the nickel ion as a potentially carcinogenic form of nickel and nickel compounds. If this is true, all nickel compounds might be potentially carcinogenic with potency differences related to their ability to enter and to make the carcinogenic form of nickel available to a susceptible cell. However, at the present time, neither the bioavailability nor the carcinogenesis mechanism of nickel compounds is well understood.
The SAB reviewed the nickel HAD and concurred with EPA's listing of only the three rare nickel species as category A and B2 carcinogens (EPA, 1986c). The results of bioassays on three nickel species by the National Toxicology Program are due to be released soon, and these results should provide a basis for revision of risk assessments for nickel compounds.

The cadmium and nickel examples point out an important additional default option: Which compounds should be listed as carcinogens when it is suspected that a class of chemical compounds is carcinogenic? Neither the cadmium risk assessment, the nickel risk assessment, nor EPA's Guidelines for Carcinogen Risk Assessment (EPA, 1986a) provides specific guidance on this issue.

Dioxins

"Dioxins" is a commonly used name for a class of organochlorine compounds that can form as the result of the combustion or synthesis of hydrocarbons and chlorine-containing substances. One isomer, 2,3,7,8-tetrachlorodibenzo-p-dioxin (TCDD), is one of the most potent carcinogens ever tested in bioassays. EPA issued an HAD for dioxins (EPA, 1985g), which the SAB criticized for its treatment of the non-TCDD isomers that may contribute substantially to the overall toxicity of a mixture of dioxins (EPA, 1985h). The potency calculation for TCDD has continued to be a subject of controversy. Research indicates that the toxic effects of TCDD may result from the binding of TCDD to the Ah (aromatic hydrocarbon) receptor.

In 1988, EPA asked the SAB to review a proposal to revise its risk estimate for TCDD. The SAB agreed with EPA's criticism of the linearized multistage model and its assessment of the promise of alternative models based on the receptor mechanism. But the SAB did not agree that there was adequate scientific support for a change in the risk estimate.
The SAB carefully distinguished its recommendation from a change that EPA might wish to make as part of risk management (EPA, 1989f):

The Panel thus concluded that at the present time the important new scientific information about 2,3,7,8-TCDD does not compel a change in the current assessment of the carcinogenic risk of 2,3,7,8-TCDD to humans. EPA may for policy reasons set a different risk-specific dose number for the cancer risk of 2,3,7,8-TCDD, but the Panel finds no scientific basis for such a change at this time. The Panel does not exclude the possibility that the actual risks of dioxin-induced cancer may be less than or greater than those currently estimated using a linear extrapolation approach.

A recent conference affirmed the scientific consensus on the receptor mechanism for TCDD, but there was no consensus that this mechanism implied a basis for departure from low-dose linearity (Roberts, 1991). After the conference, and after the recommendations of the SAB (EPA, 1989f), EPA initiated a new study to reassess the risk for TCDD. That study is now in draft form and scheduled for SAB review in 1994.
The potencies of other dioxin isomers and of isomers of a closely related chemical class, the dibenzofurans, have been estimated by EPA with a toxic-equivalency-factor (TEF) method (EPA, 1986d). The TEF method was endorsed by the SAB as a reasonable interim approach in the absence of data on these other isomers (EPA, 1986e). The SAB urged additional research to collect such data. Municipal incinerator fly ash was used as an example of a mixture of isomers of regulatory importance that might be appropriate for long-term animal testing.

The EPA initiative for a review of TCDD is one of the few instances in which the agency has initiated revision of a carcinogen risk assessment on the basis of new scientific information. Dioxins and dibenzofurans are unique in that potency differences within this class of closely related chemical isomers are dealt with through a formal method that has undergone peer review by the SAB.

Example 3: Modeling Exposure-Response Relationship

If chemicals act like radiation in inducing cancer at low exposures (doses), i.e., if intake of even one molecule of a chemical has an associated probability of cancer induction that can be calculated, the appropriate model for relating exposure to response is a linearized multistage model. Of the 189 hazardous air pollutants, unit risk estimates are available for only 51: 38 with inhalation unit risks, which are applicable to airborne materials, and 13 with oral unit risks. The latter probably have less applicability to estimating the health risks associated with airborne materials. All 38 inhalation unit risk values have been derived with a linearized multistage model; i.e., it is assumed that the chemicals act like radiation. That might be an appropriate assumption for chemicals known to affect DNA directly in a manner analogous to that of radiation.
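The linearized multistage form just described can be sketched as follows. The polynomial coefficients below are hypothetical, for illustration only; the key property is that the extra risk above background is approximately linear in dose (about q1 times dose) at the low exposures of regulatory concern.

```python
import math

# Sketch of the linearized multistage (LMS) dose-response form:
#   P(d) = 1 - exp(-(q0 + q1*d + q2*d^2 + ...))
# The coefficients q are hypothetical, chosen for illustration.

def lms_risk(d, q):
    """Lifetime cancer probability at dose d under a multistage model."""
    return 1.0 - math.exp(-sum(qi * d**i for i, qi in enumerate(q)))

def extra_risk(d, q):
    """Risk above background; the quantity behind unit-risk estimates."""
    p0 = lms_risk(0.0, q)
    return (lms_risk(d, q) - p0) / (1.0 - p0)

q = [0.01, 0.5, 2.0]  # hypothetical background, linear, and quadratic terms

# At low dose the extra risk is approximately q1*d, i.e., linear in dose;
# this is the sense in which the chemical is assumed to "act like radiation".
for d in (1e-6, 1e-4, 1e-2):
    print(f"dose {d:g}: extra risk {extra_risk(d, q):.3g}")
```

Because the higher-order terms vanish at low dose, the LMS gives near-linear low-dose extrapolation regardless of any curvature in the experimentally observable range; a unit risk estimate corresponds to an upper confidence bound on the linear coefficient.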
For other chemicals, e.g., such nongenotoxic chemicals as chloroform, the assumption of a mode of action similar to that of radiation might be erroneous, and it would be appropriate to consider the use of biologically based exposure-response models other than the linearized multistage model. The process of choosing between alternative exposure-response models is difficult, because the models cannot be validated directly for their applicability to estimating lifetime cancer risks at exposures of regulatory concern. Indeed, cancer incidence in exposed laboratory animals can be distinguished from the control incidence only over a narrow range, from somewhat above 1% (10⁻²) to about 50% (5 × 10⁻¹). In the regulation of chemicals, the extrapolation from experimental observations to estimated risks of cancer at exposures of regulatory concern may span up to 4 orders of magnitude (from 10⁻² to 10⁻⁶). One approach to increasing the accuracy with which comparisons between measured outcomes and model projections can be made involves increasing the size of the experimental populations. However, statistical considerations, the cost of studying large numbers of animals, and the greater difficulty of experimental control in
larger studies put narrow limitations on the use of this approach. Similar problems exist in conducting epidemiological studies.

An attractive alternative is to use advances in knowledge of the molecular and cellular mechanisms of carcinogenesis. Identification of events (e.g., cell proliferation) and markers (e.g., DNA adducts, suppressor genes, oncogenes, and gene products) associated with various steps in the multistep process of carcinogenesis creates a potential for modeling these events and products at low exposure. Direct tests of the validity of exposure-response models at risks of around 10⁻⁶ are not likely in the near future. However, with an order-of-magnitude improvement in sensitivity, permitting detection of precancerous events with probabilities of occurrence down to around 10⁻³ to 10⁻², the opportunity will be available to evaluate alternative modes of action and related exposure-response models at substantially lower exposure concentrations than has been possible in the past. For example, it should soon be possible to evaluate compounds that are presumed to have different modes of action (direct interaction with DNA and genotoxicity versus cytotoxicity) and alternative models (linearized multistage versus threshold) that might yield markedly different risks when extrapolated to realistic exposures and low risks.

Findings and Recommendations

Use of Default Options

FINDING: EPA's practice of using default options when there is doubt about the choice of appropriate models or theory is reasonable. EPA should have a means of filling the gap when scientific theory is not sufficiently advanced to ascertain the correct answer, e.g., in extrapolating from animal data to responses in humans.

RECOMMENDATION: EPA should continue to regard the use of default options as a reasonable way to cope with uncertainty about the choice of appropriate models or theory.
Articulation of Defaults

FINDING: EPA does not clearly articulate in its risk-assessment guidelines that a specific assumption is a default option.

RECOMMENDATION: EPA should clearly identify each use of a default option in future guidelines.

Justification for Defaults

FINDING: EPA does not fully explain in its guidelines the basis for each default option.

RECOMMENDATION: EPA should clearly state the scientific and policy basis for each default option.
Alternatives to Default Options

FINDING: EPA's practice appears to be to allow departure from a default option in a specific case when it ascertains that there is a consensus among knowledgeable scientists that the available scientific evidence justifies departure from the default option. EPA, though, has not articulated criteria for allowing departures.

RECOMMENDATION: The agency should consider attempting to give greater formality to its criteria for a departure, to give greater guidance to the public and to lessen the possibility of ad hoc, undocumented departures from default options that would undercut the scientific credibility of the agency's risk assessments. At the same time, the agency should be aware of the undesirability of having its guidelines evolve into inflexible rules.

Process for Departures

FINDING: EPA has relied on its Science Advisory Board and other expert bodies to determine when a consensus among knowledgeable scientists exists.

RECOMMENDATION: EPA should continue to use the Science Advisory Board and other expert bodies. In particular, the agency should continue to make the greatest possible use of peer review, workshops, and other devices that ensure broad scientific participation, so that its risk-assessment decisions will have access to the best science available through a process that allows full public discussion and peer participation by the scientific community.

Missing Defaults

FINDING: EPA has not stated all the default options in each step of the risk-assessment process, nor the steps to be used when there is no default. Chapters 7 and 10 elaborate on this matter and identify several possible "missing defaults."

RECOMMENDATION: EPA should explicitly identify each generic default option in the risk-assessment process.