

The National Academies | 500 Fifth St. N.W. | Washington, D.C. 20001
Copyright © National Academy of Sciences. All rights reserved.




PART I Introduction: The Problem and an Approach

Risk Assessment: Historical Perspectives

Seymour L. Friess

Beginning in the 1930s in the United States and Europe, the protection of human health from chemicals in the workplace, the marketplace, and the environment became a commonly recognized international goal. The general approach toward that goal developed over time, but it came to be characterized roughly by processes that involved the development of some form of human dosage versus response relationship for undesirable health effects, the assessment of risk for those effects under specified modes of exposure to the chemical in question, and, finally, the setting of permissible exposure limits for the chemical in various exposure situations based on some form of societal perception of acceptable risk.

To set this chain of procedures into action, it was first necessary to generate some form of trustworthy picture of dosage versus response for the most serious or most sensitive health effect that a chemical might produce in a human target. Indeed, since the early 1900s data had begun to accumulate on such health effects in human populations occupationally exposed to major industrial chemicals. In the 1930s and thereafter, data on dosage versus response were also generated in ever-increasing volume by dosing experiments in experimental animal systems under laboratory conditions. These toxicological experiments with rodents or larger mammals had the special merit of permitting exposure to much larger dosages and concentrations of chemical than adventitious occupational exposures of worker populations, and of providing precise measures of the total exposure of the biological target rather than the general guesses found in the early industrial epidemiological studies. The animal toxicology data, usually from subchronic or chronic exposure experiments, were always clearly indicated in the early studies as being pointed toward their ultimate use of predicting the risk of the corresponding health effects in human populations exposed to the chemical of concern. Toxicology was, and is, intended as a predictive science, with test data from animal experiments to be translated into assessments of risk for human and other populations. Generally, only the toxicological experiments furnished quantitative dose-response data for specific effects produced at specific target tissues or organs.

The risk assessment process, therefore, beginning roughly in the 1930s, took the form of an initial review of the epidemiological health effects data available for a given chemical in worker and user populations and of the dose versus response data generated in test animal systems. Somehow, usually by deliberations of a committee of specialists in the health sciences, the body of epidemiological and animal toxicological data for the chemical was assessed for scope and reliability, and then interpreted in terms of the most probable form for the dosage versus response relationship for each serious health effect in the human as a general target. For a given health effect, then, the relationship could be displayed either as a curve of dosage versus anticipated response or, in an attempt to linearize the relationship, as a curve of log dosage versus percentage response. The display could also take other forms, in parallel with modes of display developed in pharmacology.

Whatever the display mode, however, the predicted human dose-response curve was then used for two purposes. First, it could be used to predict human response amplitudes under a specified exposure scenario.
Second, by accepting a 5 percent response amplitude as being essentially a no-effect level within the limits of biological variability in populations, the curve could be used to establish the human no-observable-effect level (NOEL). This procedure was, and is, a primitive quantitative risk assessment methodology.

A variant on this mode of generating human NOELs for specific health effects produced by a given chemical has also enjoyed wide international usage, beginning in the 1940s. Since the toxicological data base is usually far more extensive and more quantitative than that available from occupational epidemiological observations, the concept developed that human risk assessment (in the form of human NOELs) for health effects from a given chemical could be generated from the animal data base alone and later validated as human data accumulated. In this mode, if data exist for a spectrum of adverse health effects produced by a chemical in an assortment of test animal systems, either reversible or irreversible effects, the risk assessor selects the most serious health effect in the most sensitive animal species tested and uses the data to estimate the animal NOEL for that effect. In the choice among several health effects involving different tissues and organs, that effect may be selected which is manifested at the lowest dose range. Then, by application of a suitable safety factor (SF) or translation factor to the animal NOEL, a human NOEL is projected.

This translation procedure evolved. At least two key review papers in the 1950s serve as milestones in this evolution: that of Barnes and Denz (1954) and that produced by Lehman and colleagues (1959) at the U.S. Food and Drug Administration. In particular, the rationale for and size of the safety factor to be employed in the animal-human NOEL translation was developed largely at the hands of Lehman et al., although variants are still being discussed in today's literature. For a well-defined toxic action at a target tissue which appears to display a dose threshold and which is at least moderately reversible, a widely applicable SF of 100 was postulated, with a first factor of 10 for the NOEL translation from animal to human and a second factor of 10 to account for the variability in sensitivities in human populations. For more serious, irreversible types of effects, even including carcinogenesis in earlier considerations of risk assessment for this chronic effect, additional safety factors (range, 2-10 and higher) were factored into the fundamental SF of 100. For example, at times in the last two decades, total safety factors of 2,000 or 3,000 have been mentioned as being applicable to the apparent NOEL for tumorigenesis in a chronic animal bioassay of a chemical in converting to an estimate of a NOEL for tumorigenesis in human populations.
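The safety-factor arithmetic described above can be sketched in a few lines of Python. The animal NOEL value and the extra severity factor below are hypothetical, chosen only to illustrate the basic SF of 100 and the larger total factors quoted for irreversible effects.

```python
# Sketch of the classical safety-factor (SF) translation of an animal NOEL
# into a projected human NOEL. All numerical inputs here are hypothetical.

def human_noel(animal_noel_mg_per_kg, interspecies=10.0, intraspecies=10.0,
               severity=1.0):
    """Project a human NOEL by dividing the animal NOEL by safety factors.

    interspecies : factor of 10 for the animal-to-human translation
    intraspecies : factor of 10 for variability within human populations
    severity     : extra factor (2-10 or higher) for serious, irreversible effects
    """
    total_sf = interspecies * intraspecies * severity
    return animal_noel_mg_per_kg / total_sf

# A hypothetical reversible effect with an animal NOEL of 50 mg/kg/day:
print(human_noel(50.0))                   # SF = 100 -> 0.5 mg/kg/day
# The same NOEL with an extra factor of 20 (total SF = 2,000):
print(human_noel(50.0, severity=20.0))    # -> 0.025 mg/kg/day
```

The point of the sketch is only that the whole procedure reduces to a division; the judgment lies entirely in choosing the factors.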
It should be noted that the simplistic form of risk assessment considered up to this point was always generated singly, chemical by chemical, and was also viewed as a prediction that should be validated or rejected as additional animal and human dose versus response data for a chemical were developed. When multiple chemical exposures were considered, the state of the art and knowledge extended only to the possibility of additivity for closely related structural analogs that produced similar effects on a given target tissue by similar interaction mechanisms. The possibilities of synergism, potentiation, or antagonism in multichemical exposures were discussed, but were rarely attacked in the form of a joint risk assessment.

An important point of departure from this relatively simplistic but practical methodology for risk assessment related to health effects from chemical exposures took place in the 1960s-1970s, with the evolution of the regulatory concept that there could be no such thing as a NOEL for chemical carcinogenesis in humans and no such observable as a practical threshold for the complex carcinogenic process in mammals. All exposures were conceived of as contributing finite increments of excess lifetime cancer risk from the chemical in question, regardless of whether repair processes were operable at some level after the initial attack on DNA. From this regulatory philosophy there evolved the process of modeling these excess lifetime risks for carcinogenesis in humans based on postulated human exposure scenarios and the observed tumorigenic responses measured in chronic bioassay experiments with test animal systems (usually rodents). The modeling process has been labeled quantitative risk assessment for chemical carcinogenesis, and because it has moved in certain sectors of the public perception from being viewed as a predictive technique into the status of a supposedly factual presentation of real human risks, it deserves some explicit analysis, as follows.

1. The process starts with a chronic bioassay, usually in rodent systems, in which dosing with the test chemical extends over more than 0.5 lifetime of the species involved, with daily administered doses at the maximum tolerated level and one or more submultiples of the maximum tolerated dose (MTD).

2. The data, in the form of administered dosages versus tumorigenic responses, are then fitted to one or more modeling equations (for example, one-hit, multistage, Weibull) of a form in which the probability of excess cancer development is some function of the average daily lifetime dose.

3. Generally, additional restrictions are placed on this modeling step in the form of assumptions that the tumorigenesis process has no threshold (zero probability only at zero dose), and that the modeling equation is linear in the very low dose region, regardless of the curve's shape at the very high administered doses (MTD, 0.5 MTD, 0.33 MTD) under which the bioassay is performed.

4. The fitted equation of the modeler's choice is then used to extrapolate from the high-dose region of animal administered dose versus response down to theoretical response levels at a very low dose.

5. The assumption is then made (explicitly or implicitly) that the low-dose sector of the animal-fitted curve can be used to predict human risks for tumorigenesis by the test chemical at very low ambient exposure levels.
In this translation process, it is then customary to make a gross correction for metabolic differences in handling of the chemical by the animal and the human, in the form of a correction ratio applied to dose based on either relative body weights or body surface areas. The implicit assumption in this correction process is that the mechanisms of handling, target organ specificity, etc., in the two mammalian species are similar.

6. Finally, under a series of postulated human exposure scenarios, each of which leads to a calculated average daily lifetime dose (administered), the low-dose equation for the animal is used to predict a corresponding series of excess lifetime cancer risks in humans.

Regulatory communities worldwide now use these modeled risks for prioritizing their regulatory activities over a wide range of potentially carcinogenic chemicals in the environment or in the workplace, and for setting numerical triggers to be used in initiating rulemaking or restrictions on specific chemicals. These regulatory activities are important and are powerfully assisted by the quantitative risk assessment process. The process itself is being extended by investigators to cover simultaneous exposures to many potentially carcinogenic chemicals found at low levels in the environment, such as the assortment of chemicals found in drinking water supplies, by the use of risk combination techniques (e.g., Crouch et al., 1983).

It should be realized, however, that there are scientific problems inherent in the use of the modeling techniques based on animal bioassay data. To cite just a few: (1) there are no general experimental or theoretical justifications for the modeling assumptions, the validity of high dose-low dose extrapolations of the animal bioassay data, or animal-to-human translations; (2) the methodology provides no insight as to what delivered dose of what active material (original chemical or a metabolite) delivered to what target tissue should actually be modeled in a truly meaningful risk assessment; and (3) there is no assurance in any given risk assessment that modeling of the administered dosage data has any direct relationship to delivered dosage, for the test compound or an active metabolite. Indeed, recent examples abound to show a lack of administered dose/delivered dose congruence. All of these points have been well recognized and discussed in the 1970-1986 time frame by scientists concerned with making quantitative risk assessment more meaningful and sound.
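As an illustration of the fitting and extrapolation steps in the numbered procedure above, the sketch below uses the one-hit model, P(d) = 1 - exp(-q1*d), solved at a single hypothetical high-dose bioassay point and then extrapolated linearly to a low ambient dose. The bioassay numbers are invented; a real fit would use maximum likelihood across all dose groups.

```python
import math

# Hedged sketch of one-hit low-dose extrapolation. All numbers are invented.

def fit_one_hit_q1(dose, excess_response):
    """Solve P = 1 - exp(-q1*d) for q1 from one (dose, response) point."""
    return -math.log(1.0 - excess_response) / dose

# Suppose the MTD group (100 mg/kg/day) showed 40 percent excess tumors:
q1 = fit_one_hit_q1(100.0, 0.40)

# In the very low dose region the model is effectively linear: P(d) ~ q1*d.
low_dose = 0.001  # mg/kg/day, a postulated ambient exposure
print(q1 * low_dose)  # predicted excess lifetime risk, ~5.1e-6
```

Note how the no-threshold assumption is built in: the fitted curve passes through zero risk only at zero dose, so every positive exposure carries a finite predicted excess risk.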
Therefore, the need has been recognized for moving quantitative risk assessment to a more realistic dimension, particularly by the use of data from metabolic and comparative pharmacokinetic studies of a given chemical which, when combined with chronic bioassay data from animal experiments, can lead to knowledge about the active chemical species that reaches specific target tissues at measurable delivered concentrations (as a function of time) in the species of prime interest, humans. The purpose of the workshop on which this volume is based was to review progress in this development of the risk assessment process and to probe the current strengths and weaknesses in this area.

REFERENCES

Barnes, J. M., and F. A. Denz. 1954. Experimental methods used in determining chronic toxicity. Pharmacol. Rev. 6:191-242.

Crouch, E. A. C., R. Wilson, and L. Zeise. 1983. The risks of drinking water. Water Resources Res. 19:1359-1375.

Lehman, A. J., F. A. Vorhes, et al. 1959. Appraisal of the Safety of Chemicals in Foods, Drugs and Cosmetics. The Association of Food and Drug Officials of the United States. 107 pp.

Tissue Dosimetry in Risk Assessment, or What's the Problem Here Anyway?

Melvin E. Andersen

INTRODUCTION

The overall risk assessment process integrates hazard assessment data on chemical toxicity with exposure assessment information (Figure 1). Hazard assessment is the process by which the toxicity of a chemical is determined, either by a series of bioassay experiments with intact test animals or by observing increased morbidity/mortality in exposed humans. Often there is no human epidemiology on particular chemicals, and risk managers have to rely solely on results of animal toxicity experiments for the hazard assessment. These animal experiments allow us to generate dose-response information on how much chemical is required to produce a specified degree of toxicity in test animals. The major challenges in the hazard assessment process are to generalize toxicity results in the test animal to (1) predict what will happen in test animals given much lower amounts of chemical; (2) predict what will happen in an entirely different species of animal, namely, humans; and (3) predict what will happen in a different species receiving a chemical by a route of administration different from that used in the animal studies. These are all problems of extrapolating beyond the conditions used in the actual toxicity studies to predict outcome under very different exposure conditions in a variety of species. What concepts tie these problems together and give us some confidence in the ultimate success of efforts to develop methods to conduct these extrapolations?

[Figure 1 appears here: a flow diagram linking toxicology and extrapolations to the hazard assessment; contaminant concentration and location together with exposure duration to the exposure assessment; and both assessments to the risk assessment, which in turn supports risk management, engineering design or remedial action, and engineering/cost trade-off analysis.]

FIGURE 1 Elements of chemical risk assessment. The overall risk assessment process consists of both a hazard assessment and an exposure assessment. These provide information on which to make a risk analysis and give the risk manager detailed information on which to make decisions regarding acceptable environmental concentrations of a toxic chemical, cost-effective engineering design efforts for reducing effluent or emission concentrations, and the feasibility of replacing one chemical by another in a particular industrial application. Pharmacokinetic modeling is useful in hazard assessment, where it can aid in estimating realistic measures of target tissue dose in exposed animals and be used to support extrapolations to estimate tissue dose in humans.

Basically, there seem to be two fundamental assumptions which toxicologists are forced to make in attempting quantitative extrapolations based on animal toxicity experiments. The first is that experimental animals are true surrogates for exposed humans. That is, chemicals would cause effects in the same tissues in humans as those tissues in which they cause effects in the exposed test animals, and the mechanisms of these effects would be qualitatively similar in the two different species. This assumption accepts that there is a qualitative similarity in effects in different species. There are instances where this assumption is suspect. For instance, vinyl chloride causes Zymbal gland cancer in rats (Maltoni and Lefemine, 1974), but humans do not have this structure. However, vinyl chloride is obviously carcinogenic in a variety of animal species.
In any case, the universal validity of this assumption of qualitative similarity in toxicity is really not within the purview of the papers in this volume. It does seem to be valid in the great majority of instances. Instead, this volume focuses our attention on the other basic assumption that we are forced to make to conduct our risk assessment calculations.

This second tenet is that there is a quantitative equivalence in the tissue chemical exposure required to produce an equivalent intensity of biological effect in various species. This is the concept of tissue dose equivalence. More simply stated, all species are regarded to have equal sensitivity to the toxic chemical. Again, there are notable exceptions, such as the extreme interspecies differences in toxicity of 2,3,7,8-tetrachlorodibenzo-p-dioxin (Kociba and Schwetz, 1982). For more simple toxicities, related to reactive chemical moieties, this assumption seems entirely appropriate. In addition, the species and strain differences in dioxin toxicity might diminish substantially if the dose were expressed in relation to the concentration and affinity of the dioxin receptor(s) in these various animal species (Poland and Knutson, 1982). The catch, of course, is that tissue dose is not a simple concept and will be different for different chemicals. In fact, the real problem in hazard assessment is defining and measuring tissue dose under a variety of exposure conditions in several species.

A DOSE OF WHAT?

This hazard assessment process sounds deceptively simple. Determine the toxic tissue dose in the test species and calculate the exposure conditions under which this dose is likely to be achieved in humans. All we need to do is to define tissue dose. As a working definition, we can say that an appropriate measure of tissue dose is some measure of the intensity of chemical exposure which is directly linked to the biological processes leading to toxicity or tumor formation. With this definition, it is clear that some presumption of the mechanism of interaction between the chemical and the tissue is required before we can define tissue dose for any particular chemical. What then are the primary processes by which chemicals interact with tissue constituents to cause biological changes in the tissue?
The first process is by direct chemical reaction, in which the toxic chemical reacts with and consumes cellular constituents (Figure 2). With this type of interaction the expected degree of damage, as loss of cellular constituents or accumulation of bound reactive intermediate, should be related to the time integral of tissue exposure to the reactive chemical. This time integral of tissue exposure is also called the area under the tissue concentration curve for the reactive chemical. The equations for reactivity in Figure 2 are true only for acute-exposure situations. In chronic administration, the equation should be expanded to include terms for the synthesis and normal catabolism of the macromolecules.

The second common process by which chemicals interact with tissue is by noncovalent binding to cellular receptor molecules. This is the mechanism by which dioxin is presumed to interact to initiate toxic changes in cells. This binding, with concomitant changes in receptor occupancy, causes some response on the part of the organism which is ultimately expressed as toxicity. The therapeutic action of most drugs is also related to specific receptor binding (Goldstein et al., 1974). With this type of interaction, the response of the cell is dependent on the occupancy of the receptor, and occupancy is determined by the binding constant for the chemical and the

HOW DOES TISSUE DOSE ELICIT TOXIC RESPONSE?

Chemical reactivity:

    T + MM --k1--> T-MM (depletion)

    d(MM)/dt = -k1 (T)(MM)

    ln[(MM)_t / (MM)_0] = -k1 (AUTC)

Receptor binding:

    T + R <--> TR (occupancy), dissociation constant Kd

    occupancy = TR / (TR + R) = (T) / ((T) + Kd)

FIGURE 2 Tissue dose metrics and their relation to toxicity. Toxic chemicals interact with tissues by two general processes. In one case, chemical reactivity, the toxic chemical, T, reacts with cellular macromolecules, MM, to cause covalent binding of the toxic chemical and a depletion in the concentration of MM. Simultaneously, there are increases in the bound toxic chemical, T-MM, which for genotoxic carcinogens can be regarded as similar to a DNA adduct. Solution of the rate equation for loss of MM with time shows that the natural logarithm of the remaining MM is proportional to the second-order rate constant for the reaction of T with MM and the area under the tissue time course concentration of T (AUTC). In the second case, receptor binding, T binds to a receptor molecule, R, with a dissociation constant, Kd. Toxicity develops due to occupancy of the receptor with some attendant biological consequence. Occupancy is the ratio of bound receptor (TR) and total receptor (TR + R). As shown, this is equivalent to (T)/((T) + Kd); i.e., occupancy is determined by the free concentration of T and the binding constant.
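The two dose metrics of Figure 2 can be sketched numerically. The rate constant, dissociation constant, and concentration time course below are all invented for illustration.

```python
import math

# Sketch of the Figure 2 dose metrics: AUTC-driven macromolecule depletion
# and equilibrium receptor occupancy. All parameter values are hypothetical.

def autc(times, conc):
    """Area under the tissue concentration curve (trapezoid rule)."""
    return sum((t2 - t1) * (c1 + c2) / 2.0
               for t1, c1, t2, c2 in zip(times, conc, times[1:], conc[1:]))

# Chemical reactivity: ln[(MM)_t / (MM)_0] = -k1 * AUTC
k1 = 0.02                          # second-order rate constant (invented)
times = [0.0, 1.0, 2.0, 4.0, 8.0]  # h
conc = [0.0, 4.0, 3.0, 1.5, 0.0]   # mg/L of T in tissue (invented)
area = autc(times, conc)           # mg*h/L
mm_remaining = math.exp(-k1 * area)  # fraction of macromolecule still intact

# Receptor binding: occupancy = (T) / ((T) + Kd)
def occupancy(free_conc, kd):
    """Fraction of receptor bound, TR / (TR + R)."""
    return free_conc / (free_conc + kd)

print(area, mm_remaining, occupancy(2.0, kd=2.0))
```

The contrast the figure draws comes through clearly: the reactivity metric depends on the whole time course (the integral), whereas occupancy depends only on the instantaneous free concentration relative to Kd, with half-occupancy when (T) equals Kd.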

have provided a very good review of the status of PB-PK modeling of chemical disposition. These PB-PK models describe the body in terms of realistic tissue compartments with specified volumes, blood flows, partition coefficients, and tissue binding characteristics (Gargas et al., 1986; Ramsey and Andersen, 1984). Biochemical constants for metabolic pathways and for tissue binding can be included in the mass-balance equations for organs in which these interactions are important. For most of these metabolic pathways the important constants are the maximum velocity of the reaction (Vmax, in milligrams per kilogram) and the binding affinity of the particular substrate for the metabolizing enzyme (in milligrams per liter). Complex metabolic pathways involving parallel or sequential reactions of the parent chemical or involving interactions between chemical metabolism and cofactor depletion can also be readily incorporated into these models, as necessary (H. J. Clewell III and M. E. Andersen, this volume).

The entire process of problem definition, tissue dose assignment, and pharmacokinetic model development can be captured in a simplistic flow diagram (Figure 4). In this representation the process of model formulation comes after evaluation of the nature of the problem and consideration of the impact of mechanism on the choice of tissue dose metric. It is followed by exercising the model, evaluating its success at predicting known kinetic and toxicity behavior, designing necessary experiments to collect crucial data for verifying or improving model performance, and refining the model when needed. A successful model can then be used as an integral part of the hazard assessment process.
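As a minimal illustration of the mass-balance idea behind these models, the sketch below integrates a single liver compartment with blood-flow delivery and saturable (Michaelis-Menten) metabolism. Every parameter value is hypothetical, and a real PB-PK model would carry several such tissue compartments simultaneously.

```python
# One-tissue sketch of a PB-PK mass balance: a liver compartment supplied
# by blood flow, with saturable metabolism. All constants are invented.

def simulate_liver(c_arterial, hours, dt=0.001,
                   q_liver=0.9,   # blood flow to liver, L/h (invented)
                   v_liver=0.01,  # liver volume, L (invented)
                   p_liver=4.0,   # liver:blood partition coefficient
                   vmax=1.5,      # maximum metabolic rate, mg/h
                   km=0.5):       # affinity constant, mg/L
    """Euler integration of dA/dt = Q*(Ca - Cv) - Vmax*Cv/(Km + Cv),
    with venous concentration Cv = (A/V)/P (venous equilibration)."""
    amount = 0.0  # mg of chemical in the liver
    for _ in range(int(hours / dt)):
        cv = (amount / v_liver) / p_liver
        metabolism = vmax * cv / (km + cv)
        amount += dt * (q_liver * (c_arterial - cv) - metabolism)
    return amount / v_liver  # liver tissue concentration, mg/L

# Constant 1 mg/L arterial exposure; the tissue approaches ~1.33 mg/L:
print(simulate_liver(c_arterial=1.0, hours=5.0))
```

At steady state the flow-limited delivery, Q*(Ca - Cv), exactly balances metabolism; with these invented constants that balance occurs at a venous concentration of 1/3 mg/L, i.e., a tissue concentration of 4/3 mg/L.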
The take-home lesson here is that pharmacokinetic modeling is not some knee-jerk process where the investigator collects blood time course curves and draws limited inferences about the behavior of the chemical in the body by an abstract mathematical curve-fitting procedure. Instead, the pharmacokinetic modeling intended for risk assessment use is an integrating process which should be done early on, before major data collection efforts. It should provide a comprehensive description of chemical disposition in target organs and be designed to predict human kinetic behavior when the biochemical metabolic constants and the tissue-binding characteristics of the chemical have been determined in human tissues.

The remainder of this chapter discusses tissue dose for various mechanisms of carcinogenesis, identifies essential elements required in PK models for tracking these particular forms of tissue dose, and emphasizes that pharmacokinetic model development will often suggest a need to collect critical metabolic or kinetic data that might not be available from the literature. In fact, it would be completely wrong to believe that PK models should be developed on existing toxicity data bases. The existing literature can be helpful for model definition, for drawing conclusions about the nature of appropriate measures of tissue dose, and for providing limited PK information, but it is also replete with experiments which are virtually useless for hazard assessment.

[Figure 4 appears here: a flow chart running from problem identification through literature evaluation (mechanisms of toxicity; biochemical constants; physiological constants) to model formulation, simulation, comparison to data, model refinement, and a validated model used for extrapolation to humans, with a parallel branch for designing and conducting critical experiments.]

FIGURE 4 Simplified flow chart of the development of a pharmacokinetic model intended for use in risk assessment. Problem identification: the finding of a particular toxicity, in a particular organ(s), in a particular species. This effect is the benchmark on which the risk assessment will be conducted. Literature evaluation: the integration of available information about the mechanism of toxicity, the pathways of chemical metabolism, the nature of the toxic chemical species, the tissue-binding characteristics, and the physiological parameters of the target species. From these data a model is developed to estimate the appropriate measure of tissue exposure for a wide variety of exposure conditions. The essential elements to be included in such a model are outlined in the text. Before a PK model can be used in human risk assessment it has to be validated against kinetic, metabolic, and toxicity information and in many cases refined based on comparison with experimental results. The model development process can frequently be used to design critical experiments to collect data needed for final validation.
If a new approach, such as PB-PK modeling, is proposed as an adjunct to existing hazard assessment techniques, it will have data requirements of its own and will require some independent experimentation not previously conducted on a routine basis for each chemical for which a risk assessment is planned. In general, this does not mean that there must be major new data acquisition efforts for each PK model that might be developed for risk assessment use. In many cases, only limited, critical experiments will be required to provide important constants for use in the PK model (Figure 4) or to fill data gaps identified in the literature survey.

GENOTOXIC CARCINOGENS

In terms of the chemical carcinogens themselves, there are two broad mechanisms by which chemicals cause cancer: by some direct chemical interaction with the DNA structures of the cell or by indirect effects on the cellular environment which increase tumor yield without direct chemical alteration of DNA. The former are called genotoxic carcinogens and the latter, epigenetic carcinogens (Weisburger and Williams, 1980). As might be expected, the distinction between these two categories of carcinogens is not always clear-cut. Many substances appear to possess properties characteristic of both categories of carcinogens. However, this division can be profitably examined in terms of the importance of a proper definition of tissue dose for both types of chemical carcinogens.

Genotoxic chemical carcinogens themselves can be further subdivided on the basis of whether parent chemical or a metabolite is the moiety that reacts with DNA. The possibilities include cases where parent chemicals, such as ethylene oxide or dimethylsulfate, are genotoxic; cases where stable metabolites, such as ethylene oxide formed from ethylene or butadiene epoxides formed from butadiene, are genotoxic; and cases where reactive, nonisolatable metabolites, such as the epoxide formed from vinyl chloride or the chloromethylglutathione formed from methylene chloride, are presumed to be responsible for genotoxicity (Figure 5). These three possibilities for the nature of the DNA-reactive chemical need to be considered independently.

PARENT CHEMICAL

For the simplest case there is a chemical reaction between DNA and parent chemical leading to chemical alteration of the DNA which can cause mutation during cell replication. As discussed for cases of chemical reactivity, the tissue burden of altered DNA is expected to be associated with integrated tissue exposure to the DNA-reactive chemical.
The modeling problem is to identify the chemical-specific tissue solubilities or tissue-binding characteristics and the distribution and activity of chemical-specific detoxifying enzymes in various tissues. The goal of the PK model is to identify and understand the metabolic and physiological processes that limit the action of the parent chemical in the cells.

Interspecies scaling is the determination of how target tissue exposure is affected by animal size for a particular administered dose. An attempt was made in volume 6 of Drinking Water and Health (National Research Council, 1986) to predict the interspecies scaling of tissue dose, depending on the nature of the toxic moiety (for example, parent chemical or metabolite). This analysis assumed that both metabolic and physiological clearances

ical risk assessments conducted by the Environmental Protection Agency (EPA). The use of this factor is usually justified by reference to studies on the interspecies differences in acute toxicity of a variety of chemicals used medically in cancer chemotherapy (Freireich et al., 1966).

STABLE METABOLITES

Ethylene oxide can also be produced in vivo by the oxidation of ethylene by microsomal metabolism. In developing a pharmacokinetic model for ethylene oxide as a DNA-reactive, stable metabolite, other PK modeling considerations become important. These include the rate of formation of the epoxide in various tissues, the stoichiometric yield of the epoxide from ethylene in vivo, and the distribution of the stable metabolite to target tissues. With butadiene, there are two epoxide metabolites that have genotoxic potential, and some provision for their differential DNA reactivities might have to be included in model development. A very elegant analysis of the relative risks of ethylene and ethylene oxide has been conducted by Bolt and Filser (in press). They developed pharmacokinetic models for both of these chemicals and attempted to predict ethylene exposure conditions that would yield carcinogenic tissue doses of the epoxide. They did not use a physiological model, and their results are limited to interpretation of the bioassay results in experimental animals. Nonetheless, their study is an excellent example of the use of sound pharmacokinetic principles in the analysis, interpretation, and design of toxicity experiments.

In instances in which the genotoxic chemical is a stable, freely circulating metabolite, the analysis of the effect of body size on tissue dose includes consideration of the metabolic formation of a DNA-reactive metabolite and its consumption by various clearance pathways.
For the purposes of risk assessment, when there is no available information on the human population, it seems appropriate to assume that both the metabolic production and the clearance pathways are related to the same fractional power of body weight (National Research Council, 1986). Thus, equivalent doses on a body weight basis are expected to produce approximately equal tissue exposures expressed as area under the tissue curve of the genotoxic, stable metabolite. This suggests that larger animals should be at the same risk from equivalent doses of these chemicals as smaller animals. For this class of chemicals, the standard surface area correction used by EPA would overestimate the expected risk in humans based on extrapolation of toxicity results in small laboratory animals. The Food and Drug Administration approach, which uses body weight for interspecies dose conversion, is the more appropriate correction factor for this class of chemical carcinogens.
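The practical difference between the two conventions can be shown with a short numerical sketch. The body weights, the starting dose, and the 2/3-power surface-area exponent below are illustrative assumptions for the sketch, not values taken from this chapter:

```python
# Sketch: converting a rodent bioassay dose (mg/kg) to a human-
# equivalent dose under the two conventions discussed in the text.
# Body weights and the 10 mg/kg dose are illustrative only.

MOUSE_BW_KG = 0.03
HUMAN_BW_KG = 70.0

def human_equivalent_dose(mouse_dose_mg_per_kg, exponent):
    """Scale a mg/kg dose by (BW_human / BW_mouse)**(exponent - 1).

    exponent = 1 corresponds to body-weight scaling (the FDA
    convention): the mg/kg dose is unchanged across species.
    exponent = 2/3 corresponds to a surface-area correction (the
    EPA convention): the human mg/kg dose is smaller, i.e., the
    human is treated as more sensitive per unit body weight.
    """
    return mouse_dose_mg_per_kg * (HUMAN_BW_KG / MOUSE_BW_KG) ** (exponent - 1.0)

dose = 10.0  # mg/kg in the hypothetical mouse study
bw_equivalent = human_equivalent_dose(dose, 1.0)        # unchanged, 10 mg/kg
sa_equivalent = human_equivalent_dose(dose, 2.0 / 3.0)  # substantially smaller
```

For a chemical whose carcinogenic species is a stable, freely circulating metabolite, the argument above implies that the body-weight line (exponent = 1) is the better default, and the surface-area correction overstates the human risk by the factor separating the two results.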

Tissue Dosimetry in Risk Assessment

REACTIVE, NONISOLATABLE METABOLITES

In a recent paper we attempted to develop a strategy for conducting a pharmacokinetically based risk assessment for methylene chloride (Andersen et al., 1987). On the basis of a variety of kinetic and chemical arguments, we suggested that the carcinogenicity of methylene chloride was related to the metabolites produced by conjugation of parent chemical with glutathione. If this proposed mechanism of toxicity is correct, the appropriate measure of tissue dose should be the time integral of the tissue concentration of the glutathione conjugate. This material is too reactive to measure directly, and a surrogate measure of tissue concentration of this chemical must be utilized in place of its concentration. The surrogate dose metric that was developed based on kinetic principles was the integral of the amount of chemical metabolized by this pathway in the target tissue divided by target tissue volume. This same approach could be used with other chemicals, like vinyl chloride, where the presumed genotoxic metabolite is also too short-lived to measure directly. When reactive metabolites are associated with carcinogenicity, the simplified pharmacokinetic analysis of the effect of animal size on integrated tissue exposure suggests that larger species will be at proportionately less risk than smaller species (National Research Council, 1986). The reason for this dependence is that metabolic production (the numerator) is proportional to a fractional power of body weight, while tissue volume (the denominator) is directly proportional to body weight. The ratio of the two then decreases with increasing body weight. This approach to interspecies scaling for vinyl chloride was previously suggested by Gehring et al. (1978) based on somewhat different arguments. The above examples point out that it is very difficult to depend on a single approach to interspecies scaling.
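The body-weight dependence of this dose metric can be sketched directly. The 0.7 exponent for metabolic production is an illustrative stand-in for "a fractional power of body weight"; only the direction of the trend matters:

```python
# Sketch: relative dose metric for a reactive, nonisolatable
# metabolite -- amount metabolized in the target tissue (scaling as
# BW**b with b < 1) divided by tissue volume (scaling as BW**1).
# The exponent 0.7 and the body weights are illustrative values.

def relative_tissue_dose(bw_kg, metabolic_exponent=0.7):
    """Relative metric ~ BW**b / BW**1 = BW**(b - 1), which falls
    as body weight rises whenever b < 1."""
    return bw_kg ** (metabolic_exponent - 1.0)

species_bw = {"mouse": 0.03, "rat": 0.25, "human": 70.0}
metric = {name: relative_tissue_dose(bw) for name, bw in species_bw.items()}

# Larger species receive a proportionately smaller target-tissue
# dose at the same administered dose per unit body weight.
assert metric["mouse"] > metric["rat"] > metric["human"]
```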
When scaling strategies are developed based on generalized pharmacokinetic principles, there are several very different types of interspecies scaling behaviors depending on the nature of the DNA-reactive chemical: whether it is parent chemical, stable metabolite, or a highly reactive, nonisolatable metabolite. These differences should in some way be reflected in the process of hazard assessment when the mechanism of carcinogenicity of a chemical is fairly well-established. Universal reliance on the surface area correction, or any one particular adjustment factor, should be avoided; however, in the absence of information on the mechanism of toxicity, the surface area correction would at least err on the conservative side.

INTERCALATING AGENTS

Another group of direct-acting, DNA-interactive chemicals are the intercalating agents, represented by acridine-type dyes and related chemicals

(Rogers and Back, 1982). With these materials there is noncovalent bonding between the dye and DNA, and the interactions are probably best described by the mass action law with critical receptor site occupancy by the intercalated dye. For this type of tissue interaction we would need to know the dissociation constants and binding capacity for the agent-DNA binding processes, and the time course of intercalator concentration in the target tissues. Tissue dose in this case is probably best represented as a time-weighted average receptor occupancy by the intercalating ligand. Thus, even for genotoxic chemicals there are possibilities that interactions can occur either by chemical reactivity or by mass action effects of particular chemical ligands. These two mechanisms lead to two very different expressions for tissue dose. These estimates of tissue exposure with chemically reactive or intercalating agents can be used as the dose inputs to drive increased mutational frequency in biologically based cancer models such as that proposed by Moolgavkar and Knudson (1981). Combining pharmacokinetic and pharmacodynamic modeling of the cancer process (Figure 6) promises to greatly improve our ability to conduct interspecies scaling and support risk assessment extrapolations. It is important to remember, however, that tissue dose will often be nonlinear with respect to administered dose, and it is clearly wrong to use administered dose uncritically in developing realistic cancer models. To a very great extent, it is only the availability of accurate pharmacokinetic descriptions of tissue exposure that permits validation of biologically motivated models of chemical carcinogenesis. In fact, it is essential to have an adequate understanding of the pharmacokinetic characteristics of target tissue exposure before pharmacodynamic models are developed for any kind of toxic response.
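A time-weighted average receptor occupancy of the kind proposed for intercalators can be computed from a concentration-time profile with the mass action law. The dissociation constant and the tissue concentration profile below are hypothetical values chosen only to illustrate the calculation:

```python
# Sketch: time-weighted average occupancy of DNA binding sites by an
# intercalating agent.  Fractional occupancy follows the mass action
# law, C / (Kd + C); Kd and the concentration profile are hypothetical.

def occupancy(conc, kd):
    """Fractional occupancy at concentration conc (same units as kd)."""
    return conc / (kd + conc)

def time_weighted_avg_occupancy(times, concs, kd):
    """Trapezoidal average of occupancy over the sampled profile."""
    total = 0.0
    for i in range(1, len(times)):
        dt = times[i] - times[i - 1]
        mean_occ = 0.5 * (occupancy(concs[i - 1], kd) + occupancy(concs[i], kd))
        total += mean_occ * dt
    return total / (times[-1] - times[0])

# Hypothetical tissue concentrations (in units of Kd) over 24 hours.
times_h = [0.0, 2.0, 4.0, 8.0, 16.0, 24.0]
concs = [0.0, 4.0, 3.0, 1.5, 0.4, 0.1]
avg_occ = time_weighted_avg_occupancy(times_h, concs, kd=1.0)
```

Because occupancy saturates at high concentration, this metric is inherently nonlinear in administered dose, which is the point made above about the danger of using administered dose uncritically.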
The mechanisms of carcinogenicity of directly acting genotoxic chemicals are still under active investigation in terms of fundamental questions about the nature of DNA adducts formed, rates of repair of damaged DNA, the presence of critical mutational sites on DNA, etc. Eventually, as more information is developed, it may even be possible to use the formation of particular adducts as the measure of tissue dose instead of the use of integrated tissue exposure. This would give metrics for tissue doses of carcinogens that are closer to the biological process of tumor induction.

EPIGENETIC CARCINOGENS

Despite the many outstanding questions with regard to the mode of action of genotoxic carcinogens and to the relative importance of particular DNA adducts, it is clear that a great deal more is known about the mechanisms of tumor initiation with these chemicals than about the detailed

mechanisms by which epigenetic carcinogens cause tumor development. In general, there seem to be two very different groups of epigenetic carcinogens.

FIGURE 6 A schematic of a two-stage cancer model: Normal cells (1) have a basal birth rate (B1) and death rate (D1). There is also a background mutational rate (P1). The probability of undergoing a mutational event is the product of the birth rate and the mutational frequency (i.e., P1 x B1). The stage 2 cell (2) has undergone a single mutation but is not a tumor-forming cell. It has its own birth and death rates (B2 and D2) and some probability of undergoing a second mutational event (P2) to become a cancer cell capable of progressing to a tumor. Genotoxic carcinogens are expected to increase P1 and P2. The increase in P1 and P2 will be related to the integrated area under the tissue concentration curve of the DNA-reactive chemical. These tissue exposure estimates can be derived from physiologically realistic PK models. There are two major classes of epigenetic carcinogens discussed in this paper: those which cause cell toxicity with continuous cell division during chronic exposure, and those which act as promoters by increasing synthesis of new enzymes (or altered levels of existing enzymes). These mechanisms are more fully developed in the text. The former class of chemicals will primarily affect D1 and D2 with subsequent changes in B1 and B2. The latter group is believed to affect the balance of B2 and D2 to give the cells with single mutations a growth advantage. PK models will have to be developed to link tissue exposure, cell toxicity, enzyme induction, and changes in cell growth kinetics. These coupled PK and pharmacodynamic models of the cancer process should greatly improve cancer risk assessment for most chemical carcinogens.
The first group consists of those chemicals that cause overt cytotoxicity, in which cancer appears secondary to chronic tissue damage. Chloroform and carbon tetrachloride are examples from this group (Reitz et al., 1982). The second group consists of the tumor promoters, which interact with the cell in such a way as to induce expression of new, characteristic sets of enzyme activities. The altered cellular environment caused by these promoters then leads to enhanced tumor yield under appropriate exposure conditions. Examples here include phenobarbital and dioxin. Tissue dosimetry for these epigenetic carcinogens will be more complex than it is for genotoxic carcinogens.

For epigenetic mechanisms, it will be necessary to include some kind of pharmacodynamic modeling along with the pharmacokinetic description. For cytotoxicity (Figure 6) it will be necessary to model the linkage between tissue reactivity of reactive chemical with depletion of critical cellular macromolecules, cell death, and attendant hyperplasia. This new birth rate function can then be used in a biologically motivated cancer model (see R. B. Conolly, R. H. Reitz, and M. E. Andersen, this volume). With promoters the proper tissue dose metric will be related to receptor occupancy and a resulting induction of new protein synthesis. The pharmacokinetic modeling strategy ultimately devised for these promoters will require physiologically accurate models for the processes involved in enzyme induction. In the two-stage cancer model, the action of these promoters is considered to be on the birth and death rates of the stage 2 cell, providing cells with a single mutation with a selective growth advantage. While dosimetry with these epigenetic carcinogens is more demanding than with genotoxic carcinogens, there still appear to be two dose metrics that emerge: integrated exposure to reactive chemical for cytotoxicants, and time-weighted average receptor occupancy for chemicals such as dioxin and phenobarbital. These seem to be the primary expressions required for understanding our problem here: that is, just what is tissue dose? On the other hand, what can be said of pharmacokinetics in these cases? The modeling strategy is still the same: produce an integrated description of chemical disposition and tissue exposure which is readily amenable to interspecies extrapolation and in which all biochemical, physical chemical, biological, and physiological processes are as clearly defined as possible.

SUMMARY

Mechanistic information on the processes involved in cancer causation by a particular chemical is essential for defining the appropriate measure of target tissue dose.
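The role assigned to promoters in the two-stage scheme of Figure 6 can be sketched with a crude deterministic simulation. All rate constants below are hypothetical per-day values; the sketch shows only that raising the birth rate of stage 2 cells, without touching the mutation rates, raises the predicted tumor probability:

```python
import math

# Sketch: deterministic approximation to the two-stage cancer model
# of Figure 6.  All parameter values are hypothetical illustrations.

def tumor_probability(t_days, n_normal=1e7, b1=0.1, p1=1e-7,
                      b2=0.1, d2=0.095, p2=1e-7, dt=1.0):
    """Approximate probability of at least one malignant cell by t_days.

    Normal cells generate stage 2 (initiated) cells at rate
    n_normal * b1 * p1; initiated cells grow at net rate (b2 - d2)
    and convert to malignant cells at rate b2 * p2 per cell.
    """
    n2 = 0.0          # expected number of stage 2 cells
    cum_hazard = 0.0  # integrated malignant-conversion rate
    for _ in range(int(t_days / dt)):
        cum_hazard += n2 * b2 * p2 * dt
        n2 += (n_normal * b1 * p1 + (b2 - d2) * n2) * dt
    return 1.0 - math.exp(-cum_hazard)

# A promoter modeled as increasing b2 gives stage 2 cells a growth
# advantage and raises tumor probability with p1 and p2 unchanged.
baseline = tumor_probability(700.0)
promoted = tumor_probability(700.0, b2=0.12)
assert promoted > baseline
```

In a fuller treatment, b2 (or d2) would itself be a function of the receptor-occupancy dose metric predicted by a PK model, which is the coupling of pharmacokinetics and pharmacodynamics advocated in the text.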
Tissue dose will usually either be a function of integrated tissue exposure or a function of the extent of receptor binding in a tissue. This latter metric of tissue dose is dependent on mass action principles and not simply on integrated tissue exposure. Measures of dose related simply to administered dose or total amount metabolized should be viewed with great caution unless there are compelling reasons for believing there is a direct correlation of these very coarse measures of dose with actual tissue exposure. Once the proper measure of dose is defined, a pharmacokinetic model should be developed to predict this dose metric for various exposure scenarios in a variety of species. The biological realism of physiologically based models confers on them certain advantages for use in the risk assessment arena. Finally, much better cancer risk assessments will be possible when validated pharmacokinetic models for tissue dose are used in conjunction with more biologically realistic

pharmacodynamic descriptions of the biological processes involved in chemical carcinogenesis.

REFERENCES

Andersen, M. E., H. J. Clewell III, M. L. Gargas, F. A. Smith, and R. H. Reitz. 1987. Physiologically-based pharmacokinetics and the risk assessment process for methylene chloride. Toxicol. Appl. Pharmacol. 87:185-205.
Bischoff, K. B., and R. G. Brown. 1966. Drug distribution in mammals. Chem. Eng. Prog. Symp. Ser. 62:33-45.
Bolt, H. M., and J. G. Filser. In press. Kinetics and disposition in toxicology. Example: Carcinogenic risk assessment for ethylene. Proceedings of the International Conference on Contributions of Toxicology to Risk Assessment and Health Regulation. Arch. Toxicol.
Dedrick, R. L. 1973. Animal scale-up. J. Pharmacokinet. Biopharm. 1:435-461.
Freireich, E. J., E. A. Gehan, D. P. Rall, L. H. Schmidt, and H. E. Skipper. 1966. Quantitative comparison of toxicity of anticancer agents in mouse, rat, hamster, dog, monkey and man. Cancer Chemother. Res. 66:55-68.
Gargas, M. L., H. J. Clewell III, and M. E. Andersen. 1986. Metabolism of inhaled dihalomethanes in vivo: Differentiation of kinetic constants for two independent pathways. Toxicol. Appl. Pharmacol. 82:211-223.
Gehring, P. J., P. G. Watanabe, and C. N. Park. 1978. Resolution of dose-response toxicity data for chemicals requiring metabolic activation: Example vinyl chloride. Toxicol. Appl. Pharmacol. 44:581-591.
Gerlowski, L. E., and R. K. Jain. 1983. Physiologically-based pharmacokinetic modeling: Principles and applications. J. Pharm. Sci. 72:1103-1126.
Goldstein, A., L. Aronow, and S. M. Kalman. 1974. Molecular mechanisms of drug action. Chapter 1 in Principles of Drug Action: The Basis of Pharmacology, 2nd ed. New York: John Wiley & Sons.
Kociba, R. J., and B. S. Schwetz. 1982. Toxicity of 2,3,7,8-tetrachlorodibenzo-p-dioxin (TCDD). Drug Metab. Rev. 13:387-406.
Maltoni, C., and G. Lefemine. 1974. Carcinogenicity bioassays of vinyl chloride. I.
Research plan and early results. Environ. Res. 7:387-405.
Moolgavkar, S. H., and A. G. Knudson, Jr. 1981. Mutation and cancer: A model for human carcinogenesis. J. Natl. Cancer Inst. 66:1037-1052.
National Research Council. 1986. Dose-route extrapolations: Using inhalation toxicity data to set drinking water limits. Pp. 168-225 in Drinking Water and Health, Vol. 6. Washington, D.C.: National Academy Press.
Poland, A., and J. C. Knutson. 1982. 2,3,7,8-Tetrachlorodibenzodioxin and related halogenated aromatic hydrocarbons: Examination of the mechanism of toxicity. Annu. Rev. Pharmacol. Toxicol. 22:517-554.
Ramsey, J. C., and M. E. Andersen. 1984. A physiologically based description of the inhalation pharmacokinetics of styrene in rats and humans. Toxicol. Appl. Pharmacol. 73:159-175.
Reitz, R. H., T. R. Fox, and J. F. Quast. 1982. Mechanistic considerations for carcinogenic risk estimations: Chloroform. Environ. Health Perspect. 46:163-168.
Weisburger, J. H., and G. M. Williams. 1980. Chemical carcinogens. Pp. 84-138 in J. Doull, C. D. Klaassen, and M. O. Amdur, Eds. Casarett and Doull's Toxicology: The Basic Science of Poisons, 2nd ed. New York: Macmillan.
