
Science and Judgment in Risk Assessment (1994)

Chapter: 10 Variability

Suggested Citation:"10 Variability." National Research Council. 1994. Science and Judgment in Risk Assessment. Washington, DC: The National Academies Press. doi: 10.17226/2125.

10
Variability

Introduction And Background

It is always difficult to identify the true level of risk in an endeavor like health risk assessment, which combines measurement, modeling, and inference or educated guesswork. Uncertainty analysis, the subject of Chapter 9, enables one to come to grips with how far away from the desired answer one's best estimate of an unknown quantity might be. Before we can complete an assessment of the uncertainty in an answer, however, we must recognize that many of our questions in risk assessment have more than one useful answer. Variability—typically, either across space, in time, or among individuals—complicates the search for the desired value of many important risk-assessment quantities.

Chapter 11 and Appendix I-3 discuss the issue of how to aggregate uncertainties and interindividual differences in each of the components of risk assessment. This chapter describes the sources of variability1 and appropriate ways to characterize these interindividual differences in quantities related to predicted risk.

Variability is a very well-known "fact of life" in many fields of science, but its sources, effects, and ramifications are not yet routinely appreciated in environmental health risk assessment and management. Accordingly, the first section of this chapter will step back and deal with the general phenomenon (using some examples relevant to risk assessment, but not exclusively), and then for the remainder of the chapter focus only on variability in quantities that directly influence calculations of individual and population risk.

When an important quantity is both uncertain and variable, opportunities are created to fundamentally misunderstand or misestimate its behavior.

To draw an analogy, the exact distance between the earth and the moon is both difficult to measure precisely (at least it was until the very recent past) and changeable, because the moon's orbit is elliptical, rather than circular. Thus, as seen in Figure 10-1, uncertainty and variability can complement or confound each other. When only scattered measurements of the earth-moon distance were available, the variation among them might have led astronomers to conclude that their measurements were faulty (i.e., ascribing to uncertainty what was actually caused by variability) or that the moon's orbit was random (i.e., not allowing for uncertainty to shed light on seemingly unexplainable differences that are in fact variable and predictable). The most basic flaw of all would be to simply misestimate the true distance (the third diagram in Figure 10-1) by assuming that a few observations were sufficient (after correcting for measurement error, if applicable). This is probably the pitfall most relevant for health risk assessment: treating a highly variable quantity as if it were invariant or merely uncertain, thereby yielding an estimate that is incorrect for some of the population (or some of the time, or in some locations), or even one that is an inaccurate estimate of the average over the entire population.

In the risk-assessment paradigm, there are many sources of variability. Certainly, the regulation of air pollutants has long recognized that chemicals differ from each other in their physical and toxic properties and that sources differ from each other in their emission rates and characteristics; such variability is built into virtually any sensible question of risk assessment or control. However, even if we focus on a single substance emanating from a single stationary source, variability pervades each stage from emission to health or ecologic end point:

Emissions vary temporally, both in flux and in release characteristics, such as temperature and pressure.

The transport and fate of the pollutant vary with such well-understood factors as wind speed, wind direction, and exposure to sunlight (and such less-acknowledged factors as humidity and terrain), so its concentrations around its source vary spatially and temporally.

Individual human exposures vary according to individual differences in breathing rates, food consumption, and activity (e.g., time spent in each micro-environment).

The dose-response relationship (the "potency") varies for a single pollutant, because each human is uniquely susceptible to carcinogenic or other stimuli (and this inherent susceptibility might well vary during the lifetime of each person, or vary with such things as other illness or exposures to other agents).

Each of these variabilities is in turn often composed of several underlying variable phenomena. For example, the natural variability in human weight is due to the interaction of genetic, nutritional, and other environmental factors.


FIGURE 10-1 Effects of ignoring uncertainty versus ignoring variability in measuring the distance between the earth and the moon.


According to the central limit theorem, variability that arises from many independent factors acting multiplicatively will generally lead to an approximately lognormal distribution across the population or the spatial or temporal dimension, because the logarithm of such a product is a sum of independent terms (as is commonly observed when concentrations of air pollutants are plotted).
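This multiplicative route to a lognormal shape can be illustrated with a small simulation (a sketch only; the number of factors and their ranges are invented for illustration):

```python
import math
import random
import statistics

random.seed(1)

# Each simulated quantity is a product of several independent positive
# factors (say, emission rate x dilution x intake rate), each varying
# over a hypothetical multiplicative range.
def simulated_value(n_factors=5):
    value = 1.0
    for _ in range(n_factors):
        value *= random.uniform(0.5, 2.0)
    return value

values = [simulated_value() for _ in range(10_000)]
logs = [math.log(v) for v in values]

# The log of a product is a sum of logs, so the central limit theorem
# makes the log-values roughly symmetric (mean close to median), while
# the raw values are right-skewed (mean well above median) -- the
# signature of an approximately lognormal distribution.
raw_mean, raw_median = statistics.mean(values), statistics.median(values)
log_mean, log_median = statistics.mean(logs), statistics.median(logs)
print(raw_mean > raw_median)       # True: right-skewed raw values
print(abs(log_mean - log_median))  # small: symmetric log-values
```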

When there is more than one desired answer to a scientific question where the search for truth is the end in itself, only two responses are ultimately satisfactory: gather more data or rephrase the question. For example, the question "How far away is the moon from the earth?" cannot be answered both simply and correctly. Either enough data must be obtained to give an answer of the form "The distance ranges between 221,460 and 252,710 miles" or "The moon's orbit is approximately elliptical, with a perigee of 221,460 miles, an apogee of 252,710 miles, and an eccentricity of about 0.066," or the question must be reduced to one with a single right answer (e.g., "How far away is the moon from the earth at its perigee?").
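The orbital description follows from the perigee and apogee distances alone; as a quick arithmetic check:

```python
# Perigee and apogee distances from the text (miles).
perigee = 221_460
apogee = 252_710

# For an ellipse with the earth at one focus:
#   semi-major axis: a = (perigee + apogee) / 2
#   eccentricity:    e = (apogee - perigee) / (apogee + perigee)
a = (perigee + apogee) / 2
e = (apogee - perigee) / (apogee + perigee)

print(f"semi-major axis = {a:,.0f} miles")  # 237,085 miles
print(f"eccentricity    = {e:.3f}")         # 0.066
```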

When the question is not purely scientific, but is intended to support a social decision, the decision-maker has a few more options, although each course of action will have repercussions that might foreclose other courses. Briefly, variability in the substance of a regulatory or science-policy question can be dealt with in four basic ways:

1.

Ignore the variability and hope for the best. This strategy tends to be most successful when the variability is small and any estimate that ignores it will not be far from the truth. For example, the Environmental Protection Agency's (EPA's) practice of assuming that all adults weigh 70 kg is likely to be correct to within ±25% for most adults and probably valid to within a factor of 3 for virtually all adults. However, this approach may not be appropriate for children, whose weights are far more variable (NRC, 1993e).

2.

Explicitly disaggregate the variability. Where the quantity seems to change smoothly and predictably over some range, continuous mathematical models may be fitted to the data in place of a discrete step function. An example might be the fitting of sine waves to annual concentration cycles for a particular pollutant. In other cases, it is easier to disaggregate the data by considering all of the relevant subgroups or subpopulations. For interindividual variability, this involves dividing the population into as many subpopulations as deemed necessary. For example, one might perform a separate risk assessment for short-term exposure to high levels of ionizing radiation for each 10-year age interval in the population, to take account of age-related differences in susceptibility. For temporal variability, it involves modeling or measuring in a discrete, rather than a continuous, fashion, on an appropriate time scale. For example, a specific type of air-pollution monitor might collect air for 15 min of each hour and report the 15-min average concentration of some pollutant. Such values might then be further aggregated to produce summary values at an even coarser time scale. For spatial variability, it involves choosing an appropriate subregion, e.g., modeling the extent of global warming or cooling for each 10-deg swath of latitude around the globe, rather than predicting a single value for the entire planet, which might mask substantial and important regional differences. In each case, the common thread appears: when variability is "large" over the entire data set, the variability within each subset can become sufficiently "small" ("small" in the sense of the body-weight example above), if the data are disaggregated into an appropriate number of qualitatively distinct subsets. The strategy tends to be most successful when the stakes are so high (or the data or estimates so easy to obtain) that the proliferation of separate assessments does not consume inordinate amounts of resources. In contrast, in studies of a phenomenon such as global climate change, where the stakes are quite high, the estimates may also be quite hard to obtain on a highly disaggregated basis.
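The temporal-aggregation step described above can be sketched in a few lines (the monitor readings here are invented for illustration):

```python
import statistics

# Hypothetical 15-min average concentrations (ppb) reported by a
# monitor, one per hour over six hours.
fifteen_min_averages = [12.0, 15.5, 40.0, 38.5, 14.5, 11.5]

# Aggregating to a coarser time scale (a single 6-hour mean) smooths
# away the mid-period peak that the short averaging time reveals.
six_hour_mean = statistics.mean(fifteen_min_averages)
peak_15_min = max(fifteen_min_averages)

print(six_hour_mean)  # 22.0
print(peak_15_min)    # 40.0
```

Whether the peak or the coarse mean is the relevant summary depends, as the text notes, on the toxic end point of concern.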

 

In health risk assessment, the choice of the averaging time used to transform the variable quantity into a more manageable form is crucially important. In general, for the assessment of acute toxicity, estimates of the variability in exposure and/or uptake over relatively short periods (minutes, hours, days) are needed. For chronic effects such as cancer, one might model exposure and/or uptake over months or years without losing needed information, since short-term "peaks and valleys" would matter for cancer risk assessment only insofar as they affected the long-term or lifetime average exposure.2 The longer-term variability will generally, though not always, be significantly less marked than the variation over the short term (but see Note 3). Moreover, the shorter the averaging time, the more such periods will be contained in an individual's lifetime, and the more opportunity there will be for rare fluctuations in exposure or uptake to produce significant risks. This, for example, explains why regulators concerned with the health effects of tropospheric ozone consider the combination of peak short-term concentration and peak activity (e.g., the "exercising asthmatic"). In all cases, the exposure assessor needs to determine which time periods are relevant for which toxic effects, and then see whether available data measuring exposure, uptake, internal dose rates, etc., can provide estimates of both the average and the variability over the necessary averaging time.
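The dampening effect of longer averaging times can be demonstrated with a synthetic exposure series (all numbers invented; the lognormal day-to-day distribution is an assumption for illustration):

```python
import random
import statistics

random.seed(7)

# Synthetic daily exposures over eight "years" (2,920 days), fluctuating
# widely from day to day around a common long-term level.
daily = [random.lognormvariate(0.0, 1.0) for _ in range(2920)]

# The same series re-averaged over 365-day windows.
yearly = [statistics.mean(daily[i:i + 365]) for i in range(0, 2920, 365)]

# The coefficient of variation shrinks sharply as the averaging time
# grows: yearly means cluster tightly around the long-term average.
cv_daily = statistics.stdev(daily) / statistics.mean(daily)
cv_yearly = statistics.stdev(yearly) / statistics.mean(yearly)
print(cv_daily, cv_yearly)  # cv_yearly is far smaller than cv_daily
```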

3.

Use the average value of a quantity that varies. This strategy is not the same as ignoring the variability; ideally, it follows from a decision that the average value can be estimated reliably in light of the variability, and that it is a good surrogate for the variable quantity. For example, EPA often uses 70 kg as the average body weight of an adult, presumably because although many adults weigh as little as 40 kg and as much as 100 kg, the average weight is almost as useful as (and less complicated than) three different "scenario" values or an entire distribution of weights. In the same vein, a layperson might be content in knowing the average value of the moon's distance from the earth, rather than the minimum, average, and maximum (let alone a complete mathematical description of its orbit), whereas the average alone would be useless, or even dangerous, to the National Aeronautics and Space Administration in planning an Apollo mission. Thus, this strategy tends to be most successful (and indeed might be the only sensible strategy) when the variability is small3 or when the quantity is itself an input for a model or decision in which the average value of the end result (the combination of several quantities) is all that matters, for either scientific or policy reasons. An example of a scientific rationale for using the average value is the long-term average concentration of a carcinogen in air. If the dose-response function is linear (i.e., "potency" is a single number), the end result (risk) is proportional to the average concentration. If the concentration is, say, 10 ppm higher than the average in one week and 10 ppm lower than the average in another week, this variability will have no effect on an exposed person's lifetime risk, so it is biologically unimportant. An example of a policy rationale is the use of the expected number of cancer cases in a population exposed to varying concentrations of an airborne carcinogen. If it is determined for a particular policy purpose that the distribution of individual risks across the population does not matter, then the product of average concentration, potency, and population size equals the expected incidence, and the spread of concentrations about the average is similarly unimportant. The average value is also the summary statistic of choice for social decisions when there is an opportunity for errors of underestimation and overestimation (which lead to underregulation and overregulation) to even out over a large set of similar choices over the long run.

 

There are at least two reasons why large variabilities can lead to precarious decisions if the average value is used. The obvious problem is that individual characteristics of persons or situations far from the average are "averaged away" and can no longer be identified or reported. A less obvious pitfall occurs when the variability is dichotomous (or has several discrete values) and the average corresponds to a value that does not exist in nature. If men and women respond markedly differently to some exposure situation, for example, the decision that would be appropriate if there existed an "average person" (midway between man and woman) might be inappropriate for either category of real person (see Finkel, 1991).
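The scientific and policy rationales for using averages reduce to simple arithmetic, sketched here with hypothetical numbers (the potency, concentrations, and population size are all invented):

```python
# Scientific rationale: with a linear dose-response ("potency" a single
# number), lifetime risk depends only on the long-term average
# concentration, so symmetric swings about the average drop out.
potency = 1e-6       # hypothetical lifetime risk per ppm of average exposure
average_conc = 50.0  # ppm, long-term average

constant_weeks = [50.0] * 52        # steady exposure
swinging_weeks = [60.0, 40.0] * 26  # +/-10 ppm about the same average

risk_constant = potency * (sum(constant_weeks) / len(constant_weeks))
risk_swinging = potency * (sum(swinging_weeks) / len(swinging_weeks))
assert risk_constant == risk_swinging  # the swings are biologically unimportant

# Policy rationale: expected incidence is the product of average
# concentration, potency, and population size.
population = 100_000
expected_cases = average_conc * potency * population
print(expected_cases)  # ~5 expected cases
```

Note that the equality above holds only because the dose-response is linear; a nonlinear potency would make the week-to-week swings matter.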

4.

Use a maximum or minimum of a quantity that varies. This is perhaps the most common way of dealing with variability in risk assessment—to focus attention on one period (e.g., the period of peak exposure), one spatial subregion (e.g., the location where the "maximally exposed individual" resides), or one subpopulation (e.g., exercising asthmatics or children who ingest pathologically large amounts of soil) and ignore the rest. This strategy tends to be most successful when the measures needed to protect or account for the person (or situation) with the extreme value will also suffice for the remainder of the distribution. It is also important to ensure that this strategy will not impose inordinate costs, compared with other approaches (such as using different controls for each subregion or population or simply controlling less stringently by using the average value instead of the extreme "tail").


The crucial point to bear in mind about all four of those strategies for dealing with variability is that unless someone measures, estimates, or at least roughly models the extent and nature of the variability, any strategy will be precarious. It stands to reason that strategy 1 ("hope for the best") hinges on the assumption that the variability is small—an assumption whose verification requires at least some attention to variability. Similarly, strategy 2 requires the definition of subregions or subpopulations in each of which the variability is small, so care must be taken to avoid the same conundrum that applies to strategy 1. (It is difficult to be sure that you can ignore variability until you think about the possible consequences of ignoring it.) Less obviously, one still needs to be somewhat confident that one has a handle on the variability in order to reduce the distribution to either an average (strategy 3) or a "tail" value (strategy 4). We know that 70 kg is an average adult body weight (and that virtually no adults are above or below 70 kg by more than a factor of 3), because weight is directly observable and because we know the mechanism by which people grow and the biologic limits of either extreme. Armed with our senses and this knowledge, we might need only a few observations to pin down roughly the minimum, the average, and the maximum. But what about a variable like "the rate at which human liver cells metabolize ethylene dibromide into its glutathione conjugate"? Here a few direct measurements or a few extrapolations from animals may not be adequate, because in the absence of any firm notion of the spread of this distribution within the human population (or the mechanisms by which the spread occurs), we cannot know how reliably our estimate of the average value reflects the true average, nor how well the observed minimum and maximum mirror the true extremes.

The distribution for an important variable such as metabolic rate should thus explicitly be considered in the risk assessment, and the reliability of the overall risk estimate should reflect knowledge about both the uncertainty and the variability in this characteristic. The importance of a more accurate risk estimate may motivate additional measurements of this variable, so that its distributions may be better defined with these additional data.

This chapter concentrates on how EPA treats variability in emissions, exposures, and dose-response relationships, to identify which of the four strategies it typically uses and to assess how adequately it has considered each choice and its consequences. The goals of this chapter are threefold: (1) to indicate how EPA can increase its sophistication in defining variability and handling its effects; (2) to show how risk communication can be improved, so that Congress and the public understand at least which variabilities are and which are not accounted for, and how EPA's handling of variability affects the "conservatism" (or lack thereof) inherent in its risk numbers; and (3) to recommend specific research whose results could lead to useful changes in risk-assessment procedures.

In recent years, EPA has begun to increase its attention to variability. Moreover, the lack of attention in the past was due in part to a deliberate choice to erect conservative default options (strategy 4 above) instead of dealing with variability explicitly. In theory at least, the question "How do you determine the extreme of a distribution without knowing the whole distribution?" can be answered by setting a highly conservative default and placing the burden of proof on those who wish to relax the default by showing that the extreme is unrealistic even as a "worst case." For example, the concept of the MEI (someone who breathes pollutants from the source for 70 years, 24 hours per day, at a specified location near a plant boundary) has been criticized as unrealistic, but most agree that as a summary of the population distribution of "number of hours spent at a given location during a lifetime" it might be a reasonable place to start as a conservative short-cut for the entire distribution.

EPA has also tackled interindividual variability squarely in the Exposure Factors Handbook (EPA, 1989c), which provides various percentiles (e.g., 5th, 25th, 50th, 75th, 95th) of the observed variability distributions for some components of exposure assessment, such as breathing rates, water ingestion, and consumption of particular foodstuffs. This document has not yet become a standard reference for many of EPA's offices, however. In addition, as we will discuss below, EPA has not dealt adequately with several other major sources of variability. As a result, EPA's methods of managing variability in risk assessment rely on an ill-characterized mix of some questionable distributions, some verified and unverified point values intended to be "averages," some verified and unverified point values intended to be "worst cases," and some "missing defaults," that is, hidden assumptions that ignore important sources of variability.

Moreover, several trends in risk assessment and risk management are now increasing the urgency of a broad and well-considered strategy to deal with variability. The three most important of these trends are the following:

The emergence of more sophisticated biological models for risk assessment. As pharmacokinetic models replace the assumption that administered dose is an adequate surrogate for delivered dose, and as cell-kinetics models (such as the Moolgavkar-Venzon-Knudson model) replace the linearized multistage model, default models that ignored human variability or took conservative measures to sidestep it will be supplanted by models that explicitly contain values of biologic measures intended to represent the human population. If the latter models ignore variability or use unverified surrogates for presumed average or worst-case properties, risk assessment might take a step backward, becoming either less or more conservative without anyone's knowledge.

The growing interest in detailed assessments of the actual exposures that people face, rather than hypothetical worst-case exposures. To be trustworthy, both average and worst-case surrogates for variability require some knowledge of the rest of the distribution, as mentioned above. However, it is not well recognized that the average might be more sensitive to the extreme portions of the whole distribution than an upper percentile, such as the 95th, might be. In addition, the use of such terms as "actual" and "best estimate" carries an expectation of precision that might apply to only part of the exposure assessment, dose-response relationship, or risk assessment. If, for example, we could precisely measure the airborne concentration of a pollutant in a community around a stationary source (i.e., understand the spatial variability), but did not know the population distribution of breathing rates, we could not predict anyone's "actual exposure." In fact, even if we knew both distributions but could not superimpose them (i.e., know which breathing rates went with which concentrations), the predictions would be as variable as either of the underlying distributions. These circumstances speak to the need for progress in many kinds of research and data collection at once, if we wish to improve the power and the realism of risk assessment.
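The superimposition problem can be illustrated with a toy Monte Carlo sketch (every distribution here is invented for illustration; none comes from EPA data):

```python
import random

random.seed(3)

N = 10_000
# Hypothetical spatial distribution of long-term concentration (ug/m3).
concentrations = [random.lognormvariate(1.0, 0.5) for _ in range(N)]
# Hypothetical interindividual distribution of breathing rate (m3/day).
breathing = [random.lognormvariate(2.7, 0.3) for _ in range(N)]

# Pairing the two at random (no knowledge of which rates go with which
# concentrations) vs. pairing high-with-high (perfect correlation).
random_pairs = sorted(c * b for c, b in zip(concentrations, breathing))
correlated = sorted(c * b for c, b in
                    zip(sorted(concentrations), sorted(breathing)))

# Lining up the two variabilities fattens the upper tail of intake,
# even though both pairings use exactly the same marginal distributions.
p95_random = random_pairs[int(0.95 * N)]
p95_corr = correlated[int(0.95 * N)]
print(p95_random, p95_corr)  # the correlated 95th percentile is larger
```

The same marginals thus yield different upper-tail exposures depending on how they are superimposed, which is why knowing each distribution separately is not enough.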

The growing interest in risk-reduction measures that target people, rather than sources. It should go without saying that if government or industry wishes to eliminate unacceptably high risks to particular persons by purchasing their homes, providing them with bottled water, restricting access to "hot spots" of risk, etc., it needs to know precisely who those persons are and where or when those hot spots are occurring. Even if such policies were not highly controversial and difficult to implement in an equitable and socially responsive way, merely identifying the prospective targets of such policies may well presuppose a command of variability beyond our current capabilities.

Exposure Variability

Variability in human response to pollutants emitted from a particular source or set of sources can arise from differences in characteristics of exposure, uptake, and personal dose-response relationships (susceptibility). Exposure variability in turn depends on variability in all the factors that affect exposure, including emissions, atmospheric processes (transport and transformation), personal activity, and the pollutant concentration in the microenvironments where the exposures occur. Information on those variabilities is not routinely included in EPA's exposure assessments, probably because it has been difficult to specify the distributions that describe the variations.

Human exposure results from the contact of a person with a substance at some nonzero concentration. Thus, it is tied to personal activities that determine a person's location (e.g., outdoors vs. indoors, standing downwind of an industrial facility vs. riding in a car, in the kitchen vs. on a porch); the person's level of activity and breathing rate influence the uptake of airborne pollutants. Exposure is also tied to emission rates and atmospheric processes that affect pollutant concentrations in the microenvironment where the person is exposed. Such processes include infiltration of outside air indoors, atmospheric advection (i.e., transport by the prevailing wind), diffusion (i.e., transport by atmospheric turbulence), chemical and physical transformation, deposition, and re-entrainment; variability in each process tends to increase the overall variability in exposure. The variabilities in emissions, atmospheric processes, characteristics of the microenvironment, and personal activity are not necessarily independent of each other; for example, personal activities and pollutant concentrations at a specific location might change in response to outdoor temperature; they might also differ between weekends and weekdays because the level of industrial activity changes.

Emissions Variability

There are basically four categories of emission variability that may need separate assessment methods, depending on the circumstances:

Routine: the type most frequently covered by current approaches.

Ordinary maintenance: special emissions may occur, for example, when the baghouse is cleaned. In other cases, certain emissions may occur only during maintenance, as when a specific volatile cleaner is routinely used to scour or wash out a reaction tank. These can be deliberately observed and monitored to obtain needed emissions information, if this mode is deemed likely to be significant.

Upsets and breakdowns: unusual operating conditions that may recur within average periods of days, weeks, or months, depending on the facility or process. A combination of observations and modeling approaches may be needed here.

Catastrophic failures: large explosions, ruptures of storage tanks, etc.

The last category is addressed in a separate section of the Clean Air Act and is not discussed in this report.

At least two major factors influence variability in emissions as it affects exposure assessment. First, a given source typically does not emit at a constant rate. It is subject to such things as load changes, upsets, fuel changes, process modifications, and environmental influences. Some sources are, by their nature, intermittent or cyclical. A second factor is that two similar sources (e.g., facilities in the same source category) can emit at different rates because of differences in such things as age, maintenance, or production details.

The automobile is an excellent example of both causes. Consider a single, well-characterized car with an effective control system. When it is started, the catalyst has not warmed up, and emissions can be high. Almost half the total automobile emissions in, say, Los Angeles can occur during the cold-start period. After the catalyst reaches its appropriate temperature range, it is extremely effective (>90%) at removing organic substances, such as benzene and formaldehyde, during most of the driving period. However, hard accelerations can overwhelm the system's capabilities and lead to high emissions. Those variations can lead to spatial and temporal distributions of emissions in a city (e.g., high emissions in areas with a large number of cold starts, particularly in the morning). The composition of the emissions, including the toxic content, differs between cold-start and driving periods. Emissions also differ between cars, often dramatically. Because of differences in control equipment, total emissions can vary between cars, as can the emissions associated with particular operating cycles (e.g., cold-start vs. evaporative emissions). A final notable contribution to emission variability in automobiles is the presence of super-emitters, whose control systems have failed and which may emit organic substances at a rate 10 times that of a comparable vehicle that is operating properly.

Thus, an exposure analysis based on source-category average emissions will miss the variability among sources within that category. And exposure analyses that do not account for temporal changes in emissions from a particular source will miss an important factor, especially to the extent that emissions are linked to meteorologic conditions. In many cases, it is difficult or impossible to know a priori how emissions will vary, particularly because of upsets in processes that could lead to high exposures over short periods.

Atmospheric Process Variability

Meteorologic conditions greatly influence the dispersion, transformation, and deposition of pollutants. For example, ozone concentrations are highest during summer afternoons, whereas carbon monoxide and benzene concentrations peak in the morning (because of the combination of large emissions and little dilution) and during the winter. Formaldehyde can peak in the afternoon during the summer (because of photochemical production) and in the morning in the winter (because of rush-hour emissions and little dilution). Concentrations of primary (i.e., emitted) pollutants, such as benzene and carbon monoxide, are higher in the winter in urban areas, whereas those of many secondary pollutants (i.e., those resulting from atmospheric transformations of primary pollutants), such as ozone, are higher in the summer. Meteorologic conditions may also play a role in regional variations. Some areas experience long periods of stagnant air, which lead to very high concentrations of both primary and secondary pollutants. An extreme example is the London smog that led to high death rates before the mid-1950s. Wind velocity and mixing height also influence pollutant concentrations. (Mixing height is the height to which pollutants are rapidly mixed due to atmospheric turbulence; in effect, it is one dimension of the atmospheric volume in which pollutants are diluted.) They are usually correlated; the prevailing winds and velocities in the winter, when the mixing height is low, can be very different from those in the summer.

Some quantitative information is available about the impact of meteorologic variability on pollutant concentrations. Concentrations measured at one location over some period tend to follow a lognormal distribution. There are significant fluctuations in the concentrations about the medians (e.g., Seinfeld, 1986), which often vary by a factor of more than 10. The extreme concentrations are usually related to time and season. The relative magnitudes and frequencies of such fluctuations in concentration increase as distance from the source decreases. Pollutant transport over complex terrain (e.g., presence of hills or tall buildings), which is generally difficult to model, can further increase relative differences in extreme concentrations about the medians. Two examples of the influence of complex terrain are Donora, Pennsylvania (in a river valley), and the Meuse Valley in Belgium. In those areas, as in London, episodes of extremely high pollutant concentrations led to increased deaths. Estimates of concentration over flat terrain cannot capture such effects.
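The factor-of-10 spread about the median follows directly from the lognormal form: for a lognormal concentration distribution, the ratio between an upper and a lower percentile depends only on the geometric standard deviation (GSD). A minimal sketch in Python; the GSD value below is illustrative, not taken from the monitoring studies cited:

```python
from statistics import NormalDist

def percentile_ratio(gsd, lo=0.05, hi=0.95):
    """Ratio of the hi to the lo percentile of a lognormal
    distribution with geometric standard deviation gsd."""
    z = NormalDist()
    return gsd ** (z.inv_cdf(hi) - z.inv_cdf(lo))

# With an illustrative GSD of 2.5, the 95th percentile exceeds
# the 5th by more than a factor of 20.
print(round(percentile_ratio(2.5), 1))  # -> 20.4
```

The convenient point is that this ratio is independent of the median; only the GSD matters, which is why monitored concentration distributions at very different absolute levels can show similar relative spreads.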

Empirical data on concentration variability are sparse, except for a few pollutants, notably the criteria pollutants (including carbon monoxide, ozone, sulfur dioxide, and particulate matter). Some information on variations in formaldehyde and benzene concentrations is also available. One interesting study that considered air-pollutant exposure during commuting in the Los Angeles area was conducted by the South Coast Air Quality Management District (SCAQMD, 1989). The authors looked at exposure dependence on seasonal, vehicular-age, and freeway-use variations. They found that drivers of older vehicles had greater exposure to benzene and that exposure to benzene, formaldehyde, ethylene, and chromium was greater in the winter, although exposure to ethylene dichloride was greater in the summer. They did not report the variability in exposure between similar vehicles or distributions of the exposures (e.g., probability density functions).

Microenvironmental and Personal-Activity Variability

Microenvironmental variability, particularly when compounded with differences in personal activity, can contribute to substantial variability in individual exposure. For example, the lifetime-exposed 70-year-old has been faulted as an extreme case, but it is instructive to consider where this hypothetical person falls in the distribution of personal-activity traits. Although such a 70-year exposure pattern is unlikely, it marks one end of the spectrum of variability in personal activity and time spent in a specific microenvironment.

Concentrations in various microenvironments vary considerably and depend on a variety of factors, such as species, building type, ventilation system, locality of other sources, and street-canyon width and depth. Both the Los Angeles study (SCAQMD, 1989) and a New Jersey study (Weisel et al., 1992) revealed that exposure can be increased during commuting, particularly if the automobile itself is defective. The primary sources of many air pollutants are indoors, so their highest concentrations are found there. Those concentrations can be 10-1,000 times the outdoor concentrations (or even greater). However, the difference between outdoor and indoor concentrations of pollutants is not nearly so great when the indoor location is ventilated. Concentrations of compounds that do not react rapidly with or settle on surfaces, such as carbon monoxide and many organic compounds, might not decrease significantly when ventilated indoors. If there are additional sources of these compounds indoors, their concentrations might, in fact, increase. Concentrations of more reactive compounds, such as ozone, can decrease by a factor of 2 or more, depending on ventilation rate and the ventilation system used (Nazaroff and Cass, 1986). Particles can also be advected indoors (Nazaroff et al., 1990). One concern is that the ventilation of outdoor pollutants indoors can increase the formation of other pollutants (Nazaroff and Cass, 1986; Weschler et al., 1992). The lifetime-exposed person sitting on the porch outside his home may be at one extreme for exposure to emissions from an outdoor stationary source, but may be at the other extreme for net air-pollutant exposure; such a person may have effectively avoided "hot" microenvironments in both the home and the automobile.

Increased personal activity leads to greater uptake, and this can add a factor of about 2 or more to the variability. The activity-related component of variability depends on both the microenvironmental variability (e.g., outdoors vs. indoors) and personal characteristics (e.g., children vs. adults).

Variability In Human Susceptibility

Person-to-person differences in behavior, genetic makeup, and life history together confer on individual people unique susceptibilities to carcinogenesis (Harris, 1991). Such interindividual differences can be inherited or acquired. For example, inherited differences in susceptibility to physical or chemical carcinogens have been observed, including a substantially increased risk of sunlight-induced skin cancer in people with xeroderma pigmentosum, of bladder cancer in dyestuff workers whose genetic makeup results in the "poor acetylator" phenotype, and of bronchogenic carcinoma in tobacco smokers who have an "extensive debrisoquine hydroxylator" phenotype (the latter two are described further in Appendix H). Similarly, among different inbred and outbred strains of laboratory animals (and within particular outbred strains) exposed to carcinogenic initiators or tumor promoters, there may be a factor-of-40 variation in tumor response (Boutwell, 1964; Drinkwater and Bennett, 1991; Walker et al., 1992). Acquired differences that can significantly affect an individual's susceptibility to carcinogenesis include the presence of concurrent viral or other infectious diseases, nutritional factors such as alcohol and fiber intake, and temporal factors such as stress and aging.

Appendix H describes three classes of factors that can affect susceptibility: (1) those which are rare in the human population but which confer very large increases in susceptibility upon those affected; (2) those which are very common but only marginally increase susceptibility; and (3) those which may be neither rare nor of marginal importance to those affected. The Appendix provides particular detail on five of the determinants that fall into this third group. This material in Appendix H represents both a compilation of existing literature and some new syntheses of recent studies; we commend this important information to the reader's attention.

Overall Susceptibility

Taken together, the evidence regarding the individual mediators of susceptibility described in Appendix H supports the plausibility of a continuous distribution of susceptibility in the human population. Some of the individual determinants of susceptibility, such as concentrations of activating enzymes or of proteins that might become oncogenic, may themselves exist in continuous gradations across the human population. Even factors that have long been thought to be dichotomous are now being revealed as more complicated—e.g., the recent finding that a substantial fraction of the population is heterozygous for ataxia-telangiectasia and has a susceptibility midway between that of ataxia-telangiectasia homozygotes and that of "normal" people (Swift et al., 1991). Most important, the combination of a large number of genetic, environmental, and lifestyle influences, even if each were bimodally distributed, would likely generate an essentially continuous overall susceptibility distribution. As Reif (1981) has noted, "we would expect to find in [the outbred human population] what would be the equivalent result of outbreeding different strains of inbred mice: a spectrum of different genetic predispositions for any particular type of tumor."

A working definition of the breadth of the distribution of "interindividual variability in overall susceptibility to carcinogenesis" is as follows: If we identified persons of high susceptibility (say, we knew them to represent the 99th percentile of the population distribution) and low susceptibility (say, the 1st percentile), we could estimate the risks that each would face if subjected to the same exposure to a carcinogen. If the estimated risk to the first type of person were 10⁻² and the estimated risk to the second type of person were 10⁻⁶, we could say that "human susceptibility to this chemical varies by at least a factor of 10,000."4
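The arithmetic behind this working definition is simply the ratio of the two percentile risks; a trivial sketch using the hypothetical values from the text:

```python
# Hypothetical risks faced by the 99th- and 1st-percentile individuals
# under identical exposure (values from the example in the text).
high_risk = 1e-2
low_risk = 1e-6

variability_factor = high_risk / low_risk
print(f"susceptibility varies by at least a factor of {variability_factor:,.0f}")
# -> susceptibility varies by at least a factor of 10,000
```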

There are two distinct but complementary approaches to estimating the form and breadth of the distribution of interindividual variability in overall susceptibility to carcinogenesis. The biologic approach is a "bottom-up" method that uses empirical data on the distribution of particular factors that mediate susceptibility to model the overall distribution. In the major quantitative biologic analysis of the possible extent of human variations in susceptibility to carcinogenesis, Hattis et al. (1986) reviewed 61 studies that contained individual human data on six characteristics that are probably involved causally in the carcinogenic process. The six were the half-life of particular biologically active substances in blood, metabolic activation of drugs (in vivo) and putative carcinogens (in vitro), enzymatic detoxification, DNA-adduct formation, the rate of DNA repair (as measured by the rate of unscheduled DNA synthesis induced by UV light), and the induction of sister-chromatid exchanges after exposure of lymphocytes to x-rays. They estimated the overall variability in each factor by fitting a lognormal distribution to the data and then propagated the variabilities by using Monte Carlo simulation and assuming that the factors interacted multiplicatively and were statistically independent. Their major conclusion was that the logarithmic standard deviation of the susceptibility distribution lies between 0.9 and 2.7 (90% confidence interval). That is, the difference in susceptibility between the most sensitive 1% of the population and the least sensitive 1% might be as small as a factor of 36 (if the logarithmic standard deviation was 0.9) or as large as a factor of 50,000 (if the logarithmic standard deviation was 2.7).5
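The span quoted above can be reproduced from the lognormal form: individuals k logarithmic standard deviations above and below the median differ in susceptibility by exp(2kσ). A sketch of that arithmetic (k = 2 recovers the factors of roughly 36 and 50,000 quoted in the text; this is an illustration, not a re-analysis of the Hattis et al. data):

```python
from math import exp

def susceptibility_span(sigma, k=2.0):
    """Ratio between individuals k logarithmic standard deviations
    above and below the median of a lognormal distribution."""
    return exp(2 * k * sigma)

print(round(susceptibility_span(0.9)))   # -> 37 (about the factor of 36)
print(round(susceptibility_span(2.7)))   # -> 49021 (about the factor of 50,000)
```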

The alternative approach is inferential or "top-down," and combines epidemiologic data with a demographic technique known as heterogeneity dynamics. Heterogeneity dynamics is an analytic method for describing the changing characteristics of a heterogeneous population as its members age. The power of the heterogeneity-dynamics approach to explain initially puzzling aspects of demographic data, as well as to challenge simplistic explanations of population behavior, stems from its emphasis on the divergence between forces that affect individuals and forces that affect populations (Vaupel and Yashin, 1983). The most fundamental concept of heterogeneity dynamics is that individuals change at rates different from those of the cohorts they belong to, because the passage of time affects the composition of the cohort as it affects the life prospects of each member. In a markedly heterogeneous population, the overall death rate can decline with age, even though every individual faces an ever-increasing risk of death, simply because the population as a whole grows increasingly more "resistant" to death as the more susceptible members are preferentially removed. Specifically with regard to cancer, heterogeneity dynamics can examine the progressive divergence of observed human age-incidence functions (for many tumor types) away from the function that is believed to apply to an individual's risk as a function of age—namely, the power function of age formalized in the 1950s by Armitage and Doll (which posits that risk increases proportionally with age raised to an integral exponent, probably 4, 5, or 6). In contrast with groups of inbred laboratory animals, which do exhibit age-incidence functions that generally obey the Armitage-Doll model, in humans the age-incidence curves for many tumor types begin to level off and plateau at higher ages.

Many of the pioneering studies that used heterogeneity dynamics to infer the amount of variation in human susceptibility to cancer used cross-sectional data, which might have been confounded by secular changes in exposures to carcinogenic stimuli (Sutherland and Bailar, 1984; Manton et al., 1986). One investigation that built on the previous body of work was that of Finkel (1987), who assembled longitudinal data on cancer mortality, including the age at death and cause of death of all males and females born in 1890, for both the United States and Norway. That study separately examined deaths due to lung cancer and colorectal cancer and tried to infer the amount of population heterogeneity that could have caused the observed age-mortality relationships to diverge from the Armitage-Doll (age^N) function that should apply to the population if all humans are of equal sensitivity. The study concluded that as a first approximation, the amount of variability (for either sex, either disease, and either country) could be roughly modeled by a lognormal distribution with a logarithmic standard deviation on the order of 2.0 (i.e., general agreement with the results of Hattis et al., 1986). That is, about 5% of the population might be about 25 times more susceptible than the average person (and a corresponding 5% about 25 times less susceptible); about 2.5% might be 50 times more (or less) susceptible than the average, and about 1% might be at least 100 times more (or less) susceptible.
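The rough multiples quoted for a logarithmic standard deviation near 2.0 can be checked directly. A sketch assuming a lognormal susceptibility distribution on the natural-log scale (the computed values come out slightly above the rounded figures in the text):

```python
from math import exp
from statistics import NormalDist

SIGMA = 2.0  # logarithmic standard deviation cited in the text
z = NormalDist()

for tail in (0.05, 0.025, 0.01):
    multiple = exp(z.inv_cdf(1 - tail) * SIGMA)
    print(f"most susceptible {tail:.1%}: >= {multiple:.0f}x the median")
```

The three printed multiples are about 27, 50, and 105 — consistent with the "about 25," "about 50," and "at least 100" figures above.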

A later analysis (Finkel, in press) showed that such a conclusion, if borne out, would have important implications not only for assessing risks to individuals, but for estimating population risk in practice. In a highly heterogeneous population, quantitative uncertainties about epidemiologic inferences drawn from relatively small subpopulations (thousands or fewer), as well as the frequent application of animal-based risk estimates to similarly "small" subpopulations, will be increased by the possibility that the average susceptibility of small groups varies significantly from group to group.

The issue of susceptibility is an important one for acute toxicants as well as carcinogens. The NRC Committee on Evaluation of the Safety of Fishery Products addressed this issue in depth in their report entitled Seafood Safety (NRC, 1991b). Guidelines for the assessment of acute toxic effects in humans have recently been published by the NRC Committee on Toxicology (NRC, 1993d).

Conclusions

This section records the results of the committee's analysis of EPA's practice on variability.

Exposure Variability and the Maximally Exposed Individual

One of the contentious defaults that has been used in past air-pollutant exposure and risk assessments has been the maximally exposed individual (MEI), who was assumed to be the person at greatest risk and whose risk was calculated by assuming that the person resided outdoors at the plant boundary, continuously for 70 years. This is a worst-case scenario (for exposure to the particular source only) and does not account for a number of obvious factors (e.g., the person spends time indoors, going to work, etc.) and other likely events (e.g., changing residence) that would decrease exposure to the emissions from the specific source. This default also does not account for other, possibly countervailing factors involved in exposure variability discussed above. Suggestions to remedy this shortcoming have included decreasing the point estimate for residence time at the location to account for population mobility, and use of personal-activity models (see Chapters 3 and 6).

EPA's most recent exposure-assessment guidelines (EPA, 1992a) no longer use the MEI, instead coining the terms "high-end exposure estimates" (HEEE) and "theoretical upper-bounding exposure" (TUBE) (see Chapter 3). According to the new exposure guidelines (Section 5.3.5.1), a high-end risk "means risks above the 90th percentile of the population distribution, but not higher than the individual in the population who has the highest risk." The EPA Science Advisory Board had recommended that exposures or risks above the 99.9th percentile be regarded as "bounding estimates" (i.e., use of the 99.9th percentile as the HEEE) for large populations (assuming that unbounded distributions such as the lognormal are used as inputs for calculating the exposure or risk distribution). For smaller populations, the guidelines state that the choice of percentile should be based on the objective of the analysis. However, neither the HEEE nor the TUBE is explicitly related to the expected MEI.

The new exposure guidelines (Section 5.3.5.1) suggest four methods for arriving at an estimator of the HEEE. These are, in descending order of sophistication:

"If sufficient data on the distribution of doses are available, take the value directly from the percentile(s) of interest within the high end;"

"if … data on the parameters used to calculate the dose are available, a simulation (such as an exposure model or Monte Carlo simulation) can sometimes be made of the distribution. In this case, the assessor may take the estimate from the simulated distribution;"

"if some information on the distribution of the variables making up the exposure or dose equation … is available, the assessor may estimate a value which falls into the high end … The assessor often constructs such an estimate by using maximum or near-maximum values for one or more of the most sensitive variables, leaving others at their mean values;"

"if almost no data are available, [the assessor can] start with a bounding estimate and back off the limits used until the combination of parameter values is, in the judgment of the assessor, clearly in the distribution of exposure or dose … The availability of pertinent data will determine how easily and defensibly the high-end estimate can be developed by simply adjusting or backing off from the ultraconservative assumptions used in the bounding estimates."

The first two methods are much preferable to the last two and should be used whenever possible. Indeed, EPA should place a priority on collecting enough data (either case-specific or generic) that the latter two methods will not be needed in estimating variability in exposure. The distribution of exposures, developed from measurements or modeling results or both, should be used to estimate population exposure, as an input in calculating population risk. It can also be used to estimate the exposure of the maximally exposed person. For example, the most likely value of the exposure of the most exposed person is generally the 100[(N-1)/N]th percentile of the cumulative probability distribution characterizing interindividual variability in exposures, where N is the number of persons used to construct the exposure distribution. This is a particularly convenient estimator because it is independent of the shape of the exposure distribution (see Appendix I-3). Other estimators of the exposure of the highest, or jth-highest for some j < N, exposed person are available (see Appendix I-3). The committee recommends that EPA explicitly and consistently use an estimator such as the 100[(N-1)/N]th percentile, because it, and not a vague estimate "somewhere above the 90th percentile," is responsive to the language in CAAA-90 calling for the calculation of risk to "the individual most exposed to emissions. …"
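The 100[(N-1)/N]th-percentile rule is itself distribution-free, but converting it into an exposure value requires the fitted exposure distribution. A sketch for a lognormal exposure distribution; the median, GSD, and population size are purely illustrative:

```python
from statistics import NormalDist

def max_exposed_estimate(median, gsd, n):
    """Most likely exposure of the single most-exposed person among n,
    assuming a lognormal exposure distribution (illustrative inputs)."""
    q = (n - 1) / n                       # the 100[(N-1)/N]th percentile
    return median * gsd ** NormalDist().inv_cdf(q)

# Hypothetical inputs: median exposure 1.0 (arbitrary units),
# GSD 3.0, exposed population of 10,000.
print(round(max_exposed_estimate(1.0, 3.0, 10_000), 1))  # -> 59.5
```

Note how strongly the answer depends on both N and the spread: the same rule applied to a population of 100 would place the most-exposed person at a much lower multiple of the median.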

Recently, EPA has begun incorporating into its exposure distributions assumptions based on the national average number of years of residence in a home, as a replacement for its 70-year (i.e., average-lifetime) exposure assumption. Proposals have been made for a similar "departure from default" for the time an individual spends at a residence each day, as a replacement for the 24-hour assumption. However, such analyses assume that individuals move to a location of zero exposure when they change residences during their lifetime or leave the home each day. But people moving from one place to another, whether changing residence or traveling from home to office, can vary greatly in their exposure to any one pollutant, from relatively high exposures to none. Furthermore, some exposures to different pollutants may be considered interchangeable: moving from one place to another may yield exposures to different pollutants which, being interchangeable in their effects, can be taken as an aggregate, single "exposure." This assumption of interchangeability may or may not be realistic; however, because people moving from place to place can be seen as being exposed over time to a mixture of pollutants, some of them simultaneously and others at separate times, a simplistic analysis of residence times is not appropriate. The real problem is, in effect, the more complex one of how to aggregate exposures to mixtures as well as multiple exposures of varying intensity to a single pollutant.

Thus, a simple distribution of residence times may not adequately account for the risks of movement from one region to another, especially for persons in hazardous occupations, such as agricultural workers exposed to pesticides, or persons of low socioeconomic status who change residences frequently. Further, some subpopulations that might be more likely to reside in a high-exposure region might also be less mobile (e.g., owing to socioeconomic conditions). For these reasons, the default residency assumption for the calculation of the maximally exposed individual should remain the mean of the current U.S. life expectancy, in the absence of supporting evidence otherwise. Such evidence could include population surveys of the affected area that demonstrate mobility outside regions of residence with similar exposures to similar pollutants. Personal activity (e.g., daily and seasonal activities) should also be included.

If in a given case EPA determines that it must use the third method (combining various different "maximum," "near-maximum," and average values for inputs to the exposure equation) to arrive at the HEEE, the committee offers another caution: EPA has not demonstrated that these combinations of point estimates do in fact yield an output that reliably falls at the desired location within the overall distribution of exposure variability (that is, in the "conservative" portion of the distribution, but not above the confines of the entire distribution). Accordingly, EPA should validate (through generic simulation analyses and specific monitoring efforts) that its point-estimation methods do reasonably and reliably approximate what would be achieved via the more sophisticated direct-measurement or Monte Carlo methods (that is, a point estimate at approximately the 100[(N-1)/N]th percentile of the distribution). The fourth method, it should go without saying, is highly arbitrary and should not be used unless the bounding estimate can be shown to be "ultraconservative" and the concept of "backing off" is better defined by EPA.

Susceptibility

Human beings vary substantially in their inherent susceptibility to carcinogenesis, both in general and in response to any specific stimulus or biologic mechanism. No point estimate of the carcinogenic potency of a substance will apply to all individuals in the human population. Variability affects each step in the carcinogenesis process (e.g., carcinogen uptake and metabolism, DNA damage, DNA repair and misrepair, cell proliferation, tumor progression, and metastasis). Moreover, the variability arises from many independent risk factors, some inborn and some environmental. On the basis of substantial theory and some observational evidence, it appears that some of the individual determinants of susceptibility are distributed bimodally (or perhaps trimodally) in the human population; in such cases, a class of hypersusceptible people (e.g., those with germ-line mutations in tumor-suppressor genes) might be at tens, hundreds, or thousands of times greater risk than the rest of the population. Other determinants seem to be distributed more or less continuously and unimodally, with either narrow or broad variances (e.g., the kinetics or activities of enzymes that activate or detoxify particular pollutants).

To the extent that those issues have been considered at all with respect to carcinogenesis, EPA and the research community have thought almost exclusively in terms of the bimodal type of variation, with a normal majority and a hypersusceptible minority (ILSI, 1992). That model might be appropriate for noncarcinogenic effects (e.g., normal versus asthmatic response to SO2), but it ignores a major class of variability vis-à-vis cancer (the continuous, "silent" variety), and it fails to capture even some bimodal cases in which hypersusceptibility might be the rule, rather than the exception (e.g., the poor-acetylator phenotype).

The magnitude and extent of human variability due to particular acquired or inherited cancer-susceptibility factors should be determined through molecular epidemiologic and other studies sponsored by EPA, the National Institutes of Health, and other federal agencies. Two priorities for such research should be

To explore and elucidate the relationships between variability in each measurable factor (e.g., DNA adduct formation) and variability in susceptibility to carcinogenesis.

To provide guidance on how to construct appropriate samples of the population for epidemiologic studies and risk extrapolation, given the influence of susceptibility variation on uncertainty in population risk and the possible correlations between individual susceptibility and such factors as race, ethnicity, age, and sex.

Results of the research should be used to adjust and refine estimates of risks to individuals (identified, identifiable, or unidentifiable) and estimates of expected incidence in the general population.

The population distribution of interindividual variation in cancer susceptibility cannot now be estimated with much confidence. Preliminary studies of this question, both biologic (Hattis et al., 1986) and epidemiologic (Finkel, 1987), have concluded that the variation might be described as approximately lognormal, with about 10% of the population differing by a factor of 25-50 (either more or less susceptible) from the median individual (i.e., the logarithmic standard deviation of the distribution is approximately 2.0). While the estimated standard deviation of a susceptibility distribution suggested by these studies is uncertain, in light of the biochemical and epidemiologic data reviewed earlier in this chapter it is currently not scientifically plausible that the U.S. population is strictly homogeneous in susceptibility to cancer induction by cancer-causing chemicals. EPA's guidelines are silent regarding person-to-person variations in susceptibility, thereby treating all humans as identical, despite substantial evidence and theory to the contrary. This is an important "missing default" in the guidelines. EPA does assume (although its language is not very clear in this regard) that the median human has susceptibility similar to that of the particular sex-strain combination of rodent that responds most sensitively of those tested in bioassays, or susceptibility identical with that of the particular persons observed in epidemiologic studies. These latter assumptions are reasonable as a starting point (Allen et al., 1988), but of course they could err substantially in either direction for a specific carcinogen or for carcinogens as a whole.

The missing default (variations in susceptibility among humans) and questionable default (average susceptibility of humans) are related in a straightforward manner. Any error of overestimation in rodent-to-human scaling (or in epidemiologic analysis) will tend to counteract the underestimation errors that must otherwise be introduced into some individual risk estimates by EPA's current practice of not distinguishing among different degrees of human susceptibility. Conversely, any error of underestimation in interspecies scaling will exacerbate the underestimation of individual risks for every person of above-average susceptibility. Therefore, EPA should increase its efforts to validate or improve the default assumption that the median human has similar susceptibility to that of the rodent strain used to compute potency, and should attempt to assess the plausible range of uncertainty surrounding the existing assumption. For further information, see the discussion in Chapter 11.

It can be argued, in addition, that EPA has a responsibility, insofar as it is practicable, to protect persons regardless of their individual susceptibility to carcinogenesis (we use protect here not in the absolute, zero-risk sense, but in the sense of ensuring that excess individual risk is within acceptable levels or below a de minimis level). It is unclear from the language in CAAA-90 Section 112(f)(2) whether the "individual most exposed to emissions" is intended to mean the person at highest risk when both exposure and susceptibility are taken into account, but this interpretation is both plausible and consistent with the fact that a major determinant of susceptibility is the degree of metabolism of inhaled or ingested pollutants and the resulting exposure of somatic and germ cells to carcinogenic compounds (i.e., two people of different susceptibilities will likely be "exposed" to a different extent even if they breathe or ingest identical ambient concentrations). Moreover, EPA has a record of attempting to protect people with a combination of high exposure and high sensitivity, as seen in the National Ambient Air Quality Standards (NAAQS) program for criteria air pollutants (e.g., SO2, NOx, and ozone).

Therefore, EPA should adopt an explicit default assumption for susceptibility before it begins to implement those decisions called for in the Clean Air Act Amendments of 1990 that require the calculation of risks to individuals. EPA could choose to incorporate into its cancer risk estimates for individual risk (not for population risk) a "default susceptibility factor" greater than the implicit factor of 1 that results from treating all humans as identical. EPA should explicitly choose a default factor greater than 1 if it interprets the statutory language to apply to individuals with both high exposure and above-average susceptibility.6 EPA could explicitly choose a default factor of 1 for this purpose, if it interprets the statutory language to apply to the person who is average (in terms of susceptibility) but has high exposure. Or, preferably, EPA could develop a "default distribution" of susceptibility, and then generate the joint distribution of exposure and cancer potency (in light of susceptibility), to find the upper 95th or 99th percentile of risk for use in a risk assessment. The distribution is the more desirable way of dealing with this problem, because it takes explicit account of the joint probability (which may be large or small) of a highly exposed individual who is also highly susceptible.
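The "default distribution" option described here amounts to a joint Monte Carlo simulation over exposure and susceptibility. A sketch with purely illustrative spreads (logarithmic standard deviations of 1.1 for exposure and 2.0 for susceptibility, assumed independent — none of these values is an EPA default):

```python
import random
from statistics import median

random.seed(1)  # fixed seed for reproducibility

N = 100_000
risks = sorted(
    random.lognormvariate(0.0, 1.1)    # interindividual exposure variability
    * random.lognormvariate(0.0, 2.0)  # interindividual susceptibility
    for _ in range(N)
)

p99 = risks[int(0.99 * N)]
print(f"99th-percentile risk is about {p99 / median(risks):.0f}x the median")
```

Because the two lognormal factors multiply, their log-variances add, so the joint 99th percentile (a multiple in the low hundreds here) is far above what either the exposure or the susceptibility distribution alone would give — which is exactly the point of simulating the joint distribution rather than stacking separate point estimates.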

Many of the currently known individual determinants of susceptibility vary by factors of hundreds or thousands at the cellular level; however, many of these risk factors (see Appendix I-2) tend to confer excess risks of approximately a
factor of 10 on predisposed people, compared with "normal" ones. Although the total effect of the many such factors may cause susceptibility to vary upwards by more than a factor of 10, some members of the committee suggest that a default factor of 10 might be a reasonable starting point, if EPA wished to apply the statutory risk criteria (see Chapter 2) to the more susceptible members of the human population. Conversely, other members of the committee do not consider an explicit factor of 10 to be justified at this time. A 10-fold adjustment might yield a reasonable best estimate of the high end of the susceptibility distribution for some pollutants when only a single predisposing factor divides the population into normal and hypersusceptible people.

If any susceptibility factor greater than 1 is applied, the short-term practical effect will be to increase all risk assessments for individual risk by the same factor, except for chemical-specific risk estimates where there is evidence that the variation in human susceptibility is larger or smaller for that chemical than for other substances. Such a general adjustment of either the default factor or default distribution might become appropriate when more information becomes available about the nature and extent of interindividual variations in susceptibility.

Individual risk assessments may depart from the new default when it can be shown either that humans are systematically more or less sensitive than rodents to a particular chemical or that interindividual variation is markedly more or less broad for this chemical than for the typical chemical. Therefore, in the spirit of our recommendations in Chapter 6 and Appendixes N-1 and N-2, the committee encourages EPA both to rethink the new default in general and to depart from it in specific cases when appropriately justified by general principles the agency should articulate.

Although it is known that there are susceptibility differences among people due to such factors as age, sex, race, and ethnicity, the nature and magnitude of these differences are not well known or understood; additional research is therefore critical. As knowledge increases, science may be able to describe differences in the population at risk and recognize these differences with some type of default or distribution. Caution will be necessary, however, to ensure that broad correlations between susceptibility and age, sex, and other traits are not interpreted as deterministic predictions valid for all individuals, or used outside of risk assessment without proper respect for autonomy, privacy, and other social values.

In addition to adopting a default assumption for the effect of variations in susceptibility on individual risk, EPA should consider whether these variations might affect calculations of population risk as well. Estimates of population risk (i.e., the number of cases of disease or the number of deaths that might occur as a result of some exposure) are generally based on estimates of the average individual risk, which are then multiplied by the number of exposed persons to obtain a population risk estimate. The fact that individuals have unique susceptibilities should thus be irrelevant to calculating population risk, except if ignoring these variations biases the estimate of average risk. Some observers have pointed out a logical reason why EPA's current procedures might misestimate average risk. Even assuming that allometric or other interspecies scaling procedures correctly map the risk to test animals onto the "risk to the average human" (an assumption we encourage EPA to explore, validate, or refine), it is not clear which "average" is correctly estimated—the median (i.e., the risk to a person who has susceptibility at the 50th percentile of the population distribution) or the expected value (i.e., the average individual risk, taking into account all of the risks in the population and their frequency or likelihood of occurrence).

If person-to-person variation in susceptibility is small or symmetrically distributed (as in a normal distribution), the median and the average (or mean) are likely to be equivalent, or so similar that this distinction is of no practical importance. However, if variation is large and asymmetrically distributed (as in a lognormal distribution with logarithmic standard deviation on the order of 2.0 or higher—see earlier example), the mean may exceed the median by roughly an order of magnitude or more.7
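The arithmetic behind this observation is straightforward for a lognormal distribution, whose mean exceeds its median by the factor exp(sigma^2/2), where sigma is the standard deviation of the natural logarithm of the quantity:

```python
import math

# mean/median = exp(sigma^2 / 2) for a lognormal distribution, where
# sigma is the logarithmic (natural-log) standard deviation.
for sigma in (0.5, 1.0, 2.0, 2.5):
    ratio = math.exp(sigma**2 / 2)
    print(f"log-sd {sigma:3.1f}: mean exceeds median by a factor of {ratio:6.1f}")
```

With a logarithmic standard deviation of 2.0, the factor is exp(2), about 7.4, which is the "roughly an order of magnitude" cited in the text.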

The committee encourages EPA to explore whether extrapolations made from animal bioassay data (or from epidemiological studies) at high exposures are likely to be appropriate for the median or for the average human, and to explore what response is warranted for the estimation and communication of population risk if the median and average are believed to differ significantly. As an initial position, EPA might assume that animal tests and epidemiological studies in fact lead to risk estimates for the median of the exposed group. This position would be based on the logic that at high exposures and hence high risks (that is, on the order of 10-2 for most epidemiologic studies, and 10-1 for bioassays), the effect of any variations in susceptibility within the test population would be truncated or attenuated. In such cases, any test animal or human subject whose susceptibility was X-fold higher than the median would face risks (far) less than X-fold higher than the median risk, because in no case can risk exceed 1.0 (certainty), and thus the effect of these individuals on the population average would not be in proportion to their susceptibilities. On the other hand, when extrapolating to ambient exposures where the median risk is closer to 10-6, the full divergence between median and average in the general population would presumably manifest itself.
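The truncation argument can be illustrated with a numerical sketch. The one-hit dose-response form and the lognormal susceptibility spread below are assumptions chosen for illustration only:

```python
import numpy as np

rng = np.random.default_rng(1)
# Susceptibility: lognormal, median 1, log-sd 2 (an assumed spread under
# which the mean exceeds the median by roughly a factor of 7).
s = rng.lognormal(mean=0.0, sigma=2.0, size=200_000)

ratios = {}
# A one-hit-style dose-response (illustrative): risk = 1 - exp(-d * s),
# which caps every individual's risk at 1.0 regardless of susceptibility.
for d in (0.1, 1e-6):  # median risks near 1e-1 (bioassay) and 1e-6 (ambient)
    risk = 1.0 - np.exp(-d * s)
    ratios[d] = np.mean(risk) / np.median(risk)
    print(f"dose {d:g}: mean/median risk ratio = {ratios[d]:.1f}")
```

At the high dose, the cap at risk = 1.0 attenuates the contribution of the most susceptible individuals, so the mean/median ratio shrinks; at the low dose, the full divergence between mean and median reappears.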

If, therefore, current procedures correctly estimate the median risk, then estimates of population risk would have to be increased by a factor corresponding to the ratio of the average to the median.

Other Changes in Risk-Assessment Methods

(1)

Children are a readily identifiable subpopulation with its own physiologic characteristics (e.g., body weight), uptake characteristics (e.g., food consumption patterns), and inherent susceptibilities. When excess lifetime risk is
the desired measure, EPA should compute an integrated lifetime risk, taking into account all relevant age-dependent variables, such as body weight, uptake, and average susceptibility (for one example of such a computation, see Appendix C of NRDC, 1989). If there is reason to believe that risk is not linearly related to biologically effective dose, and if the computed risks for children and adults are found to be significantly different, EPA should present separate risk assessments for children and adults.

(2)

Although EPA has tried to take account of interindividual variability in susceptibility for noncancer effects (e.g., in standards for criteria air pollutants such as ozone or SO2), such efforts have been neither exhaustive nor part of an overall focus on variability. In particular, the "10-fold safety factor" used to account for interindividual variability when extrapolating from animal toxicity data has not been validated, in the sense that EPA generally does not know how much of the human population falls within an order of magnitude of the median susceptibility for any particular toxic stimulus.

 

Although this chapter has focused on susceptibility to carcinogens, because this subject has received even less attention than that of susceptibility to noncarcinogens, the committee urges EPA to continue to improve its treatment of variability in the latter area as well.

(3)

EPA has not sufficiently accounted for interindividual variability in biologic characteristics when it has used various physiologic or biologically based risk-assessment models. The validity of many of these models and assumptions depends crucially on the accuracy and precision of the human biological characteristics that drive them. In a wide variety of cases, interindividual variation can swamp the simple measurement uncertainty or the uncertainty in modeling that is inherent in deriving estimates for the "average" person. For example, physiologically based pharmacokinetic (PBPK) models require information about partition coefficients and enzyme concentrations and activities; Moolgavkar-Venzon-Knudson and other cell-kinetics models require information about cell growth and death rates and the timing of differentiation; and specific alternative models positing dose-response thresholds for given chemicals require information about ligand-receptor kinetics or other cellular phenomena. EPA has begun to collect data to support the development of distributions for the key PBPK parameters (such as alveolar ventilation rates, blood flows, partition coefficients, and Michaelis-Menten metabolic parameters) in both rodents and humans (EPA, 1988f). However, this database is still sparse, especially with respect to the possible variability in human parameters. EPA has developed point estimates for human PBPK parameters for 72 volatile organic chemicals, only 26 of which are on the list of 189 hazardous air pollutants covered in CAAA-90. For only five chemicals (benzene, n-hexane, toluene, trichloroethylene, and n-xylene) does EPA have any information on the presumed average and range of the parameters in the human population. It is perhaps noteworthy that in the one major instance in which EPA has revised a unit risk factor for a hazardous air pollutant on the
basis of PBPK data (the case of methylene chloride), no information on the possible effect of human variability was used (EPA, 1987d; Portier and Kaplan, 1989).

Even when the alternative to the default model hinges on a qualitative, rather than a quantitative, distinction, such as the possible irrelevance to humans of the alpha-2u-globulin mechanism involved in the initiation of some male rat kidney tumors, the new model must be checked against the possibility that some humans are qualitatively different from the norm. Any alternative assumption might be flawed if it turns out to be biologically inappropriate for some fraction of the human population. Finally, although epidemiology is a powerful tool that can be used as a "reality check" on the validity of potency estimates derived from animal data, there must be a sufficient amount of human data for this purpose. The sample size needed for a study to achieve a given power level increases under the assumption that humans are not of identical susceptibility.

When EPA proposes to adopt an alternative risk-assessment assumption (such as use of a PBPK model, use of a cell-kinetics model, or the determination that a given animal response is "not relevant to humans"), it should consider human interindividual variability in estimating the model parameters or verifying the assumption of "irrelevance." If the data are not available that would enable EPA to take account of human variability, EPA should be free to make any reasonable inferences about its extent and impact (rather than having to collect or await such data), but should encourage other interested parties to collect and provide the necessary data. In general, EPA should ensure that a similar level of variability analysis is applied to both the default and the alternative risk assessment, so that it can compare estimates of equal conservatism from each procedure.

Risk Communication

EPA often does not adequately communicate to its own decision-makers, to Congress, or to the public the variabilities that are and are not accounted for in any risk assessment and the implications for the conservatism and representativeness of the resulting risk numbers. Each of EPA's reports of a risk assessment should state its particular assumptions about human behavior and biology and what these do and do not account for. For example, a poor risk characterization for a hazardous air pollutant might say "The risk number R is a plausible upper bound." A better characterization would say, "The risk number R applies to a person of reasonably high-end behavior living at the fenceline 8 hours a day for 35 years." EPA should, whenever possible, go further and state, for example, "The person we are modeling is assumed to be of average susceptibility, but eats F grams per day of food grown in his backyard; the latter assumption is quite conservative, compared with the average."

Risk-communication and risk-management decisions are more difficult
when, as is usually the case, there are both uncertainty and variability in key risk-assessment inputs. It is important, whenever possible, to separate the two phenomena conceptually, perhaps by presenting multiple analyses. For its full (as opposed to screening-level) risk assessments, EPA should acknowledge that all its risk numbers are made up of three components: the estimated risk itself (X), the level of confidence (Y) that the risk is no higher than X, and the percent of the population (Z) that X is intended to apply to in a variable population. EPA should use its present practice of saying that "the plausible upper-bound risk is X" only when it believes that Y and Z are both close to 100%. Otherwise, it should use statements like, "We are Y% certain that the risk is no more than X to Z% of the population," or use an equivalent pictorial representation (see Figure 10-2).
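A three-part statement of this kind can be generated with a two-dimensional Monte Carlo analysis that keeps uncertainty and variability on separate axes. Every distribution and parameter value below is an illustrative assumption, not an EPA figure:

```python
import numpy as np

rng = np.random.default_rng(2)

# Rows are uncertainty realizations (how potent might the chemical be?);
# columns are people (interindividual variability in exposure).
n_unc, n_var = 1_000, 4_000
potency = rng.lognormal(np.log(1e-6), 0.7, size=(n_unc, 1))  # uncertain
exposure = rng.lognormal(0.0, 1.0, size=(1, n_var))          # variable
risk = potency * exposure                                    # (n_unc, n_var)

Y, Z = 95, 99  # confidence level and population coverage, in percent
# For each uncertainty realization, the risk to the person at the Z-th
# percentile of the population; then the Y-th percentile of that value
# across the uncertainty dimension.
x_per_realization = np.percentile(risk, Z, axis=1)
X = np.percentile(x_per_realization, Y)
print(f"We are {Y}% certain that the risk is no more than {X:.1e} "
      f"to {Z}% of the population.")
```

Separating the two axes in this way is what allows Y (confidence) and Z (population coverage) to be reported explicitly rather than folded into a single "upper bound."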

As an alternative or supplement to estimating the value of Z, EPA can and should try to present multiple scenarios to explain variability. For example, EPA could present one risk number (or preferably, an uncertainty distribution—see Chapter 9) that explicitly applies to a "person selected at random from the population," one that applies to a person of reasonably high susceptibility but "average" behavior (mobility, breathing rate, food consumption, etc.), and one that applies to a person whose susceptibility and behavioral variables are both in the "reasonably high" portion of their distributions.

Identifiability and Risk Assessment

Not all the suggestions presented here, especially those regarding variation in susceptibility, might apply in every regulatory situation. The committee notes that in the past, whenever persons of high risk or susceptibility have been identified, society has tended to feel a far greater responsibility to inform and protect them. For such identifiable variability, the recommendations in this section are particularly salient. However, interindividual variability might be important even when the specific people with high and low values of the relevant characteristic cannot currently be identified.8 Regardless of whether the variability is now identifiable (e.g., consumption rates of a given foodstuff), difficult to identify (e.g., presence of a mutant allele of a tumor-suppressor gene), or unidentifiable (e.g., a person's net susceptibility to carcinogenesis), the committee agrees that it is important to think about its potential magnitude and extent, to make it possible to assess whether existing procedures to estimate average risks and population incidence are biased or needlessly imprecise.

In contrast with issues involving average risk and incidence, however, some members of the committee consider the distribution of individual susceptibilities, and the uncertainty as to where each person falls in that distribution, to be irrelevant if the variation is and will remain unidentifiable. For example, some argue that people should be indifferent between a situation in which their risk is determined to be precisely 10-5 and one in which they have a 1% chance of being highly susceptible (with risk = 10-3) and a 99% chance of being immune, with no way to know which applies to whom. In both cases, the expected value of individual risk is 10-5, and it can be argued that the distribution of risks is the same, in that without the prospect of identifiability no one actually faces a risk of 10-3, but only an equal chance of facing such a risk (Nichols and Zeckhauser, 1986).

FIGURE 10-2
Communicating risk, uncertainty, and variability graphically.

Some of the members also argue that as we learn more about individual susceptibility, we will eventually reach a point where we will know that some individuals are at extremely high risk (i.e., carried to its extreme, an average individual risk of 10-6 may really represent cases where one person in each million is guaranteed to develop cancer while everyone else is immune). As we approach this point, they contend, society will have to face up to the fact that in order to guarantee that everyone in the population faces "acceptable" low levels of risk, we would have to reduce emissions to an impossibly low extent.

Other committee members reject or deem irrelevant the notion that risk is ultimately either zero or 1; they believe that, both for an individual's assessment of how foreboding or tolerable a risky situation is and for society's assessment of how just or unjust the distribution of risks is, the information about the unidentifiable variability must be reported—that it affects both judgments. To bolster their contentions, these members cite literature about the limitations of expected utility theory, which takes the view, contradicted by actual survey data, that the distribution of risky outcomes about their mean values should not affect the individual's evaluation of the situation (Shrader-Frechette, 1985; Machina, 1990), and empirical findings that the skewness of lotteries over risky outcomes matters to people even when the mean and variance are kept constant (Lopes, 1984). They also argue that EPA should maintain consistency in how it handles exposure variability, which it reports even when the precise persons at each exposure level cannot be identified; i.e., EPA reports the variation in air concentration and the maximal concentration from a source even when (as is usually the case) it cannot predict exactly where the maximum will occur. If susceptibility is in large part related to person-to-person differences in the amount of carcinogenic material that a person's cells are exposed to via metabolism, then it is essentially another form of exposure variability, and the parallel with ambient (outside-the-body) exposure is close. Finally, they claim that having agreed that issues of pure uncertainty are important, EPA (and the committee) must be consistent and regard unidentifiable variability as relevant (see Appendix I-3). Our recommendations in Chapter 9 reflect our view that uncertainty is important because individuals and decision-makers do regard values other than the mean as highly relevant.
If susceptibility is unidentifiable, then to the individual it represents a source of uncertainty about his or her individual risk, and many members of the committee believe it must be communicated just as uncertainty should be.

Social-science research aimed at clarifying the extent to which people care about unidentifiable variability in risk, the costs of accounting for it in risk management, and the extent to which people want government to take such
variation and costs into account in making regulatory decisions and in setting priorities might be helpful in resolving these issues.

Findings And Recommendations

The committee's findings and recommendations are briefly summarized below.

Exposure

Historically, EPA has defined the maximally exposed individual (MEI) as the worst-case scenario—a continuous 70-year exposure to the maximal estimated long-term average concentration of a hazardous air pollutant. Departing from this practice, EPA has recently published methods for calculating bounding and "reasonably high-end" estimates of the highest actual or possible exposures using a real or default distribution of exposure within a population. The new exposure guidelines do not explicitly define a point on this distribution corresponding to the highest expected exposure level of an individual.

The committee endorses the EPA's use of bounding estimates, but only in screening assessments to determine whether further levels of analysis are necessary. For further levels of analysis, the committee supports EPA's development of distributions of exposure values based on available measurements, modeling results, or both. These distributions can also be used to estimate the exposure of the maximally exposed person. For example, the most likely value of the exposure to the most exposed person is generally the 100[(N - 1)/N]th percentile of the cumulative probability distribution characterizing interindividual variability in exposure, where N is the number of persons used to construct the exposure distribution. This is a particularly convenient estimator to use because it is independent of the shape of the exposure distribution. The committee recommends that EPA explicitly and consistently use an estimator such as 100[(N - 1)/N], because it, and not a vague estimate "somewhere above the 90th percentile," is responsive to the language in CAAA-90 calling for the calculation of risk to "the individual most exposed to emissions. …"
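The logic of the 100[(N-1)/N]th-percentile estimator can be checked by simulation, comparing the maximum exposure in repeatedly sampled populations of size N with that percentile of the underlying distribution. The lognormal shape and the population size below are assumptions for illustration; the estimator itself does not depend on the shape:

```python
import math
from statistics import NormalDist

import numpy as np

rng = np.random.default_rng(3)
N = 10_000              # assumed size of the exposed population
p = (N - 1) / N         # the 100[(N-1)/N]th percentile, here 99.99%

# Underlying interindividual exposure distribution: an illustrative
# lognormal with median 1 and logarithmic standard deviation 1.
sigma = 1.0
percentile_value = math.exp(sigma * NormalDist().inv_cdf(p))

# Simulate many populations of N people; the maximum exposure in each
# population is the exposure of that population's "most exposed person."
maxima = np.array([rng.lognormal(0.0, sigma, N).max() for _ in range(300)])

print(f"100[(N-1)/N]th percentile of the distribution: {percentile_value:.1f}")
print(f"typical (median) maximum across populations:   {np.median(maxima):.1f}")
```

The two quantities track each other closely, which is why the estimator gives a serviceable, distribution-shape-independent answer for the "individual most exposed to emissions."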

In recent times, EPA has begun incorporating into its exposure distributions assumptions based on the national average number of years of residence in a home, as a replacement for its 70-year (i.e., average-lifetime) exposure assumption. Proposals have been made for a similar "departure from defaults" for the time an individual spends at a residence each day, as a replacement for the 24-hour assumption. However, such analyses assume that individuals move to a location of zero exposure when they change residences during their lifetime or leave the home each day. In fact, people moving from one place to another, whether changing the location of their residence or commuting from home to office, may vary greatly in their exposure to any one pollutant, from relatively high exposures to none. Further, some exposures to different pollutants may be considered interchangeable: moving from one place to another may yield exposures to different pollutants that, being interchangeable in their effects, can be treated as a single aggregate "exposure." This assumption of interchangeability may or may not be realistic; in any case, because people moving from place to place are exposed over time to a mixture of pollutants, some simultaneously and others at separate times, the real problem is the more complex one of how to aggregate exposures to mixtures as well as multiple exposures of varying intensity to a single pollutant. A simplistic analysis based on a simple distribution of residence times is therefore not appropriate.

EPA should use the mean of current life expectancy as the assumption for the duration of individual residence time in a high-exposure area, or a distribution of residence times that accounts for the likelihood that changing residences might not result in significantly lower exposure. Similarly, EPA should use a conservative estimate for the number of hours a day an individual is exposed, or develop a distribution of the number of hours per day an individual spends in different exposure situations. Such information can be gathered through neighborhood surveys and similar methods in high-exposure areas. Note that the distribution would correctly be used only for individual risk calculations, as total population risk is unaffected by the number of persons whose exposures sum to a given total value (if risk is linearly related to exposure rate).
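The final caveat, that residence-time assumptions matter for individual risk but not for population risk under linearity, can be verified with trivial arithmetic. The unit risk and population sizes below are arbitrary illustrations:

```python
# Under a linear dose-response, total population risk depends only on the
# total exposure burden, not on how that burden is divided among people.
unit_risk = 1e-6  # assumed excess risk per person-year of exposure

# Scenario A: 1,000 people each live at the fenceline for 70 years.
# Scenario B: residents turn over; 7,000 people are each exposed 10 years.
risk_per_person_a = unit_risk * 70
risk_per_person_b = unit_risk * 10
pop_risk_a = 1_000 * risk_per_person_a
pop_risk_b = 7_000 * risk_per_person_b

print(f"individual risk: A = {risk_per_person_a:.1e}, B = {risk_per_person_b:.1e}")
print(f"population risk: A = {pop_risk_a:.3f}, B = {pop_risk_b:.3f}")
```

Individual risk differs sevenfold between the scenarios, while the expected number of cases in the population is identical.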

 

EPA has not provided sufficient documentation in its exposure-assessment guidelines to ensure that its point-estimation techniques used to determine the "high-end exposure estimate" (HEEE) when data are sparse reliably yield an estimate at the desired location within the overall distribution of exposure (which, according to these guidelines, lies above the 90th percentile but not beyond the confines of the entire distribution).

EPA should provide a clear method and rationale for determining when point estimators for the HEEE can or should be used instead of a full Monte Carlo (or similar) approach to choosing the desired percentile explicitly. The rationale should more clearly indicate how such estimators are to be generated, should offer more documentation that such point-estimation methods do yield reasonably consistent representations of the desired percentile, and should justify the choice of such a percentile if it differs from that which corresponds to the expected value of exposure to the "person most exposed to emissions".

Potency

EPA has dealt little with the issue of human variability in susceptibility; the limited efforts to date have focused exclusively on variability relative to noncarcinogenic effects (e.g., normal versus asthmatic response to SO2). The appropriate response to variability for noncancer end points (i.e., identify the characteristics of "normal" and "hypersusceptible" individuals, and then decide whether or not to protect both groups) might not be appropriate for carcinogenesis, in which variability might well be continuous and unimodal, rather than either-or.

EPA, NIH, and other federal agencies should sponsor molecular epidemiologic and other research on the extent of interindividual variability in various factors that affect susceptibility and cancer, on the relationships between variability in each factor and in the health end point, and on the possible correlations between susceptibility and such covariates as age, race, ethnicity, and sex. Results of the research should be used to adjust and refine estimates of risks to individuals (identified, identifiable, or unidentifiable) and estimates of expected incidence in the general population. As this research progresses, the natural science and social science community should collaborate to explore the implications of any susceptibility factors that can be tested for or that strongly correlate with other genetic traits, so as to ensure that any findings are not misinterpreted or used outside of the environmental risk assessment arena without proper care.

Susceptibility

EPA does not account for person-to-person variations in susceptibility to cancer; it thereby treats all humans as identical in this respect in its risk calculations.

EPA should adopt a default assumption for susceptibility before it begins to implement those decisions called for in the Clean Air Act that require the calculation of risks to individuals. EPA could choose to incorporate into its cancer risk estimates for individual risk a "default susceptibility factor" greater than the implicit factor of 1 that results from treating all humans as identical. EPA should explicitly choose a default factor greater than 1 if it interprets the statutory language to apply to an individual with high exposure and above-average susceptibility. EPA could explicitly choose a default factor of 1 for this purpose, if it interprets the statutory language to apply to an individual with high exposure but average susceptibility. Preferably, EPA could develop a "default distribution" of susceptibility, and then generate the joint distribution of exposure and cancer potency (in light of susceptibility) to find the upper 95th percentile (or 99th percentile) of risk for each risk assessment.

 

EPA makes its potency calculations on the assumption that, on average, humans have susceptibility similar to that of the particular sex-strain combination of rodent that responds most sensitively of those tested in bioassays or susceptibility identical with that of the particular groups of persons observed in epidemiologic studies.

EPA should continue and increase its efforts to validate or improve the default assumption that, on average, humans to be protected at the risk-management stage have susceptibility similar to that of humans included in relevant epidemiological studies, the most-sensitive rodents tested, or both.

 

It is possible that ignoring variations in human susceptibility may cause significant underestimation of population risk, if both of two conditions hold: (1) current procedures to extrapolate results of laboratory bioassays or epidemiologic studies to the general population correctly map the observed risk in the test population to the human with median susceptibility, not to the expected value averaged over the entire general population; and (2) there is sufficiently skewed variability in susceptibility in the general population to cause the expected value to exceed the median to a significant extent.

In addition to continuing to explore the assumption that interspecies scaling (or epidemiologic extrapolation) correctly predicts average human susceptibility, EPA should investigate whether the average that is predicted corresponds to the median or the expected value. If there is reason to suspect the former is true, EPA should consider whether it needs to adjust its estimates of population risk to account for this discrepancy.

 

Children are a readily identifiable subpopulation with its own physiologic characteristics (e.g., body weight), uptake characteristics (e.g., food consumption patterns), and inherent susceptibilities.

If there is reason to believe that risk of adverse biological effects per unit dose depends on age, EPA should present separate risk estimates for adults and children. When excess lifetime risk is the desired measure, EPA should compute an integrated lifetime risk, taking into account all relevant age-dependent variables.
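As a sketch of the integrated lifetime-risk calculation recommended above, the following weights an age-specific dose rate by the fraction of a lifetime spent in each age bin and by an age-specific susceptibility multiplier. All of the age bins, intakes, body weights, susceptibilities, and the potency value are hypothetical, for illustration only:

```python
# Hypothetical age bins: (label, years in bin, intake mg/day, body weight kg,
# relative susceptibility). All numbers are illustrative assumptions.
age_bins = [
    ("child 0-5",   6, 0.50, 15.0, 3.0),   # higher intake/kg and susceptibility
    ("child 6-17", 12, 0.80, 45.0, 1.5),
    ("adult 18-70", 53, 1.00, 70.0, 1.0),
]

potency = 0.02          # assumed slope factor, risk per (mg/kg-day)
lifetime_years = 70.0

# Integrated lifetime risk: dose rate in each bin, weighted by the fraction of
# a lifetime spent in that bin and by age-specific susceptibility.
lifetime_risk = 0.0
for label, years, intake, weight, susceptibility in age_bins:
    dose = intake / weight                  # mg/kg-day in this age bin
    lifetime_risk += potency * susceptibility * dose * (years / lifetime_years)

print(f"integrated lifetime risk = {lifetime_risk:.2e}")
```

Note that under these assumptions the childhood bins contribute disproportionately to the total, even though they cover only a quarter of the lifetime.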

 

EPA does not usually explore or consider interindividual variability in key biologic parameters when it uses or evaluates physiologic or biologically based risk-assessment models (or it evaluates some data but does not report on them in its final public documents). In other cases, EPA does gather or review data that bear on human variability, but it tends to accept them at face value without ensuring that they are representative of the entire population. As a general rule, the larger the number of characteristics with an important effect on risk, or the more variable those characteristics are, the larger the sample of the human population needed to establish confidently the mean and range of each of those characteristics.

When EPA proposes to adopt an alternative risk-assessment assumption (such as use of a PBPK model, use of a cell-kinetics model, or a determination that a given animal response is "not relevant to humans"), it should consider human interindividual variability in estimating the model parameters or in verifying the assumption of "irrelevance." If the data needed to account for human variability are not available, EPA should be free to make reasonable inferences about its extent and impact (rather than having to collect or await such data), but it should encourage other interested parties to collect and provide the necessary data. In general, in parallel with recommendation UAR4, EPA should ensure that a similar level of variability analysis is applied to both the default and the alternative risk assessment, so that it can compare equivalently conservative estimates from each procedure.

Risk Communication

EPA does not adequately communicate to its own decision-makers, to Congress, or to the public the variabilities that are and are not accounted for in any risk assessment and the implications for the conservatism and representativeness of the resulting risk numbers.

EPA should carefully state in each risk assessment what its particular assumptions about human behavior and biology do and do not account for.

 

For its full (as opposed to screening-level) risk assessments, EPA makes risk-communication and risk-management decisions more difficult when, as is usually the case, both uncertainty and variability are important.

Whenever possible, EPA should separate uncertainty and variability conceptually, perhaps by presenting multiple analyses. EPA should acknowledge that each of its risk numbers comprises three components: the estimated risk itself (X), the level of confidence (Y) that the risk is no higher than X, and the percentage of the population (Z) to which X is intended to apply in a variable population. In addition to reporting both Y and Z, EPA can and should try to present multiple scenarios to explore and explain the variability dimension.

Notes

1. Specialists in different fields use the term "variability" to refer to a dispersion of possible or actual values associated with a particular quantity, often with reference to the random variability associated with any estimate of an unknown (i.e., uncertain) quantity. This report, unless stated otherwise, uses the terms interindividual variability, variability, and interindividual heterogeneity interchangeably to refer to individual-to-individual differences in quantities associated with predicted risk, such as measures of (or parameters used to model) ambient concentration, uptake or exposure per unit ambient concentration, biologically effective dose per unit exposure, and increased risk per unit effective dose.

2. This assumes that risk is linear in long-term average dose, which is one of the bases of the classical models of carcinogenesis (e.g., the LMS dose-response model using administered dose). However, when one moves to more sophisticated models of the exposure-dose (i.e., PBPK) and dose-response (i.e., biologically motivated or cell-kinetics) relationships, shorter averaging times become important even though the health endpoint may manifest itself over the long term. For example, the cancer risk from a chemical that is both metabolically activated and detoxified in vivo may not be a function of total exposure, but only of those periods of exposure during which detoxification pathways cannot keep pace with activating ones. In such cases, data on average long-term concentrations (and interindividual variability therein) may completely miss the only toxicologically relevant exposure periods.
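The point of note 2, that only exposures outstripping detoxification capacity may matter, can be sketched with a hypothetical hourly exposure profile and an assumed saturable detoxification rate (both invented for illustration):

```python
# Hypothetical hourly exposure profile (mg/hr) over one day: a two-hour
# spike over a low baseline. Illustrative numbers only.
exposure = [0.2] * 8 + [5.0] * 2 + [0.2] * 14

detox_capacity = 1.0   # assumed maximum detoxification rate, mg/hr

total_exposure = sum(exposure)
# Under this (assumed) mechanism, only the portion of each hour's exposure
# that exceeds detoxification capacity is toxicologically relevant:
relevant_exposure = sum(max(0.0, e - detox_capacity) for e in exposure)

print(total_exposure, relevant_exposure)
# A long-term average (total_exposure / 24) smooths the spike away entirely,
# even though the spike is the only part of the day that matters here.
```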

3. As discussed above, in many cases variability that exists over a short averaging time may grow less and less important as the averaging time increases. For example, if, on average, adults breathe 20 m3 of air per day, then over any random 1-minute period in a group of 1,000 adults there would probably be some (those involved in heavy exertion) breathing much more than the average value of 0.014 m3/min, and others (those asleep) breathing much less. Over the course of a year, however, the variation around the average value of 7,300 m3/yr would be much smaller, as periods of heavy exercise, sleep, and average activity "average out." On the other hand, some varying human characteristics do not substantially converge over longer averaging periods. For example, the daily variation in the amount of apple juice people drink probably mirrors the monthly and yearly variation as well: those individuals who drink no apple juice on a random day are probably those who rarely or never drink it, while those at the other "tail" of the distribution (drinking perhaps three glasses per day) probably tend to repeat this pattern day after day (in other words, the distribution of "glasses drunk per year" probably extends all the way from zero to 365 × 3, rather than varying narrowly around the midpoint of this range).
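The contrast drawn in note 3 can be simulated: transient day-to-day variability in breathing shrinks in relative terms when summed over a year, while persistent person-to-person differences in juice-drinking habits do not. The distributions below (Gaussian daily noise, a 60/30/10 split of habits) are illustrative assumptions:

```python
import random

random.seed(0)
n_people, n_days = 1000, 365

# Breathing: everyone shares the same long-run mean (20 m3/day); day-to-day
# activity adds transient noise that averages out. Illustrative numbers.
daily_breathing = [
    [random.gauss(20.0, 5.0) for _ in range(n_days)] for _ in range(n_people)
]
annual_breathing = [sum(days) for days in daily_breathing]

# Apple juice: a persistent person-level habit (many drink none at all), so
# annual totals stay spread out instead of converging.
habits = [0.0] * 600 + [1.0] * 300 + [3.0] * 100      # glasses/day, by person
annual_juice = [h * n_days for h in habits]

def cv(xs):
    """Coefficient of variation: standard deviation relative to the mean."""
    m = sum(xs) / len(xs)
    var = sum((x - m) ** 2 for x in xs) / len(xs)
    return var ** 0.5 / m

print(f"breathing: daily CV = {5/20:.2f}, annual CV = {cv(annual_breathing):.3f}")
print(f"juice:     annual CV = {cv(annual_juice):.2f}")   # does not shrink
```

The breathing spread collapses by more than an order of magnitude over a year, while the juice spread is unchanged at any averaging time.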

4. Similarly, the two persons might face equal cancer risks at exposures that were 10,000-fold different. However, an alternative definition, which would be more applicable for threshold effects, would be to call the difference in susceptibility the ratio of doses needed to produce the same effect in two different individuals.

5. The logarithmic standard deviation is equivalent to the standard deviation of the normal distribution corresponding to the particular lognormal distribution. If one takes the antilog of the logarithmic standard deviation, one obtains the "geometric standard deviation", or GSD, which has a more intuitively appealing definition: N standard deviations away from the median corresponds to multiplying or dividing the median by the GSD raised to the power N.
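The GSD definition in note 5 can be checked numerically for a hypothetical lognormal quantity (median 50, GSD 2, both arbitrary): moving N standard deviations above the median on the log scale and multiplying the median by GSD to the power N give the same value.

```python
import math

median = 50.0          # hypothetical median of a lognormal quantity
gsd = 2.0              # hypothetical geometric standard deviation

sigma = math.log(gsd)  # logarithmic standard deviation (natural-log scale)
mu = math.log(median)

# N standard deviations above the median on the log scale, back-transformed,
# versus multiplying the median by GSD**N directly:
for n_sd in (1, 2):
    via_logs = math.exp(mu + n_sd * sigma)
    via_gsd = median * gsd ** n_sd
    print(n_sd, via_logs, via_gsd)   # the two routes agree
```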

6. Moreover, existing studies of overall variations in susceptibility suggest that a factor of 10 probably subsumes one or perhaps 1.5 standard deviations above the median for the normal human population. That is, assuming (as EPA does via its explicit default) that the median human and the rodent strain used to estimate potency are of similar susceptibility, an additional factor of 10 would equate the rodent response to approximately the 85th or 90th percentiles of human response. That would be a protective, but not a highly conservative, safety factor, inasmuch as perhaps 10 percent or more of the population would be (much) more susceptible than this new reference point.

Inclusion of a default factor of 10 could bring cancer risk assessment partway into line with the prevailing practice in noncancer risk assessment, wherein one of the factors of 10 that are often added is meant to account for person-to-person variations in sensitivity.

However, if EPA decides to use a factor of 10, it should emphasize that this is a default procedure that tries to account for some of the interindividual variation in dose-response relationships, but that in specific cases may be too high or too low to provide the optimum degree of "protection" (or to reduce risks to "acceptable" levels) for persons of truly unusual susceptibility. Nor does it ensure that (in combination with exposure estimates that might actually correspond to a maximally exposed or reasonably high-end person) risk estimates are predictive or conservative for the actual "maximally-at-risk" person. In contrast, some persons of extremely high susceptibility might, as a consequence of their susceptibility, not face high exposures. It might also be the case that some risk factors for carcinogenesis also predispose those affected to other diseases from which it might be impossible to protect them.
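Note 6's percentile claim can be translated into numbers: if a factor of 10 corresponds to roughly 1 to 1.5 logarithmic standard deviations above the median of a lognormal susceptibility distribution, the fraction of the population covered is the standard normal CDF evaluated at that many standard deviations. This is a sketch of the arithmetic, not an EPA procedure:

```python
import math

def norm_cdf(z):
    """Standard normal cumulative distribution, via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# If a 10-fold safety factor spans N logarithmic standard deviations above
# the median (note 6 suggests N is roughly 1 to 1.5), the covered fraction
# of a lognormally distributed population is the normal CDF at N:
for n_sd in (1.0, 1.5):
    print(f"10x = {n_sd} SD -> covers about {norm_cdf(n_sd):.0%} of the population")
```

The resulting coverage is about 84 to 93 percent, consistent with the "85th or 90th percentiles" cited in the note, leaving perhaps a tenth of the population above the new reference point.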

7. For example, suppose the median income in a country was $10,000, but 5 percent of the population earned 25 times less than the median and 5 percent earned 25 times more, while an additional 1 percent earned 100 times less and 1 percent earned 100 times more. Then the average income would be [(0.05)(400) + (0.05)(250,000) + (0.01)(100) + (0.01)(1,000,000) + (0.88)(10,000)] = $31,321, or more than three times the median income.
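Note 7's arithmetic can be verified directly from the shares and incomes it gives:

```python
# Note 7's income example: population shares paired with incomes.
shares_and_incomes = [
    (0.05, 10_000 / 25),   # 5% earn 25x less than the $10,000 median
    (0.05, 10_000 * 25),   # 5% earn 25x more
    (0.01, 10_000 / 100),  # 1% earn 100x less
    (0.01, 10_000 * 100),  # 1% earn 100x more
    (0.88, 10_000),        # everyone else, at the median
]

mean_income = sum(share * income for share, income in shares_and_incomes)
print(mean_income, mean_income / 10_000)   # the mean exceeds the median ~3-fold
```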


8. "Currently" is an important qualifier, given the rapid increases in our understanding of the molecular mechanisms of carcinogenesis. During the next several decades, science will doubtless become more adept at identifying individuals with greater-than-average susceptibility, and perhaps even at pinpointing specific substances to which such individuals are particularly susceptible.
