4
Standards for Risk Assessment

This chapter evaluates the standards that are proposed in the Office of Management and Budget (OMB) bulletin. The standards are defined for general and influential risk assessment, and the committee first comments on that structure. It then discusses major themes, such as uncertainty. The chapter concludes with a summary of comments on each of the individual standards that are proposed in the bulletin.

DIFFERENT LEVELS OF STANDARDS

The bulletin articulates standards for general risk assessments and special standards for influential risk assessments (see Appendix B). One standard listed for general risk assessments is specifically directed at risk assessments used for regulatory analysis. The bulletin defines an influential risk assessment as one that the responsible “agency reasonably can determine will have or does have a clear and substantial impact on important public policies or private sector decisions” (OMB 2006, p. 23). Thus, the categories of risk assessments, and therefore the standards, are based not on inherent properties of risk assessments but on aspects of risk management.

In defining special standards for influential risk assessments, OMB appropriately recognizes that risk assessments that have potentially greater impact should be more detailed, be better supported by data and analyses, and receive a greater degree of scrutiny and critical review than risk assessments likely to have smaller impacts. However, proposing different standards for general and influential risk assessments is problematic for at least three reasons.
First, the determination of what constitutes an influential risk assessment may be unclear at the outset. Although some agencies may be able to identify an influential risk assessment at the onset of the analysis,[1] others may not be able to.[2] The impact of an agency activity that led to the development of the risk assessment may not be known a priori. Some degree of iteration is necessary and appropriate, but the application of additional standards when some arbitrary impact threshold is crossed may lead to needless and inappropriate delays in implementation of the action.

Second, the effort to separate risk assessments arbitrarily into two broad categories does not appropriately recognize the continuum of risk assessment efforts in terms of potential impact on economic, environmental, cultural, and social values. Any attempt to divide that continuum into two categories is unlikely to succeed and will not substantially improve the quality of risk assessments. The use of two categories will tend to lead to costly and slow iterative processes in which a risk assessment may not be judged influential initially but on completion may be found to cross an arbitrary threshold that triggers the additional standards. Additional evaluation and analysis may be appropriate as the impacts of a risk assessment are better identified, but an arbitrary triggering of a new set of standards is not.

Third, the specific standards to be required of all influential risk assessments appear to be targeted at types of risk assessments and supporting information that may not be appropriate for the broad array of risk assessments conducted by federal agencies. Several standards proposed for influential risk assessments appear to be related specifically to human health risk assessments; these standards might not be appropriate for engineering risk assessments that evaluate the safety of structures or systems.

Other issues associated with the standards are discussed in the remainder of this chapter.

[1] See Appendix E, pp. DOE-4 and DOL-5.
[2] See Appendix E, pp. HHS-13, DOD-9, and CPSC-4.

RANGE OF RISK ESTIMATES AND CENTRAL ESTIMATES

One focus in the bulletin is the presentation of a range of risk estimates and a central estimate; statements on this topic in the bulletin and supplementary information are summarized in Table 4-1.
Previous National Research Council (NRC) reports have made relevant comments on this and related topics; selected comments from those reports are provided in Table 4-2. The committee agrees with OMB that in some cases "presentation of single estimates of risk is misleading" and that ranges of "plausible risk" should be presented; however, the challenge lies in the operational definitions of such words as central, expected, and plausible. The committee's concerns regarding the use of those words in the bulletin are presented in this section.
TABLE 4-1 Summary of Bulletin and Supplementary Information on Range of Risk Estimates and Central Estimates

General risk assessment
Bulletin: A risk assessment should "provide a characterization of risk, qualitatively and, whenever possible, quantitatively. When a quantitative characterization of risk is provided, a range of plausible risk estimates shall be provided" (OMB 2006, p. 24).
Supplementary information: The reporting of "multiple estimates of risk (and the limitations associated with these estimates)" is recommended to "convey the precision associated with these estimates" (OMB 2006, p. 13). A discussion of the Safe Drinking Water Act (SDWA) noted the need to describe the population addressed by each risk estimate, the expected risk or central estimate for affected populations, and the upper-bound or lower-bound risk estimates, with uncertainty issues, supporting studies, and methods used in the calculations. The SDWA reporting standards "should be met, where feasible, in all risk assessments which address adverse health effects" (OMB 2006, p. 14).

Risk assessment for regulatory analysis
Bulletin: A risk assessment should include "whenever possible, a range of plausible risk estimates, including central or expected estimates, when a quantitative characterization of risk is made available" (OMB 2006, p. 24).
Supplementary information: The supplementary information expands on issues related to risk assessments used for regulatory analysis. A "range of plausible risk estimates, including central estimates," is required with any quantitative characterization of risk. A "central estimate" is defined to be "(i) mean or average of the distribution; (ii) number which contains multiple estimates of risk based on different assumptions, weighted by their relative plausibility; or (iii) any estimate judged to be most representative of the distribution." This quantity should "neither understate nor overstate the risk, but rather, should provide the risk manager and the public with the expected risk" (OMB 2006, p. 16).

Influential risk assessment
Bulletin: All influential risk assessments shall "highlight central estimates as well as high-end and low-end estimates of risk when such estimates are uncertain" (OMB 2006, p. 25).
Supplementary information: The supplementary information expands on issues related to standards for influential risk assessment. An influential risk assessment should meet the standards for regulatory-analysis risk assessment and additional standards. In particular, the standard for the presentation of numerical estimates directly addresses issues related to central estimates of risk and ranges of risk estimates. Reporting a range avoids a "false sense of precision" when there is uncertainty, and reporting the range and central estimate "conveys a more objective characterization of the magnitude of the risks." The highlighting of only high-end or low-end risk estimates is discouraged (OMB 2006, p. 17). The supplementary information further notes that "central" and "expected" are used synonymously. If model uncertainty is present, the central model could reflect a weighted average of risk estimates from alternative models or some synthesis of "probability assessments supplied by qualified experts" (OMB 2006, p. 17).
TABLE 4-2 Previous NRC Reports Addressing Issues of Uncertainty and Central Risk Estimates

Uncertainty and risk assessment (NRC 1989): NRC (1989), a report on risk communication, discusses uncertainty in the risk assessment process: "Any scientific risk estimate is likely to be based on incomplete knowledge combined with assumptions, each of which is a source of uncertainty that limits the accuracy that should be ascribed to the estimate. Does the existence of multiple sources of uncertainty mean that the final estimate is that much more uncertain, or can the different uncertainties be expected to cancel each other out? The problem of how best to interpret multiple uncertainties is one more source of uncertainty and disagreement about risk estimates" (p. 44).

Central estimates (point estimates) and range of risk estimates (NRC 1989): "When a risk estimate is uncertain, it can be described by a point or 'maximum likelihood' estimate or by a range of possibilities around the point estimate. But estimates that include a wide range of uncertainties can imply that a disastrous consequence is 'possible,' even when expert opinion is unanimous that the likelihood of disaster is extremely small. The amount of uncertainty to present is a judgment that can potentially influence a recipient's judgment" (p. 83).

Averaging predictions (NRC 1994): "Simply put, although classical decision theory does encourage the use of expected values that take account of all sources of uncertainty, it is not in the decision-maker's or society's best interest to treat fundamentally different predictions as quantities that can be 'averaged' without considering the effects of each prediction on the decision that it leads to" (p. 173). "To create a single risk estimate or PDF [probability density function] out of various different models not only could undermine the entire notion of having default models that can be set aside for sufficient reason, but could lead to misleading and perhaps meaningless hybrid risk estimates" (p. 174).

Range of plausible values (NRC 1994): "One important issue related to uncertainty is the extent to which a risk assessment that generates a point estimate, rather than a range of plausible values, is likely to be too 'conservative' (that is, to excessively exaggerate the plausible magnitude of harm that might result from specified environmental exposures)" (p. 180).

Central Estimates of What?

In the supplementary information, a central estimate is defined as a mean of a distribution, the most representative value of a distribution, or a weighted estimate of risk. Each of those definitions, however, depends on context, and together they raise the question of what distribution is being considered. Distributions arise in considerations of both uncertainty and variability.

Variability is an inherent property of a system. Ventilation rates, water-consumption amounts, and body-mass indexes all differ in a population of individuals. Obtaining more data will not reduce variability but will provide a better description of the distribution of a variable trait. In contrast, uncertainty reflects ignorance. The mean body-mass index of a population of individuals is typically an unknown value. Similarly, the true statistical model for a dose-response relationship is unknown. Unlike variability, uncertainty might be reduced by obtaining more data.

Potential confusion arises because distributions might be used to represent both variability and uncertainty. For variability, the distribution corresponds to the different values of a trait or characteristic in a population. For the uncertainty of a parameter value, a distribution might correspond to the sampling distribution of the parameter in repeated samples (a frequentist perspective) or to the actual distribution of the parameter (a Bayesian perspective).

Variability and uncertainty can lead to a distribution of risk. Consider the use of a single dose-response model (without the additional complexity of model uncertainty) to predict the risk of adverse response as a function of dose, and assume that the coefficients of the model must be estimated (that is, uncertainty) and that there is a distribution of exposures in the population (that is, variability). The point estimates of the model coefficients are obtained (for example, central estimates of a sampling distribution or of some posterior distribution). The mean of the exposure distribution (a central estimate) could be substituted into the dose-response model, using the point estimates of the coefficients, to yield a central risk estimate. If the exposure distribution is unimodal and symmetric, that estimate seems like a reasonable central risk estimate; if the exposure distribution is skewed or multimodal, it may be unreasonable. At this point, uncertainty might enter the calculation: confidence or credible intervals for the model coefficients might be used to reflect parameter uncertainty.

The bulletin requires the reporting of a plausible range of risk when a quantitative risk assessment is conducted. How should plausible be defined in this context? For example, would substituting the 2.5th and 97.5th percentiles of the exposure distribution into the dose-response model yield a plausible range? Without more guidance and operational definitions of terms, the bulletin's guidance on central estimates and plausible risk ranges is unclear.
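To make the committee's point concrete, consider a minimal numerical sketch (in Python) of the two-dimensional structure just described. Everything in it is an assumption made for illustration: the one-parameter logistic dose-response form, the normal sampling distribution of the estimated slope, the lognormal exposure distribution, and every parameter value. The sketch propagates parameter uncertainty in an outer loop and population variability in an inner loop, then reports one possible "central estimate" and one possible "plausible range" of the population-average risk.

    import numpy as np

    rng = np.random.default_rng(42)

    # Hypothetical one-parameter logistic dose-response model (illustration only).
    def risk(dose, beta):
        return 1.0 / (1.0 + np.exp(-(beta * dose - 4.0)))

    beta_hat, beta_se = 0.8, 0.15  # assumed slope estimate and its standard error
    N_UNC, N_VAR = 2000, 5000      # draws for uncertainty (outer), variability (inner)

    pop_mean_risk = np.empty(N_UNC)
    for i in range(N_UNC):
        # Uncertainty: one draw of the imperfectly estimated model coefficient.
        beta_i = rng.normal(beta_hat, beta_se)
        # Variability: a population of exposures (assumed lognormal).
        doses = rng.lognormal(mean=0.0, sigma=0.5, size=N_VAR)
        pop_mean_risk[i] = risk(doses, beta_i).mean()

    # One candidate "central estimate" and one candidate "plausible range" of
    # the population-average risk, taken over parameter uncertainty only.
    central = np.median(pop_mean_risk)
    lo, hi = np.percentile(pop_mean_risk, [2.5, 97.5])
    print(f"central: {central:.2e}  plausible range: [{lo:.2e}, {hi:.2e}]")

An equally defensible choice, such as reporting the 2.5th-97.5th percentile range of individual risks across the population rather than the uncertainty interval on the population mean, would yield a different "plausible range" from the same model; that is precisely why operational definitions matter.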
More on "Central" Estimates

Expected value has a technical statistical meaning: the mean of some random variable. Many central estimates can be constructed, including arithmetic means, geometric means, harmonic means, medians, and trimmed means; it is therefore misleading to say that "central" and "expected" estimates are synonymous, as the bulletin suggests (see Table 4-1). As noted in NRC (1994, p. 173), "it is not in the decision-maker's or society's best interest to treat fundamentally different predictions as quantities that can be 'averaged' without considering the effects of each prediction on the decision that it leads to." In fact, a simple "expected" value may not convey an appropriate message.
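The ambiguity is easy to demonstrate: for any skewed distribution, the candidate central estimates listed above diverge. In the following sketch (Python; the lognormal sample is purely illustrative), the arithmetic mean is roughly 65% larger than the median, since a lognormal mean exceeds its median by the factor e^(sigma^2/2).

    import numpy as np

    rng = np.random.default_rng(7)
    # A skewed sample; the lognormal choice is purely illustrative.
    x = rng.lognormal(mean=0.0, sigma=1.0, size=100_000)

    # Middle 80% of the sorted sample, for a 10%-per-tail trimmed mean.
    trimmed = np.sort(x)[int(0.1 * x.size):int(0.9 * x.size)]

    print(f"arithmetic mean : {x.mean():.3f}")   # the statistical expected value
    print(f"geometric mean  : {np.exp(np.log(x).mean()):.3f}")
    print(f"harmonic mean   : {x.size / np.sum(1.0 / x):.3f}")
    print(f"median          : {np.median(x):.3f}")
    print(f"trimmed mean    : {trimmed.mean():.3f}")

Each of these numbers is a legitimate "central" estimate of the same sample, so choosing among them is a genuine decision, not a technicality.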
It may be reasonable to report the central risk estimate along with the lower and upper bounds of the 95% confidence or credible interval and thus provide a sense of the degree of uncertainty in a particular risk assessment. Usually, some number will need to be selected for a risk-management decision, such as setting an "action level" to control a toxicant under consideration or offering guidance about how much exposure is acceptable or "safe," as in the determination of a reference dose or an allowable limit in the workplace. Variability is an important consideration because people might respond differently to a given exposure because of personal vulnerability or behavior that alters the actual exposure. Often in public-health practice and prevention, the goal is to protect the most vulnerable in the population: children, the elderly, people with illnesses (such as respiratory or cardiac disease), the developing fetus, and workers. Using the mean or central estimate would not accomplish that goal unless it reflected the mean response of the distribution of vulnerable or susceptible individuals.

Risk communication is hampered by the use of vague or meaningless terms. For example, there is no such thing as the "average person." Is the average person male or female? Does this person weigh 70 kg? Instead, we have average values of measurable attributes of people. Similarly, terms like central, expected, and plausible should be replaced with precise language.

Reporting Plausible Ranges and Central Estimates

The purpose and context of a risk assessment frequently influence the need for, and indeed the advisability of, reporting a range and a central estimate. Consider, for example, a risk assessment that evaluates risks associated with operations conducted in an extreme environment. The setting of National Aeronautics and Space Administration (NASA) spacecraft exposure guidelines (SEGs) illustrates how decisions made in advance can determine the kinds of data that need to be presented and how risks should be reported. NASA guidelines for chemical exposure on spacecraft, which include SEGs and their predecessors, spacecraft maximum allowable concentrations (SMACs) and spacecraft water exposure guidelines (NRC 1994, 1996a, 1996b, 2000b, 2004), are set to protect astronauts, whose health and job fitness are closely monitored. However, astronauts are engaged in an inherently dangerous activity in an environment that presents unique stressors, such as exposure to high levels of solar radiation (associated risks include cancer and hematopoietic toxicity) and microgravity (associated risks include loss of muscle mass and lowered hematocrit). In addition to protecting the health of astronauts, great emphasis is placed on avoiding exposures that would prevent astronauts from performing mission-critical tasks. The guidelines for chemical exposures are derived with those risks in mind.

Because of NASA's emphasis on safety and the devastating consequences of accidents, relatively conservative assumptions are used in setting exposure guidelines.
In addition, the chemical exposure guidelines are used as design points for the environmental control systems of the spacecraft. In early discussions concerning SMACs, NASA engineers working on those systems indicated a preference for a single guideline to use as a design target. Thus, although SEG documentation discusses uncertainties and limitations and transparently describes the derivation of all SEG values, the values are set at levels thought to be protective, and central estimates or ranges are not reported.

UNCERTAINTY

This section assesses the extent to which the proposed bulletin achieves its goal of enhancing technical quality and objectivity with respect to the treatment of uncertainty. Understanding the current state of best practice is a precondition for improving that practice, so this section first provides a historical perspective on uncertainty in risk assessment; that history supplies the best examples of approaches to uncertainty analysis in the federal agencies. The section then briefly reviews methods used to address uncertainty in risk assessment, notes relevant statements from previous NRC reports, and concludes with the committee's comments on the bulletin's standards related to uncertainty analysis. This section provides a more in-depth discussion than other sections of the report; the level of detail was considered appropriate, given the focus on uncertainty in the bulletin.

Historical Perspective

The desire to do risk assessment properly led to the development of many of the methods for uncertainty analysis, particularly probabilistic risk assessment (PRA, also referred to as probabilistic safety analysis [PSA] in Europe). Although the aerospace industry led the development of reliability engineering, the basic methods for the use of PRA in engineering were developed in the nuclear industry. The concepts of formal and structured development of accident risk scenarios using event trees, extension of the causal models using fault trees, probabilistic treatment of physical dependencies, separation of external and internal sources of risk, and quantification and propagation of parametric uncertainties first appeared in the Reactor Safety Study. This historical development of PRA is discussed below.
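Before turning to that history, a minimal sketch may help fix ideas about what "quantification and propagation of parametric uncertainties" means in a fault-tree setting. The two-gate tree, the event names, and the beta distributions placed on the basic-event probabilities below are all invented for illustration; they are not drawn from the Reactor Safety Study or any agency model.

    import numpy as np

    rng = np.random.default_rng(1)
    N = 100_000  # Monte Carlo samples over the uncertain basic-event probabilities

    # Epistemic uncertainty about each basic-event probability, expressed as a
    # beta distribution (event names and parameters are hypothetical).
    p_pump = rng.beta(2, 200, N)    # pump fails to start
    p_valve = rng.beta(1, 500, N)   # relief valve stuck closed
    p_power = rng.beta(3, 1000, N)  # loss of backup power

    # Fault-tree logic, assuming independent basic events:
    #   TOP = pump failure OR (valve failure AND power failure)
    p_and = p_valve * p_power                     # AND gate: product
    p_top = 1.0 - (1.0 - p_pump) * (1.0 - p_and)  # OR gate: complement rule

    print(f"median top-event probability: {np.median(p_top):.2e}")
    print(f"90% uncertainty interval: [{np.percentile(p_top, 5):.2e}, "
          f"{np.percentile(p_top, 95):.2e}]")

The output is not a single failure probability but a distribution over the top-event probability, which is the form of result that, as described below, NASA now requires to accompany PRA insights reported to decision makers.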
The Aerospace Sector

A systematic concern with PRA began in the aerospace sector after the fire in Apollo flight test AS-204 on January 27, 1967, in which three astronauts were killed. Before the Apollo accident, NASA had relied on its contractors to apply "good engineering practices" to provide quality assurance and quality control. NASA's Office of Manned Space Flight initiated the development of quantitative safety goals in 1969. Quantitative safety goals were not adopted; the reason given at the time was that managers would not appreciate the uncertainty in risk calculations: "the problem with quantifying risk assessment is that when managers are given numbers, the numbers are treated as absolute judgments, regardless of warnings against doing so" (Wiggins 1985).

After the inquiry into the Challenger accident of January 28, 1986, it came to light that distrust of reassuring risk numbers was not the only reason for abandoning quantitative risk assessment: initial estimates of catastrophic-failure probabilities were so high that their publication would have threatened the political future of the entire space program (Bedford and Cooke 2001). Since the shuttle accident, NASA has instituted programs of quantitative risk analysis to support safety during the design and operation phases of space travel. On the basis of an earlier U.S. Nuclear Regulatory Commission document, a PRA procedures guide was published in 2002; it included a chapter on uncertainty analysis (NASA 2002). The current NASA Procedural Requirements document NPR-8705.5, effective as of July 2004, mandates PRA procedures for NASA programs and projects and stipulates that "any PRA insights reported to decision makers shall include an appreciation of the overall degree of uncertainty about the results and an understanding of which sources of uncertainty are critical. Presentation of PRA results without uncertainties significantly detracts from the quality and credibility of the PRA study" (NASA 2004, p. 12).

The Nuclear Sector
Throughout the 1950s, in accordance with President Eisenhower's Atoms for Peace program, the Atomic Energy Commission pursued an approach to risk management that emphasized the use of high-quality components and construction; conservatism in the engineering codes and standards for design, construction, and operation of plants; and conservative analysis of accident scenarios using the "maximum credible accident." Because "credible accidents" were covered by plant design, residual risk was estimated by studying the hypothetical consequences of "incredible accidents." A study released in 1957 focused on three scenarios of radioactive releases from a 200-megawatt (200-MW) nuclear-power plant operating 30 miles from a large population center. Regarding the probability of such releases, the study concluded that "no one knows now or will ever know the exact magnitude of this low probability" (AEC 1957).

Successive design improvements were intended to reduce the probability of a catastrophic release of radioactive material from the reactor, but because of the limitations of the analytic methods, the improvements could not be shown to have a significant effect on the accident estimates. Moreover, as larger reactors, such as 1,000-MW reactors, were planned, the increase in radioactive material in the cores led to larger consequences in the "incredible accident" scenarios. The desire to quantify and evaluate the effects of those improvements led to the introduction of probabilistic risk assessment. Whereas the earlier studies had dealt with uncertainty by making conservative assumptions, the goal now was to provide a realistic, as opposed to conservative, assessment of risk; a realistic risk assessment necessarily involves an assessment of the uncertainty in the risk calculation. The basic methods of PRA developed in the aerospace program in the 1960s found their first full-scale application, including accident-consequence analysis and uncertainty analysis, in the Reactor Safety Study of 1975 (U.S. NRC 1975), which is rightly considered the first modern PRA.

The Reactor Safety Study caused considerable concern within the scientific community. In response to letters from Representative Udall, chairman of the House Committee on Interior and Insular Affairs, the U.S. Nuclear Regulatory Commission created an independent group of experts to review the study's "achievements and limitations." The report of that review group (Lewis et al. 1978) led to a policy statement by the commission (U.S. NRC 1979). The policy statement (1) endorsed the review group's strong criticism of the executive summary, stating that it was misleading and was not a summary of the report, and (2) acknowledged that the peer-review process followed in publishing the Reactor Safety Study was inadequate.
Influential Risk Assessment Standards (from Bulletin) and Technical and Scientific Evaluation (from Committee)

2. Compare the results of the assessment to other results published on the same topic from qualified scientific organizations.

Evaluation: It may be appropriate for some assessments to compare their results with those derived by other scientific organizations regarding such issues as the affected population, geography, time scales, and the definition of adverse effects. However, the bulletin fails to define which comparisons are required. With specialized risk assessments, substantial assumptions may be needed to make comparisons, and when external risk assessments have not followed an agency's own standards, comparisons may undermine the quality of its work. Finally, devoting resources to comparisons with external risk assessments that do not meet the standards may leave less time for the agency's own risk assessments and thus affect the quality and output of agency products.

3. Highlight central estimates as well as high-end and low-end estimates of risk when such estimates are uncertain.

Evaluation: The bulletin's discussion of central estimates and uncertainty is confusing and prevents useful application of the standard. The detailed requirement of a central estimate, as well as high-end and low-end estimates, is clearly inapplicable and inappropriate for some types of risk assessments. Some NRC committees have warned that descriptions of "central estimates" of risk may have little meaning when applied to models for high- to low-dose extrapolation. Finally, the strong emphasis on central estimates in the bulletin means that the most vulnerable people in a population, who almost by definition lie in the tails of the probability distribution, might be underrepresented, depending on the characterization of the central estimate.

4. Characterize uncertainty with respect to the major findings of the assessment, including: document and disclose the nature and quantitative implications of model uncertainty, and the relative plausibility of different models based on scientific judgment; and, where feasible, include a sensitivity analysis and provide a quantitative distribution of the uncertainty.

Evaluation: The aspiration level in the bulletin is at the edge of the current state of the art and exceeds what is practically feasible. The recent NRC assessments of perchlorate, arsenic, and methyl mercury discussed uncertainties qualitatively, not quantitatively. This standard would force agencies into an unsuitable role of requiring basic research before they could perform their assigned roles. The requirements are not fully articulated, either by the bulletin or by the research community; many terms are vague, leaving the requirements not operational. The bulletin therefore does not constitute "technical guidance" and hence cannot "enhance the technical quality…of risk assessments" (OMB 2006, p. 3). (A minimal illustration of the kind of sensitivity analysis the standard contemplates follows this list.)

5. Portray results based on different effects observed and/or different studies to convey how the choice of effect and/or study influences the assessment.

Evaluation: The wording of this standard may undermine good scientific practice. In many cases, basing results on alternative effects can be counterproductive; for methyl mercury, for example, the assessment should be based on the most sensitive effect and not on a less sensitive or "alternate" effect. The presentation of alternative analyses may not be informative, and the standard does not allow risk assessors sufficient flexibility to adapt to the nature of the available data and science.

6. Characterize, to the extent feasible, variability through a quantitative distribution, reflecting different affected population(s), time scales, geography, or other parameters relevant to the needs and objectives of the assessment.

Evaluation: If this standard were implemented literally, few risk assessments could be completed without significant new research and tool development.

7. Where human health effects are a concern, determinations of which effects are adverse shall be specifically identified and justified based on the best available scientific information generally accepted in the relevant clinical and toxicological communities.

Evaluation: The bulletin's definition of adverse effect implies a clinically apparent effect, which ignores public health's fundamental goal of controlling exposures well before the occurrence of functional impairment of the whole organism. Dividing effects into dichotomous categories of "adverse" and "nonadverse" ignores the scientific reality that adverse effects may be manifested along a continuum. Furthermore, many effects of central importance to public health (and risk assessment) are not adverse in themselves but are associated with healthy functioning, for example, carboxyhemoglobin formation, acetylcholinesterase inhibition, and microbial infection. The bulletin proposes simplistic and restrictive guidance concerning adverse effects, which is at odds with relevant science and legislation.

8. Provide discussion, to the extent possible, of the nature, difficulty, feasibility, cost, and time associated with undertaking research to resolve a report's key scientific limitations and uncertainties.

Evaluation: This standard appears to address how much the uncertainty could be reduced with various investments in research. That is not risk assessment but management of risk assessment. Because each risk assessment involves many "default" assumptions, it would be more cost-effective to undertake this activity as an overarching research activity and not as a component of each influential risk assessment.

9. Consider all significant comments received on a draft risk assessment report and: issue a "response-to-comment" document that summarizes the significant comments received and the agency's responses to those comments; and provide a rationale for why the agency has not adopted the position suggested by commenters and why the agency position is preferable.

Evaluation: This appears to represent good practice for any risk assessments that do not already implement such standards (although the existence of such problems is not established by the bulletin). However, requiring a federal agency to provide a rationale for why its position is preferable to positions proposed by commenters is likely to expend excessive resources and might result in less time to devote to agency risk assessments, thus affecting the quality and output of agency products.
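As promised in the evaluation of standard 4, the following minimal one-at-a-time sketch (Python) illustrates the simplest kind of sensitivity analysis the standard appears to contemplate. The multiplicative risk model and all input ranges are invented for illustration and are far simpler than anything an agency assessment would involve.

    # Toy multiplicative risk model (invented for illustration).
    def risk(exposure, potency, body_weight):
        return exposure * potency / body_weight

    base = {"exposure": 2.0, "potency": 0.05, "body_weight": 70.0}  # nominal inputs

    # One-at-a-time sensitivity: swing each input across an assumed low/high
    # range while holding the others at their nominal values.
    ranges = {
        "exposure": (1.0, 4.0),
        "potency": (0.01, 0.10),
        "body_weight": (50.0, 90.0),
    }

    print(f"baseline risk: {risk(**base):.5f}")
    for name, (lo, hi) in ranges.items():
        swing = risk(**{**base, name: hi}) - risk(**{**base, name: lo})
        print(f"{name:12s} swing: {swing:+.5f}")

Even this trivial exercise depends on defensible low and high values for every input; supplying such ranges, let alone full quantitative uncertainty distributions, for real assessments is part of why the committee judges the standard's aspirations to exceed current practice.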

REFERENCES

AEC (U.S. Atomic Energy Commission). 1957. Theoretical Possibilities and Consequences of Major Accidents in Large Nuclear Power Plants. WASH-740. Washington, DC: U.S. Atomic Energy Commission.

Allred, E.N., E.R. Bleecker, B.R. Chaitman, T.E. Dahms, S.O. Gottlieb, J.D. Hackney, M. Pagano, R.H. Selvester, S.M. Walden, and J. Warren. 1989. Short-term effects of carbon monoxide exposure on the exercise performance of subjects with coronary artery disease. N. Engl. J. Med. 321(21):1426-1432.

ANS (American Nuclear Society). 2003. External-Events PRA Methodology: American National Standard. ANSI/ANS-58.21-2003. American Nuclear Society. December 2003.

ASME (American Society of Mechanical Engineers). 2005. Standard for Probabilistic Risk Assessment for Nuclear Power Plant Applications: Addendum B to ASME RA-S-2002. ASME RA-Sb-2005. American Society of Mechanical Engineers. December 30, 2005.

Bailer, A.J., R.B. Noble, and M.W. Wheeler. 2005. Model uncertainty and risk estimation for experimental studies of quantal responses. Risk Anal. 25(2):291-299.

Bedford, T., and R.M. Cooke. 2001. P. 5 in Probabilistic Risk Analysis: Foundations and Methods. Cambridge: Cambridge University Press.

Brown, J., L.H.J. Goossens, F.T. Harper, B.C.P. Kraan, F.E. Haskin, M.L. Abbott, R.M. Cooke, M.L. Young, J.A. Jones, S.C. Hora, A. Rood, and J. Randall. 1997. Probabilistic Accident Consequence Uncertainty Study: Food Chain Uncertainty Assessment, Vols. 1 and 2. NUREG/CR-6523. EUR 16771. SAND97-0335. Prepared for Division of Systems Technology, Office of Nuclear Regulatory Research, U.S. Nuclear Regulatory Commission, Washington, DC, and Commission of the European Communities, Brussels. Luxembourg: Office for Publications of the European Communities [online]. Available: http://www.osti.gov/bridge/servlets/purl/510290-keuF8T/webviewable/510290.pdf and http://www.osti.gov/bridge/servlets/purl/510291-eUsNPE/webviewable/510291.pdf [accessed Oct. 16, 2006].

Brown, J., J. Ehrhardt, L.H.J. Goossens, R.M. Cooke, F. Fischer, I. Hasemann, J.A. Jones, B.C.P. Kraan, and J.G. Smith. 2001. Probabilistic Accident Consequence Uncertainty Assessment Using COSYMA: Uncertainty from the Food Chain Module. EUR 18823. FZKA 6309. European Communities [online]. Available: ftp://ftp.cordis.europa.eu/pub/fp5-euratom/docs/eur18823_en.pdf [accessed Oct. 17, 2006].

Budnitz, R.J., G. Apostolakis, D.M. Boore, L.S. Cluff, K.J. Coppersmith, C.A. Cornell, and P.A. Morris. 1997. Recommendations for Probabilistic Seismic Hazard Analysis: Guidance on Uncertainty and Use of Experts. NUREG/CR-6372. Prepared for Division of Engineering Technology, Office of Nuclear Regulatory Research, U.S. Nuclear Regulatory Commission, Washington, DC, Office of Defense Programs, U.S. Department of Energy, Germantown, MD, and Electric Power Research Institute, Palo Alto, CA, by the Senior Seismic Hazard Analysis Committee (SSHAC) [online]. Available: http://www.osti.gov/energycitations/servlets/purl/479072-krGkYU/webviewable/479072.pdf [accessed Oct. 18, 2006].

Cooke, R.M., and L.H.J. Goossens. 1999. Nuclear Science and Technology: Procedures Guide for Structured Expert Judgment. EUR 18820EN. Prepared for Commission of the European Communities, Directorate-General XI (Environment and Nuclear Safety), Luxembourg, by Delft University of Technology, Delft, The Netherlands. June 1999 [online]. Available: ftp://ftp.cordis.europa.eu/pub/fp5-euratom/docs/eur18820_en.pdf [accessed Oct. 17, 2006].

DOE (U.S. Department of Energy). 1996. Characterization of Uncertainties in Risk Assessment with Special Reference to Probabilistic Uncertainty Analysis. RCRA/CERCLA Information Brief. EH 413-068/0496. U.S. Department of Energy, Office of Environmental Policy and Assistance [online]. Available: http://www.eh.doe.gov/oepa/guidance/risk/un-cert.pdf [accessed Oct. 16, 2006].

Eckerman, K.F., R.W. Leggett, C.B. Nelson, J.S. Puskin, and A.C.B. Richardson. 1999. Cancer Risk Coefficients for Environmental Exposure to Radionuclides. Federal Guidance Report No. 13. EPA 402-R-99-001. Prepared for Office of Radiation and Indoor Air, U.S. Environmental Protection Agency, Washington, DC, by Oak Ridge National Laboratory, Oak Ridge, TN. September 1999 [online]. Available: http://www.epa.gov/radiation/docs/federal/402-r-99-001.pdf [accessed Oct. 16, 2006].

Fischhoff, B., P. Slovic, S. Lichtenstein, S. Read, and B. Combs. 1978. How safe is safe enough? A psychometric study of attitudes towards technological risks and benefits. Policy Sci. 9(2):127-152.

Fischhoff, B., S. Lichtenstein, P. Slovic, S.L. Derby, and R.L. Keeney. 1981. Acceptable Risk. New York: Cambridge University Press.

Garrick, B.J. 1984. Recent case studies and advancements in probabilistic risk assessments. Risk Anal. 4(4):267-279.

Goossens, L.H.J., R.M. Cooke, and B.C.P. Kraan. 1996. Evaluation of Weighting Schemes for Expert Judgement Studies. Prepared for Commission of the European Communities, Directorate-General for Science, Research and Development, XII-F-6, by Delft University of Technology, Delft, The Netherlands. 75 pp.

Goossens, L.H.J., J. Boardman, F.T. Harper, B.C.P. Kraan, R.M. Cooke, M.L. Young, J.A. Jones, and S.C. Hora. 1997. Probabilistic Accident Consequence Uncertainty Study: Uncertainty Assessment for Deposited Material and External Doses, Vols. 1 and 2. NUREG/CR-6526. EUR 16772. SAND97-2323. Prepared for Division of Systems Technology, Office of Nuclear Regulatory Research, U.S. Nuclear Regulatory Commission, Washington, DC, and Commission of the European Communities, Brussels. Luxembourg: Office for Publications of the European Communities [online]. Available: http://www.osti.gov/bridge/servlets/purl/291006-pL3L0D/webviewable/291006.pdf and http://www.osti.gov/bridge/servlets/purl/291007-7XzROO/webviewable/291007.pdf [accessed Oct. 16, 2006].

Goossens, L.H.J., J.A. Jones, J. Ehrhardt, B.C.P. Kraan, and R.M. Cooke. 2001. Probabilistic Accident Consequence Uncertainty Assessment: Countermeasures Uncertainty Assessment. EUR 18821. FZKA 6307. European Communities [online]. Available: ftp://ftp.cordis.europa.eu/pub/fp5-euratom/docs/eur18821_en.pdf [accessed Oct. 17, 2006].

Harper, F.T., L.H.J. Goossens, R.M. Cooke, S.C. Hora, M.L. Young, J. Päsler-Sauer, L.A. Miller, B. Kraan, C. Lui, M.D. McKay, J.C. Helton, and J.A. Jones. 1995. Probabilistic Accident Consequence Uncertainty Study: Dispersion and Deposition Uncertainty Assessment, Vols. 1 and 2. NUREG/CR-6244. EUR 15855 EN. SAND94-1453. Prepared for Division of Systems Technology, Office of Nuclear Regulatory Research, U.S. Nuclear Regulatory Commission, Washington, DC, and Commission of the European Communities, Brussels. Luxembourg: Office for Publications of the European Communities [online]. Available: http://www.osti.gov/bridge/servlets/purl/10125585-aArQNy/webviewable/10125585.pdf and http://www.osti.gov/bridge/servlets/purl/25041-SZccBx/webviewable/25041.pdf [accessed Oct. 17, 2006].

Haskin, F.E., F.T. Harper, L.H.J. Goossens, B.C.P. Kraan, J.B. Grupa, and J. Randall. 1997. Probabilistic Accident Consequence Uncertainty Study: Early Health Effects Uncertainty Assessment, Vols. 1 and 2. NUREG/CR-6545. EUR 16775. SAND97-2689. Prepared for Division of Systems Technology, Office of Nuclear Regulatory Research, U.S. Nuclear Regulatory Commission, Washington, DC, and Commission of the European Communities, Brussels. Luxembourg: Office for Publications of the European Communities [online]. Available: http://www.osti.gov/bridge/servlets/purl/291010-cH8Oey/webviewable/291010.pdf and http://www.osti.gov/bridge/servlets/purl/291011-8Nknmm/webviewable/291011.pdf [accessed Oct. 17, 2006].

HM Treasury (Her Majesty's Treasury). 2005. Managing Risks to the Public: Appraisal Guidance. London: HM Treasury. June 2005 [online]. Available: http://www.hm-treasury.gov.uk/media/8AB/54/Managing_risks_to_the_public.pdf [accessed Oct. 18, 2006].

Hoeting, J.A., D. Madigan, A.E. Raftery, and C.T. Volinsky. 1999. Bayesian model averaging: A tutorial. Stat. Sci. 14(4):382-417.

IRIS (Integrated Risk Information System). 2006. IRIS Database for Risk Assessment. U.S. Environmental Protection Agency [online]. Available: http://www.epa.gov/iris/ [accessed Nov. 10, 2006].

Jones, J., J. Ehrhardt, L.H.J. Goossens, J. Brown, R.M. Cooke, F. Fischer, I. Hasemann, and B.C.P. Kraan. 2001a. Probabilistic Accident Consequence Uncertainty Assessment Using COSYMA: Methodology and Processing Techniques. EUR 18827. FZKA 6313. European Communities [online]. Available: ftp://ftp.cordis.europa.eu/pub/fp5-euratom/docs/eur18827_en.pdf [accessed Oct. 17, 2006].

Jones, J., J. Ehrhardt, L.H.J. Goossens, J. Brown, R.M. Cooke, F. Fischer, I. Hasemann, and B.C.P. Kraan. 2001b. Probabilistic Accident Consequence Uncertainty Assessment Using COSYMA: Overall Uncertainty Analysis. EUR 18826. FZKA 6312. European Communities [online]. Available: ftp://ftp.cordis.europa.eu/pub/fp5-euratom/docs/eur18826_en.pdf [accessed Oct. 17, 2006].

Jones, J., J. Ehrhardt, L.H.J. Goossens, J. Brown, R.M. Cooke, F. Fischer, I. Hasemann, B.C.P. Kraan, A. Khursheed, and A. Phipps. 2001c. Probabilistic Accident Consequence Uncertainty Assessment Using COSYMA: Uncertainty from the Dose Module. EUR 18825. FZKA 6311. European Communities [online]. Available: ftp://ftp.cordis.europa.eu/pub/fp5-euratom/docs/eur18825_en.pdf [accessed Oct. 17, 2006].

Jones, J., J. Ehrhardt, L.H.J. Goossens, R.M. Cooke, F. Fischer, I. Hasemann, and B.C.P. Kraan. 2001d. Probabilistic Accident Consequence Uncertainty Assessment Using COSYMA: Uncertainty from the Early and Late Health Effects Module. EUR 18824. FZKA 6310. European Communities [online]. Available: ftp://ftp.cordis.europa.eu/pub/fp5-euratom/docs/eur18824_en.pdf [accessed Oct. 17, 2006].

Jones, J., J. Ehrhardt, L.H.J. Goossens, R.M. Cooke, F. Fischer, I. Hasemann, and B.C.P. Kraan. 2001e. Probabilistic Accident Consequence Uncertainty Assessment Using COSYMA: Uncertainty from the Atmospheric Dispersion and Deposition Module. EUR 18822. FZKA 6308. European Communities [online]. Available: ftp://ftp.cordis.europa.eu/pub/fp5-euratom/docs/eur18822_en.pdf [accessed Oct. 17, 2006].

Kang, S.H., R.L. Kodell, and J.J. Chen. 2000. Incorporating model uncertainties along with data uncertainties in microbial risk assessment. Regul. Toxicol. Pharmacol. 32(1):68-72.

Kemeny, J. 1979. Report of the President's Commission on the Accident at Three Mile Island: Report of the Public Health and Safety Task Force. Washington, DC: U.S. Government Printing Office. October 1979 [online]. Available: http://www.threemileisland.org/downloads//193.pdf [accessed Nov. 27, 2006].

Kithil, R. 1995. Lightning's Social and Economic Costs. Presentation at International Aerospace and Ground Conference on Lightning and Static Electricity, September 26-28, 1995, Williamsburg, VA [online]. Available: http://www.lightningsafety.com/nlsi_lls/sec.html [accessed Oct. 18, 2006].

Lewis, H.W., R.J. Budnitz, H.J.C. Kouts, W.B. Loewenstein, W.D. Rowe, F. von Hippel, and F. Zachariasen. 1978. Risk Assessment Review Group Report to the U.S. Nuclear Regulatory Commission. NUREG/CR-0400. Nuclear Regulatory Commission, Washington, DC. 76 pp.

Lichtenstein, S., P. Slovic, B. Fischhoff, M. Layman, and B. Combs. 1978. Judged frequency of lethal events. J. Exp. Psychol. Learn. 4:551-578.

Little, M.P., C.M. Muirhead, L.H.J. Goossens, F.T. Harper, B.C.P. Kraan, R.M. Cooke, and S.C. Hora. 1997. Probabilistic Accident Consequence Uncertainty Analysis: Late Health Effects Uncertainty Assessment, Vols. 1 and 2. NUREG/CR-6555. EUR 16774. SAND97-2322. Prepared for Division of Systems Technology, Office of Nuclear Regulatory Research, U.S. Nuclear Regulatory Commission, Washington, DC, and Commission of the European Communities, Brussels. Luxembourg: Office for Publications of the European Communities [online]. Available: http://www.osti.gov/bridge/servlets/purl/291008-wV5DjS/webviewable/291008.pdf and http://www.osti.gov/bridge/servlets/purl/291009-fM9p1b/webviewable/291009.pdf [accessed Oct. 18, 2006].

Lowrance, W.W. 1976. Of Acceptable Risk: Science and the Determination of Safety. Los Altos, CA: W. Kaufmann.

Morales, K.H., J.G. Ibrahim, C.J. Chen, and L.M. Ryan. 2006. Bayesian model averaging with applications to benchmark dose estimation for arsenic in drinking water. J. Am. Stat. Assoc. 101(473):9-17.

NASA (National Aeronautics and Space Administration). 2002. Probabilistic Risk Assessment Procedures Guide for NASA Managers and Practitioners, Version 1.1. Prepared for Office of Safety and Mission Assurance, NASA Headquarters, Washington, DC [online]. Available: http://www.hq.nasa.gov/office/codeq/doctree/praguide.pdf [accessed Oct. 18, 2006].

NASA (National Aeronautics and Space Administration). 2004. Probabilistic Risk Assessment Procedures Guide for NASA Managers and Practitioners. NPR 8705.5. Office of Safety and Mission Assurance, NASA Headquarters, Washington, DC [online]. Available: http://nodis3.gsfc.nasa.gov/npg_img/N_PR_8705_0005_/N_PR_8705_0005_.pdf [accessed Oct. 23, 2006].

NEI (Nuclear Energy Institute). 2000. Probabilistic Risk Assessment Peer Review Process Guidance, Revision A3. NEI-00-02. Washington, DC: Nuclear Energy Institute. March 20, 2000.

NOAA (National Oceanic and Atmospheric Administration). 1995. Natural Hazard Fatalities for the United States, 1994. National Oceanic and Atmospheric Administration, Washington, DC.

NRC (National Research Council). 1983. Risk Assessment in the Federal Government: Managing the Process. Washington, DC: National Academy Press.

NRC (National Research Council). 1989. Improving Risk Communication. Washington, DC: National Academy Press.

NRC (National Research Council). 1991. Human Exposure Assessment for Airborne Pollutants. Washington, DC: National Academy Press.

NRC (National Research Council). 1993. Issues in Risk Assessment, Vols. I, II, and III. Washington, DC: National Academy Press.

NRC (National Research Council). 1994. Spacecraft Maximum Allowable Concentrations for Selected Airborne Contaminants, Vol. 1. Washington, DC: National Academy Press.

NRC (National Research Council). 1996a. Spacecraft Maximum Allowable Concentrations for Selected Airborne Contaminants, Vol. 2. Washington, DC: National Academy Press.

NRC (National Research Council). 1996b. Spacecraft Maximum Allowable Concentrations for Selected Airborne Contaminants, Vol. 3. Washington, DC: National Academy Press.

NRC (National Research Council). 2000a. Toxicological Effects of Methylmercury. Washington, DC: National Academy Press.

NRC (National Research Council). 2000b. Spacecraft Maximum Allowable Concentrations for Selected Airborne Contaminants, Vol. 4. Washington, DC: National Academy Press.

NRC (National Research Council). 2002. Estimating the Public Health Benefits of Proposed Air Pollution Regulations. Washington, DC: National Academies Press.

NRC (National Research Council). 2004. Spacecraft Water Exposure Guidelines for Selected Contaminants, Vol. 1. Washington, DC: National Academies Press.

NRC (National Research Council). 2005. Health Implications of Perchlorate Ingestion. Washington, DC: National Academies Press.

OMB (U.S. Office of Management and Budget). 2003. Regulatory Analysis. Circular A-4 to the Heads of Executive Agencies and Establishments, September 17, 2003 [online]. Available: http://www.whitehouse.gov/omb/circulars/a004/a-4.pdf [accessed Oct. 12, 2006].

OMB (U.S. Office of Management and Budget). 2006. Proposed Risk Assessment Bulletin. Released January 9, 2006. Washington, DC: Office of Management and Budget, Executive Office of the President [online]. Available: http://www.whitehouse.gov/omb/inforeg/proposed_risk_assessment_bulletin_010906.pdf [accessed Oct. 11, 2006].

Raftery, A.E. 1995. Bayesian model selection in social research. Sociol. Methodol. 25:111-163.

Regli, S., J.B. Rose, C.N. Haas, and C.P. Gerba. 1991. Modelling the risk from Giardia and viruses in drinking water. Am. Water Works Assoc. J. 83(11):76-84.

Rogovin, M., and G.T. Frampton. 1980. Three Mile Island: A Report to the Commissioners and the Public. Washington, DC: U.S. Government Printing Office [online]. Available: http://www.threemileisland.org/downloads//354.pdf [accessed Nov. 27, 2006].

Sheps, D.S., M.C. Herbst, A.L. Hinderliter, K.F. Adams, L.G. Ekelund, J.J. O'Neil, G.M. Goldstein, P.A. Bromberg, J.L. Dalton, M.N. Ballenger, et al. 1990. Production of arrhythmias by elevated carboxyhemoglobin in patients with coronary artery disease. Ann. Intern. Med. 113(5):343-351.

Slovic, P. 2000. The Perception of Risk. London: Earthscan.

Tversky, A., and D. Kahneman. 1974. Judgment under uncertainty: Heuristics and biases. Science 185(4157):1124-1131.

U.S. NRC (U.S. Nuclear Regulatory Commission). 1975. Reactor Safety Study: An Assessment of Accident Risks in U.S. Commercial Nuclear Power Plants. WASH-1400. NUREG-75/014. Washington, DC: U.S. Nuclear Regulatory Commission.

U.S. NRC (U.S. Nuclear Regulatory Commission). 1979. Nuclear Regulatory Commission Issues Policy Statement on Reactor Safety Study and Review by Lewis Panel: NRC Statement on Risk Assessment and the Reactor Safety Study Report (WASH-1400) in Light of the Risk Assessment Review Group Report, January 18, 1979. No. 79-19. Office of Public Affairs, U.S. Nuclear Regulatory Commission, Washington, DC.

U.S. NRC (U.S. Nuclear Regulatory Commission). 1983. PRA Procedures Guide: A Guide to the Performance of Probabilistic Risk Assessments for Nuclear Power Plants. NUREG/CR-2300. Washington, DC: U.S. Nuclear Regulatory Commission.

U.S. NRC (U.S. Nuclear Regulatory Commission). 1990. Severe Accident Risks: An Assessment for Five U.S. Nuclear Power Plants. Vol. 1: Final Summary Report; Vol. 2: Appendices A, B, and C; Vol. 3: Appendices D and E. NUREG-1150. Division of Systems Research, Office of Nuclear Regulatory Research, U.S. Nuclear Regulatory Commission, Washington, DC [online]. Available: http://www.nrc.gov/reading-rm/doc-collections/nuregs/staff/sr1150/ [accessed Oct. 20, 2006].

U.S. NRC (U.S. Nuclear Regulatory Commission). 2006. An Approach for Determining the Technical Adequacy of Probabilistic Risk Assessment Results for Risk-Informed Activities. Draft Regulatory Guide DG-1161. Office of Nuclear Regulatory Research, U.S. Nuclear Regulatory Commission, Washington, DC [online]. Available: http://ruleforum.llnl.gov/cgi-bin/downloader/rg_lib/123-0198.pdf [accessed Oct. 20, 2006].

Vesely, W.E., F.F. Goldberg, N.H. Roberts, and D.F. Haasl. 1981. Fault Tree Handbook. NUREG-0492. Systems and Reliability Research, Office of Nuclear Regulatory Research, U.S. Nuclear Regulatory Commission, Washington, DC. January 1981 [online]. Available: http://www.nrc.gov/reading-rm/doc-collections/nuregs/staff/sr0492/sr0492.pdf [accessed Oct. 23, 2006].

Wallace, R.B., ed. 1998. Maxcy-Rosenau-Last Public Health and Preventive Medicine, 14th Ed. Stamford, CT: Appleton and Lange.

Wiggins, J. 1985. ESA Safety Optimization Study. HEI-685/1026. Hernandez Engineering, Houston, TX.

Wittenberg, E., S.J. Goldie, B. Fischhoff, and J.D. Graham. 2003. Rationing decisions and individual responsibility in illness: Are all lives equal? Med. Decis. Making 23(3):194-221.