
Public Participation in Environmental Assessment and Decision Making (2008)

Chapter: 6 Practice: Integrating Science


6 Practice: Integrating Science

Because of the substantial scientific content of environmental policy issues, efforts to engage the public must address the issue of integrating science and public participation. This chapter elaborates on the need for integration, particularly in terms of achieving the objective of quality. We note a series of challenges posed by this need to integrate, assess available knowledge relevant to meeting them, and identify several norms and procedures that promote the successful integration of science and public participation in environmental policy. In Chapter 4, we note that many of the practices required for effective public participation are also sound management practices. As this chapter shows, the practices required for shaping processes to integrate science and participation are also sound practices for organizing science to inform public policy. This chapter concludes with a discussion of the issue of implementing the principles of good practice set forth in this chapter and the previous two.

The evidence reviewed in this chapter shows that integrating science and public participation through processes that iterate between analysis and broadly based deliberation—as recommended in Understanding Risk (National Research Council, 1996) and subsequent National Research Council (e.g., 1999a, 2005a) reports—promotes the quality, accountability, and legitimacy of environmental assessments and decisions. In contrast, processes that treat analysis and deliberation in isolation from each other impede both analysis and deliberation. Evidence from various sources suggests that efforts to integrate science and public participation are more likely to produce satisfactory results if they follow five specific principles:

1. Availability of decision-relevant information: The processes ensure that decision-relevant information is accessible and interpretable to all participants and that decision-relevant analyses are available in open sources and presented in enough detail to allow for independent review.

2. Explicit attention to both facts and values: Efforts are made to identify the values at stake, to consider different formulations of the problem to be analyzed that may embody different values or concerns (especially in the initial design phase of a public participation process), and to analyze how the available choice options affect various values.

3. Explicit description of analytic assumptions and uncertainties: The analysis and deliberation include the implications of different assumptions and different possible actualizations of uncertain factors.

4. Independent review: Official analyses are reviewed by other competent analysts who are credible to the parties.

5. Iteration: Past conclusions are reconsidered on the basis of new information and analysis.

INTEGRATION

In Chapter 2, we define quality in environmental assessments and decisions in terms of five elements:

1. identification of the values, interests, and concerns of the agencies, scientists, and other parties that are interested in, or might be affected by, the environmental process or decision;
2. identification of the range of actions that might be taken (for decisions);
3. identification and systematic consideration of the effects that might follow from the environmental processes or actions being considered, including uncertainties about these effects, and consideration of the kinds of impacts that deserve consideration given the values, interests, and concerns of those affected;
4. outputs consistent with the best available knowledge and methods relevant to the above tasks, particularly the third; and
5. incorporation of new information, methods, analyses, and concerns that arise over time.

A good decision has been defined as one that is logically consistent with what is known (e.g., information, including uncertainties), what the decision maker (or the constituencies that he or she represents) wants (i.e., values and preferences about the possible effects), and what the decision can do (management alternatives or actions) (Howard, 1966, 1968; Raiffa, 1968). Approaching decision making in this way seems like common sense (North, 1968), but in practice it is difficult to do, especially with complex decisions affecting the environment.
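To make this framing concrete, here is a minimal sketch in Python of a hypothetical levee decision; every alternative, probability, and dollar figure is invented for illustration. The probabilities stand for what is known, the costs for what is wanted (here, avoided), and the listed alternatives for what can be done.

```python
# A minimal decision-analysis sketch: what is known (probabilities),
# what is wanted (costs to be minimized), what can be done (alternatives).
# All names and numbers are hypothetical, for illustration only.

alternatives = {
    # name:           (upfront cost, P(damaging flood), damage if flood)
    "do nothing":     (0.0,   0.20, 500.0),   # $ millions
    "upgrade levee":  (50.0,  0.02, 500.0),
    "relocate homes": (200.0, 0.20, 20.0),
}

def expected_cost(upfront, p_flood, damage):
    """Expected total cost: certain upfront cost plus probability-weighted damage."""
    return upfront + p_flood * damage

for name, params in alternatives.items():
    print(f"{name:15s} expected cost: ${expected_cost(*params):6.1f}M")

# A choice logically consistent with these inputs minimizes expected cost.
best = min(alternatives, key=lambda name: expected_cost(*alternatives[name]))
print("preferred alternative:", best)  # here, "upgrade levee"
```

Changing any input, whether a probability, a damage estimate, or whose costs are counted, can change the preferred alternative, which is why the rest of this chapter stresses making such inputs explicit and open to review.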

Furthermore, environmental decisions also involve considerations of fairness and learning from experience that are not explicit parts of most formal decision frameworks (Dietz, 2003).

All of these criteria for good decisions can be met only if scientific analysis is used effectively. Uncertainties in the best available knowledge (element 4 above), regardless of whether this knowledge comes from data collected using scientific methods, from the judgments of scientific experts, or from observations made without the use of formal methodologies, must be considered as a part of scientific analysis, using appropriate qualitative and quantitative methods. Of course, the interested and affected parties to a decision are generally the best judges of what they want and of their values—but without scientific analysis, they may not know when or how environmental decisions affect those values. For example, as science showed that climate change will affect not only average temperature, but also the frequency and intensity of coastal storms, floods, droughts, wildfires, and so forth, some people who had considered themselves at little risk reconsidered their positions.

Scientists are usually in the best position to identify and systematically consider the effects of environmental processes and actions. However, good scientific analysis often requires information about local context that is most likely to come from people with close experience with local conditions. In a well-known example, British authorities advised sheep farmers after the Chernobyl nuclear accident that they could avoid radioactive contamination of their flocks by simply keeping the lambs out of the valleys. But the farmers knew that the fields were unfenced, so the solution was not practical and the risk was greater than the government scientists thought (Wynne, 1989).

These examples are among many that could be cited to show that integrating scientific analysis and public input requires more than a handoff of tasks from one group to another. The public cannot make good value judgments without good science, and scientists cannot do good decision-oriented analysis without public input. Recognizing the latter point, many policy reviews have advocated integration of public input into environmental assessment processes that have traditionally been dominated by science (e.g., National Research Council, 1996, 1999a, 1999b, 2005a, 2007a; Presidential/Congressional Commission on Risk Assessment and Risk Management, 1997a,b). They recognize that past nonintegrated assessment efforts have suffered in terms of both quality and legitimacy because they did not fully incorporate information (including appropriate consideration of uncertainties) and concerns coming from various affected parties.

These studies represent an important departure from previous thinking about how to conduct environmental assessments for informing practical decisions. They show that public input holds the potential to avoid the repetition of past failures, although they provide few specifics on how best to integrate public input to achieve its theoretical benefits. Most importantly, they do not address in much detail several well-known difficulties that present significant challenges to successful integration of public input and scientific analysis.

CHALLENGES OF INTEGRATION

Integration is challenging for several reasons, including attributes of the science involved, characteristics of the public, and the difficulties of communication.

Challenges Related to Science

One set of scientific challenges arises from lack of data, the complexity of environmental processes, and the uncertainty of scientific knowledge about environmental processes. In the absence of precise knowledge, a traditional decision rule is to be conservative: either choose options that include a margin of safety that is adequate to avoid bad effects or outcomes, or choose analytic procedures to avoid underestimating the probability of the bad effects or outcomes. Another approach is to analyze how each bad outcome might occur, with a major effort to understand the sequence of events or the characteristics of situations in which bad outcomes or effects may occur. Guidelines for good practice in such analyses may be a useful way to deal with many decision situations without a need for repeated, expensive, and time-consuming analyses. But it must be recognized that such guidelines typically include important value judgments on how conservative the analysis or decision process should be.

Especially in cases in which scientific knowledge is evolving, it may be appropriate to have both guidelines and a procedure to depart from the guidelines (see Box 6-1). Departure from the guidelines may involve detailed analysis, including formal probabilistic methods to characterize uncertainties affecting what can go wrong and lead to bad outcomes. Such methods make the value judgments about how to deal with uncertainty or how to make trade-offs among different bad outcomes explicit, rather than leaving them implicit in guidelines and therefore not open for review or discussion. We emphasize that such trade-offs, although not part of science, are a critical input into the decision-making process.

Another challenge comes from the possibility that informing environmental decisions may require some reconsideration of standard approaches to scientific epistemology (Funtowicz and Ravetz, 1993; Rosa, 1998). For example, standard scientific practice places the burden of statistical proof on the data that show associations or causal relationships.

BOX 6-1
Guidelines for Analysis Under Uncertainty and Departures from Guidelines

A 1983 National Research Council (NRC) report developed the idea of default assumptions as a means to bridge across uncertainties surrounding the risk posed by chemicals that may cause cancer in humans. In the context of a great many decisions on regulating a multitude of such chemicals in the environment, it was judged appropriate to have a standard set of assumptions, called "inference options" (National Research Council, 1983) and later "defaults" (e.g., National Research Council, 1994), so that cancer risks could be estimated for many chemicals in a consistent and standardized way, rather than using different procedures for different chemicals. The default assumptions were to be chosen conservatively, so that human cancer risk is more likely to be overestimated than underestimated. Guidelines for Carcinogen Risk Assessment of the U.S. Environmental Protection Agency (EPA) (1986:21) described the estimate as a "plausible upper limit to the risk that is consistent with some proposed mechanisms of carcinogenesis. Such an estimate, however, does not necessarily give a realistic prediction of the risk. The true value of the risk is unknown, and may be as low as zero." Following the 1983 NRC report, federal and EPA cancer risk guidelines were established so that agency procedures for calculating risk from human exposure to potentially carcinogenic chemicals became more predictable for all of the interested and affected parties.

Guideline-based procedures for cancer risk assessment have been criticized as slow, resistant to change in response to new scientific information, opaque to nonscientists, and, perhaps most important, as making value judgments invisible and less open to discussion. Although both the NRC reports endorsed the use of defaults, both also stressed the need for iterative processes in which standard procedures using the guidelines could be used for screening, priority setting, and more routine decision making, while exceptions would be permitted when warranted by the importance of the decision situation and by new scientific information. The Presidential/Congressional Commission on Risk Assessment and Risk Management (1997a,b) also recommended an iterative process that includes both risk analysis and public deliberation. Although EPA's cancer risk guidelines have recently been modified (U.S. Environmental Protection Agency, 2005) to allow more flexibility, only a few examples of departures from defaults have occurred in EPA's cancer risk assessments for specific chemicals.

The default presumption, or "null hypothesis," is that there are no effects; to reject that presumption, the data must be so inconsistent with it as to render it highly unlikely (i.e., a likelihood of less than 5 percent or 1 percent). This conservative approach comes at the cost of treating associations and causal relationships that do not meet that high standard of proof as if they are not present. In risk management contexts, this practice, followed naively, could lead to ignoring consequential risks and costs—risks and costs that might be of great concern to the public and its well-being.
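A small numerical illustration, with hypothetical numbers, shows the asymmetry: suppose the background rate of a disease is 3 percent and 6 cases are observed among 100 exposed people. A conventional one-sided test fails to reject "no effect" at the 5 percent level, even though the observed rate is double the background.

```python
# Hypothetical illustration of the burden-of-proof asymmetry.
# Background disease rate 3%; 6 cases observed among 100 exposed people.
from scipy.stats import binomtest

result = binomtest(k=6, n=100, p=0.03, alternative="greater")
print(f"observed rate: {6 / 100:.0%} vs. background 3%")
print(f"one-sided p-value: {result.pvalue:.3f}")  # about 0.08

# Conventional practice reports "no statistically significant effect"
# because p > 0.05, yet the point estimate (6%) is double the background
# rate, an excess that a risk manager or an affected community might
# weigh very differently than a flat "no effect" finding.
```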

Thus, standard conservative scientific practices for making knowledge claims may lead to misunderstandings in risk management contexts. Alternative practices are available, such as eliciting scientists' judgment in the form of probability estimates. Characterizing uncertainty in terms of probabilities based on judgment has been widely advocated (e.g., Raiffa, 1968; Morgan and Henrion, 1990; Intergovernmental Panel on Climate Change, 2001; National Research Council, 2002b), and some applications have been carried out (e.g., Howard et al., 1972; Morgan and Keith, 1995; Moss and Schneider, 2000). However, many scientists remain skeptical, and this approach is not yet standard practice for environmental assessments.

A third challenge is for scientists to gain sufficient understanding of what the parties need to know to direct their efforts toward providing decision-relevant information. The possibility that the available science may not be seen as useful by the intended users has often been noted as an impediment to public use of and trust in scientific information that is meant to be useful for decision making (e.g., National Research Council, 1989, 1999a, 2007b). Thus, it is important to ensure that the analyses being conducted make sense to the parties involved.

Challenges Related to the Public

Many interested and affected parties lack sufficient technical and scientific background to understand the scientific issues as scientists present them. It is impractical to educate all participants, so this challenge requires that someone perform a translational role in linking publics to the relevant science.

Another challenge is that most people, under most conditions, do not carefully consider all the information relevant for analyzing complex issues. Instead, they apply cognitive shortcuts, called heuristics, that do not follow rules of logical reasoning and that affect their understanding of environmental, health, and safety risks, as noted in Chapter 2. But as noted in Chapter 5, given the time, resources, and motivation, nonscientists can become quite adept at critically understanding complex scientific analyses. At the same time, scientists are also imperfect analysts, subject to heuristic processing as well as to disciplinary blinders and other factors that predispose them to deviations from normative ideals, although scientific communities have developed various norms and procedures that provide some safeguards against individual biases and overconfidence (see below).

Another challenge is that what people want can appear unstable or inconsistent. The values that people consider in expressing preferences can be influenced by the way choices are framed (e.g., Tversky and Kahneman, 1981; Tversky, Slovic, and Kahneman, 1990; Payne, Bettman, and Johnson, 1992; Gregory, Lichtenstein, and Slovic, 1993; Slovic, 1995; Payne, Bettman, and Slovic, 1999). There are strategies for eliciting people's values that show promise for addressing this limitation (e.g., Saaty, 1990; Keeney, 1992; Hammond, Keeney, and Raiffa, 1999).

Additional challenges arise from the diversity of participants' values, interests, and concerns. People can be expected to attend to different aspects of an environmental issue, to draw different conclusions from the same information, and otherwise to engage in modes of thinking and analysis that may be mutually unintelligible or even mutually provocative. There are analytic techniques, such as benefit-cost analysis and risk analysis, that can help organize information and thinking about choice options and values. However, the use of these techniques, especially as formulas for decision making, has been questioned on methodological grounds because of the concern that they, like default assumptions, make value judgments opaque and inaccessible for political debate (see, e.g., Presidential/Congressional Commission on Risk Assessment and Risk Management, 1997a,b). When value judgments are embedded in analytical techniques that nonscientists do not understand, risks are being taken with the legitimacy of an assessment process.

These challenges are outlined in greater detail in a background paper prepared for this study by DeKay and Vaughan (2005). They underscore the need to find ways to organize thinking at the collective level that can overcome the limitations of individual cognition. This hope motivates many calls for public deliberation, but deliberation too faces challenges, in the form of some well-established pathologies of group discussion and decision making, such as the possibility of increased polarization or inappropriately strong influence from high-status individuals (discussed in Chapter 5; see also Levine and Moreland, 1998; Mendelberg, 2002; Stern, 2005b). One recent review concluded that "left to their own devices, groups tend to use information that is already shared, downplaying unique information held by specific individuals that arguably could improve the situation" (Delli Carpini, Cook, and Jacobs, 2004:328). Research on group process also suggests, however, that under appropriate conditions, group discussion can improve decisions by increasing the use of information that is not commonly shared (e.g., Winquist and Larson, 1998; Kelly and Karau, 1999).

Challenges of Communication

Given the differences among participants in funds of knowledge, habits of thinking, analytic languages and methods, and values and concerns, communication is likely to be problematic. One challenge is presented by the fact that to understand environmental systems and their complex relations to human activity, scientists often use mathematical models and statistical and probabilistic methods of analysis that are difficult for nonspecialists to understand. Moreover, the extent of uncertainty or disagreement among scientists on a complex environmental issue may also be hard for nonscientists to understand. The challenges of making science understood have been described in extensive bodies of research on risk communication and the use of science to support environmental decisions (for some reviews, see Fischhoff, 1989; National Research Council, 1989, 1999a, 2007b).

There are also differences in how knowledge is validated between the scientific community and others. On one hand, scientists have learned to trust the norms of their community as a control on the honesty and quality of scientific work, while other participants may not share these norms or trust that community as an arbiter (National Research Council, 2007a). On the other hand, the valuable, locally grounded knowledge that nonscientists can bring to the analysis of environmental problems usually is not developed via scientific inquiry, so melding it with traditional science and vetting it through review processes that scientists accept can be challenging. The Millennium Assessment (Reid et al., 2005), the U.S. National Assessment on Climate Change and Variability (U.S. National Assessment Synthesis Team, 2000), and the Arctic Climate Impact Assessment (Arctic Climate Impact Assessment, 2004) all have made special efforts to include locally grounded knowledge in environmental assessments, but effective processes for doing so are only beginning to be explored and represent a special challenge for the future.

MEETING THE CHALLENGES

This section reviews and draws conclusions from four sources of knowledge and insight about how to effectively integrate science and public input: decision analysis, research on environmental assessments and decisions, the practice of science at the frontiers of knowledge, and experience dealing with uncertain and disputed knowledge in various social arenas.

Decision Analysis

Decision analysis is a paradigm for supporting decisions that has emerged over the past half century, building on ideas from economics, systems analysis, and many other areas of science, and that is now widely applied in business and governmental decision making (Howard, 1966, 1968; North, 1968; Raiffa, 1968; Fishburn, 1981; Behn and Vaupel, 1982; von Winterfeldt and Edwards, 1986; Clemen, 1996; National Research Council, 1996; Hammond, Keeney, and Raiffa, 1999). Essentially, it uses logical methods to break a problem into elements; describe what is known (including what is known about uncertainties), what is desired (goals, objectives, values), and what is possible (the management actions available); and apply an analytical structure to evaluating the alternative actions. North and Renn (2005) provide a more detailed discussion in relation to the present context.

The key aspect of decision analysis is the use of logic, especially mathematics and probability theory, to represent relationships between environmental management actions and subsequent effects. Such use of logic is basic practice in most fields of science and engineering, as well as in such common activities as making a household budget or preparing a tax return. However, in both scientific practice and everyday life, systematic errors in decision making are common, so decision analysis identifies logics and procedures to overcome these common and often subtle mistakes. Decision analysts have developed useful insights about how to organize complex, incomplete, and uncertain scientific information in ways that are both logically consistent and intelligible to nonscientists. These tools are not unique to decision analysis but are widely used in science, government, and business management.

A decision analysis approach has been used to inform a great variety of environmental decisions, including weather modification applied to hurricanes (Howard, Matheson, and North, 1972), control of sulfur oxide emissions from coal plants (North and Merkhofer, 1976), the probability of contaminating Mars from Viking landings, applications to sanitary and phytosanitary protection standards (North, 1995; National Research Council, 2000), acid rain (North and Balson, 1985), U.S. government commercialization of synthetic fuels (Tani, 1978), and drinking water contamination by arsenic (North, Selker, and Guardino, 2002).

Five principles for organizing and presenting scientific information flow from research and practice in decision analysis: they cover accuracy, uncertainty, the use of models, the use of sensitivity analysis, and the use of disagreements.

Ensure accurate calculations. Quantitative analysis of how management options affect outcomes can be very complex. Calculations must be done correctly, and inputs and assumptions in scientific analyses must be explicit and available for critical review. The process needs to be transparent and subject to review for correctness by outside parties.

Characterize uncertainty in the form of probabilities. An important analytical tool in decision analysis is the use of probabilities to characterize uncertainty (Savage, 1954; Raiffa, 1968). Indeed, probability theory is the only logically consistent way to reason about uncertainty (Cox, 1961; Jaynes, 2003). Probabilities are usually calculated from statistics on past events. However, probability is also useful for characterizing uncertainty in the absence of statistical data (e.g., uncertainty about the outcomes of possible management choices; see, e.g., National Research Council, 1996, 2002b, 2007b). In one tradition, various experts are asked to express their best professional judgment as a probability. It could be stated as willingness to place a bet on an uncertain outcome for which the probabilities are known (e.g., flipping a coin, rolling a die, or spinning a ball onto a roulette wheel; see, e.g., Savage, 1954; Raiffa, 1968; Morgan and Henrion, 1990) or by using ordinary qualitative expressions, such as "very unlikely," that have been keyed to probability numbers (Moss and Schneider, 2000). Another tradition in science (Cox, 1961; Jeffreys, 1961; Jaynes, 2003) holds that probability theory is a logic for inference and that probabilities should reflect the available evidence relevant to the uncertain event or variable. Those carrying out or interpreting such probability assessments need to understand the subtleties of human judgment about uncertainty (Spetzler and Staël von Holstein, 1975; Kahneman, Slovic, and Tversky, 1982; Wallsten and Budescu, 1983) and to recognize that such judgments are imprecise. Important uncertainties may warrant careful assessment by multiple experts under carefully designed protocols.
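As a minimal sketch of how elicited judgments from multiple experts might be combined, consider a linear opinion pool, one standard aggregation method that simply averages the experts' probabilities. The question, the experts, and the numbers below are invented, and equal weighting is only one of several defensible choices.

```python
# Combining elicited expert probabilities with an equal-weight linear
# opinion pool. Experts and numbers are invented for illustration; real
# elicitations follow structured protocols to limit judgment biases.

# Each expert's elicited probability that a contaminant plume reaches
# the site boundary within 10 years (a hypothetical question).
elicited = {"expert_a": 0.05, "expert_b": 0.20, "expert_c": 0.10}

pooled = sum(elicited.values()) / len(elicited)
print(f"pooled probability: {pooled:.3f}")

# The disagreement among experts is itself decision-relevant: it shows
# how much the judgments, not just the data, diverge.
spread = max(elicited.values()) - min(elicited.values())
print(f"range across experts: {spread:.2f}")
```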

Use models carefully to represent complex realities. Describing how management actions will affect the environment and human health usually involves a large number of relationships, such as growth processes, interactions of various species within an ecosystem, and the transport and transformation of pollutants in the environment. These relationships are often too complex to describe and comprehend in nonscientific language, so scientists often describe them quantitatively in the form of mathematical models that are implemented as computer programs. Modern computers enable very large numbers of relationships to be specified, so that effects following environmental management decisions can be calculated through a large number of steps from assumptions and data input to the model. The use of such models is widespread in scientific disciplines related to the environment and in federal agencies (National Research Council, 2007c). Such models are often very difficult to understand, especially for nonscientists, and yet such understanding is critical for effective participation in decisions that use them.

Although a model may be presented as representing science, participants in an analytic-deliberative process need to understand that a model is not scientific reality. A model is an abstraction from reality that is based on many assumptions and judgments about how nature works, which elements are most important to represent, and which elements may be left out without compromising accuracy. Thus, a model is always an embodiment of scientific judgment that was developed with a purpose in mind. As Levins (1966) noted, any model must make a trade-off among precision, realism, and generality. Although experienced modelers may understand how a particular model deals with these trade-offs, the trade-offs are not obvious to the public or even to scientists from other fields. Models can be very useful for describing how environmental systems work and how management actions may affect the environment. But models and the results they calculate can be wrong or misleading, especially when a model was developed in a different context, the assumptions are not appropriate, or the input data are incorrect. Scientific peer review and comparison with results calculated by other models that describe the same environmental system are very important for interpreting results from a model. Some models in science can provide the basis for very accurate predictions, but in the area of environmental assessment and decision making, the accuracy of model predictions may not be high. In many cases it is useful to examine how uncertainties in the inputs (data, parameters, model assumptions) translate into uncertainty in the effects or outcomes of interest for environmental systems.

Use sensitivity analysis to find out which elements are important and which are not. Environmental decisions typically turn on projections of the impacts on an environmental or human system over time of each of the options being considered. Thus, projections, whether from models or from the judgment of experts, depend on assumptions about how the system works and on the data being used to make the projections. Sometimes the projected impacts differ greatly depending on particular assumptions. It is therefore useful to carry out a systematic analysis of sensitivity to data inputs and assumptions. Sensitivity analysis is done by listing each input or model assumption and asking how the projected impact would change if this factor were different, within a range judged to be reasonable.

For example, in a study of water management in a river basin, one would want to evaluate management policies not only under an assumption of average precipitation, but also under scenarios of a series of wet years or dry years. If the evaluation of the management alternatives changes as the precipitation levels vary from wetter to drier, one might choose options that yield good results across scenarios or undertake further study of ways to manage the river to reduce the sensitivity to precipitation. If variation in precipitation is found not to be important, further study of this topic and associated refinement of the model is not appropriate. An important value of sensitivity analysis is in identifying factors that do not have a strong effect on the impacts of interest. Discussion and debate can then move away from those factors and concentrate on the ones that appear to be more critical to the decision.
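The procedure just described can be sketched in a few lines of Python. The toy river-basin model, its base-case inputs, and the low/high ranges below are all invented stand-ins; only the one-at-a-time logic mirrors the text.

```python
# One-at-a-time sensitivity analysis for a toy river-basin model.
# The model, inputs, and ranges are invented for illustration only.

def water_supplied(precip_mm, demand_index, reservoir_eff):
    """Toy model of annual water supplied (arbitrary units)."""
    return precip_mm * reservoir_eff - 0.4 * demand_index

base = {"precip_mm": 800.0, "demand_index": 900.0, "reservoir_eff": 0.55}
ranges = {                          # plausible low/high values per input
    "precip_mm": (500.0, 1100.0),   # dry years vs. wet years
    "demand_index": (800.0, 1000.0),
    "reservoir_eff": (0.50, 0.60),
}

print(f"base-case output: {water_supplied(**base):.1f}")

# Vary each input across its range while holding the others at base values.
for name, (low, high) in ranges.items():
    outputs = [water_supplied(**{**base, name: value}) for value in (low, high)]
    swing = max(outputs) - min(outputs)  # how much this input alone matters
    print(f"{name:14s} swing: {swing:6.1f}")

# Inputs with small swings can be set aside; inputs with large swings
# (here, precipitation) warrant further study or more robust options.
```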

Often models have a great many assumptions and input parameters. Outside review of the model by experts in the relevant technical disciplines may be useful in identifying assumptions and inputs that might be sensitive. When an input is uncertain and model results are sensitive to it, it may be useful to represent the uncertainty explicitly for each such input factor and the resulting overall uncertainty in the results calculated using the model. For example, Patil and Frey (2004) have suggested that food safety models should be designed to facilitate sensitivity analysis and that sensitivity analysis methods are a valuable tool in supporting food safety regulation. Such conclusions also seem appropriate for other areas of environmental assessment and decision making. A recent National Research Council report evaluating a health risk assessment prepared by a federal agency cited lack of sensitivity analysis as a major failing (National Research Council, 2007d).
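One familiar way to represent input uncertainty explicitly and propagate it to the results is Monte Carlo simulation: sample the uncertain inputs from distributions and run them through the model. The sketch below reuses the toy river-basin relationship from the sensitivity example, with invented distributions.

```python
# Monte Carlo propagation of input uncertainty through the toy
# river-basin relationship used above. Distributions are invented.
import numpy as np

rng = np.random.default_rng(seed=0)
n = 10_000
precip_mm = rng.normal(800.0, 150.0, n)              # annual precipitation
demand_index = rng.uniform(800.0, 1000.0, n)         # water demand
reservoir_eff = rng.triangular(0.50, 0.55, 0.60, n)  # reservoir efficiency

supplied = precip_mm * reservoir_eff - 0.4 * demand_index

# Summarize the resulting uncertainty in the outcome of interest.
p5, p50, p95 = np.percentile(supplied, [5, 50, 95])
print(f"5th / 50th / 95th percentiles: {p5:.0f} / {p50:.0f} / {p95:.0f}")
print(f"P(shortfall, supplied < 0): {(supplied < 0).mean():.2f}")
```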

Use disagreements to focus analysis and promote learning. Achieving a logically consistent integration of what is known, what society wants, and what society can do may require that many disagreements be resolved: about policy goals, about the state of knowledge, and so forth. Good decision analysis helps make the nature of the disagreements clearer, often allowing some of them to be addressed through further data collection and analysis. Thus, getting high-quality information may require multiple rounds of interaction, in which the parties learn from each other.

Research on Environmental Assessment and Decision Processes

Strong traditions of research on integrating science and public participation have emerged in the overlapping literatures on risk (National Research Council, 1996; Jaeger et al., 2001; Rosa, Renn, and McCright, 2007), common-pool resource management (Dietz, Ostrom, and Stern, 2003), ecosystem and natural resource management (Shannon, 1987, 1991; Dietz and Stern, 1998), and impact assessment (Cramer, Dietz, and Johnston, 1980; Dietz, 1987, 1988).

Stated briefly, the core of this literature acknowledges that both the public (interested and affected parties, in our terminology) and the scientific community have substantial expertise, but expertise of different kinds and on different matters (Dietz, 1987). On one hand, as noted in Chapter 2, the public often has detailed knowledge of the local context and everyday practices that is not readily available to the scientific analyst. And of course the public, rather than the scientific community, is the legitimate source of information about public values and preferences. On the other hand, scientific analysis is essential for understanding the dynamics of complex systems and assessing the uncertainty in how such systems evolve over time under different management actions. It is also valuable in systematically eliciting public views about values and preferences (for a discussion of systematic techniques for value elicitation, see Gregory and McDaniels, 2005).

The literature also shows that differences in perspective sometimes arise that can be so fundamental as to call into question the methods, or even the use, of decision analysis. Perhaps the most prominent example arises with choice options that have a nonzero probability of resulting in catastrophes such as the extinction of species, the elimination of major ecosystems, or the development of biological weapons of mass destruction. Faced with such possibilities, some segments of the public advocate precautionary approaches that rule out such options absolutely. They may reject any decision analyses that might be used to justify trading these risks against potential benefits of the choice options, or they may advocate decision-analytic methods that emphasize the possibility and highlight the importance of worst-case possibilities (e.g., Marshall and Picou, 2008).

Because of the complementary nature of scientific and other kinds of knowledge and the potential for conflict about how best to use knowledge to inform decisions, researchers have often concluded that high-quality environmental assessment and decision making require a dialogue between scientific analysis and public deliberation in which science both informs and is informed by the public. Because public concerns in large part determine which scientific questions are decision relevant, and because public knowledge of local contexts and practices must inform scientific analysis, successful linking of analysis and deliberation is also considered critical to the legitimacy of the analysis and of decisions that use it. In addition, this literature gives strong reason to believe, though still very little hard evidence, that by participating in iterated processes of analysis and deliberation, public participants will become more sophisticated about the scientific analysis over time, and scientists and agency officials more sophisticated about the public's needs for understanding. Such changes increase capacity in all those involved.

Many empirical studies support the value of linking science and public deliberation for obtaining good outcomes. For example, Mitchell et al. (2006:320), in their synthesis of work on environmental assessments, noted the importance of users understanding the science well enough that "credibility by proxy" is replaced with "credibility through understanding." Bradbury (2005:15) concludes in her analysis of Superfund sites that "access to timely and accurate information is a prerequisite for community members' ability to participate effectively." Leach (2005) identified nine studies that show the importance of access to adequate scientific and technical information in Forest Service planning processes. Bingham (2003), drawing on multiple sources of practitioner experience, has noted the special importance of being explicit about the character of science and data when the interested and affected parties (sometimes including responsible government agencies) define the problem differently, when decision makers' objectives are not clearly defined, when the conceptual framework for the issue is shifting, when the parties disagree about the methods for data collection and/or analysis, or when arguments over science are masking an underlying conflict about something else. The need for iterating between analysis and deliberation has been repeatedly emphasized in assessments of environmental decision making (e.g., National Research Council, 1999a, 2003, 2005a; Renn, 2005) and is reflected in much existing agency guidance.

Such studies do not point to any a priori optimal format or set of rules for integration. Rather, they suggest that each process must be designed around the problems and opportunities of a specific context. The literature on formats for public participation, discussed above, does not provide systematic knowledge, for reasons already discussed.

An analytical and empirical literature is beginning to emerge that identifies and examines possible processes and techniques for integration (Burgess et al., 2007; Chilvers, 2007, 2008; Bayley and French, 2008; Webler and Tuler, 2008). This is a promising area for future research.

Scientific Practice at the Frontiers of Knowledge

Uncertainty and disagreement are always present at the frontiers of science, so scientific communities have developed methods, norms, and procedures that help them test knowledge claims so that shared knowledge can advance despite the limitations of individuals. Some of these methods (e.g., nonhuman instrumentation, the use of mathematics and logic) aim to reduce or circumscribe the role of human judgment, but others actually rely on "the subjective judgments of fallible human beings and social institutions to detect and correct errors made by other fallible humans and institutions" (Stern, 2005a:976).

Judgment is especially critical under conditions that are common in environmental policy arenas: when there are multidimensional and inequitable impacts, scientific uncertainty and ignorance, value uncertainty and conflict, an urgent need for scientific input, and mistrust among the interested parties (Dietz and Stern, 1998). Under such conditions, scientists often disagree about which scientific problems must be solved to provide needed information, which assumptions are reasonable when knowledge is incomplete, how to interpret uncertain or conflicting information, and so forth. Scientific communities have developed several norms and practices that help them advance knowledge despite uncertainty, disagreement, and human frailty and that seem capable of adaptation to practical problems of environmental assessment and decision making. The key norms and practices seem to be:

• Define concepts concretely and operationally
• Make assumptions explicit
• Test sensitivity of conclusions to different assumptions
• Make analytic methods transparent
• Make data available for reanalysis
• Apply logical reasoning in drawing conclusions
• Restrict arguments to matters of substance, method, and logic and avoid ad hominem arguments
• Test conclusions for consistency with other data
• Publish results and conclusions in open sources
• Subject scientific analyses and reports to independent (peer) review
• Sanction fraud and misrepresentation

These norms and practices seem to have considerable generality and are understandable to nonscientists who may not understand scientific data, assessments, or inference techniques. They create a climate of scientific openness that contributes to the legitimacy of deliberations; that moves the deliberation toward clarity on matters of fact; and that tends to clarify the ways in which disputes turn on matters of evidence, judgment, and values.

Experience with Uncertain and Disputed Knowledge

Practitioners of environmental dispute resolution are among the most knowledgeable about procedures for integrating science and public input in ways that advance understanding. A summary of insights from this experience (Bingham, 2003) identifies five principles for making better public choices in the face of contested science: clarify the questions jointly before gathering more data; focus on decision-relevant information; let science be science, and do not confuse it with policy; learn together; and remember that science is not necessarily the underlying cause of disputes, and draw on other basic consensus-building principles and tools. Bingham calls on process conveners to consult widely about the scientific questions that need to be addressed; to talk explicitly about trust, uncertainty, and the role of information; and to identify and address disagreements over information through a process that builds trust. We return to this approach in more detail in Chapter 7.

Citizens in daily life have also found ways to make good use of the expertise of physicians, accountants, attorneys, architects, and other professionals, even though they are personally not equipped to evaluate that expertise. An important strategy relies on seeking independent assessments from others who have relevant knowledge and different interests or perspectives from the original expert. Principles such as getting second opinions and independently checking claims are widely familiar and understandable from practices in medicine, the adversarial system of trial by jury, news reporting, and other social institutions. This observation suggests that it should be possible to devise systems of independent review for use in environmental public participation that are credible to most, if not all, participants. Doing this would seem to require efforts at the start to seek agreement among all parties on how to identify, share, evaluate, and apply decision-relevant information of various kinds (scientific, cultural, technical, etc.). To integrate the science well, it is important for relevant information to be accessible to all participants and for special efforts to be made to ensure that the participants can understand this information. It is also important to agree on processes for joint fact-finding or other strategies for shared learning that respect that any participant may have special expertise, that acknowledge the different scopes and domains of such expertise, and that provide ways of checking knowledge claims that are credible to participants who lack expertise in the specific area.

CONCLUSION

Integrating science and public participation through processes that iterate between analysis and broadly based deliberation promotes the quality, accountability, and legitimacy of environmental assessments and decisions. Such processes are more likely to produce satisfactory results if they are transparent regarding decision-relevant information and analysis, are attentive to both facts and values, are explicit about assumptions and uncertainties, and provide for independent review and iteration to allow for reconsideration of past conclusions.

Understanding Risk (National Research Council, 1996:3) concluded:

[S]uccess depends critically on systematic analysis that is appropriate to the problem, responds to the needs of the interested and affected parties, and treats uncertainties of importance to the decision problem in a comprehensible way. Success also depends on deliberations that formulate the decision problem, guide analysis to improve decision participants' understanding, seek the meaning of analytic findings and uncertainties, and improve the ability of interested and affected parties to participate effectively in the risk decision process. The process must have an appropriately diverse representation of the spectrum of interested and affected parties, and of specialists in risk analysis, at each step.

This set of conclusions is generally supported by evidence available since that report's publication and applies to environmental assessments and decisions more generally. The Presidential/Congressional Commission on Risk Assessment and Risk Management (1997a,b) reached similar conclusions and recommended an analytic-deliberative process to improve federal decision making in the management of environmental risks (see Box 6-2). The evidence presented in this chapter allows for some further specification of the advice offered in these earlier reports.

The evidence supports the conclusion that there is no readily available alternative to including the public in decision-relevant environmental assessments that integrate science. Furthermore, there is no readily available alternative to deliberation as a means of resolving disagreement among scientists, and it must be recognized that valuable information often comes from nonscientists. Dialogue among multiple perspectives is necessary for quality in assembling and assessing the relevant information. Respectful evaluation is needed by all parties, including the scientists, and independent review is essential to the credibility of scientific and technical conclusions. Although there are significant challenges to integration, both for scientists and for the public, the challenges can be addressed. The most promising approach is to extend the norms used in science and in the best of public policy to encourage balanced and substantively focused discussion that advances the quality and legitimacy of analysis and contributes to participants' capacity for future deliberation. There is little careful empirical research on how to do this, but the studies that do exist converge with insights from decision analysis and with long-standing practices in science, public policy, and nonspecialist use of expert knowledge on the principles identified here—transparency of decision-relevant information and analysis, explicit attention to both facts and values, explicitness about assumptions and uncertainties, independent review, and iteration—as a shorthand description of a set of norms and practices for integrating science that are conducive to good results.

We note that many federal agencies that invest considerable effort in environmental and risk analysis have not institutionalized these norms and practices in their assessment and decision processes. It is more typical to use linear or sequential processes in which the agency assumes responsibility for problem formulation, has its scientific staff and contractors gather information and conduct the analysis, and then submits the analysis to a notice-and-comment process. In such a process, the analysis may receive some level of peer review prior to finalization, but the issues to be addressed in the analysis, the information to be considered, and the underlying assumptions do not get reviewed early enough to shape the analysis. We emphasize the conclusion reached by the Presidential/Congressional Commission on Risk Assessment and Risk Management (1997a:5; see Box 6-2) after extensive hearings: "many risk management failures can be traced to not including stakeholders in decision making at the earliest possible time and not considering risks in their broader contexts." Although some federal agencies have had extensive experience with public participation, the processes proposed in this and preceding National Research Council reports are not standard practice across federal agencies. Most federal agencies with responsibility for environmental assessment and decision making do not commonly integrate science and public participation through processes that iterate between analysis and broadly based deliberation.

BOX 6-2
The Presidential/Congressional Commission on Risk Assessment and Risk Management

The Presidential/Congressional Commission on Risk Assessment and Risk Management (hereafter the commission) was established by Congress in 1990 legislation amending the Clean Air Act (Presidential/Congressional Commission on Risk Assessment and Risk Management, 1997a,b). The 10 commission members, appointed by the president and leaders of both parties in Congress, included leading scientists with expertise in biological sciences applicable to public health and environmental problems. The commission was chartered to "make a full investigation of the policy implications and appropriate uses of risk assessment and risk management in regulatory programs under various Federal laws to prevent cancer and other chronic health effects which may result from exposure to hazardous substances" (Presidential/Congressional Commission on Risk Assessment and Risk Management, 1997a:i).

This purview overlapped considerably with that of the present study, in that federal agencies, such as the Environmental Protection Agency, the Food and Drug Administration, and the Occupational Safety and Health Administration, spend much of their budgets dealing with hazardous substances in the environment. The commission conducted an extensive set of hearings with stakeholder groups in a variety of locations across the United States. It developed a six-step risk management framework as the basis for its recommendations for reform of federal agency practices (Charnley, 2003; North, 2003; Omenn, 2003; Presidential/Congressional Commission on Risk Assessment and Risk Management, 1997a,b; see Figure 6-1) and produced a set of recommendations very similar to those made in Understanding Risk:

[M]any risk management failures can be traced to not including stakeholders in decision making at the earliest possible time and not considering risks in their broader contexts. In contrast, the Commission's Risk Management Framework is intended to:

• Provide an integrated, holistic approach to solving public health and environmental problems in context
• Ensure that decisions about the use of risk assessment and economic analysis rely on the best scientific evidence and are made in the context of risk management alternatives
• Emphasize the importance of collaboration, communication, and negotiation among stakeholders so that public values can influence risk management strategies
• Produce risk management decisions that are more likely to be successful than decisions made without adequate and early stakeholder involvement
• Accommodate critical new information that may emerge at any stage of the process

The commission's final report is a strong call for a shift to an altered approach. However, it does not prescribe detailed methods for accomplishing such goals as "adequate and early stakeholder involvement." Also, despite the fact that most commission members are scientists, the report did not provide much detail on how scientific information should be assembled or evaluated for the iterative analytic-deliberative approach it proposed. For example, the commission's report does not clarify the phrase "best scientific evidence" or address the role of judgment in determining what is best, particularly when the science available for predicting the consequences of policy alternatives involves pervasive uncertainty.

SUMMARY: THE PRACTICE OF PARTICIPATION

Based on an assessment of multiple sources of evidence, Chapters 4, 5, and 6 identify sets of empirically supported principles of good public participation practice—for project management, for organizing the participation, and for integrating the science. We summarize them in Box 6-3. These principles echo those that can be found in sources of guidance derived mainly from practitioners' experiences and, in that sense, the principles are not new. Our findings do, however, reinforce at least certain aspects of the collected experiential knowledge with other sources of support.

The main challenge facing practitioners is to find practical ways to implement the principles of good public participation practice. As we note throughout this volume, practitioners have developed numerous formats, techniques, and practices for implementing the principles, and many of them can be helpful. As we also note, numerous guidebooks are available that describe the formats and practices and offer advice on how and when to use different ones.

Practical experience makes clear, however, that implementing the principles can be much more difficult in some contexts than others, and that different contexts present different challenges for public participation. In Chapters 7 and 8 we review available evidence on which aspects of context matter, how they matter, and how it is possible to examine the context of public participation to diagnose the situation, that is, to identify and anticipate specific difficulties that are likely to arise in the context at hand when trying to implement principles of good practice. This kind of diagnostic process has prescriptive value in that it can help practitioners and participants select from among the great variety of available formats and practices those that may help address the particular difficulties they can expect to encounter.

BOX 6-3
Empirically Supported Principles of Practice for Environmental Public Participation

MANAGEMENT PRACTICES (Chapter 4)
Clarity of purpose
Commitment to use the process to inform decisions
Adequate resources
Appropriate timing
Implementation focus
Commitment to learning

ORGANIZING PARTICIPATION (Chapter 5)
Inclusiveness of participation
Collaborative problem formulation and process design
Transparency of process
Good-faith communication

INTEGRATING SCIENCE (Chapter 6)
Iteration between analysis and broadly based deliberation with:
• availability of decision-relevant information
• explicit attention to both facts and values
• explicitness about analytic assumptions and uncertainties
• independent review
• reconsideration of past conclusions

However, there are limits in offering prescriptions. As the following chapters show, it is neither possible nor advisable to identify any single "best practice" for conducting public participation or even for overcoming particular difficulties that certain contexts present. Rather, the best that can be done after identifying the likely difficulties is to select practices collaboratively to try to address them and then to monitor the process to see whether the practices are accomplishing the desired results, keeping open the possibility of changing practices or formats when they are unsuccessful.


