3
Major Challenges to Achieving an Effective Assessment Process

Despite the diversity of assessment contexts and types of assessments, some common challenges can be identified. If these challenges are addressed adequately, there is a greater likelihood that the assessment process will effectively inform the target audience and the decision-making process. There is an extensive body of literature in which these challenges have been identified. This chapter provides a summary of the evaluation of assessments available in the literature.

FRAMING A CREDIBLE AND LEGITIMATE PROCESS

Framing the assessment process such that it is perceived as credible and legitimate by all relevant stakeholders is a major challenge (Farrell et al. 2001). The leading social science theories of trust in the policy process (e.g., Ostrom 1998; Leach and Sabatier 2005) indicate that trust comes from two sources. One is shared values and beliefs. The other is predictable behavior in an environment where deviant behavior is penalized. In general, the scientific community has such shared values and beliefs with regard to the rigor of the scientific process (Merton 1973; Jasanoff 1987). Further, most scientists see colleagues, with few exceptions, behaving according to the norms of science, and have confidence in mechanisms—such as replication and peer review—that disclose and sanction unethical behavior. However, these bases of trust among scientists are not always shared by those engaged in global change politics (Jasanoff 1987). In fact, because issues of global change are so complex, a large nonexpert community judges the risks and benefits associated with an issue based on the trust it has in the institution or process (Earle and Cvetkovich 1995; Siegrist et al. 2000).

Trust in an assessment, as in any process that relies on deliberation among multiple individuals, requires that the process be seen as both fair and competent (i.e., legitimate and credible) (Habermas 1970; Renn et al. 1995). Because trust conflates fairness and competence (Habermas 1970), the terms “credibility” and “legitimacy” are used throughout this report to distinguish the two sources of trust (Ravetz 1971; Clark and Majone 1985; Social Learning Group 2001a). Legitimacy implies that those who have a view on the issue, and those who will be affected by decisions that emerge from the process, have the opportunity to have a say in the process, either directly or through a third party whom they trust. Further, it requires that the process give all views serious consideration, with the outcomes determined by thoughtful deliberation under rules seen as acceptable to all participants. Credibility implies that those who have knowledge relevant to the issues at hand participate in ways that allow their knowledge to influence the discussion, either through their direct participation or through consideration of their work.

The following questions provide guidance for global change assessments seeking to achieve credibility and legitimacy:

- Who has interests at stake in the outcomes of the assessment process?
- What kind of expertise is required to understand the issues being considered?

Process assessments, impact assessments, and response assessments differ considerably in who has interests at stake, what kinds of expertise are relevant, and who has that expertise. Thus, implementation of the requirements for a legitimate and credible assessment will differ across the three types of assessments.

The National Academies | 500 Fifth St. N.W. | Washington, D.C. 20001
Copyright © National Academy of Sciences. All rights reserved.
Process Assessments

Process assessments describe the state of the natural world as we understand it, the global change of interest, and its natural and anthropogenic causes (for a detailed definition, see Chapter 2). They are not intended to provide policy options, and therefore strive to avoid analysis of values, such as benefits, costs, or risk preferences. This simplifies the task of achieving credibility and legitimacy compared to impact assessments, which inevitably consider value judgments and trade-offs. Science has strong norms for how to carry out deliberations about the state of knowledge, so it is relatively easy, in principle if not in practice, to conduct a credible process assessment (Jasanoff 1987). Rigorously adhering to the science and to established rules regarding the inclusion of peer-reviewed or non-peer-reviewed material helps to ensure credibility. These preestablished norms are undoubtedly one reason the committee finds that many process assessments have been conducted successfully. Over the years, the knowledge and practice of conducting this type of assessment have been developed and refined by a core of experienced scientists and assessors, and have been applied successfully to multiple generations of stratospheric ozone assessments (WMO 1986a, 1990a,b, 1992, 1995, 1999, 2003, 2007) and the Intergovernmental Panel on Climate Change (IPCC) Working Group I (WG I) assessments (IPCC 1990a, 1995a, 2001a).

A well-established and successful model for process assessments has emerged that involves the following key elements:

- Getting a critical mass of the world’s most respected scientists in the relevant fields to participate;
- Ensuring broad participation and sponsorship;
- Having an intensive, science-focused process of deliberation that is of such high quality that it attracts the number and quality of participants required and produces reports that can serve as authoritative scientific references in the field;
- Urging the process to provide clear consensus evaluations of the state of knowledge on key policy-relevant questions, to the extent the underlying knowledge base allows;
- Writing clear, compact summaries with the involvement and consent of the scientific author teams; and
- Disseminating the summary messages prominently and consistently.

Even in the case of process assessments, it might be difficult to achieve the perception of legitimacy and credibility.
Indeed, scientists and other experts who participate in an assessment may have a different perspective on its legitimacy than others who expect that the outcome will affect their interests but are not intimately involved in the process (Jasanoff 1987). In practice, process assessments will be perceived as legitimate only if the intended target audience has ways of ensuring that the relevant questions are addressed and that scientific controversies of concern have been resolved to its satisfaction and by a process it considers legitimate (Jasanoff 1987). In the case of climate change in particular, many political actors realize that the conclusions drawn by process assessments, such as IPCC WG I, shift the momentum of policy decisions that are highly consequential for their actions. In turn, some actors external to the scientific community are increasingly skeptical of these process assessments (McCright 2000; McCright and Dunlap 2003). If the sole purpose of the process assessment is to reach a scientific consensus, it might need to achieve credibility only among the scientific community (Social Learning Group 2001a). However, if there is substantial political interest in the outcome of a process assessment, then it will need to achieve credibility and legitimacy among a broader audience whose concern is not with the science per se, but with the policies that may be adopted as a result of scientific conclusions.

It is commonplace in environmental disputes for arguments that are logically about values (e.g., the weights to be given to the costs, benefits, and risks associated with climate change, biodiversity loss, or ozone depletion) to be framed as disputes about facts (Dietz et al. 1989; Dietz 2001). Thus, the factual content of a process assessment may be criticized even if the underlying concerns are about policy choices, and thus more about values than about facts. A major challenge for process assessments over the next decade may be enhancing credibility and legitimacy among those who feel their interests are affected by the outcomes of these assessments. The IPCC process, in particular, is an attempt to develop legitimacy among a wider stakeholder group for all of the working group products, including that of WG I.

Impact Assessments

Impact assessments have been much less successful than process assessments in achieving credibility and legitimacy (Parson et al. 2003; Moser 2005). Impact assessments, which characterize the impacts of environmental change on human and natural systems, need to be perceived as credible and legitimate by the broadest audience. To achieve scientific credibility, all the rules of a process assessment apply. However, the expertise needed to “get the science right” in an impact assessment is typically broader than for process assessments.
Impact assessments also must be credible and legitimate to those who will be affected by the global change being analyzed, the policies implemented to address that change, or both. Because impacts manifest themselves within localities, economic sectors, and ways of living, highly contextualized “local” knowledge—that is, knowledge about the places, sectors, or activities that may experience impacts—is essential to an accurate analysis. The literature provides substantial guidance for local and regional participation (Cohen 1997; Kasemir et al. 1999, 2003; Harremoës and Turner 2001; Van Asselt and Rijkens-Klomp 2002; Toth 2003). Some assessments, such as the Millennium Ecosystem Assessment (MA) and the U.S. National Assessment of Climate Change and Variability, have made extensive efforts at incorporating local knowledge, and their efforts were at least partially successful (Morgan et al. 2005). Nevertheless, effectively incorporating local knowledge in impact assessments remains a challenge.

Another major challenge for attaining legitimacy and credibility in impact assessments of global scope is the relative lack of experience in ensuring adequate and legitimate participation at that scale. In particular, ensuring equity in participation between developing and developed nations is a significant challenge, necessitating capacity building so that local knowledge can be incorporated in a fair and competent way (Jager et al. 2001). Achieving legitimacy and credibility in impact assessments is further complicated because, unlike process assessments, they usually require value analysis, that is, some weighting of costs, benefits, and risks as they are visited on various populations. Just as the assessment must be competent with regard to the scientific “facts” it addresses, it must be competent with regard to the values deployed in analyzing trade-offs and options (Dietz 2001, 2003). Although systematic procedures for assessing values and risk preferences and aggregating them are available, they are complex and none is without controversy.

Response Assessments

Response assessments focus on reducing the human drivers of environmental change or their impacts. There is a logical coupling between response and process assessments, mediated by the scenarios of emissions or other anthropogenic perturbations that are used to drive projections of future environmental change in the process assessments. Process and response assessments together have a logical structure, considering human and natural factors along with potential human interventions, that parallels the complete structure of impact assessments. But whereas the human and natural factors cannot be separated in impact assessments, they can be separated at the boundary between process and response assessments. This separation is usually done by using scenarios that describe how human driving forces will unfold over time.
Such scenarios become the mechanism that crosses the boundary between process and response assessment. Scenarios provide process assessments with a set of possible futures that are plausible even if a response assessment does not attempt to assign probabilities to future states of the world. This ability to separate process and response assessments by the use of scenarios makes both process and response assessments much easier to conduct successfully than impact assessments, albeit at the cost of resting the process assessment on “what if” scenarios rather than detailed analysis of social system responses. As with impact assessments, the involvement of stakeholders is essential for the success of response assessments, but in this case, who the stakeholders are and how to involve them are starkly different.

Technology Assessments: A Special Type of Response Assessment

The parties interested in and affected by the choice of technologies are composed primarily of industries and others that develop and deploy technologies, regulators who enforce decisions, and those in academic and other research institutions who develop technology. Achieving a legitimate, and in some cases legal, technology assessment brings additional challenges regarding proprietary information and the possibility of giving some participants competitive advantage (Parson 2006).

A further challenge in technology assessments is thinking broadly about the implications of their conclusions. Some technological choices have widespread societal and environmental consequences. Sectors of the economy, regions, and lifestyles can all be changed substantially by technological choices, producing both winners and losers. Some have argued that technological choices are as consequential as, or more consequential than, what are seen as standard political decisions. In general, if an assessment’s conclusions will lead to decisions that have relatively minor impacts, it may be sufficient to achieve credibility among those who work directly with the technology (Clark et al. 2001). For example, technological solutions to the ozone problem required mainly finding an alternative coolant for refrigeration or an alternative non-CFC-emitting production process for foams. Because these solutions affect only the industry sectors involved and not the public at large, they can be considered as having minor impacts. On the other hand, in the climate change debate, consideration of switching to nuclear power to reduce greenhouse gas emissions has potentially major impacts on the public. Thus, given the broader societal implications of the choices, broader community involvement may be necessary to ensure legitimacy.
Integrated Assessments

As discussed in Chapter 2, there are multiple approaches to, and multiple definitions of, integrated assessment (Parson 1995; Weyant et al. 1996; Rotmans and Dowlatabadi 1996). Some refer to the production of a synthesis report that includes social, biological, and physical science components and is based on loosely coupled multidisciplinary analysis (Parson 1995). Another definition is restricted to the development and use of models that explicitly link the dynamics of social, biological, and physical systems (Ravetz 1997, 2003). Over time, the latter, more tightly coupled form of integrated assessment has become more common.

Even the most thoroughly integrated assessments often neglect issues that are of considerable importance in decision making, such as equity (Morgan and Dowlatabadi 1996). While it is important to analyze equity, there are practical difficulties in conducting such analyses in assessment processes that require broad consensus. First, the available methods for analyzing equity, such as distributionally weighted cost-benefit analyses, require assumptions about the weight that should be given to risks, benefits, and costs visited on one group versus another. Such weights are value judgments, and it is difficult to develop consensus on value judgments in assessments (Moser and Dilling 2004; Moser 2005). This problem occurs even when formal methods are not used. Simply identifying equity issues requires agreement about which dimensions of inequality should be given consideration (region, gender, ethnicity, social class, etc.), a complex and often contentious problem in itself.

The degree and nature of the integration is a design decision, ideally made with specific reference to the users and purpose of the assessment. If an integrating structure is designed, it is possible to ensure that broad-scale assessments can continue to be developed while enhancing their relevance in the individual applications where many resource decisions are made (Schneider 1997; Schneider and Lane 2005). Integrated assessments provide opportunities to address multiple spatial scales (local to global) and the multiple stresses relevant to an environmental change. Indeed, the U.S. National Assessment, Climate Change Impacts on the United States (NAST 2001), called for a more integrated approach to examining impacts and vulnerabilities to multiple stresses. For example, assessment of the impacts of climate change on the health sector was clearly limited by the lack of knowledge of the integrated system.
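To illustrate why the distributionally weighted cost-benefit analyses mentioned above embed value judgments, a common textbook form of the method can be sketched. This sketch is illustrative only and is not drawn from this report: the income-based weighting function and the parameter ε are assumptions, and income is only one of the possible dimensions of inequality the text notes.

```latex
% Distributionally weighted net benefit across groups g = 1, ..., G:
\[
\mathrm{NB} \;=\; \sum_{g=1}^{G} w_g \,\bigl(B_g - C_g\bigr),
\qquad
w_g \;=\; \left(\frac{\bar{y}}{y_g}\right)^{\varepsilon}
\]
% B_g, C_g : benefits and costs visited on group g
% y_g      : average income of group g;  \bar{y} : population average income
% \varepsilon > 0 : how strongly burdens on poorer groups are up-weighted
```

Choosing ε (and choosing income, rather than region, gender, or other dimensions, as the basis for weighting) is precisely the kind of value judgment on which assessments find it difficult to build consensus.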
Changes in vector-borne diseases (e.g., dengue fever transmitted by mosquito populations) could clearly be tied to climate change and variability; however, many other environmental factors also controlled the distribution of these diseases (e.g., the importance of land use on host distribution, waste products that affect water and air quality, human social systems). A National Research Council (NRC) workshop, Understanding and Responding to Multiple Environmental Stresses (NRC 2006), notes that integrated assessments are required when the impacts and decisions are place-based (i.e., specific to a locality or region) but the drivers of impacts are drawn from a much larger scale (e.g., climate change). The link between large-scale drivers and place-based contexts, and a focus on multiple stresses, also increases the ability to put knowledge to work by connecting to stakeholders and decision makers in the location where the decisions are relevant. This “nested matrix” approach is a model to be further explored because it combines the strategic advantages of a broad-scale assessment while allowing a number of detailed case studies that are more useful to local decision making.

In recent years, considerable attention has been given to effective methods for engaging decision makers and the public in the process of integrated assessment in ways that enhance the quality and integrity of the science (Cohen 1997; Kasemir et al. 1999, 2003; Harremoës and Turner 2001; Van Asselt and Rijkens-Klomp 2002; Toth 2003). Not all assessments need to be fully integrated, although there are many benefits to working toward an integrated approach, including a greatly enhanced potential to be policy relevant. In general, an integrated assessment is justified when the problem itself is multidimensional, as is the case with most environmental problems. Having the appropriate disciplines, including physical, biological, and social scientists, involved in an assessment is critical for both scientific and political credibility. Social scientists are especially critical for structuring the problem and communicating uncertainties and risks (Tol and Vellinga 1998; Van Asselt and Rotmans 2004). For example, climate change can be explained in terms of physical processes that are connected to the wide variety of human activities that give rise to greenhouse gas emissions, leading to impacts on society. Understanding the various links in the chain and their interconnections is an extremely complex undertaking involving inputs from a multitude of disciplines. In addition, social science perspectives can be critical for adequately incorporating uncertainty into models (Van Asselt and Rotmans 2004).

The rationale for an integrated assessment is that the separation that differentiates process, impact, and response assessments from one another is ultimately artificial and may lead to science that is less robust than might be ideal. Responses depend on real and perceived impacts; they affect the processes driving global change and consequently alter the impacts; finally, responses themselves have impacts. Rational decision making should take account of the full range of these interactions.
Of course, as Levins (1966) has noted, models are always simplifications, so integrated assessments must make decisions about how to simplify. Linked assessments tend to maintain much of the complexity of the individual assessments, at the cost of less than full articulation and harmonization. In contrast, fully integrated assessments tend to maximize articulation and harmonization, but at the cost of simplifying. Both strategies can be useful, but the trade-offs need to be weighed carefully in advance. For some decisions, the detail contained in linked assessments, but often lost in fully integrated assessments, is essential. However, the limited articulation of linked assessments means that some critical feedbacks are either not considered or considered only qualitatively. Despite the importance of integrated understanding for decision making, methods of integration, whether nested matrices or fully integrated models, are at an early though rapidly advancing stage of development (Morgan and Dowlatabadi 1996; Schneider 1997; Tol and Vellinga 1998; Van Asselt and Rotmans 2003; Schneider and Lane 2005) and deserve further development, including model comparisons. The Energy Modeling Forum (http://www.stanford.edu/group/EMF) and the newly formed Integrated Assessment Society (http://www.tias-web.info/) are taking laudable steps in this direction.

SCIENCE-POLICY INTERFACE: BALANCING CREDIBILITY WITH SALIENCE

The appropriate interface between science and policy is frequently debated and requires deliberate negotiation at the onset of each assessment process (NRC 1983; Jasanoff 1987; Cash and Moser 2000). The interactions between scientists and policy makers in assessments can assume different forms, ranging from efforts to isolate the scientific community from the policy-making process via boundary organizations such as the National Academies, to highly institutionalized collaboration and deliberative processes between the two groups, such as congressional hearings. Regardless of where along this spectrum the science-policy interface falls, each community “must maintain its self-identity and protect its sources of legitimacy and credibility” (Farrell et al. 2006).

Especially careful boundaries are necessary between the authorizing body (i.e., those requesting the assessment) and the assessment participants. While the authorizing body needs to be involved in framing the goals and scope of the assessment to ensure that the most salient questions are addressed (NRC 1996), legitimacy and credibility suffer when the authorizing body is perceived to control the assessment process (Jasanoff 1987; Cash and Moser 2000). At the same time, isolating scientists too much from the authorizing body is likely to result in a loss of salience (NRC 1996). Negotiating this boundary is therefore a balancing act among credibility, legitimacy, and salience (Jasanoff 1987). Based on its deliberations and on input from scholars and practitioners of assessments, the committee concludes that an explicit boundary is critical throughout the process, but most importantly during the review stage.
A key determinant of credibility is the quality control applied in an assessment. Quality control is defined as the procedures designed to guarantee that the “substantive material contained in the assessment report agrees with underlying data and analysis, as agreed to by competent experts” (Farrell et al. 2006). Different criteria are used to define what an expert opinion is (e.g., that which is published in peer-reviewed journals or is subject to repeated reviews). For assessments that undergo government review, it is critical that the expert participants retain a “veto right” regarding the scientific content of the report (Watson 2006).

ENGAGING STAKEHOLDERS

Stakeholders include all interested and affected parties in an assessment process: those who commission the assessment, the experts who participate in the process, those who are affected by the pertinent environmental change, and those in a position to take actions in response to the assessment’s results. The four types of assessments—process, impact, response, and integrated—have inherently different kinds of stakeholders who can usefully be engaged. The appropriate stakeholders may be scientists, decision makers, politicians, resource managers, the public, and so forth. Even those without the technical expertise to engage in the assessment process may still perceive themselves to be stakeholders because they are affected by the outcome, particularly in the case of impact assessments.

In public processes, in which broad stakeholder engagement is desirable, it is usually beneficial to cast the net as widely as possible. Minimizing the risk of offending groups and sectors by failing to invite their participation is often more important than limiting the cost associated with broad public engagement (Jacobs 2002; Jacobs et al. 2005). The credibility and legitimacy of public processes, such as assessments, often rest on the perceptions created by the engagement process, particularly regarding the selection of participants and the transparency thereof (NRC 1996; Jacobs et al. 2005; Watson 2006). Engaging the public and local knowledge poses a special challenge because meaningful participation in assessments may require some familiarity with the scientific or technological issues at hand. Public involvement can be facilitated if individuals are already organized in nongovernmental organizations (NGOs) or other organizations, which can send representatives to participate in the dialogue.
Such was the case for the Arctic Climate Impact Assessment (ACIA), where the local population was organized in a tribal consortium (Corell 2006). When engaging stakeholders, there is always tension among the needs to establish balance among interest groups, ensure credibility of results, allocate sufficient time and resources to support a broad engagement effort, and encourage ownership of the process by participants. Based on the committee’s collective experience, significant benefits and impacts often result from engaging stakeholders throughout the assessment process, rather than relying simply on disseminating the final assessment product. Engagement throughout the process builds trust between individuals and between categories of users; results in a broader understanding of multiple perspectives; builds a shared knowledge base that may be useful in other applications; and develops a network of relationships that will prove useful in the future (NRC 1996; Jacobs 2002).

Private-Sector Stakeholders

Engaging private-sector stakeholders can lead to effective response assessments, but it has proven especially challenging (Parson 2006). Private-sector stakeholders have different information needs and modes of engagement than public-sector participants. Their world view and “decision context” may be more constrained in time, interest, and resources than those of other participants. Therefore, carefully designed, sector-specific engagement strategies may be required to ensure their participation (Semans 2006). The private sector may be critical in ensuring that the assessment has the desired salience for decision makers because of its ties to economic viability or vulnerability, its interest in cutting-edge scientific advances, and its political connections (Parson 2006). There are benefits in sector-specific assessments as well as cross-sector assessments, depending on the goal of the engagement.

Industry participants often have access to the best information about relevant technologies. In this case, a successful assessment needs to identify and include top technical experts, but should also consider very carefully their and their employers’ motivations for participating. While individuals and firms may want to contribute, they operate under market and competitive pressures that compel them to consider what they might gain from participation (Parson 2006). Therefore, assessment organizers may need to provide incentives for industry to participate. Most often, response assessments are initiated before regulatory policies to mitigate the environmental risk are in place, with the goal of evaluating whether technology options are feasible and sufficiently cost-efficient that regulatory policies can be adopted without major economic impacts.
Asking industry representatives to participate energetically in these sorts of assessments and to disclose information openly is comparable to asking potentially affected industries to provide a green light to impose regulations (Parson 2006). Despite the complexity of private-sector engagement, there are many situations in which the private interests of participating individuals and their firms can be sufficiently aligned with the public interest to obtain high-quality assessments of technical options (Parson 2006). The three general situations that are conducive to private-sector involvement are:

- When some firms perceive that, because of their technical skills or competitive positioning, they might gain an advantage from a regulatory response to the issue;
- When firms judge that an environmental issue has become so serious that regulation is inevitable, and they think that participation might provide them with better information, influence over regulatory details, or a step ahead of competitors; and

National Oceanic and Atmospheric Administration, provide some examples of tools that have successfully helped decision makers use climate information (http://www.climate.noaa.gov/cpo_pa/risa/). For example, a forecast evaluation tool was developed by the Climate Assessment for the Southwest to assist water resource managers in evaluating the forecast skill of previous seasonal climate forecasts (Jacobs et al. 2005). This tool fosters the application of climate forecast information available in typical process assessments to decisions such as reservoir management, agricultural crop and irrigation decisions, stocking decisions on ranch lands, and flow management for habitat preservation. In Michigan, a team has developed tools to make climate forecasts useful to decision makers in agriculture and tourism (http://www.pileus.msu.edu). When integrated models are tailored to particular decision-making processes or scales, the complexity of the issue is reduced and the feasibility of a useful integrated assessment is increased. In fact, programs such as the Regional Integrated Science Assessments illustrate how regional assessments can be nested within a national or global assessment. A need for further development of decision-support tools for global change assessments has been pointed out by many authors, particularly at the regional scale (Scheraga and Smith 1990; Easterling 1997; Morgan et al. 1999; Jacobs et al. 2005). A greater investment in developing such tools will significantly enhance the ability to seamlessly apply assessment findings to the decision-making process. The ability to successfully develop tools for decision makers requires familiarity with the institutional, economic, and political context within which decision makers operate. A wide range of such policy-analysis and decision-support tools are needed and are being developed.
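A forecast evaluation of the kind such tools support can be sketched in a few lines. The sketch below scores a set of probabilistic seasonal forecasts against a climatological baseline using a Brier skill score; all numbers are invented for illustration and are not data from the Climate Assessment for the Southwest tool.

```python
# Hypothetical sketch of a seasonal-forecast evaluation. The
# probabilities and outcomes below are made-up illustrative numbers.

forecast_probs = [0.7, 0.6, 0.2, 0.8, 0.4, 0.3]  # P(above-median seasonal rainfall)
observed =       [1,   1,   0,   1,   0,   1]    # 1 = above-median rainfall occurred
climatology = 0.5                                # naive baseline probability

def brier(probs, outcomes):
    """Mean squared error of probabilistic forecasts (lower is better)."""
    return sum((p - o) ** 2 for p, o in zip(probs, outcomes)) / len(outcomes)

bs_forecast = brier(forecast_probs, observed)
bs_baseline = brier([climatology] * len(observed), observed)

# Skill > 0 means the forecasts beat "always predict climatology".
skill = 1 - bs_forecast / bs_baseline
print(f"Brier skill score vs. climatology: {skill:.2f}")  # prints 0.35
```

A water manager can read such a score directly: positive skill justifies weighting the forecast in reservoir decisions, while near-zero skill argues for falling back on climatology.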
Many such efforts are far too complex and beyond the scope of what assessments such as the IPCC or the CCSP could support meaningfully. Nevertheless, programs such as the Regional Integrated Science Assessments serve as successful examples of regional assessments that allow context-specific, salient information to be developed. Such local and regional efforts can be nested within a national or global assessment. The concept of a nested matrix for assessments involves developing a general framework to identify trends and vulnerabilities at the national scale on an ongoing basis, allowing for prioritized, focused, integrated assessments of specific sectors and regions to illustrate the richness and context of impacts at the scale at which resource decisions are normally made. To some degree, this approach was developed within the MA and the National Assessment of Climate Change Impacts.

REVIEW PROCESS

Most global change assessments to date have well-established review mechanisms, incorporating some combination of expert, public, and government review. Some assessments, such as TEAP, have not included a formal review process because all of the major stakeholders were already involved in preparation of the report and because the reports included proprietary information. Stratospheric ozone assessments are peer reviewed, but do not undergo public review because the scope of the issue is limited and the perception of legitimacy in this process by all major stakeholders has been established over time. Effective review processes increase credibility by allowing many individuals to evaluate the veracity of the report and increase legitimacy by involving a larger range of stakeholders (Edwards and Schneider 2001). A transparent process for review is especially important (Edwards and Schneider 2001; Watson 2006). The following questions are helpful for establishing the guidelines for review of an assessment product:

- Will there be an expert review only, or also a stakeholder, government, and public review?
- How will reviewers be selected?
- Who coordinates the review process?
- How will responses to review comments be handled?
- Will reviewer comments and responses be made public?

To address the risk that experts involved in the assessment process might promote an agenda or their own research, the review process can be designed to include a balanced group of reviewers, incorporating varied viewpoints and expertise from outside the field of science being assessed. Legitimacy in the process often can be enhanced by setting up an independent body of respected individuals to function as a neutral broker between the reviewers and the experts involved in the assessment process.

CONSENSUS BUILDING

Dissenting voices among assessment participants can negatively impact perceptions of the legitimacy of the assessment process and can even detract from its credibility if the dissent is not addressed in a rigorous and transparent fashion (Edwards and Schneider 2001).
Ideally, assessment leaders will manage the process such that either a consensus can be found, or the dissenting conclusions can be incorporated into the process. For example, differing views can be explained by inherent uncertainties of the state of knowledge or by alternative interpretations of available information. Assessments are more likely to be effective if they have clear guidelines agreed upon by participants from the outset and explicit treatment of dissenting views. Given that process assessments rely on the latest scientific knowledge available, dissenting conclusions in these types of assessments are more

easily addressed. The scientific method provides norms regarding the evidence required to draw certain conclusions (Jasanoff 1987; Edwards and Schneider 2001). Impact assessments, however, must rely in part on value judgments, which can create a greater challenge regarding the resolution of differing opinions. In this context, a fair and transparent treatment of all sides of the argument, with detailed explanations of how each conclusion is drawn, will allow the assessment users to make their own value judgments based on the information presented.

CHARACTERIZING UNCERTAINTY

Characterizing uncertainty can represent a challenge in assessments, both in terms of determining what sorts of uncertainty information would be useful for decision makers and in terms of developing quantitative or qualitative measures of uncertainty (Johnson and Slovic 1994; Patt and Schrag 2003). While there is evidence that decision makers have an aversion to ambiguity (Van Dijk et al. 2004), uncertainty is unavoidable in many decision-making contexts. Once decision makers understand that they are operating in an uncertain environment, they typically prefer that the conclusions of an assessment be accompanied by a description of the level and source of relevant uncertainties (Johnson and Slovic 1994). It is also important to manage expectations about reducing uncertainties, because some of these uncertainties will not be resolved for decades, if at all. A range of approaches has been employed for characterizing uncertainty related to environmental change, including standard statistical techniques, model-based sensitivity analysis, expert judgment, and scenario development. It is difficult, if not impossible, to objectively and quantitatively define uncertainty for many of the complex issues related to climate change.
In cases where quantitative techniques cannot be applied, scientists and others preparing an assessment often are forced to choose between providing no uncertainty estimates or developing and implementing qualitative approaches, typically based in part on expert judgment and consensus (Morgan and Keith 1995). The committee concludes that, in cases where uncertainty estimates have been requested, it is appropriate for the practitioners of the assessment to make every effort to accommodate that request by using expert judgment.

Objective, Quantitative Methods for Assigning Uncertainty

Statistical theory provides a detailed and robust framework for defining the uncertainty in a parameter derived from a dataset, for example, the statistical or probability distribution of possible results about a mean derived from repeated sampling of a population. Although such methods

may be applicable for characterizing a single climate parameter (e.g., the average temperature at a given location), and whenever possible should be used, they are often not applicable and can even provide misleading results in climate change assessments. One limitation of statistical approaches is that, in climate change assessments, many of the parameters of interest are derived from space-borne platforms that require complex analysis and/or state-of-the-science technology. Statistical methods provide an assessment only of the random error in the measurement; they do not address systematic errors that can arise from artifacts in the instrumentation, the methods used to reduce the raw data, or both. If systematic errors in the measurement are present, a statistical measure of the uncertainty from random error will underestimate the real uncertainty and can lead one to be overly confident in the veracity of the result. A case in point is the estimate of mid-tropospheric temperature trends from satellite and balloon measurements. For many years, analyses of these data indicated that mid-tropospheric temperature trends were inconsistent with surface temperature trends, even after accounting for statistical uncertainties (NRC 2000). These results were interpreted by some to mean that the current warming was not due to greenhouse gas warming and were widely debated in the lay media as well as the scientific literature. Subsequent analysis uncovered errors in the methods used to translate the satellite and balloon measurements into temperature values. After correcting the data processing methods, the apparent inconsistency between the mid-tropospheric and surface temperature trends disappeared (CCSP 2006).
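The pitfall described above, a statistical error bar that reflects only random error, can be illustrated with a short simulation. Everything here is hypothetical; the values bear no relation to the satellite record:

```python
import random
import statistics

random.seed(42)  # reproducible illustration

TRUE_VALUE = 10.0       # the quantity being measured (hypothetical units)
SYSTEMATIC_BIAS = 0.5   # an uncorrected instrument artifact
RANDOM_SD = 0.2         # random measurement noise

# Repeated measurements contaminated by a constant, unrecognized bias.
samples = [TRUE_VALUE + SYSTEMATIC_BIAS + random.gauss(0, RANDOM_SD)
           for _ in range(100)]

mean = statistics.fmean(samples)
sem = statistics.stdev(samples) / len(samples) ** 0.5
ci_low, ci_high = mean - 1.96 * sem, mean + 1.96 * sem

# The interval is narrow because it sees only the random scatter...
print(f"95% CI from random error alone: [{ci_low:.2f}, {ci_high:.2f}]")
# ...so it excludes the true value, which the bias has pushed out of reach.
print(f"True value inside CI? {ci_low <= TRUE_VALUE <= ci_high}")  # prints False
```

Averaging more samples only narrows the interval around the biased mean; it never recovers the true value, which is the sense in which a purely statistical error bar can foster overconfidence.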
Another reason quantitative statistical methods are often not applicable is that inferences and conclusions about future climate change (e.g., the influence of human activities) are based on a complex synthesis and analysis of many parameters, factors, and lines of evidence. Quantitative and fully objective estimates of uncertainty in these cases are not feasible.

Model Simulations and Uncertainty

Many aspects of climate change science are based on model simulations. These range from estimates of the global warming potential of greenhouse gases, which can depend on model estimates of the atmospheric lifetime of a gas, to predictions of future temperature and precipitation trends. The most widely adopted approach to estimating uncertainty in model predictions is sensitivity analysis, in which the range of probable model outcomes is assessed using a series of model realizations with a range of values for the various inputs. Both the sensitivity to specific model parameters (e.g., how clouds or air-sea exchange are represented) and the sensitivity to different scenarios for future greenhouse gas emissions can be tested in this manner. For this

method to be reliable, it is important that the set of sensitivity or scenario runs accurately portray existing uncertainties (Morgan et al. 2005). In most applications of sensitivity analysis, upper and lower bounds on the model prediction are obtained, but not a probability distribution for the range of results. Incorporating the model simulation into a Monte Carlo algorithm can provide a statistical estimate of uncertainty with probability distributions (Metropolis and Ulam 1949; Cubasch et al. 1994; Robert and Casella 2004). However, Monte Carlo applications within the framework of a climate assessment present two problems. First, Monte Carlo analysis requires knowledge of the probability distributions (i.e., statistical uncertainty) of the parameters under consideration, and such distributions are rarely well defined for the reasons discussed above. Second, climate models are computationally expensive, and Monte Carlo analysis requires an often infeasibly large number of model realizations to obtain statistically meaningful results. Investigators have attempted to address the latter problem by reducing the number of simulations required, using algorithms that identify the most critical regions of the parameter space (Tatang et al. 1997). Another approach that yields statistical estimates of model uncertainty with probability distributions is the so-called direct sensitivity analysis technique, in which the uncertainty of each parameter is incorporated into the underlying differential equations of the model. Overall model uncertainty is then directly calculated by the model itself. One version of this approach—the Direct Decoupled Method—has been used in air quality modeling (Russell and Dennis 2000), but, to the best of the committee’s knowledge, it has not been used for climate modeling. The advantage of direct sensitivity analysis is that it eliminates the need for multiple model simulations.
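To make the Monte Carlo option concrete, here is a deliberately trivial sketch. The "model" is a one-line caricature and the input distributions are assumed for illustration; a real assessment would substitute an actual climate model and defensible distributions, at far greater computational cost:

```python
import random

random.seed(0)  # reproducible illustration

def toy_model(sensitivity, emissions_factor):
    """One-line caricature of projected warming (hypothetical, not a real model)."""
    return sensitivity * (1.0 + 0.5 * emissions_factor)

def sample_inputs():
    # Assumed input distributions -- in practice these are rarely well
    # defined, which is the first difficulty noted in the text.
    sensitivity = random.lognormvariate(1.0, 0.3)  # warming per CO2 doubling
    emissions_factor = random.uniform(0.0, 1.0)    # scenario-dependent factor
    return sensitivity, emissions_factor

# Each realization is trivial here; for a real climate model, 10,000
# runs is exactly the computational-cost problem noted in the text.
outcomes = sorted(toy_model(*sample_inputs()) for _ in range(10_000))

# Unlike bounding sensitivity runs, Monte Carlo yields a distribution:
p05, p50, p95 = (outcomes[int(q * len(outcomes))] for q in (0.05, 0.50, 0.95))
print(f"5th / 50th / 95th percentile outcome: {p05:.2f} / {p50:.2f} / {p95:.2f}")
```

The percentiles summarize a full probability distribution over outcomes, which is precisely what a pair of bounding sensitivity runs cannot supply.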
However, like Monte Carlo analysis, it requires knowledge of the probability distributions of the parameters under consideration. The approaches described above can, if carried out properly, yield a statistical measure of the uncertainty in the model output. However, the results can be misleading because the methods used are based on the assumption that the model or models completely describe and account for all relevant processes. If the models have unknowingly omitted an important process, the actual results can lie far outside the predicted uncertainty range. For example, stratospheric ozone models failed to predict the appearance of the Antarctic ozone hole because they did not include important heterogeneous chemical reactions.

Expert Judgment

In many global change assessments that evaluate potential outcomes in complex systems, the characterization of uncertainty for policy makers must

rely on qualitative metrics arrived at through a consensus of experts. For example, qualitative metrics (such as “virtually certain,” “likely,” etc.) are used in the IPCC assessment. For these metrics to be useful, all participants, including policy makers, must share and accept the meanings intended by the qualitative metrics. Other formal approaches to developing expert consensus, such as the “Delphi method” and techniques for drawing conclusions based on a range of expert judgment, have been developed (De Groot 1970; Dalkey 1970; Watson and Buede 1987; Morgan et al. 1984).

Scenarios

Scenario analysis can be a useful tool for developing insights on the importance of key uncertainties and on where additional research may have the greatest payoff. Where there are legitimate differences in opinion over the true state of the world, the scenario analysis approach can help clarify the importance of alternative assumptions and resolve seemingly intractable conflicts by illustrating a range of potential outcomes. However, scenario development is information intensive and requires data that are internally consistent. Such information may or may not be readily available. Further, scenarios are frequently confused with predictions of future conditions, so communication of appropriate ways to interpret them is essential.

A STRATEGIC COMMUNICATION PLAN

If an assessment’s scientific findings are effectively communicated, understood, and accepted by the target audience, there is a greater chance that optimal policies and decisions will be undertaken to address the environmental challenges analyzed in the assessment. Ideally, the communication strategy involves a multifaceted approach: getting to know the target audience, recognizing its information needs, and actively engaging its members in the process (Moser and Luganda 2006).
In designing a communication strategy, the assessment team should try to analyze and respond to the interests, motivations, receptivity, knowledge base, barriers, and resistance of different target audiences (Moser and Luganda 2006; Moser and Dilling 2007). The basic objective is to stimulate individuals to think about problems, risks, and solutions, and thereby to influence policies, decisions, and behavior. The communication process should be active during the entire assessment, not designed solely around dissemination of the report. Effective assessments have a comprehensive, multifaceted communication strategy right from the start, encompassing an analysis of the target audiences, alternate modes of reaching and engaging them, desired responses (e.g., policy decisions, legislation, technological innovation, standards, international

treaties), and appropriate follow-up activities. Further, the communication and outreach do not end with publication of the assessment report, but are an ongoing, dynamic, and iterative process of interaction with stakeholders, media, academe, and the public. The audiences targeted for communication efforts may differ through the assessment process and may be influenced by the issues themselves and by desired responses to the assessments. At different stages, the audience will comprise those who commissioned the assessment, those who are affected by the environmental problems it addresses, and those who can influence relevant legislation, business policies, and new product or technology development. Thus, target audiences may include government policy makers at national, state, and local levels; business decision makers; scientists and technical experts not initially involved in the assessment but relevant for solutions (e.g., engineers, economists, epidemiologists); affected communities (e.g., indigenous peoples in the Arctic); NGOs; and the general public. Each audience will differ in its degree of receptivity, knowledge, values, self-interest, and capacity to act (Johnson and Slovic 1995; Moser and Dilling 2007). In addition to the target audience, the communication plan often needs to consider appropriate intermediaries to engage in the process (e.g., the media, prominent opinion leaders, consultants, educational institutions). These intermediaries help translate the assessment results for the target audience and are commonly the most sophisticated users of the assessment products. However, it is important to consider the potential for some intermediaries to distort or select facts in order to either exaggerate or downplay the impacts.
If such intermediaries are too closely engaged in the dissemination process, the credibility and acceptance of the assessment might be hampered, stimulating resistance to action based on its conclusions. Modalities for communication and outreach extend beyond the printed page, including informal meetings and consultations, seminars and dialogues, public forums, selected working groups, interviews and news conferences, television, Internet, and CDs. Different types of publications and communication activities will be appropriate for different audiences. The MA, the stratospheric ozone assessments, and the ACIA are examples of assessment processes that produced an array of different publications and communications for different audiences—from policy makers to business to the general public. Effective assessment reports are concise, accessible, visually attractive, and user-friendly; investment in the writing and review process is critical. In terms of substance, it is important that the information provided is relevant to the needs of the most important stakeholders. For example, the global-scale information about climate change provided by the IPCC may not fully meet the local, regional, and short-term needs of stakeholders. More

generally, an assessment providing mainly global abstractions may fail to motivate decision makers at regional and local levels. The characteristic complexity of the science and the range of scientific uncertainties add to the communication challenge (Johnson and Slovic 1995; NRC 1996; Johnson 2003; Patt and Schrag 2003). Indeed, there may be an inherent conflict between a scientist’s penchant for exactitude and the effective presentation of an environmental assessment to a nontechnical audience. The complexity of the science is often daunting, encompassing projections of cumulative, minute changes in multiple variables for long future time frames; theoretical models and scenarios; complicated assumptions about risks; and the multidisciplinary nature of the subject matter. Conscious and imaginative efforts to simplify language, tables, and scenarios can make them more understandable (Johnson and Slovic 1994, 1995). Creative use of easy-to-understand charts, tables, graphs, and photographs can add significantly to a report’s effectiveness and impact, particularly by enhancing the assessment’s interest for media and other intermediaries. Many of the visual components of IPCC reports (e.g., the tangle of multiple-scenario trajectories) are virtually incomprehensible to nonexperts, in contrast to the attractive visual displays in the ACIA reports. Accessibility is also enhanced when both the basic report and other documentation are made available on the web in usable form and, if appropriate, translated into key languages. A summary is possibly the most crucial element of the written assessment product and an effective dissemination process, especially if it is concise, unbiased, clear about assumptions and uncertainties, free of jargon, and relevant to the various needs of decision makers. Different types of summaries may be appropriate for different audiences.
For media and the general public, for example, the summaries can be briefer, more colorful, and less technical, while still scientifically impeccable; ACIA offers useful examples (ACIA 2004). For business use, technology and product-oriented summaries, as in TEAP, are appropriate, especially when industry experts are involved in their preparation (UNEP 1991a, 1994a, 1998a, 2002a). The scientific assessments under the Montreal Protocol have used “Twenty Questions and Answers about the Ozone Layer” effectively to communicate to a wide range of nontechnical audiences. In sum, the most effective communications strategies are not based on a single encyclopedic report, however exemplary its scholarship. Rather, they comprise frequent consultations with stakeholders throughout the process; media outreach, engaged dialogues, meetings, and forums with key audiences; and a diversity of publications and pamphlets tailored to multiple audiences. Effective publications strategies are flexible, varying with different audiences and objectives, and producing products that differ, for example, in the degree of complexity, in policy relevance, in local or

regional focus, in basic education, and in technical emphasis. Such effective communication strategies can be expensive, requiring budgetary provisions for communication that are appropriate to the degree and scale of the desired outreach. The budgetary costs can be justified by analyzing in advance the potential benefits of effective communications for a successful assessment outcome.

SUMMARY OF GUIDANCE FROM THE LITERATURE

The scholarly literature on assessments cited above provides a rich and growing body of information on how to create a credible, legitimate, and salient assessment process. These characteristics are enhanced through a process of thoughtful deliberation, which is fair and competent, and in which all reasonable views are given serious consideration. Four elements are central:

- Engagement builds legitimacy and credibility. Who is at the table and whether they participate in two-way communication define perceptions of fairness and balance in point of view.
- A transparent review process and a deliberate effort to promote consensus increase legitimacy and credibility. Transparency in handling critical comments is particularly important in minimizing perceptions of bias or imbalance.
- Deliberate and consistent methods of treating and communicating uncertainties add credibility and salience. Regardless of method (statistics, sensitivity analysis, scenario development, or expert judgment), each measure must be defined and communicated in a consistent manner.
- A deliberate and active communication strategy instituted at the onset of an assessment process enhances the value, credibility, and legitimacy of the process and products. An effective communication plan recognizes the nature of the audiences, including their interests, receptivity, and knowledge base, as well as any barriers to communication.
The difficulty in incorporating these four key elements depends on the nature of the assessment. Process assessments have a well-worn path for success, which includes incorporating a critical mass of experts, ensuring broad participation, focusing intensively on a specific science issue, developing consensus through a state-of-the-science evaluation, instituting authoritative review, and offering a clear summary of the results. As a result, process assessments are less likely to be subject to criticism for their credibility or legitimacy, unless the science topic is associated with the perception of great political importance.

In contrast, assessments that focus on impacts or responses present greater challenges. In each case, achieving credibility and legitimacy requires involving a broader set of stakeholders, often with more specific interests and biases. Further, the assessment outcomes are much more likely to involve analyses that depend on assigning values; hence, it is more likely that they will generate a diversity of opinions that affect legitimacy and credibility. It is important that impacts and response assessments be designed in a manner that accepts the challenge of broadening the participation and the level of communication, while also recognizing that there is still much to be learned about how to conduct these types of assessments successfully.
