4
Characterizing and Reducing Uncertainty

The term reducing uncertainty is ubiquitous within the Climate Change Science Program (CCSP) strategic plan. Reducing uncertainties is the central theme of one of the five major CCSP goals and the foundation of one of the four core approaches to addressing these goals. As such, it is viewed as a litmus test for determining whether scientific knowledge is sufficient to justify particular policies and decisions. It is also listed as one of the key criteria for prioritizing work elements within the CCSP. Finally, “reducing uncertainties” appears in many of the research questions, milestones, and products and is an element of plans to develop decision support resources (Chapter 11 of the plan calls for “scientific synthesis and analytic frameworks to support integrated evaluations, including explicit evaluation and characterization of uncertainties”). That the concept of reducing uncertainties appears in the plan as a goal, within an approach, as a criterion, as the basis of scientific questions, and as a milestone or product indicates the degree to which this concept pervades strategic thinking in the CCSP. A key question is whether reduction of uncertainty is also a metric for assessing progress and, if so, how it should be applied.

In a number of presentations to the committee, reducing uncertainty appeared to take on the mantle of a potential “supermetric” capable of assessing whether or not the CCSP is successful. For example, if the investment in climate model prediction does not result in a narrowed range of predicted sensitivity, then the investment could be viewed as a failure. This use of reducing uncertainty as a metric violates the general principles








presented in Chapter 3. Reliance on a single metric can provide an erroneous sense of progress and increase the potential for misuse (principle 8). The principle that metrics should address both process and progress (principle 7) is particularly relevant for complex and diverse programs such as the CCSP.

Importantly, the meaning of uncertainty is poorly defined for much of the scope of the CCSP. It is likely that different definitions apply to different program elements (e.g., overarching goal, prioritization criteria, research question, milestone). Without careful definition, reducing uncertainty cannot be evaluated using specific observable or articulated measures. Therefore, it violates the principle that metrics should be easily understood and broadly accepted by the community (principle 5). To be meaningful, a metric must first be based on a well-specified variable that indicates advancement of knowledge. Second, a precise definition of what is meant by “uncertainty” in reference to that variable must be specified.

The pervasive and diverse use of reducing uncertainty as a definition of progress, and the flaws and potential misuse of reducing uncertainty as a metric, warrant a more detailed assessment of its application for the CCSP.

THE ROLE OF UNCERTAINTY IN CLIMATE DISCUSSIONS

The climate community expresses uncertainty in different ways.1 The CCSP defines uncertainty as:

    An expression of the degree to which a value (e.g., the future state of the climate system) is unknown. Uncertainty can result from lack of information or from disagreement about what is known or even knowable.
    It may have many types of sources, from quantifiable errors in the data to ambiguously defined concepts or terminology, or uncertain projections of human behavior.2

Uncertainty plays a key role in policy formation because decisions often turn on the question of whether scientific understanding is sufficient to justify particular types of response. The CCSP strategic plan seeks to develop knowledge of the complex human-natural system in support of public and private decisions, and a central component of this task concerns characterizing, and where possible reducing, current levels of uncertainty in knowledge of key climate processes.

The CCSP objective is laudable, but it has also yielded the potential for an overly simplistic view of uncertainty as a measure of progress. Perhaps the most prominent example of this shortcoming involves comparison of the estimates of the change in global mean temperature at equilibrium with a doubling of CO2 from preindustrial levels. Studies of global mean temperature date back as far as the late nineteenth century,3 when Arrhenius estimated that a doubling of CO2 would warm the planet by 4–6°C. However, the most prominent early modern estimate was provided by a 1979 National Research Council study, usually referred to as the Charney report in reference to the committee’s chairman.4 It concluded that “… the equilibrium surface global warming due to doubled CO2 will be in the range 1.5°C to 4.5°C, with the most probable value near 3°C.” Subsequent estimates of this range have led to similar conclusions. For example, the 1995 Intergovernmental Panel on Climate Change (IPCC) second assessment reports the same range with a “best estimate” of 2.5°C, while the 2001 third assessment states that “the previously estimated range for this quantity, widely cited as +1.5°C to +4.5°C, still encompasses the more recent model sensitivity estimates.”5 Such comparisons of ranges are widely interpreted, even in the scientific literature, as meaningful indicators of progress (or lack of it).6 The application of an uncertainty metric, defined in this case as the extent to which the range in estimated climate sensitivity due to the doubling of carbon dioxide has narrowed with climate research, would suggest that little progress has been made. In fact, this interpretation is far from correct because of the flaws in the application of uncertainty as a metric.

1. For example, see Lempert, R., N. Nakicenovic, D. Sarewitz, and M. Schlesinger, 2004, Characterizing climate-change uncertainties for decision-makers, Climatic Change, 65, 1–9; Intergovernmental Panel on Climate Change, 2004, Describing Scientific Uncertainties in Climate Change to Support Analysis of Risk and of Options: Workshop Report, M. Manning, M. Petit, D. Easterling, J. Murphy, A. Patwardhan, H.-H. Rogner, R. Swart, and G. Yohe, eds., Report of a workshop held at the National University of Ireland, Maynooth, May 11–13, 2004, 138 pp., <http://ipcc-wg1.ucar.edu/meeting/URW/product/URW_Report_v2.pdf>.
2. Climate Change Science Program and Subcommittee on Global Change Research, 2003, Strategic Plan for the U.S. Climate Change Science Program, Washington, D.C., p. 199.
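The pitfall in comparing such ranges can be made concrete with a short numerical sketch. In the Python fragment below, the assumption of a normal distribution and the two confidence levels are illustrative choices, not values taken from the Charney report or the IPCC assessments; the sketch only shows that one and the same 1.5–4.5°C range implies very different spreads depending on the confidence level attached to it.

```python
# Sketch: why two identical numeric ranges cannot be compared unless the
# confidence level attached to each range is stated. The normal-distribution
# assumption and the confidence levels are illustrative, not from the reports.
from statistics import NormalDist

low, high = 1.5, 4.5       # the widely cited sensitivity range, deg C
center = (low + high) / 2  # assume a symmetric distribution about 3 deg C

def implied_sigma(confidence: float) -> float:
    """Standard deviation a normal distribution centered on `center`
    would need for [low, high] to be a two-sided `confidence` interval."""
    z = NormalDist().inv_cdf(0.5 + confidence / 2)  # two-sided z-score
    return (high - center) / z

# The same 1.5-4.5 range read at two hypothetical confidence levels:
sigma_90 = implied_sigma(0.90)  # range treated as a 90% interval
sigma_66 = implied_sigma(0.66)  # range treated as a 66% interval

print(f"implied sigma if 90% interval: {sigma_90:.2f} deg C")
print(f"implied sigma if 66% interval: {sigma_66:.2f} deg C")
# The implied spreads differ substantially, so an unchanged range, by
# itself, says nothing about whether the underlying uncertainty changed.
```

Because neither report attaches a confidence level to its range, the two identical intervals are consistent with substantially different underlying uncertainties.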
PITFALLS IN THE APPLICATION OF UNCERTAINTY METRICS

Previous experiences in the climate debate, associated studies, and the example above reveal three circumstances in which caution should be exercised in the construction of metrics for the CCSP: (1) ill-specified comparisons, (2) systematic errors, and (3) chaotic systems.

Ill-Specified Comparisons

Even in cases where a variable is well defined, improper comparisons can arise. A clear example of such failure is the comparison of the estimates in the Charney report with the most recent IPCC analysis described above. The two estimates of mean global temperature sensitivity to increased carbon dioxide cannot be compared meaningfully because neither states what confidence interval is intended. The Charney report did not include a statement about confidence intervals, and the IPCC has not attached probabilities or confidence intervals to its ranges of estimates. The same issue arises with comparisons of the estimated range of temperature change among the IPCC summaries, which involve both climate and emissions models.7 Therefore, a comparison of the estimates from the Charney report and the more recent IPCC reports is not meaningful because what is meant by uncertainty must be determined more precisely for each case. Metrics for the CCSP will have to take these challenges explicitly into account, with careful definitions of system variables and their associated uncertainties.

Correction of Systematic Errors

There is no foolproof methodology for determining systematic error. An empirically observed correlation between a presumed cause and effect may be wholly spurious due to omission of a causal factor. Finding all systematic errors necessitates examining a whole series of ad hoc possibilities, some of which may be completely unknown to the observer. The nature of the problem is shown in Figure 4.1, which illustrates one of the most well-established areas of scientific research: the measurement of the speed of light.

3. Arrhenius, S., 1896, On the influence of carbonic acid in the air upon the temperature of the ground, Philosophical Magazine and Journal of Science, 41, 251.
4. National Research Council, 1979, Carbon Dioxide and Climate: A Scientific Assessment, National Academy Press, Washington, D.C., 22 pp.
5. Intergovernmental Panel on Climate Change, Working Group I, 1995, Climate Change 1995: The Science of Climate Change, Cambridge University Press, Cambridge, U.K., p. 34; Intergovernmental Panel on Climate Change, Working Group I, 2001, Climate Change 2001: The Scientific Basis, Cambridge University Press, Cambridge, U.K., p. 527.
6. For example, see “Rising global temperature, rising uncertainty,” Science, 292, April 13, 2001, pp. 192–194; “Three degrees of consensus,” Science, 305, August 13, 2004, pp. 932–934.
Figure 4.1 displays the results of the well-specified problem of determining the speed of light under carefully controlled laboratory conditions at different times. Assuming that the most recent value is indeed closest to the true value (an assumption that would require serious effort to confirm), it might have been anticipated that the standard errors of the earlier measurements, made by extremely careful and expert physicists, would have encompassed the apparent correct value. That many of the earlier results fail to do so can likely be attributed either to random effects or to unsuspected, and therefore uncorrected, systematic errors.

7. Reilly, J., P.H. Stone, C.E. Forest, M.D. Webster, H.D. Jacoby, and R.G. Prinn, 2001, Uncertainty and climate change assessments, Science, 293, 430–433.
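The coverage failure behind Figure 4.1 can be sketched numerically. In the fragment below, the historical measurements and their standard errors are hypothetical placeholders, not the Henrion and Fischhoff data; only the modern value of the speed of light is exact.

```python
# Sketch of the coverage problem in Figure 4.1: historical measurements
# whose quoted standard errors fail to bracket the currently accepted value.
# The historical values below are ILLUSTRATIVE placeholders, not the data
# from Henrion and Fischhoff (1986); only the modern value is exact.
C_MODERN = 299_792.458  # km/s, exact by definition of the meter since 1983

# (year, measured value in km/s, quoted standard error in km/s) - hypothetical
measurements = [
    (1900, 299_870.0, 30.0),
    (1930, 299_770.0, 10.0),
    (1950, 299_792.0, 3.0),
]

def covers_true_value(value: float, std_err: float, k: float = 1.0) -> bool:
    """True if the modern value lies within +/- k standard errors."""
    return abs(value - C_MODERN) <= k * std_err

covered = [covers_true_value(v, se) for _, v, se in measurements]
print(covered)  # -> [False, False, True]
# If quoted errors captured every systematic effect, roughly 68% of
# one-standard-error bars should cover the true value; a long run of
# misses signals unsuspected systematic error rather than bad luck.
```

The same coverage check can in principle be applied to any sequence of published estimates of a well-defined physical quantity, once a reference value is agreed upon.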

FIGURE 4.1 Estimated values of the speed of light at different points in history. Vertical bars are the expected value with standard error. Note that the vertical scales are slightly different. SOURCE: Henrion, M., and B. Fischhoff, 1986, Assessing uncertainty in physical constants, American Journal of Physics, 54, 791–798. Copyright 1986, American Association of Physics Teachers.

Errors must be considered even for a problem as simple as determining average temperature. To measure temperature accurately, it is necessary to consider the potentially erroneous calibration of the thermometer, dependence on the housing of the thermometer, urban development in the vicinity of the measurements, and differences in the way different observers read a thermometer. Realistic representation of uncertainties requires estimates of the possible contributions of all significant effects, some of which may be relatively unknown.

As science advances, phenomena often become more fully characterized. One typical result is that phenomena not initially understood to be

relevant are found to contribute to the effect being forecast, perhaps increasing uncertainty as a result. One such example is the introduction of an interactive land surface into climate models in the interval between the publication of the Charney report and the most recent IPCC assessment. These interactive components were added to global climate models because climate-vegetation feedbacks were discovered to be a potential mechanism for altering climate sensitivity predictions. Such an innovation does not necessarily reduce uncertainty and in fact, given the diversity of interactive land surface models, is likely to have increased it. Nevertheless, it is not a research failure, because it contributes to an advanced characterization of the problem. However, it might be erroneously classified as a failure by an incautious application of uncertainty reduction as the sole metric for measuring advancements in knowledge.

Revelation of Chaotic Systems

For some variables and for some scales, the Earth’s climate and weather systems are chaotic. That is, in any projection of this nonlinear system, irreducible small errors in the initial conditions grow with time until the prediction becomes meaningless. Prediction beyond a certain time horizon is impossible in principle. In classic work, E. Lorenz found this phenomenon to hold for weather systems.8 Consider the problem of predicting precipitation in Washington, D.C., on July 1 of any particular year. Three- and five-day forecasts of precipitation can be made with considerable skill. Fifty years ago, a meteorologist might have rationally concluded that obtaining accurate forecasts several weeks in advance would be only a matter of time and effort, and that the uncertainty existing at that time could only diminish as models and observations improved.
Today we know, through theories of chaos, that such an expectation would have been false: there is virtually no hope of accurate forecasts of daily precipitation one month ahead. Indeed, as observational records have grown in length and as ever-more-extreme events are recorded, estimates of the uncertainty of month-ahead precipitation forecasts have increased, not decreased. Hence, a forecast made in 1950 might well have been considerably more optimistic than one made today, but less accurate scientifically. As knowledge of the interacting systems increases, estimates of uncertainty associated with some climate variables and some scales could decrease or increase, perhaps markedly.

8. Lorenz, E., 1963, Deterministic nonperiodic flow, Journal of the Atmospheric Sciences, 20, 130–141.

USE OF UNCERTAINTY METRICS

The ability to properly characterize uncertainty is of great value. Uncertainty plays a key role in policy formation because decisions often turn on the question of whether scientific understanding is sufficient to justify particular types of response. For this reason, metrics that mark advancements in this area will be valuable. However, it must be reemphasized that advances in the knowledge of climate systems may result not only from decreases in uncertainty, but also from increases as more is understood about governing elements. Hence, imprudent application of any simple measure of uncertainty could be very damaging to scientific efforts. In many cases, it may be more useful to consider successes in identifying uncertainties and in understanding their nature.

The problem of living with contingencies whose uncertainty cannot be reduced or eliminated is a familiar one and has led over the centuries to a practice of risk-reducing investment, insurance, and expenditure to maintain options for future choice. Global change falls into this category: possible outcomes (no global warming, moderate to severe global warming, global cooling) can be stated only in probabilistic terms.9

Given the constraints described above, reduction of uncertainty should not be relied upon as a metric for assessing progress in the CCSP. Alternative measures that do not have these shortcomings are presented in Chapter 6.

9. Mastrandrea, M.D., and S. Schneider, 2004, Probabilistic integrated assessment of “dangerous” climate change, Science, 304, 571–575.
