## CHARACTERIZING UNCERTAINTY

### PETER STONE

Peter Stone, from the Massachusetts Institute of Technology, explained that in order to provide useful information about climate change given all the uncertainties, one has to use probabilistic approaches. In the MIT model, PDFs are developed for the key uncertain elements of the climate system, including climate sensitivity, the rate of ocean heat uptake, and aerosol radiative forcing. In addition, the model includes the major economic uncertainties that affect CO2 emissions, including changes in labor productivity (which implicitly includes population growth), efficiency of energy use, and the cost of non-carbon technologies. These inputs are propagated through the model and yield an output in the form of a PDF for global mean temperature change and other climate variables of interest. With this approach, they can show how the potential distribution of climatic changes would differ for various policy responses.
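The propagation step described above can be illustrated with a minimal sketch. The input distributions and the toy response function below are purely illustrative stand-ins, not the MIT model's actual PDFs or physics; the transient-warming relation ΔT ≈ F / (λ + κ) is a common simple approximation, where λ is the climate feedback parameter implied by the sampled sensitivity and κ is an ocean heat uptake efficiency:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000  # number of Monte Carlo samples

# Hypothetical input PDFs (illustrative values, not the MIT model's):
sensitivity = rng.lognormal(np.log(3.0), 0.4, size=n)   # K per CO2 doubling
kappa = rng.uniform(0.4, 1.2, size=n)                   # ocean heat uptake efficiency, W/m^2/K
aerosol_forcing = rng.normal(-1.0, 0.4, size=n)         # W/m^2

F2X = 3.7          # forcing for doubled CO2, W/m^2
ghg_forcing = 6.0  # assumed greenhouse-gas forcing by 2100, W/m^2

# Toy transient response: dT = F / (lambda + kappa),
# with lambda = F2X / sensitivity the feedback parameter.
net_forcing = ghg_forcing + aerosol_forcing
lam = F2X / sensitivity
delta_t = net_forcing / (lam + kappa)

# The ensemble of outcomes approximates the output PDF; summarize its quantiles.
p5, p50, p95 = np.percentile(delta_t, [5, 50, 95])
print(f"dT by 2100: 5th={p5:.2f} K, median={p50:.2f} K, 95th={p95:.2f} K")
```

Rerunning the sketch with a different assumed emissions trajectory (a different `ghg_forcing` distribution) would shift the output PDF, which is how policy responses can be compared in this framework.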

The main advantage of the model is its broad scope (that is, its inclusion of economic variables along with critical physical variables of the oceans and atmosphere). The trade-off is that it is a two-dimensional zonal mean model, and thus cannot represent longitudinal details. Also, the lack of a third dimension leads to important challenges in simulating the two-dimensional character of the circulation properly.

The value of this type of analysis is that it provides a measure of how much you reduce the high-risk outcomes under various policy response scenarios (i.e., future emissions). They find that the different policy response options do not lead to significant changes in the most probable outcome, but large effects are seen in the tail of the distribution (Webster et al., 2003). Thus, understanding the probability of high-risk outcomes may depend strongly on constraining uncertainties in the various input functions.

A Monte Carlo approach is used to define the PDF of the outcomes (such as the increases in global mean surface temperature by 2100), and the distribution limits are largely determined by the number of runs carried out. More than 250 model runs are required to accurately define the 5-95 percent confidence limits. To further constrain the tail of the distribution (i.e., obtain a 99 percent confidence limit), one has to consider whether it is worth the resources required to carry out the extra runs that would be needed.
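The link between ensemble size and how well a tail quantile is pinned down can be sketched numerically. The lognormal below is an arbitrary stand-in for the model's output PDF (not Stone's actual results); the bootstrap spread of the 95th-percentile estimate shrinks as the number of runs grows, which is the resource trade-off described above:

```python
import numpy as np

rng = np.random.default_rng(1)

def tail_spread(n_runs, n_boot=2000):
    """Bootstrap spread (std dev) of the 95th-percentile estimate
    obtained from an ensemble of n_runs model outcomes."""
    # Stand-in for one ensemble of model outputs (illustrative distribution).
    sample = rng.lognormal(np.log(2.5), 0.5, size=n_runs)
    # Resample the ensemble many times and re-estimate the 95th percentile.
    boots = [np.percentile(rng.choice(sample, n_runs), 95)
             for _ in range(n_boot)]
    return float(np.std(boots))

for n in (250, 1000, 4000):
    print(f"{n:5d} runs -> 95th-percentile uncertainty ~ {tail_spread(n):.3f}")
```

Because the sampling error of a quantile estimate falls roughly as the inverse square root of the ensemble size, sharpening an extreme tail estimate (e.g., a 99th percentile) demands disproportionately many additional runs.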

#### Discussion

Leggett: Some people feel that defining the distribution tail is an important goal, but can we use this information, even if it is obtainable? Can we really assign any meaningful difference between the 98th, 99th, 99.99th percentiles of a distribution?

Wigley: His simple model allows an essentially unlimited number of runs, but the problem is that the high-end tail of the output distribution is strongly affected by small changes in the input values. This presents an inherent uncertainty.

Stone: Since uncertainty increases the farther you go into the future, part of the problem may be our insistent focus on projecting to the year 2100. The uncertainties in many critical variables are much smaller if one focuses on shorter time spans.

Mahlman: However, the use of shorter time spans omits critical information on how global warming plays out over its natural time scales (many centuries).

Prather: It would be useful to look at the range and distribution of potential impacts, not just a global mean temperature change. Even a relatively low climate sensitivity value may yield some significant impacts (for instance, high-latitude warming that leads to melting of the Greenland ice sheet).

The National Academies of Sciences, Engineering, and Medicine
500 Fifth St. N.W. | Washington, D.C. 20001

