Review of the U.S. Climate Change Science Program’s Synthesis and Assessment Product on Temperature Trends in the Lower Atmosphere

Review of Chapter 5

Chapter 5 asks the following: how well can the observed vertical temperature changes be reconciled with our understanding of the causes of these changes? The chapter aims to explain the different observed surface and tropospheric temperature trends using state-of-the-art modeling results, principally from integrations that include multiple climate forcing factors. Most of the model simulations analyzed are relatively new, having been performed for the Intergovernmental Panel on Climate Change Fourth Assessment Report (IPCC AR4).

Overall the committee liked this chapter. It is the clearest and most lucid of all the chapters. However, several important issues should be addressed, especially the correct use of statistical uncertainties and the comparisons with satellite data. Unlike the other chapters, this chapter makes copious use of footnotes. This approach seems better suited to the presumed audience, because sufficient detail can be presented without damaging the flow for the more general reader, and it is therefore recommended for all the other chapters.

MAJOR COMMENTS

1. The conclusions reached are often based on estimates of trends that neglect uncertainty levels, and many statements of comparison are inaccurate as a result. The report should be more explicit about the choices made regarding the treatment of trend confidence intervals in model-data comparisons. If the authors believe that including error bars could hide model-data discrepancies, inadequate understanding of model uncertainties, or both, then this view should be discussed, possibly within the first conclusion or earlier. The second and third conclusions, regarding the influence of volcanoes and the El Niño-Southern Oscillation (ENSO), could be removed and only briefly mentioned earlier. The volcano aspect could be included in the first concluding point as an example of a single forcing whose signal can be detected in observed temperatures. There should also be discussion of volcanoes in the context of Douglass and Knox (2005) and Lindzen and Giannitsis (1998, 2002).
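A minimal sketch of the kind of trend confidence interval the committee has in mind may be useful. The following is a generic illustration, not the chapter's (or the committee's) own procedure: an ordinary-least-squares trend whose standard error is inflated for lag-1 autocorrelation through an effective sample size, a common device for temperature series. The function name and the synthetic series are invented for illustration.

```python
import numpy as np

def trend_with_ci(y, dt=1.0, z=1.96):
    """OLS trend of the series y (time step dt), with a ~95% confidence
    half-width whose standard error is inflated for lag-1 autocorrelation
    via an effective sample size n_eff = n*(1 - r1)/(1 + r1)."""
    n = len(y)
    t = np.arange(n) * dt
    slope, intercept = np.polyfit(t, y, 1)          # highest power first
    resid = y - (intercept + slope * t)
    r1 = np.corrcoef(resid[:-1], resid[1:])[0, 1]   # lag-1 autocorrelation
    n_eff = n * (1.0 - r1) / (1.0 + r1)
    s2 = np.sum(resid**2) / (n_eff - 2.0)           # residual variance
    se = np.sqrt(s2 / np.sum((t - t.mean())**2))    # slope standard error
    return slope, z * se

# Synthetic example: 25 years of monthly anomalies with a 0.02 K/yr trend
rng = np.random.default_rng(0)
years = np.arange(300) / 12.0
series = 0.02 * years + rng.normal(0.0, 0.2, size=300)
trend, half_width = trend_with_ci(series, dt=1.0 / 12.0)
# A model-data comparison would then test overlap of such intervals,
# rather than comparing bare trend estimates.
```

In a plot such as Figure 5.3 or 5.4, each estimate would then carry a bar of half-width `half_width` rather than appearing as a bare dot.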

2. Error bars are essential on the plots, notably Figures 5.3 and 5.4, and all dots should be horizontal bars to allow for sampling uncertainty. This is important because ENSO is not in the same sequence in the coupled models. This is partly discussed in lines 641-659, but model simulations cannot be definitive given the exceptional nature of the 1997-98 event. Even in Figure 5.7 only a single set of error bars is given.

3. The chapter notes the importance of the stratospheric contribution to the channel 2 temperatures and refers to Fu et al. in lines 580-583, but then never allows for this in subsequent comparisons. As a result, Figures 5.2B, 5.3, 5.4, and 5.5 and the accompanying discussion are all misleading because the models clearly have different cooling in the stratosphere; discussion in Chapter 4 suggests that this accounts for a 0.05 K/decade discrepancy in the channel 2 trend. Several parts of the text ought to be substantially revised as a result (including lines 614-624, 631-633, and others).

4. Regarding the focus on global means and some zonal means: regional trends differ a lot from global values (Agudelo and Curry, 2004). For instance, the large increase in surface temperature over northern land and the smaller decrease in the troposphere, which is related to changes in surface inversions (Chapters 1 and 3), are not examined in the models and not picked up. The chapter comes closest with Figure 5.5, but that fails to account for the stratospheric contamination. The fact that sondes are not global is also not dealt with; subsampling of the model data at sonde locations is not done.

5. There should be more explicit discussion of the specific responses to individual forcings and how these combine. This could be done in the first of the conclusions that have “some confidence”—see details toward the end of the specific comments. There should also be a discussion of the use of multiple regional forcings in models.

6. In the presentation by B. Santer during the February 23, 2005 NRC meeting (Chicago, Illinois), the committee liked the two model plots (of standard deviations and trends at the surface and at the low and middle troposphere) and would hope that these can be included in a revised chapter. Also included should be as many results from additional models as time allows.

7. The committee had some discussion of how the basic methodology of “detection and attribution” should be presented in the report. What is needed is not a full mathematical description of the method (for that one can refer to the original source papers) but a discussion of the main principles behind the methodology that would be appropriate for a climate scientist who does not work directly in this area of research. There needs to be better understanding of the strengths and limitations of detection and attribution analyses. What follows is a tentative suggestion of how to do this. In addition, the authors may find the work of Levine and Berliner (1999) useful in revising this discussion.

Detection and attribution methods try to represent an observed climatic data set in terms of signals due to forcing factors such as greenhouse gases, aerosols, and solar fluctuations, plus correlated random noise. The methods are also called “fingerprint analysis” because it is possible to think of the method as identifying specific fingerprints (spatial patterns of climate change due to specific forcing factors) in the observational climate record. The climate data typically consist of temperature or rainfall averages over grid boxes and are very high-dimensional.

There are two versions of the method: one developed by Santer et al. (1995, 1996), based on estimating pattern correlations between observational data and fingerprints, and the other developed primarily by Hegerl and Allen and their co-authors (Allen and Tett, 1999; Hegerl and Allen, 2002; Allen and Stott, 2003; Allen et al., 2004), which uses regression analysis to decompose climate data into a linear combination of forcing factors, plus correlated noise. If the regression coefficient due to a forcing factor is statistically significantly different from zero, then we can claim to have “detected” that factor in the climate record. “Attribution” refers to the process of attributing the observed climate change to the different forcing factors. The two approaches are mathematically equivalent, though they differ in specific details because of different implementation decisions made by the two groups.

A critical feature of both versions of the method is how to estimate the spatial correlation structure of the unforced internal variability. Standard techniques—such as estimating the correlation between each pair of grid boxes from the observational data and assembling the resulting pair-wise correlations into a correlation matrix—fail completely because the number of data vectors available for estimating the correlation matrix is so much smaller than the dimension of the data. Therefore, an indirect approach is used. Samples, typically around 1,000 years in length, are generated from control runs of the climate model in which all forcing factors are kept constant. The covariance matrix is estimated from these control runs, together with an orthogonal decomposition to reduce dimension (a technique variously known as empirical orthogonal function (EOF) analysis, principal components analysis, or Karhunen-Loève expansion). Typically around 10-15 orthogonal components are used. Based on this decomposition, it is possible to greatly reduce the dimension of the original data and hence to estimate the pattern correlations or regression coefficients, with realistic approximations to the sampling distributions of those estimates.

Some potential difficulties with the methodology should be noted. There are various technical issues, such as how many orthogonal components to choose (or the broader question of how well covariances calculated from control model runs represent those in real data). The method does assume that the signals or fingerprints are known—Myles Allen has proposed an extension that allows for random error in the signals, but this makes the analysis much more complicated, and it is not clear that it works well in this situation. As a result, the methodology is probably not appropriate for incorporating climate change effects where there is large uncertainty about the signal (one might argue that land-use and land-cover effects fall in this category). Also, the methodology should probably not be used with too many different signal components; most successful applications have used just the major signals mentioned above (greenhouse gases, aerosols, and solar fluctuations, possibly also including volcanic forcings, but always bearing in mind that some of these forcings are not well known and have large error estimates). It is a feature of any regression analysis that including too many collinear regressors lowers the precision of the estimated regression coefficients, and this aspect is only made worse by the difficulties associated with estimating covariances.

The method assumes that the response to a combination of different forcing factors is a linear combination of the responses to the individual forcing factors (a property that statisticians call additivity). In principle one could get around this assumption by running, for example, climate models under different combinations of forcing factors and using these combined signals in the detection and attribution analysis. This is not done because of the computational expense of obtaining such multiple model runs and the statistical difficulties, just mentioned, of using detection and attribution techniques to select among a large number of possible model-based signals. The assumption of linearity in response should also be evaluated, as many of the forcings are not orthogonal to each other.

In summary, detection and attribution methods are an extremely powerful technique. They are essentially the only method available for formally analyzing the agreement between climate models and observational data. However, they cannot be expected to do everything. In particular, they cannot be expected to detect a poorly defined signal or to discriminate among a very large number of possible signals that might represent different explanations of climate change.

SPECIFIC COMMENTS

1. The word “lockstep” in line 52 should be replaced with “evolve together” or “in unison”.

2. References should be provided for the differences of opinion discussed in lines 93-94.

3. In lines 108-109: while the model will simulate an ENSO similar to the real world's when run in Atmospheric Model Intercomparison Project (AMIP) mode, this correspondence is much weaker for the North Atlantic Oscillation (NAO), which is not ocean forced.

4. The sentence in lines 112-113 could be expanded to be more informative.

5. In lines 122-123, the reason for using ensemble forecasts is not only that the full state of the climate system is not known. Producing a deterministic forecast of the climate would also require a perfect model.

6. Add regional aspects in footnote 11?

7. Evidence of the 0.3°C cooling since the 1970s over India should be provided in line 216. This cooling is not evident in the maps the IPCC AR4 will use for 1979-2004; over this timeframe most of India shows warming.

8. The urban heat island should be mentioned in lines 233-237 because it is a major effect in urban areas and of opposite sign to rural land-use and land-cover effects.

9. In line 237, Matthews et al. (2003) is not in the reference list (and it should actually be Matthews et al., 2004). The cooling is small in global terms, but it might be large regionally, although it is hard to pin down because the noise levels are much higher.

In general, regional changes on a scale smaller than the Rossby radius tend to be confined to the boundary layer. Also, Table 1 of Chapter 1 says land-use change effects are small.

10. Another reference relevant to lines 281-282 is Jones (1994).

11. Are the very small estimates of error reported in footnote 18 still believed?

12. Lines 320-321 state, “volcanic effects probably contribute to slow changes in lapse rate variability”. Do the authors mean changes in lapse rate variability, or changes in lapse rate?

13. In footnote 21, can the HC/CRU surface data be referred to as HadCRUT2v, with the Jones et al. (2001) reference? This should be done elsewhere in the other chapters and can be in one of the footnotes.

14. Section 4.3 would benefit from more synthesis and assessment, rather than just reporting of results.

15. Which climate forcings does line 357 refer to?

16. In line 397, “various datasets” should be “various models”.

17. The IDAG reference in line 411 is missing.

18. In lines 424-425, are there also variables other than temperature? Are there more recent references regarding pressure detection? There could be a better link to the preceding paragraph.

19. In lines 428-432, positive detection results obtained in the absence of some forcing should not be taken as evidence of absence of that forcing. The same argument could be made about volcanic forcing, solar forcing, or even sulfate forcing, yet we know they have had an effect on the climate. This sentence should certainly be deleted. (In fact, four pages later, the authors themselves argue that our inability to detect sulfate in some studies should not be taken as evidence of absence of a sulfate signal.) This also reflects the potential non-orthogonality of the various forcings.

20. “This apparent contradiction” in line 433 is not really a contradiction, for the reasons given above.

21. What is the relevance of the final sentence in lines 498-500?

22. In lines 502-509, the difficulty of detecting the sulfate response could also be explained by degeneracy between the sulfate and greenhouse gas response patterns. The inability of some studies to detect the sulfate response should not be taken as evidence of absence of a sulfate signal, but at the same time, by itself it does not show that “it is important for detection work to account for large temporal changes in the fingerprint pattern”.

23. None of the model runs for IPCC AR4 mentioned in lines 541-542 have been written up yet. As long as the references detailing the new integrations are submitted, this is not a problem.

24. HadCM3 has also run all the experiments discussed in lines 550-551. Is it possible to include it, or are you hoping to use HadGEM?

25. In lines 553-554, the use of different forcings in the different IPCC models is sometimes presented as an advantage, since it folds some approximation of forcing uncertainty into the analysis.

26. In lines 591-592, insert “partly” before “due”. As far as we are aware, no one has shown that water vapor changes can explain the full difference between simulated and observed trends.

27. Insert “Body” before “temperature” at the start of line 782.

28. Line 790 should also refer to Robock and Oppenheimer (2003), which looks more at circulation patterns; there is a paper in that book by Jones et al. (2003).

29. Lines 800-802 state, “At constant relative humidity, water vapor is expected to increase nonlinearly with temperature (Soden et al., 2002).” Water vapor does increase nonlinearly with temperature at constant relative humidity; this is just the Clausius-Clapeyron equation. If a reference is cited, it should be to Clausius and Clapeyron.

30. In line 808, ocean temperature data have very ambiguous implications, as noted in Lindzen and Giannitsis (2002).

31. More discussion should be provided in line 811 about the widespread and accelerating glacial retreat. It is the mountain glaciers in low and midlatitudes that are melting systematically. This is a terrific proxy measurement because the mountain glacier melting is unprecedented in modern history and is now happening within the lower atmosphere that is a primary focus of this report. The report needs to point out that only glaciers that respond to summer temperatures are retreating; many glaciers that respond to winter precipitation are advancing.

32. Trends are influenced by ozone, greenhouse gases, aerosols, etc. In addition to addressing combined forcings in lines 849-863, also discuss the response of the model to each of the forcings individually and the uncertainties in the various forcings.

33. Volcanoes and ENSO do not make much difference to the trend. As these series are now slightly longer than in earlier studies, so that they no longer end with a major (1997-1998) ENSO event, the points about volcanoes and ENSO could be removed. There could be a brief discussion of their effects in lines 865-881 or, preferably, earlier.

34. The section in lines 903-910 should be more explicit in saying which of the various components contribute separately to the agreement. It should also be more explicit about exactly which forcings are included in the “all” integration.
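The nonlinearity invoked in Specific Comment 29 is easy to demonstrate numerically. The sketch below uses the Magnus approximation to saturation vapor pressure, a formula chosen here for illustration rather than taken from the chapter: at constant relative humidity, saturation vapor pressure grows by roughly 6-8% per kelvin near surface temperatures, with the fractional growth itself declining as temperature rises.

```python
import math

def saturation_vapor_pressure_hpa(t_celsius):
    """Magnus approximation for saturation vapor pressure over water (hPa),
    accurate to roughly 0.1% between about -40 and +50 C."""
    return 6.112 * math.exp(17.62 * t_celsius / (243.12 + t_celsius))

# Fractional increase in saturation vapor pressure per 1 K of warming:
for t in (0.0, 15.0, 30.0):
    ratio = saturation_vapor_pressure_hpa(t + 1.0) / saturation_vapor_pressure_hpa(t)
    print(f"{t:5.1f} C: +{100.0 * (ratio - 1.0):.1f}% per K")
```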
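To make the detection-and-attribution outline under Major Comment 7 concrete, here is a toy numerical sketch of the regression variant: internal variability is characterized from a synthetic "control run", the data are projected onto the leading EOFs of that run, and the observations are regressed on two model fingerprints by weighted (generalized) least squares in the reduced space. Every array, pattern, and amplitude is invented for illustration; this is not any group's published implementation.

```python
import numpy as np

rng = np.random.default_rng(42)
p, n_ctrl, k = 60, 500, 10   # grid boxes, control-run years, EOFs retained

# Synthetic control run: spatially correlated unforced variability
mix = np.eye(p) + 0.15 * rng.normal(size=(p, p))
control = rng.normal(size=(n_ctrl, p)) @ mix.T

# EOF decomposition of the control-run covariance gives the reduced
# space and the noise variance along each retained component
cov = np.cov(control, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)
order = np.argsort(eigvals)[::-1][:k]
eofs = eigvecs[:, order]        # (p, k) leading spatial patterns
noise_var = eigvals[order]      # variance along each pattern

# Two invented "fingerprints" and synthetic observations built from them
ghg = np.linspace(0.5, 1.5, p)                # smooth warming pattern
aerosol = np.sin(np.linspace(0.0, np.pi, p))  # mid-domain cooling pattern
obs = 0.8 * ghg + 0.5 * aerosol + rng.normal(size=p) @ mix.T

# Project onto the retained EOFs and regress, weighting each component
# by the inverse of its control-run variance (GLS in the reduced space)
X = np.column_stack([ghg, aerosol]).T @ eofs   # (2, k) projected signals
y = obs @ eofs                                 # (k,) projected observations
w = 1.0 / noise_var
A = (X * w) @ X.T
beta = np.linalg.solve(A, (X * w) @ y)   # estimated signal amplitudes
beta_cov = np.linalg.inv(A)              # approximate covariance of beta

# A forcing is "detected" if its amplitude is significantly nonzero,
# e.g. beta[i] - 2.0 * np.sqrt(beta_cov[i, i]) > 0
```

With only 10 of 60 components retained and sizeable noise, the amplitude estimates can be quite uncertain, which is exactly the truncation trade-off the exposition above describes.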