adjustment for temporal correlation in the proxy calibration, and the sensible use of principal components or other methods for data reduction. On the basis of these assumptions and an approximate Gaussian distribution for the noise in the relationship between temperature and the proxies, one can derive prediction intervals for the reconstructed temperatures using standard techniques (see, e.g., Draper and Smith 1981). This calculation will also provide a theoretical MSE for the validation period, which can be compared to the actual mean squared validation error as a check on the method.
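
As a purely illustrative sketch of this standard calculation (not a prescription, and not the method of any particular reconstruction), the following Python code fits a calibration of temperature on a single proxy, forms 95 percent prediction intervals from the usual regression formula, and compares the theoretical validation-period MSE with the observed one. The single-proxy setup, the variable names, and the use of a normal quantile in place of a t quantile are assumptions made for brevity.

import numpy as np

def calibrate_and_validate(proxy_cal, temp_cal, proxy_val, temp_val, z=1.96):
    # Design matrices with an intercept column (single-proxy illustration).
    X_cal = np.column_stack([np.ones_like(proxy_cal), proxy_cal])
    X_val = np.column_stack([np.ones_like(proxy_val), proxy_val])

    # Ordinary least-squares fit over the calibration period.
    beta, *_ = np.linalg.lstsq(X_cal, temp_cal, rcond=None)
    resid = temp_cal - X_cal @ beta
    dof = X_cal.shape[0] - X_cal.shape[1]
    sigma2 = resid @ resid / dof            # residual variance estimate

    # Prediction variance for each validation year:
    # sigma^2 * (1 + x' (X'X)^{-1} x), the standard regression formula.
    XtX_inv = np.linalg.inv(X_cal.T @ X_cal)
    lever = np.einsum("ij,jk,ik->i", X_val, XtX_inv, X_val)
    pred_var = sigma2 * (1.0 + lever)

    pred = X_val @ beta
    lower = pred - z * np.sqrt(pred_var)
    upper = pred + z * np.sqrt(pred_var)

    # Theoretical MSE for the validation period versus the observed one,
    # as a check on the method.
    mse_theory = pred_var.mean()
    mse_actual = np.mean((temp_val - pred) ** 2)
    return pred, (lower, upper), mse_theory, mse_actual

A theoretical MSE substantially smaller than the mean squared validation error would indicate that the statistical model understates the true uncertainty.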

One useful adjustment is to inflate the estimated prediction standard error (but not the reconstruction itself) so that it agrees with the observed CE or other measures of skill during the validation period. This accounts for additional uncertainty in the predictions that cannot be deduced directly from the statistical model. Another adjustment is to use Monte Carlo simulation to account for uncertainty in the choice of principal components. Often, 10-, 30-, or 50-year running means are applied to temperature reconstructions to estimate long-term temperature averages. A slightly more elaborate computation, but still a standard technique in regression analysis, would be to derive a covariance matrix of the uncertainties in the reconstructions over a sequence of years. This would make it possible to attach a statistically rigorous standard error to smoothed, proxy-based reconstructions.
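
The sketch below illustrates two of these adjustments: a simple rescaling of the prediction standard errors to match the validation-period skill, and the propagation of a covariance matrix of reconstruction errors through a running-mean smoother. The inflation rule and the AR(1) form assumed for the error correlation are illustrative assumptions, not part of any published method.

import numpy as np

def inflate_se(se, mse_actual, mse_theory):
    # One possible rule: inflate (never shrink) the per-year standard errors
    # so that the implied MSE matches the MSE observed during validation.
    return se * max(1.0, np.sqrt(mse_actual / mse_theory))

def running_mean_se(se, rho, window):
    # Covariance matrix of reconstruction errors, here assumed AR(1) with
    # lag-one correlation rho; se holds the per-year standard errors.
    n = len(se)
    lags = np.abs(np.subtract.outer(np.arange(n), np.arange(n)))
    cov = np.outer(se, se) * rho ** lags

    # Running-mean smoother as a linear operator A, so Var(A y) = A cov A'.
    A = np.zeros((n - window + 1, n))
    for i in range(n - window + 1):
        A[i, i:i + window] = 1.0 / window
    return np.sqrt(np.diag(A @ cov @ A.T))

Because neighboring years share positively correlated errors, the standard error of a running mean computed this way is larger than the naive value obtained by treating the yearly errors as independent.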

Interpreting Confidence Intervals

A common way of reporting the uncertainty in a reconstruction is to graph the reconstructed temperature for each year together with the upper and lower limits of a 95 percent confidence interval. Usually, the reconstructed curve is plotted with the confidence intervals forming a band about the estimate. The fraction of variance that is not explained by the proxies appears in the residuals, and the residual variance is one component of the mean squared prediction error, which determines the width of the error band.
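
A band of this kind can be drawn directly from the point estimates and the per-year prediction standard errors; the short plotting sketch below (with assumed NumPy array inputs and illustrative names) is one way to do so.

import matplotlib.pyplot as plt

def plot_reconstruction(years, pred, se):
    # pred and se are NumPy arrays: the reconstruction and its per-year
    # prediction standard error; the band half-width is 1.96 * se.
    plt.plot(years, pred, color="black", label="reconstruction")
    plt.fill_between(years, pred - 1.96 * se, pred + 1.96 * se,
                     color="gray", alpha=0.4, label="95% pointwise interval")
    plt.xlabel("Year")
    plt.ylabel("Temperature anomaly (degrees C)")
    plt.legend()
    plt.show()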

Although this way of illustrating uncertainty ranges is correct, it can easily be misinterpreted. The confusion arises because the pointwise intervals are drawn as a continuous band, which suggests uncertainty about the curve as a whole rather than about each year separately. For example, the 95 percent confidence intervals, when combined over the time span of the reconstruction, do not form an envelope that has 95 percent probability of containing the true temperature series. To form such an envelope, the intervals would have to be widened by a factor computed from a statistical model for the autocorrelation of the errors, typically using Monte Carlo techniques. Such inflated intervals would also be a valid description of the uncertainty in the maximum or minimum of the reconstructed temperature series.
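
The following Monte Carlo sketch shows one way such an inflation factor could be computed, under the purely illustrative assumption that the standardized reconstruction errors follow an AR(1) process with a known lag-one correlation rho.

import numpy as np

def simultaneous_factor(n_years, rho, n_sim=10000, level=0.95, seed=0):
    # Simulate unit-variance AR(1) error paths and record the maximum
    # absolute standardized error over each simulated series.
    rng = np.random.default_rng(seed)
    maxima = np.empty(n_sim)
    for k in range(n_sim):
        e = np.empty(n_years)
        e[0] = rng.standard_normal()
        innov = rng.standard_normal(n_years - 1) * np.sqrt(1.0 - rho ** 2)
        for t in range(1, n_years):
            e[t] = rho * e[t - 1] + innov[t - 1]
        maxima[k] = np.max(np.abs(e))
    # c is the multiplier such that pred +/- c * se contains the whole error
    # path with probability ~level, instead of the pointwise 1.96 * se.
    c = np.quantile(maxima, level)
    return c

With positively autocorrelated errors the factor c exceeds 1.96 but is smaller than the factor required if the yearly errors were independent, because correlated errors tend to exceed the pointwise limits together rather than separately.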

Other issues also arise in interpreting the shape of a temperature reconstruction curve. Most temperature reconstructions exhibit a characteristic variability over time. However, the characteristics of the unknown temperature series may be quite different from those of the reconstruction, which must always be borne in mind when interpreting a reconstruction. For example, one might observe some decadal variability in the reconstruction and attribute similar variability to the real temperature series. However, this inference is not justified without further statistical analysis, such as an assessment of the probability that a particular pattern could appear by chance in a temporally correlated series. Given the attenuation in variability associated with the regression method and the temporal correlations within the proxy record that may not be related to temperature, particular caution is needed before attributing features of the reconstruction to the actual climate.


