

Tenth Annual Symposium on Frontiers of Engineering

Small-Scale Processes and Large-Scale Simulations of the Climate System

BJORN B. STEVENS

*Department of Atmospheric and Oceanic Sciences*

*University of California, Los Angeles*

*Los Angeles, California*
Simulations of weather and climate have long posed challenges for computational science. Atmospheric and oceanic circulations operate on spatial scales ranging from micrometers or smaller to planetary scales and temporal scales ranging from microseconds to millennia and beyond. The representation of this range of scales is far beyond the capacity of any envisioned computational platform. For now and the foreseeable future, simulations of atmospheric and oceanic circulations will require a massive truncation of scale and hence a loss of information. Because the circulations of interest are turbulent, truncation is not a trivial matter. To prevent the truncation of information at some scale from entailing a loss of predictability at the remaining scales, procedures must be developed for representing the effects of the truncated, or unresolved, scales on those that remain.
The spectrum of energy in a system often serves as a guide to choosing which scales to keep and which to discard. As we know from common experience, variability is most pronounced on the largest scales (i.e., seasonal differences in weather are larger than day-to-day variations, and the weather varies more from continent to continent than it does from one side of town to another). Consequently, simulations of the climate system invariably begin by explicitly representing the largest spatial scales and working their way down the spectrum as computational resources permit. The high spatio-temporal correlation among atmospheric processes—small scales tend to be fast, and large scales tend to be slow—means that truncation also has a temporal projection.
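The payoff of this ordering can be illustrated with a toy spectrum (a purely synthetic example, not drawn from the chapter): when variance falls off rapidly with wavenumber, a handful of large-scale modes captures nearly all of it.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1024
# Synthetic signal with a "red" spectrum: amplitude falls off as 1/k,
# so most of the variance lives at the largest scales, as in the atmosphere.
k = np.arange(1, n // 2)
phases = rng.uniform(0, 2 * np.pi, k.size)
spectrum = np.zeros(n, dtype=complex)
spectrum[1:n // 2] = (1.0 / k) * np.exp(1j * phases)
signal = np.fft.irfft(spectrum[:n // 2 + 1], n)

# Truncate: keep only the 16 largest scales (lowest wavenumbers).
kept = spectrum.copy()
kept[17:] = 0
truncated = np.fft.irfft(kept[:n // 2 + 1], n)

retained = truncated.var() / signal.var()
print(f"variance retained by 16 of {k.size} modes: {retained:.1%}")
```

For this spectrum a few percent of the modes retain well over 90 percent of the variance; the difficulty discussed below is that the discarded scales still act back on the retained ones.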

Currently available computational models of the global atmosphere and ocean are typically restricted to a representation of their respective fluids by numerical meshes capable of sampling spatial scales on the order of 100 to 200 kilometers in the horizontal and perhaps 100 to 1,000 meters in the vertical. The very largest computers in existence can produce calculations on horizontal and vertical meshes with linear dimensions more refined by a factor of 10. But even these calculations leave an enormous range of scales unresolved.
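Back-of-envelope arithmetic makes the computational burden of refinement concrete. The depth of the modeled atmosphere and the crude lat-lon point count below are illustrative assumptions, not figures from the chapter:

```python
# Back-of-envelope cost of refining a global atmospheric mesh
# (illustrative numbers only, consistent with the scales quoted above).
earth_circumference_km = 40_000
atmosphere_depth_m = 30_000  # rough depth of the modeled atmosphere (assumed)

def grid_points(dx_km, dz_m):
    """Approximate mesh-point count for horizontal spacing dx, vertical spacing dz."""
    horizontal = (earth_circumference_km / dx_km) ** 2  # crude lat-lon count
    vertical = atmosphere_depth_m / dz_m
    return horizontal * vertical

coarse = grid_points(dx_km=100, dz_m=500)
fine = grid_points(dx_km=10, dz_m=50)

# Refining all three dimensions tenfold multiplies the point count by 10**3,
# and the shorter stable time step adds roughly another factor of 10 in cost.
print(f"coarse mesh: ~{coarse:.0e} points")
print(f"fine mesh:   ~{fine:.0e} points")
print(f"cost ratio (incl. time step): ~{(fine / coarse) * 10:.0f}x")
```

A factor of 10 in resolution is thus roughly a factor of 10,000 in cost, which is why resolved scales advance so slowly relative to computing power.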
Thus, one of the central questions in our field, and the focus of this paper, is whether the net effect of the smaller-faster scales on the larger-slower scales can be represented as a function of the larger-slower scales that are explicitly tracked in a simulation. Although this question is posed in purely practical terms, it has an esthetic quality in that any such representation can be thought of as a formalism of our understanding.
Atmospheric and oceanic scientists often use the word parameterization to describe this formalism. In our jargon, the goal is to parameterize the collective effects of small-scale processes on large-scale processes. Because small-scale processes are sundry, the parameterization problem is multifaceted. Typically, small-scale processes are broken down into distinct classes of problems—clouds, radiative transfer, hydrometeor interactions, surface interactions, small-scale turbulence, chemistry, and so on—processes that can be thought of as the atoms. Although one may be interested only in the net effect of all of these processes, atomization facilitates idealization and subsequent study.
An artifact of this kind of decomposition is that it raises the question of how, and on what scale, individual processes (atoms) (e.g., clouds, radiation, chemistry, etc.) interact, and hence the extent to which parameterizations must be coupled to one another, and not just to larger-scale processes. Thermodynamic analogies are useful to a point; for instance, diffusion parameterizes molecular transport in fluids. However, any attempt to develop a kinetic theory capable of aggregating many small-scale processes is impeded by our lack of understanding of what exactly constitutes the atoms and the rules that govern their behavior.
A conspicuous example of a parameterization in the atmosphere is for fluxes of heat, momentum, and matter from an interface (Garratt, 1992). Simply stated, the question is: given the state of the large-scale flow above an interface, and a gross characterization of the interface (for instance, a measure of its roughness, its temperature, etc.), what are the fluxes of momentum, matter, and enthalpy from the interface? Physically, these fluxes are carried by correlated fluctuations in the velocity and temperature fields—eddies—whose sizes scale with their distance from the interface. At the interface, small roughness elements (capillary waves on the ocean; rocks, sand, bushes, cars on land) disturb the flow, leading to small-scale pressure gradients around obstacles that accelerate the flow and generate eddies; these eddies, in turn, transport into the interior of the fluid the enthalpy and matter that have diffused from the surface roughness elements.

In almost any practical application, the net effect of these eddies must be represented in some fashion to provide meaningful boundary conditions for the larger-scale flow. Attempting to aggregate the fundamental solutions of equations for flow around an ensemble of obstacles has proven fruitless. Instead, a so-called similarity approach (Barenblatt, 1996) has been developed. A key aspect of the similarity approach is simplifying the problem to a point where it becomes empirically tractable and then hoping that the answers so derived are relevant to less idealized situations.
For the surface flux problem, the similarity approach usually consists of first considering flow over a uniformly rough wall in the absence of temperature differences; the essence of the flow can conceivably be retained only if two variables are considered, namely the distance, z, from the surface and a velocity scale that measures the momentum flux, e.g.,

u_*^2 = -\overline{u'w'}.    (1)
Here {u, w} denotes the horizontal and vertical components of the velocity field, and primes indicate deviations from a large-scale average denoted by an over-bar. To the extent that u* and z are the only relevant parameters, it is possible to argue on purely dimensional grounds that
\frac{d\overline{u}}{dz} = \frac{u_*}{\alpha z},    (2)
where α is a dimensionless constant.
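In practice, u* is estimated from eddy-covariance measurements of the velocity fluctuations, per Eq. (1). A minimal sketch with a synthetic record (the correlation structure and magnitudes below are assumed for illustration, not observations):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic eddy-covariance record: w' is partially anti-correlated with u',
# as in a shear-driven surface layer (illustrative numbers only).
n = 100_000
u_prime = rng.normal(0.0, 1.0, n)                   # horizontal fluctuations, m/s
w_prime = -0.3 * u_prime + rng.normal(0.0, 0.5, n)  # net downward momentum flux

# Eq. (1): the friction velocity measures the turbulent momentum flux.
momentum_flux = np.mean(u_prime * w_prime)          # \overline{u'w'}, expected < 0
u_star = np.sqrt(-momentum_flux)

print(f"u* = {u_star:.2f} m/s")
```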
The scaling law in Eq. (2) derives its simplicity, and hence its empirical tractability, from the neglect of a variety of other, potentially important, parameters. For instance, the formulation implicitly says that the structure of the near-surface flow is independent of the viscosity, ν, the rotational frequency of the Earth, f, the depth of the turbulent boundary layer, h, and so on. These arguments are asymptotic rather than absolute statements. They effectively state that the Reynolds number (in this case the inverse of the nondimensional viscosity), Re = u*z/ν, is so large that the flow ceases to depend on it. Likewise for the Rossby number, Ro ≡ u*/(fz). In these cases, we speak of the flow obeying Reynolds or Rossby number similarity.
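Plugging in typical surface-layer magnitudes (assumed, order-of-magnitude values) shows just how large these numbers are:

```python
# Typical surface-layer magnitudes (assumed, order-of-magnitude values):
u_star = 0.3   # friction velocity, m/s
z = 10.0       # height above the surface, m
nu = 1.5e-5    # kinematic viscosity of air, m^2/s
f = 1.0e-4     # Coriolis parameter at mid-latitudes, 1/s

Re = u_star * z / nu     # Reynolds number
Ro = u_star / (f * z)    # Rossby number

print(f"Re ~ {Re:.0e}, Ro ~ {Ro:.0e}")
```

With Re of order 10^5 and Ro of order 10^2, neglecting ν and f in the near-surface scaling is a defensible asymptotic idealization.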
The lack of an outer scale measuring the depth of the boundary layer, or the atmosphere as a whole, suggests that, to the extent our idealization is valid, it depends on z being much less than h. Similarly, the application of this formalism assumes that z is much greater than the height of the surface roughness elements. Insofar as all of these statements are true, then α should be universal; that is, once it is empirically determined, it can be universally applied. Given α, the problem of relating the small-scale flux of momentum (which is responsible for accelerating the mean flow) to a function of the mean flow itself is reduced to integration:

\overline{u}(z) = \frac{u_*}{\alpha} \ln\left(\frac{z}{z_0}\right),    (3)

where z0 is an effective height (called the roughness height and defined by the character of the surface) where the extrapolated velocity profile vanishes.
Equation (3) forms the basis for the parameterization of surface fluxes in all models of atmospheric circulations. This approach can be generalized to account for heat fluxes that are accompanied by buoyant acceleration of fluid elements. In this case, α must be replaced by a function Ψ(ζ), where ζ is a nondimensional parameter that measures the relative contributions of buoyancy and mechanically induced effects on fluid accelerations. Further extensions to account for a variety of other effects (most notably surface heterogeneity) not included in the formulation above are invariably also based on elaborations of (3) and remain an active area of research (Fairall et al., 2003).
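A minimal sketch of how Eq. (3) is used in a model: invert it to obtain u* from the mean wind at the lowest model level, then form the surface stress. The value α = 0.4 and the roughness heights below are assumed for illustration, and the neutral (no-buoyancy) limit is taken, i.e., the function Ψ is ignored:

```python
import math

def friction_velocity(u_mean, z, z0, alpha=0.4):
    """Invert Eq. (3): given the mean wind u_mean at height z over a surface
    with roughness height z0, return the friction velocity u*.
    alpha = 0.4 is an assumed value of the dimensionless constant."""
    return alpha * u_mean / math.log(z / z0)

def surface_stress(u_mean, z, z0, rho=1.2):
    """Surface momentum flux (stress), tau = rho * u*^2, in N/m^2."""
    return rho * friction_velocity(u_mean, z, z0) ** 2

# Lowest model level at 10 m, 8 m/s wind: smooth ocean (z0 ~ 1e-4 m)
# versus rough land (z0 ~ 0.1 m).
for z0 in (1e-4, 0.1):
    print(f"z0 = {z0:g} m: u* = {friction_velocity(8.0, 10.0, z0):.2f} m/s, "
          f"tau = {surface_stress(8.0, 10.0, z0):.3f} N/m^2")
```

The rougher surface extracts more momentum from the same mean wind, which is the qualitative behavior the parameterization must deliver to the large-scale flow.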
In concentrating on the details of the formalism above, we should not lose track of the basic ideas of the similarity approach, wherein: (1) insight is used to reduce a problem to an essential and idealized formulation; (2) dimensional analysis is used to identify nondimensional dependencies; and (3) empiricism is used to determine the form of the functions of the relevant dimensionless numbers. Unfortunately, for most processes we wish to parameterize, this three-step recipe is not easy to follow. More often than not, our insights are not sufficiently developed for us to arrive at compelling simplifications. Even when they are, the empirical step often requires determining functions of more than one variable, which are not readily accessible to measurement.
To address these problems, a boot-strapping approach has been developed, wherein idealized fluid simulations designed to isolate particular processes, or collections of processes, are used to develop our intuition. Slowly, these simulations are refined to their essence, from which pseudo-empirical statements are extracted, and the parameter space is explored.
An example of this approach is the attempt to parameterize the effects of clouds in large-scale models. Most of the processes directly responsible for cloud formation are related to circulations much smaller than the smallest scale represented by large-scale models. However, clouds, and cloud regimes, do exhibit large-scale patterns and thus seem to be under the control of the large-scale state. This raises the possibility of using a fine-scale model with a given large-scale forcing to learn which large-scale parameters are essential to cloud formation and how cloud fields respond to changes in these parameters (e.g., Xu and Randall, 1996).
Based on this approach, simple statistical rules can be derived, both for use in larger-scale models and for comparison to data. The latter provides a means of evaluating the fidelity of fine-scale models, which often depend on a parameterization of an even finer-scale process (Stevens and Lenschow, 2001; Randall et al., 2003b). In reference to the initial example, this approach would involve solving directly for the flow over a variety of surfaces. Based on the simulation results, essential parameters would then be isolated, leading to a formulation of the problem similar to Eq. (2). At this stage, simulations for a variety of roughness types could be conducted to evaluate the constancy of α as u* and z vary.
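The kind of statistical rule described above can be sketched in miniature: pretend a set of fine-scale simulations, each driven by a different large-scale humidity, has been run, and fit a simple relation to the output. Everything here, including the functional form of the cloud response, is synthetic and assumed for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)

# Pretend output of 200 fine-scale simulations, each driven by a different
# large-scale relative humidity (entirely synthetic; the response below is
# an assumed toy form, not a result from any model).
rh = rng.uniform(0.4, 1.0, 200)                   # large-scale relative humidity
cloud_fraction = np.clip(2.0 * (rh - 0.5), 0, 1)  # "true" fine-scale response
cloud_fraction += rng.normal(0, 0.05, rh.size)    # run-to-run scatter
cloud_fraction = np.clip(cloud_fraction, 0, 1)

# Derive a simple statistical rule: a linear fit usable in a large-scale model.
slope, intercept = np.polyfit(rh, cloud_fraction, 1)
print(f"parameterized cloud fraction = {slope:.2f} * RH + {intercept:.2f}")
```

The fitted rule is what a large-scale model would carry; the scatter about it is one measure of how much the large-scale state actually controls the clouds.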
Given the variety of processes in the atmosphere that are not resolved in large-scale models and our propensity for making problems more complicated rather than simpler, this approach takes enormous effort. Although it can be rewarding when used creatively, it can also often be tedious, particularly when nature resists simplification.
To address these problems, another approach has recently been developed. Here fine-scale simulations are embedded in larger-scale simulations, in a sense performing the procedure outlined above “on the fly.” For some processes, this approach has great potential—particularly processes that resist simplification and in situations where myriad interactions among unresolved processes occur on a narrow range of scales that are clearly separated from the smallest of the resolved “large-scales.”
Although these qualifications appear onerous, they are satisfied by some elements of one of the more vexing parameterization problems in atmospheric sciences—relating the statistics of deep convective clouds to the state of the large-scale circulations. Recent applications of this approach, called super-parameterization, or cloud-resolving-convective parameterization (Grabowski and Smolarkiewicz, 1999), have led to remarkable improvements in the fidelity of some important aspects of simulations of large-scale atmospheric phenomena (Khairoutdinov and Randall, 2003).
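The control flow of super-parameterization can be caricatured as follows: each column of the host model runs an embedded fine-scale model every time step and applies the tendency it reports. The update rules and numbers below are placeholders for illustration, not any real model's closure:

```python
# Schematic of super-parameterization coupling: each large-scale grid column
# hosts an embedded fine-scale model that returns the net effect of convection.
# All numbers and update rules here are toy placeholders.

def embedded_fine_scale_step(column_state, n_substeps=10):
    """Stand-in for a cloud-resolving model run inside one grid column:
    takes the column state as forcing and returns the mean convective
    cooling, which counteracts warming above a threshold (toy closure)."""
    cooling = 0.0
    state = column_state
    for _ in range(n_substeps):             # many small fine-scale steps
        q = 0.1 * max(state - 300.0, 0.0)   # toy convective response
        state -= 0.5 * q
        cooling += q
    return cooling / n_substeps             # averaged over the host time step

def large_scale_step(columns, forcing=0.5):
    """One step of the host model: apply the large-scale warming, then the
    convective cooling reported by each column's embedded model."""
    return [t + forcing - embedded_fine_scale_step(t + forcing) for t in columns]

temps = [299.0, 301.0, 303.0]               # column temperatures, K
for _ in range(5):
    temps = large_scale_step(temps)
print([f"{t:.1f}" for t in temps])
```

The expense is plain even in the caricature: every host step multiplies into many fine-scale substeps in every column, which is why the approach demands such large computational resources.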
Super-parameterization, however, is very new, and strategies for implementing it are just being explored (Randall et al., 2003a). Computationally, it is very intensive, and thus it requires access to great resources. Nevertheless, it has the potential to enrich our phenomenology and make the more traditional strategies outlined above more effective. For this reason, and because of its immediate practical benefits, it is being explored vigorously.
REFERENCES
Barenblatt, G. 1996. Scaling, Self-Similarity, and Intermediate Asymptotics. Cambridge, U.K.: Cambridge University Press.
Fairall, C.W., E.F. Bradley, J.E. Hare, A. Grachev, and J. Edson. 2003. Bulk parameterization of air-sea fluxes: updates and verification for the COARE algorithm. Journal of Climate 16(4): 571–591.
Garratt, J.R. 1992. The Atmospheric Boundary Layer. Cambridge, U.K.: Cambridge University Press.
Grabowski, W.W., and P.K. Smolarkiewicz. 1999. CRCP: a cloud resolving convective parameterization for modeling the tropical convective atmosphere. Physica D: Nonlinear Phenomena 133(1-4): 171–178.

Khairoutdinov, M.F., and D.A. Randall. 2003. A cloud resolving model as a cloud parameterization in the NCAR Community Climate System Model: preliminary results. Geophysical Research Letters 28(18): 3617–3620.
Randall, D.A., M. Khairoutdinov, A. Arakawa, and W. Grabowski. 2003a. Breaking the cloud parameterization deadlock. Bulletin of the American Meteorological Society 84(11): 1547–1564.
Randall, D.A., S. Krueger, C. Bretherton, J. Currey, P. Duynkerke, M. Moncrieff, B. Ryan, D. Starr, M. Miller, W. Rossow, G. Tselioudis, and B. Wielicki. 2003b. Confronting models with data: the GEWEX Cloud Systems Study. Bulletin of the American Meteorological Society 84(4): 455–469.
Stevens, B., and D.H. Lenschow. 2001. Observations, experiments and large-eddy simulation. Bulletin of the American Meteorological Society 82(2): 283–294.
Xu, K.-M., and D.A. Randall. 1996. A semi-empirical cloudiness parameterization for use in climate models. Journal of the Atmospheric Sciences 53(21): 3084–3102.