5
Source Identification and Apportionment Methods

Two questions invariably present themselves to those who must devise ways to protect or improve visibility: "Which sources cause the visibility problem under study?" and "How large is each significant source's contribution of visibility-reducing particles and gases?" The first question, of source identification, must be answered to reach even a qualitative understanding of the problem. If emission controls are to be applied efficiently to the major sources, then one needs a quantitative understanding of each source's contribution to the visibility problem. The quantitative assignment of a fraction of an entire visibility problem to one or more sources is called source apportionment.

It usually is impractical to conduct a source apportionment study by experimenting on all the major air pollution sources in a large region—that would require an expensive control program just to observe its effects. Instead, analytical methods and computer-based predictive models have been developed to quantify the connection between pollutant emissions and changes in visibility. There are several major classes of methods and models. Speciated rollback models are relatively simple, spatially averaged models that take changes in pollutant concentrations to be directly proportional to changes in regional emissions of these pollutants or their precursors. Receptor-oriented methods and models infer source contributions by characterizing atmospheric aerosol samples, often using chemical elements or compounds in those samples as tracers for the presence of material from particular kinds of sources. Mechanistic computer-based models conceptually follow pollutant emissions from source to receptor, simulating as faithfully as possible the pollutants'



atmospheric transport, dispersion, chemical conversion, and deposition. Mechanistic models are source oriented; they take emissions as given and ambient concentrations as quantities to be estimated. Because these models require pollutant concentrations only as initial and boundary conditions for a simulation, they can be used to predict the effects of sources before they are built.

The members of the committee do not aim to give advice on how to choose a single best source apportionment technique for analyzing a given visibility problem. Instead, the committee offers guidance on how to view the air quality modeling process: air quality models provide a framework within which information about the basics of the problem can be organized effectively. This basic information includes data on the air pollutant emission sources, observations of meteorological conditions, data on the ambient air pollutants that govern visibility, and information on emission control possibilities. The quality of the outcome of the modeling process usually depends at least as much on the quality of the data used as inputs to the model as on the modeling method chosen, placing a premium on the accuracy with which the basic facts of the problem are known. The analyst's objective is to capture the scientific relationships between emissions and air quality well enough that important questions about the effect of emission controls or about the siting of new sources can be answered. Depending on the decisions to be made, the requirement for technical accuracy or detail may be strict or more relaxed. Federal regulatory programs are permitted to make regulatory decisions in the face of continuing scientific uncertainty.
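The proportionality assumption behind the speciated rollback models introduced earlier can be made concrete with a short sketch. The species, concentrations, and emission change below are hypothetical, chosen only to illustrate the arithmetic; a real application would use measured regional values.

```python
# Speciated rollback: for each chemical species, the change in ambient
# concentration is assumed proportional to the change in regional
# emissions of that species (or its precursor), above a natural
# background that emission controls cannot affect.

def rollback(ambient, background, emission_ratio):
    """Projected ambient concentration after emissions are scaled.

    ambient        -- current ambient concentration (ug/m3)
    background     -- natural background concentration (ug/m3)
    emission_ratio -- future emissions / current emissions
    """
    return background + (ambient - background) * emission_ratio

# Hypothetical regional sulfate (ug/m3) and a 40% cut in SO2
# emissions, assumed to translate linearly into a 40% cut in the
# man-made portion of sulfate.
sulfate_now = 4.0
sulfate_background = 0.5
sulfate_projected = rollback(sulfate_now, sulfate_background, 0.6)
# 0.5 + 3.5 * 0.6 = 2.6 ug/m3
```

Each species is rolled back independently in this scheme; the projected concentrations can then be handed to an optical model to estimate the resulting visibility change.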
Within many likely regulatory structures, attribution of contributions to individual sources may be unnecessary; attribution to classes of like sources or to upwind geographic regions would suffice. Where approximate answers are satisfactory, there are many possible ways to answer questions about the relationship between emissions, air quality, and visibility.

When approaching the analysis of the causes of a particular visibility problem, the best strategy is generally to use a nested progression of techniques, from simple screening through more complex methods. Simple methods can be used to screen the available data and find an approximate solution. Next, more complex methods can be applied to determine source contributions with greater resolution. Advanced methods are appropriate when a problem is scientifically complex or when

control costs are high enough that more detailed or more highly resolved information is warranted. In general, the simpler methods use subsets of the data required by the more complex methods; this nesting of data requirements yields a natural progression of techniques. Receptor-oriented methods, for example, form a progressive series in which each additional measured variable contributes new information. When it is necessary to collect data to support a more complex method, simpler methods often can be applied inexpensively using the same data. Even when the simpler methods fail to produce sufficiently specific findings, the information they offer can be valuable because it is easy to grasp and to communicate to policy makers.

Visibility impairment takes two general forms, widespread haze and plume blight, and source apportionment models for them must account for markedly different physical, chemical, and meteorological processes. The committee evaluated models applicable to both kinds of impairment but focused on source apportionment models for multiple sources that contribute to widespread haze, because regional haze is the main cause of reduced visibility in Class I areas. A later section of this chapter discusses single-source and plume blight models; that discussion is prefaced by a description of the differences between widespread haze and plume blight. First, we provide criteria for evaluating the relative merits of source identification and apportionment methods in the context of a national program to protect visibility.
We then evaluate various methods, roughly in order of the increasing resources required for their application: simple source identification methods; speciated rollback models; receptor models, including chemical mass balance models and regression analysis; models for transport only and for transport with linear chemistry (simplified mechanistic models that are either receptor or source oriented); advanced mechanistic models; and hybrid models. These methods are described in Appendix C, which should be read before the critique of modeling methods that follows; the appendix defines certain uncommon modeling methods (e.g., the speciated rollback model) and discusses air quality models based on regression analysis. We generally describe models that predict source effects on atmospheric pollutant concentrations only, not on visibility itself; these are known as air-quality models. It is understood that once pollutant concentrations are

apportioned among sources, the source contributions to light extinction can be calculated by the optical models discussed in Chapter 4. We then discuss the selection of apportionment methods to assess single-source siting problems and air-quality problems other than visibility.

CRITERIA FOR EVALUATING SOURCE IDENTIFICATION AND APPORTIONMENT METHODS

A national visibility protection program could employ many alternative modeling methods. Source apportionment studies are generally best conducted through the successive use of simple screening models followed by more precise methods. At each stage of this process, one must decide whether further analysis and investigation by more complex methods is warranted. How can one judge the merits of an investment in more sophisticated analysis? Will a particular source apportionment approach yield results of acceptable accuracy? Is that approach consistent with resource constraints and legal requirements? This section sets forth criteria for use in comparing alternative methods of source apportionment. Some criteria might seem to reveal some methods as either adequate or inadequate, but the committee's intention is to provide standards for comparing methods across the board. Few, if any, source apportionment methods can be rated highly in all respects, and it can be expected that regulatory decisions will be based on imperfect models. Some of the desirable properties of source apportionment methods (technical validity and simplicity, for example) can in fact conflict with one another. However, the following criteria should help analysts make more informed decisions about the suitability of a given method for application to a particular visibility problem.

Technical Adequacy

The first set of criteria concerns the technical adequacy of source apportionment methods.

Validity

The methods for modeling air quality and visibility should have sound theoretical bases. Air-quality models can be based on solving the atmospheric diffusion equation, which provides a mechanistic description of the atmospheric physics and chemistry of pollutant transport, transformation, and removal. Simplifying assumptions and approximations usually are made to speed the solution of these equations. In a particular model formulation, these assumptions should be made to capture the essence of the problem at hand rather than to oversimplify the problem to the extent that there is little assurance that source-receptor relationships are represented correctly. The same criteria apply to mechanistic models that predict the optical properties of the atmosphere as described by Mie theory: the derivation of the calculation scheme must be understood, and the effects of any simplifying assumptions should be small enough that reasonably accurate results can be obtained.

Empirical models that relate emissions to air quality and air quality to visibility parameters also can be judged on their theoretical foundations. Some empirical models are derived directly from the differential equations that describe the physical phenomena of interest, and they have a well-understood theoretical basis. Other empirical models rest on materials balances or on other concepts that require the whole to equal the sum of its parts. Finally, some empirical models are purely phenomenological, with little structural relationship to physical processes in the atmosphere. Source apportionment models should be examined for valid theoretical bases, and models that are not developed carefully and in light of first principles should not be used.
Compatibility of Source and Optics Models

It should be possible to link the model for source contributions to pollutant concentrations with a model for pollutant effects on visibility. The assessment of source contributions to visibility impairment generally requires two types of calculations: first, source contributions to ambient concentrations of pollutants are computed; next, the effects of those pollutants on visibility are determined. The results of the ambient pollutant calculation should satisfy the input data requirements of the visibility model. Not all air-quality and visibility models are compatible; for example, a conventional rollback air-quality model probably will not provide the particle size distribution data needed to perform a Mie theory light-scattering calculation.

Input Data Requirements

The data required to apply a particular approach to source apportionment should be understood and obtainable in a practical sense. The input data requirements of the various apportionment methods differ tremendously. A rollback air-quality model might require only tens to a few hundred observations of emission source strength and air quality. A photochemically explicit model for secondary-particle formation, however, easily can require millions of pieces of spatially and temporally resolved emissions and meteorological data, along with size-resolved and chemically resolved aerosol data to check the model's calculations. It should come as no surprise that the more theoretically elegant techniques place the greatest demands on field experimental data.

Evaluation of Model Performance

The performance of a candidate source apportionment model should have been adequately evaluated under realistic field conditions. Confidence in model performance builds over time as a result of successful applications. New and untested systems require thorough testing and evaluation before they can be recommended for use in a national visibility program.

Source Separation

The source apportionment method should distinguish the sources that contribute to a particular visibility problem with the level of accuracy required by the regulatory framework within which the model must operate.

Some source apportionment methods (receptor models) can attribute visibility impairment with considerable accuracy to generic source types (sources of sulfur oxides, for example) but cannot distinguish among different sources of the same type (they often cannot tell which power plants are contributing to the problem). Other modeling methods (such as speciated rollback models) could predict the effect of individual sources in a region on air quality at each receptor (the prediction would be that the atmospheric concentration increment is proportional to the fraction of the emissions contributed by that source to the air basin), but that prediction might not be accurate. The source separation achieved by a particular method should serve as a basis for an effective regulatory program; the amount of source separation needed depends on the legal framework within which that program must operate.

Temporal Variability

The source apportionment method should account for the temporal character of the visibility problem. Many models directly calculate pollutant concentrations over averaging times that range from a day to as long as a month or a year. However, reduction in visual range is instantaneous, and often it is impossible to explain short-term reductions in visibility from data on long-term average pollutant loadings. A model's averaging time can limit its usefulness in visibility analysis.

Geographic Context

The source apportionment approach should be suited to the geography of the visibility problem; the spatial characteristics of an air-quality model should be matched to the spatial character of the problem at hand. If the terrain of interest is complex (for example, the Grand Canyon), then models that assume flat topography might not capture the location of plumes that travel between the observer and the features of the elevated terrain.
In the case of grid-based air-quality models, the spatial scale of the grid defines the smallest area for which air quality can be examined. If the grid system is too coarse, the essence of a source-receptor relationship can be lost.
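The fingerprint-matching calculation at the heart of chemical mass balance receptor models can be sketched as a least-squares fit. The source profiles and ambient sample below are invented for illustration; a real application would use measured profiles for many more species, with uncertainty weighting and non-negativity constraints.

```python
import numpy as np

# Chemical mass balance: express an ambient aerosol sample as a
# combination of source "fingerprints" -- the mass fraction of each
# measured chemical species emitted by each source type.

# Rows: measured species; columns: source types (hypothetical values).
#                     soil   coal combustion
profiles = np.array([
    [0.30, 0.01],   # silicon
    [0.05, 0.20],   # sulfur
    [0.02, 0.10],   # trace-metal tracer
])

# Ambient concentrations of the same species (ug/m3, hypothetical).
ambient = np.array([0.62, 0.50, 0.24])

# Ordinary least squares gives the source contributions (ug/m3).
contributions, *_ = np.linalg.lstsq(profiles, ambient, rcond=None)
# Here both source types contribute 2.0 ug/m3 of fine particle mass.
```

Note what such a fit can and cannot do: it attributes ambient mass to source types (soil dust, coal combustion) but, as the text observes, two power plants with identical fingerprints remain indistinguishable.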

Source Configurations

The source apportionment method should be suited to the physical arrangement of the sources. Some air-quality models, such as rollback models, assume that the locations of emission sources will not change. Some receptor-oriented models can apportion emissions from existing sources but cannot readily predict the effects of new ones. Some mechanistic models are better than others at predicting the effects of changes in the elevation of emissions.

Error Analysis and Biases

The method's error characteristics should be known. No technique can be expected to be completely accurate in its attribution of an environmental effect to a particular source. The limits of scientific knowledge about the atmospheric dispersion of air pollution and the workings of chance in atmospheric processes prevent absolute certainty. Obviously, the greater a technique's expected error, the less useful it will be in a regulatory program. It is best to conduct a systematic analysis of the error bounds that surround the predictions made by a candidate method. It should be known whether the errors affect all source contribution estimates equally or whether biases are likely to distort the relative importance of different sources. Attribution techniques are often skewed in their error characteristics; a given technique, for instance, could be known to underpredict rather than overpredict the contribution of a source to an effect. A technique's error characteristics could restrict its use to a specific type of regulatory program. For instance, a technique that systematically overpredicts could be useful in a technology-based program that requires only a conservative screening model; the same technique might not be useful in a program that attempts to base control requirements on a more precise estimate of a source's effects.
Similarly, a technique that systematically underpredicts source contributions would be of limited use in a program, such as that prescribed by the Clean Air Act, which takes a preventive approach to environmental problems.
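One simple way to obtain the kind of error bounds discussed above is Monte Carlo propagation: perturb the model inputs within their measurement uncertainties and examine the spread of the resulting source-contribution estimates. The sketch below applies this idea to a proportional-rollback calculation; the concentrations and uncertainties are invented for illustration.

```python
import random

random.seed(0)

def rollback(ambient, background, ratio):
    # Projected concentration: background plus scaled man-made portion.
    return background + (ambient - background) * ratio

# Hypothetical inputs with 1-sigma measurement uncertainties (ug/m3).
AMBIENT, AMBIENT_SD = 4.0, 0.4
BACKGROUND, BACKGROUND_SD = 0.5, 0.1
RATIO = 0.6   # planned emission ratio, treated as exact

# Re-run the calculation many times with perturbed inputs.
estimates = []
for _ in range(10_000):
    a = random.gauss(AMBIENT, AMBIENT_SD)
    b = random.gauss(BACKGROUND, BACKGROUND_SD)
    estimates.append(rollback(a, b, RATIO))

estimates.sort()
lo, hi = estimates[250], estimates[-251]   # approximate 95% interval
mean = sum(estimates) / len(estimates)
```

Comparing the interval's symmetry about the mean is one crude check for the systematic biases mentioned in the text; a skewed spread would suggest the technique tends to under- or overpredict.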

Availability

A source apportionment method should be fully developed, available in the public domain, and ready for regulatory application; otherwise, further research and development should take place before it can be recommended for use in a national visibility protection program. Some promising source apportionment methods (such as the models for atmospheric formation of size-distributed secondary particles, linked to Mie theory light-scattering calculations) are now being developed but are not ready for widespread use.

Administrative Feasibility

Technical merit alone does not determine the suitability of a source apportionment method. If a particular approach is to be the basis for a national program of visibility protection, it should be structured to fit the administrative requirements of a regulatory program.

Resources

The resources required to apply a particular source apportionment system should be clearly understood. Before a source apportionment method is selected, it should be known how many people, how much time, and how much money are required to start and maintain an assessment of source contributions to visibility impairment in Class I areas. Otherwise, it is unlikely that a regulatory or research program would be established with the amount of support needed to do the work correctly.

Regulatory Compatibility

The source apportionment method should be compatible with the various regulatory frameworks that have been or could be imposed on the national visibility problem. If a national program were based on the principle of non-deterioration

of existing air quality, there might be little need to determine the precise causes of current visibility impairment. A system of source registration and emissions offsets might suffice to meet regulatory needs. Alternatively, a regulatory program might specify national visibility standards and require remedial action to improve visibility in particular Class I areas by a specified amount. In that case, a source apportionment system would have to be able to apportion existing visibility impairment among contributing sources and to forecast whether a changed distribution of emissions would lead to compliance with standards.

Multijurisdictional Implementation

Where several government agencies have jurisdiction over different parts of a regional visibility problem, the source apportionment method should be suitable for use on a common basis by all parties. Responsibilities for visibility protection in Class I areas are now divided among the National Park Service, the Forest Service, the Fish and Wildlife Service, the Environmental Protection Agency, and state agencies. Some simple source apportionment systems, such as plume blight models, might be applicable for use by each of these agencies and could be used nationwide by different agencies acting independently. On the other hand, regional haze analyses that extend over several states and incorporate several Class I areas within a single analytical framework would need large amounts of data and might require a more unified approach to visibility regulation than has been taken to date.

Communication

The source apportionment approach should facilitate open communication among policy makers. One can envision two models of equal technical accuracy, one based on readily understood material balance assumptions, the other consisting of a mathematical simulation that policy makers must accept on faith. The more easily understood model could be preferred.
Within the framework of an easily understood model, policy makers could conduct a rapid (if informal) analysis of the effects of alternative policies; rapid analysis and discourse might be impossible with a less understandable model. If policy judgments must be made by

officials who do not have technical expertise, then the ease of communicating results to policy makers will be an important consideration in model selection.

Economic Efficiency

The source apportionment method should support an economic analysis directed at finding the least expensive solution to a visibility problem. In addition to identifying the source contributions to a regional visibility problem, the method should be capable of being matched to an analysis of the least expensive way to meet a particular visibility improvement goal. Some source apportionment methods, particularly linear methods, are readily linked to cost optimization calculations. Nonlinear chemical models can be difficult to use within a system that requires economic optimization.

Flexibility

Source apportionment methods that can be adapted readily to new scientific findings or to the changing nature of a particular visibility problem are preferable to less flexible methods. Conditions outside the range of past experience will probably arise in the future. Some source apportionment systems could be more adaptable than others to new circumstances and new scientific understanding.

Balance

There should be a balance between the resource requirements and the accuracy of a source apportionment system. A source apportionment method might require elaborate field experiments to supply data for simplified calculation schemes whose inherent inaccuracy does not warrant such great expense. One also can envision elaborate calculation schemes whose sophistication exceeds the quality of the available input data. The effort and cost of data collection should strike a reasonable balance with those of data analysis.

tive to both factors (White and Patterson, 1981; White et al., 1986). The passage of a cloud across the sun can instantly change the color of a particle plume from bright white to dark brown or cause it to disappear. Changes in viewing angle can have similar effects.

The differences between widespread haze and plume blight lead to differences in the models designed to determine their effects. Because plume blight occurs near sources, plume blight models treat advection and dispersion much more simply than regional models do. The standard EPA models rely on Gaussian plume parameterizations developed to predict ground-level effects (EPA, 1970). Some of these models have been described and evaluated by Latimer and Samuelsen (1978), Eltgroth and Hobbs (1979), Bergstrom et al. (1981), and White et al. (1985). Because plume blight occurs near sources and is often of concern in clean, dry air, plume blight models also treat atmospheric chemistry fairly simply. The EPA screening model (EPA, 1988a), for example, neglects the conversion of SO2 to sulfate. Even the more sophisticated models (e.g., EPA, 1980a) incorporate only a rudimentary mechanism for the gas-phase oxidation of SO2 to form sulfate particles, along with minimal O3-NOx chemistry for the production and loss of NO2. Plume blight models do, however, treat primary particle emissions in considerable detail (White and Patterson, 1981; White et al., 1986).

The visual effects of approximately uniform haze are adequately characterized for many purposes by a single number, the light-scattering or total extinction coefficient. The effects of plume blight, however, depend on the scattering coefficient, absorption coefficient, and scattering phase function (giving the angular distribution of scattered light) of both plume and background, along with the width, distance, and elevation of the plume (Latimer and Samuelsen, 1978; White and Patterson, 1981; White et al., 1986).
Plume blight models therefore require detailed calculations of radiative transfer.

Measurement strategies for characterizing widespread haze and plume blight also differ. The composition of a plume from a tall stack can be difficult to ascertain without airborne measurements (Richards et al., 1981). In-stack measurements can be unreliable indicators of plume particle characteristics, and ground-level measurements of the entrained plume can be dominated by background. Haze, on the other hand, is routinely sampled by fixed surface observatories (Chapter 4). Plume blight is by definition remotely sensed; it is most naturally documented

by teleradiometer or camera (Seigneur et al., 1984). While haze can also be remotely sensed, it is more reliably documented by ambient light extinction measurements.

As noted above, current regulatory programs rely heavily on plume blight models for judging the visibility effects of proposed new sources. However, a proposed source's potential for plume blight does not necessarily correlate well with its potential to form widespread single-source haze or with its contribution to regional haze. The question of correlations has not received much study, but certain qualitative observations can be made. In support of a correlation, a source's potential for both plume blight and haze increases with its emission rate. Moreover, incremental increases in both plume blight and haze are most noticeable in otherwise clean air. However, haze is due predominantly to secondary particles, while plume blight is mostly due to NO2 and primary particles. Source potentials for forming haze and plume blight are thus not necessarily comparable across source types emitting different pollutants. Furthermore, haze and plume blight are enhanced by different atmospheric conditions, so their incidence frequencies may not be comparable. Finally, plume blight cannot occur where local topography restricts sight paths, while haze can occur anywhere.

Critique of Single-Source Plume Blight Models

Models for the visual appearance of coherent plumes from single sources fall into two categories. The simpler plume blight models recommended for use by EPA are modified Gaussian plume dispersion models developed for time-averaged estimates of plume concentrations and visual appearance. More advanced single-source models incorporate explicit chemical reactions and aerosol processes within the dispersing plume; these will be referred to as reactive plume models. See Appendix C for more information on both kinds of models.
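The Gaussian plume formulation on which the simpler models rest can be sketched directly. The emission rate, wind speed, and dispersion parameters below are hypothetical placeholders; regulatory models obtain the sigma values from stability-class parameterizations, and predicting visual appearance requires radiative-transfer calculations on top of this concentration field.

```python
import math

def gaussian_plume(q, u, y, z, h, sigma_y, sigma_z):
    """Time-averaged concentration (g/m3) at a receptor.

    q        -- emission rate (g/s)
    u        -- mean wind speed (m/s)
    y, z     -- crosswind and vertical receptor coordinates (m)
    h        -- effective stack height (m)
    sigma_y, sigma_z -- dispersion parameters (m) evaluated at the
                        receptor's downwind distance
    """
    lateral = math.exp(-y**2 / (2 * sigma_y**2))
    # The z + h term is a mirror source that reflects the plume off
    # the ground rather than letting it diffuse below the surface.
    vertical = (math.exp(-(z - h)**2 / (2 * sigma_z**2))
                + math.exp(-(z + h)**2 / (2 * sigma_z**2)))
    return q / (2 * math.pi * u * sigma_y * sigma_z) * lateral * vertical

# Hypothetical plume: 100 g/s source, 5 m/s wind, receptor on the
# plume centerline at stack height.
c = gaussian_plume(q=100.0, u=5.0, y=0.0, z=150.0, h=150.0,
                   sigma_y=200.0, sigma_z=100.0)
```

Because the formula describes a smooth, time-averaged concentration field, it illustrates why such models cannot reproduce the looping, instantaneous plume structure an observer actually sees, a limitation discussed below.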
Technical Adequacy

The simple plume blight models and the more complex reactive plume

models both are derived from solutions to the atmospheric diffusion equation and have a well-documented theoretical basis. Both types of models have been adapted for use with atmospheric optical models. However, the simpler models have several limitations rooted in their representation of atmospheric mixing, their abbreviated chemistry, and their geometric assumption that the plume must be seen from the outside.

One weakness of single-source models in general arises from the sensitivity of the horizontal integral of light extinction to vertical dispersion. Given the instantaneous nature of plume perception, an accurate depiction of many plumes requires capturing their often looping or meandering appearance. Gaussian plume (averaged) models have a symmetric geometry that cannot capture such instantaneous and irregular plume structure. Even for the more advanced models that can depict meandering plumes, data on the temporal and spatial patterns of turbulence at the elevation of plumes are difficult to obtain.

The simpler plume blight models are most appropriate for cases (often near the source) where light extinction within the plume is due to primary particles and NO2. Where light extinction is dominated by secondary particles formed within the plume, a reactive plume model that can track particle formation should be used. Where the observer is located within the plume (within a plume traveling down a canyon, for example), a reactive plume model that allows the observer to be inside the plume should be used.

The input data required by single-source models are difficult to obtain. Particle size distributions and water contents measured at the high temperatures inside stacks, before the pollutants are released to the atmosphere, can be quite different from those of particles beyond the tip of the stack, where the plume cools rapidly.
One would like to use the composition of the plume a few tenths of a kilometer downwind as the initial condition for a simple plume model. Such measurements are difficult to make; they can require the use of aircraft for sampling. One alternative is to sample from the stack, using a dilution sampler that cools and dilutes the effluent prior to measurement, thereby mimicking the cooling that occurs in the early stages of plume insertion into the atmosphere. The EPA-recommended plume blight models and similar models proposed by others have been evaluated against data collected for this purpose at a variety of large point sources (White et al., 1985, 1986). Plumes were viewed against the sky in all cases. The accuracy with which plumes' essentially instantaneous appearance could be predicted

was limited by the statistical treatment of fluctuating dispersion characteristics in all models. When observed rather than predicted dispersion parameters were used as inputs to the plume chemistry and optics modules, the EPA and Environmental Research and Technology models satisfactorily reproduced the observed appearance of NO2-dominated plumes. The EPA and other models have been much less successful in reproducing the appearance of plumes that carry significant particle loadings.

Reactive plume models have undergone limited evaluation against field data, largely because data sets sophisticated enough to support these models are hard to obtain. Most data sets do not include size-resolved plume particle composition; background particle concentrations; or background concentrations of ammonia, hydrogen peroxide, or reactive hydrocarbons. The most comprehensive evaluation study to date was carried out by Hudischewskyj and Seigneur (1989).

Single-source plume models obviously do not have to separate the effects of many contributing sources. Their temporal resolution is usually a few minutes or longer, whereas actual plumes can change appearance in seconds. The simpler, Gaussian plume-based visibility models run into difficulty in rough terrain, because they cannot represent a plume that impinges directly on elevated features. Even when the plume does not impinge directly on such features, it can be difficult to discern a plume seen against elevated terrain.

Two versions of EPA's plume visibility model are in general circulation: PLUVUE and PLUVUE II. PLUVUE II has been known for some time to have coding errors that render it essentially unusable when the plume is viewed against terrain rather than sky. PLUVUE has recently been found to have an error that can cause it, too, to generate faulty output for terrain backgrounds.
Both models are now being corrected by EPA (D. Latimer, pers. comm., Latimer and Associates, Denver, Colo.).

Administrative Feasibility

The personnel resources required to use simple Gaussian plume blight models are readily available to most air-pollution control agencies. Indeed, most agencies already perform calculations with these models as part of the new-source review process. The more complex reactive plume models must be applied by PhD-level personnel.

The simple Gaussian models are compatible with current regulatory programs for the prevention of significant deterioration of air quality in relatively clean areas; indeed, they are about the only tools used routinely for visibility analysis by regulatory agencies. The reactive models are not yet widely used within the regulatory community, but efforts in that direction should be encouraged, because these models can address a wider range of conditions than can the simple plume blight models. Simulated photographs that show how plumes would look against the background sky can help communicate the results of plume models (Williams et al., 1980).

Flexibility

The Gaussian single-source plume blight models that have been adopted for regulatory use are fairly inflexible. To the regulator, this could appear advantageous: all model users must make similar simplifying assumptions to apply the models to a particular problem. The disadvantage is that those assumptions (straight plumes dominated by NO2 and primary particles) often do not correspond to the situation being modeled. The reactive plume models, while more difficult to apply, are better able to represent actual conditions. They also provide a more flexible framework for incorporating advances in the scientific understanding of plume structure and aerosol processes.

Balance

The balance between data collection and data analysis is not likely to be distorted when simple Gaussian plume blight models are used, because the data and personnel resources their use requires are modest. Programs that involve reactive plume models must be carefully structured: funds must not run short before the experiments needed to acquire input data are completed, or the models will have to be run with assumed rather than measured emissions and ambient data.

Bridging the Gap between Near-Source Models and the Regional Scale

If a proposed single new source is large enough, it may have significant effects on regional haze. A succession of large sources, analyzed and built one at a time, also can in the aggregate affect regional haze. For that reason, even in single-source siting decisions, it may be necessary to consider effects on visibility at spatial scales greater than those of plume blight or reactive plume models. The clearest way to do this is to introduce the proposed new source or modified existing source into an appropriate multiple-source regional-scale model chosen from those described in Appendix C and in the regional haze section of this chapter. Doing so requires that data bases on regional air quality and visibility, and on the other sources already present in the region, be maintained.

An interagency working group on air quality modeling recently has been formed to assess the feasibility of multiple-source regional air quality modeling conducted by public agency staffs in support of single new source siting decisions (P. Hanrahan, pers. comm., Oregon Department of Environmental Quality, 1992). Its preliminary conclusion is that such modeling is highly desirable and technically feasible. A multiple-source Lagrangian puff model with linear chemistry probably will be selected for initial use, followed by a search for a model with a more explicit description of atmospheric chemistry and aerosol processes. The model will be driven by the U.S. Environmental Protection Agency's national emissions data base, modified by any more exact data available locally.
Factors that will determine the rate of progress toward an operational regional-scale model in the West include coordination among the various agencies and access to somewhat larger computers than most such agencies now own. A training program will be needed to spread the operational skills required to support such models throughout the affected agencies. The most important missing piece of this system, within a regulatory context, is the lack of clear criteria for judging how much of an increment to a regional haze problem from a single new source is "too much."
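The essential machinery of a Lagrangian puff model with linear chemistry can be sketched minimally: puffs of pollutant are released at intervals, advected by the wind, grown by dispersion, and subjected to a first-order chemical conversion (here SO2 to sulfate). The conversion rate, puff-growth law, emission rate, and wind are all illustrative assumptions, not features of any agency model mentioned above.

```python
import math

# Assumed first-order SO2 -> sulfate conversion rate, roughly 1% per hour.
K_CHEM = 0.01 / 3600.0  # 1/s

class Puff:
    def __init__(self, x, y, so2_mass):
        self.x, self.y = x, y   # position (m)
        self.so2 = so2_mass     # g of SO2 carried by the puff
        self.sulfate = 0.0      # g of sulfate formed so far
        self.sigma = 50.0       # initial puff spread (m), assumed

    def step(self, u, v, dt, growth=0.5):
        # Advect with the wind, grow the puff, convert SO2 linearly.
        self.x += u * dt
        self.y += v * dt
        self.sigma += growth * dt  # crude linear spread growth, assumed
        reacted = self.so2 * (1.0 - math.exp(-K_CHEM * dt))
        self.so2 -= reacted
        # 96/64 is the mass ratio of sulfate formed per SO2 consumed.
        self.sulfate += reacted * 96.0 / 64.0

# Release one puff per 10-minute step from a source emitting 50 g/s of
# SO2, then march all puffs forward through 6 hours of 2 m/s wind.
dt = 600.0
puffs = []
for _ in range(36):
    puffs.append(Puff(0.0, 0.0, 50.0 * dt))
    for p in puffs:
        p.step(u=2.0, v=0.0, dt=dt)

total_sulfate = sum(p.sulfate for p in puffs)
```

Summing Gaussian contributions from each puff at a receptor then yields concentration fields; the linear chemistry is what makes such a model tractable for routine agency use, and also what a later search for more explicit chemistry would replace.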

SELECTION OF MODELS TO ADDRESS OTHER AIR-QUALITY PROBLEMS

Visibility degradation is just one of many important air-quality problems, some of which have been or are being considered as objects of federal, state, or local legislation and regulation. Control measures that focus on one issue, such as ecosystem damage caused by acid rain or the health effects of fine particles, can alleviate other air-quality problems, such as visibility degradation in Class I areas. This complicates the task of the regulator, who might be accustomed to dealing with each issue separately. The source apportionment models discussed in this report can provide the technical basis for determining the effects of proposed controls on the attainment of several air-quality goals.

In analyzing several problems simultaneously, one must choose a model that can be applied to all the problems at hand. In many cases, complex mechanistic models will describe a broad range of physical and chemical processes associated with the major gaseous and particulate pollutants. As they analyze visibility impairment, these models can simultaneously determine the concentrations, fluxes, and effects of primary emissions as well as secondary oxidants, acids, and aerosols. Simpler approaches can be applied to analyses of limited scope. For example, simple speciated rollback models can be used for a preliminary examination of the effect of the sulfur reductions mandated by the 1990 Clean Air Act Amendments on visibility in Class I areas in the eastern United States (Trijonis et al., 1990). Such an analysis can provide useful information about both visibility and acid deposition, because sulfate is a dominant component of eastern haze and sulfuric acid is a major component of acid rain in the East.
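A speciated rollback calculation of the kind just cited is little more than proportional scaling of species concentrations linked to a light extinction budget. The species concentrations and dry extinction efficiencies below are stand-in values chosen for illustration, not measurements from any study cited in this chapter.

```python
# Hypothetical ambient fine-particle concentrations (ug/m^3) and assumed
# dry extinction efficiencies (m^2/g); both are illustrative placeholders.
concentrations = {"sulfate": 8.0, "nitrate": 1.5, "organics": 3.0, "soil": 1.0}
efficiencies = {"sulfate": 3.0, "nitrate": 3.0, "organics": 4.0, "soil": 1.0}

def extinction(conc):
    """Particle light extinction coefficient, in inverse megameters
    (Mm^-1), as the efficiency-weighted sum over species."""
    return sum(efficiencies[s] * conc[s] for s in conc)

def speciated_rollback(conc, emission_change):
    """Scale each species in proportion to the assumed fractional change
    in its emissions or precursor emissions; e.g. {'sulfate': 0.5}
    represents a 50% cut in SO2 emissions.  Unlisted species are held
    constant."""
    return {s: c * emission_change.get(s, 1.0) for s, c in conc.items()}

b_before = extinction(concentrations)  # baseline particle extinction
b_after = extinction(speciated_rollback(concentrations, {"sulfate": 0.5}))
```

The strict proportionality between emissions and concentrations is exactly the assumption that, as noted later in the chapter, limits rollback models to situations within the realm of past experience.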
If NOx or volatile organic compound controls are to be evaluated, more complex models would be preferred, because the chemical interactions of these pollutants are more complex. The possible visibility effects of PM10 controls in nonattainment areas also illustrate the complications that arise when multiple regulatory goals and multiple sources are taken into account. The question in this case is whether controls adopted for countywide PM10 abatement are adequate to produce acceptable visibility in nearby Class I areas. If a small PM10 nonattainment area is the only major source of airborne particles in the

region, then there might be a simple way to assess the effects of local PM10 controls on broader-scale visibility. However, if the local PM10 nonattainment area is only a small part of a large multiple-source region, then more complex modeling than that needed for a study of PM10 control alone will be required to sort out the various source contributions to the regional visibility problem.

An assessment of the potential visibility benefits of existing and expected legislation and regulation aimed at alleviating other air-quality problems should be an integral part of the decision-making process. If visibility is considered in the selection of the air-quality models used to analyze those other problems, then the effects on visibility of policies directed at them are more likely to be taken into account.

SUMMARY

The committee evaluated several alternative methods that could be used alone or in combination to analyze the effects of individual emissions sources or source classes on atmospheric visibility. Empirical methods range from qualitative tools, such as photography, through models based on material balances, such as speciated rollback and chemical mass balance models, to techniques based on statistical inference. We also have described models derived from the basic equations that govern atmospheric transport, which sometimes include chemical reaction and aerosol processes.

Many of the more empirical models already have been developed nearly to their fullest potential, and therefore a fairly clear picture is emerging of how these models could fit into a comprehensive source apportionment program. Photographic methods and other simple source identification systems provide an inexpensive way to qualitatively implicate single emissions sources that create visible plumes in or near Class I areas.
Photographs, videotape, and film can provide sufficient evidence for regulatory action in simple cases, but they are not likely to be useful for source apportionment of widespread regional hazes. Speciated rollback models linked to light extinction budget calculations represent perhaps the only complete system of analysis that can be used for regional haze source apportionment throughout the United

States based on data available today. The feasibility of such analyses is demonstrated for several regional cases in Chapter 6, where a speciated rollback model is used to develop a preliminary description of the likely origins of visibility impairment in the eastern, southwestern, and northwestern regions of the United States. The speciated rollback model is best suited to analyses of regional haze; it is not suited to projecting the effects of changes in single members of a group of similar sources, or the effects of proposed new sources in areas where the air is now clean. New sources lie outside the realm of past experience upon which rollback models are built.

Receptor-modeling methods are valid, well accepted, and widely available for source apportionment of primary particles. Regression analysis and chemical mass balance techniques each have appropriate uses; regression analysis requires less prior knowledge of emissions characteristics, but it is more susceptible to model specification errors and consequent biases. However, neither approach is yet developed to the point of acceptance for apportioning secondary particles (sulfates, for example) among sources. It is not known whether further research will lead to a fully successful receptor-oriented model for secondary-particle formation; certainly many attempts are being made to expand the applicable range of receptor models. Receptor models currently must be used in conjunction with other models for secondary-particle formation. This simultaneous use of more than one model is useful in many cases, and therefore one can expect chemical mass balance receptor models to be used as part of comprehensive systems of source apportionment, either as a primary tool for source apportionment or as a check on the consistency of findings obtained by other methods.
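The chemical mass balance idea can be sketched as a small least-squares problem: ambient species concentrations are modeled as a linear combination of known source composition profiles, and the fitted coefficients are the source contributions. The two source profiles and the ambient sample below are invented for illustration; real applications use many species and sources, with measurement uncertainties weighted into the fit.

```python
# Each source profile gives the mass fraction of each chemical species in
# that source's emitted particles.  Profiles and the ambient sample are
# invented numbers for illustration.
species = ["Si", "S", "K", "Pb"]
profiles = {
    "soil_dust":  [0.30, 0.01, 0.02, 0.00],
    "oil_burner": [0.01, 0.20, 0.00, 0.01],
}

# Ambient concentrations (ug/m^3) synthesized here from 10 ug/m^3 of dust
# plus 5 ug/m^3 of oil-burner particles, so the fit should recover
# approximately those contributions.
ambient = [0.30 * 10 + 0.01 * 5, 0.01 * 10 + 0.20 * 5,
           0.02 * 10 + 0.00 * 5, 0.00 * 10 + 0.01 * 5]

def cmb_two_source(a, b, y):
    """Least-squares fit y ~ s1*a + s2*b by solving the 2x2 normal
    equations directly; returns the source contributions (s1, s2)."""
    aa = sum(x * x for x in a)
    bb = sum(x * x for x in b)
    ab = sum(x * z for x, z in zip(a, b))
    ay = sum(x * z for x, z in zip(a, y))
    by = sum(x * z for x, z in zip(b, y))
    det = aa * bb - ab * ab
    return (ay * bb - by * ab) / det, (by * aa - ay * ab) / det

s_dust, s_oil = cmb_two_source(profiles["soil_dust"],
                               profiles["oil_burner"], ambient)
```

The sketch also shows why the method breaks down for secondary particles: sulfate formed in the atmosphere appears in the ambient sample without matching any emitted source profile, so its mass cannot be attributed this way.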
Mechanistic source apportionment models for the analysis of regional haze problems are also in considerable demand. In principle, such models could contain a physically based cause-and-effect description of atmospheric processes that would provide detailed and accurate predictions. In practice, mechanistic models exist in pieces that have not been fully assembled and tested. Their completion and testing should be pursued as a high priority. Some available mechanistic models present a partial picture of the effects of source emissions on pollutant concentrations. They can trace transport paths from source to receptor and in some cases can predict concentrations of secondary particles, such as sulfates and nitrates. These partial mechanistic models probably will find use in regional visibility analysis, but they must be combined with other empirical methods (such as the imposition of measured particle size distributions) to form a complete source apportionment system. Whether these semiempirical, semimechanistic models can produce predictions more accurate than those of a fully empirical speciated rollback model is an interesting research question that should be investigated.

There is a clear need for a comprehensive and versatile single-source model for use in predicting the effects of large new sources. An advanced single-source model should not be restricted to Gaussian plume formulations. Further research will be needed to create such a model, because current plume blight models do not include the atmospheric aerosol processes that lead to light scattering and absorption by particles. Efforts are under way by public agencies in the western states to develop the operational capability to evaluate the effects of proposed large single sources on regional-scale visibility. The procedure is to maintain a baseline multiple-source regional model into which data on a single new source can be inserted. This development effort should be encouraged.

In summary, we consider methods for source apportionment that are either available or could be assembled from available components. Following our emphasis on a nested approach in which models of increasing difficulty and accuracy are chosen, the most attractive systems are judged to be as follows.

Regional haze assessment:
- Speciated rollback models;
- Hybrid combinations of chemical mass balance receptor models with secondary-particle models;
- Mechanistic transport and secondary-particle formation models used with measured particle size distribution data to facilitate light-scattering calculations.
Analysis of existing single sources close to the source:
- Photographic and other source identification methods (in simple cases only);
- Hybrid combinations of chemical mass balance or tracer techniques with secondary-particle formation models that include explicit transport calculations and an adequate treatment of background pollutants;
- The most advanced reactive plume models available, hybridized with measured data on particle properties in such plumes and accompanied by an adequate treatment of background pollutants.

Analysis of new single sources close to the source:
- The most advanced reactive plume models available, hybridized with measured data on particle properties in the plumes of similar sources and accompanied by an adequate treatment of background pollutants.

Analysis of single sources at the regional scale:
- Insertion of the single source in question into an appropriately chosen multiple-source description of the regional haze problem.

The hybrid models mentioned above are available to the extent that the necessary pieces of the modeling systems exist. Any novel combination of existing models should be carefully evaluated. For the reasons explained in the introduction to this chapter, most source apportionment studies would benefit from the use of several candidate models; hence, groups of models rather than single models are noted above. We emphasize that the skill and knowledge of the personnel executing a modeling study are often more important in determining the quality of the study than is the choice of modeling method.

We recommend research to achieve several goals. First, fully developed mechanistic models for the chemical composition, size distribution, and optical properties of atmospheric particles and gases should be created and tested. Two types of mechanistic models are needed: an advanced reactive plume aerosol process model for the analysis of single-source problems close to the source, and a grid-based multiple-source regional model for the analysis of regional haze.
In pursuit of those objectives, a program of careful field experiments and data analysis must be designed and conducted to support the use of aerosol process models (for example, by collecting data on the uptake of water by airborne particles) and to better characterize emission sources (by measuring the chemical composition and size distribution of primary particles at their sources, in addition to the gaseous precursor emissions). Finally, experimental programs must be designed and conducted to test the performance of completed models of all kinds against field observations of emissions and air quality.