

The National Academies | 500 Fifth St. N.W. | Washington, D.C. 20001
Copyright © National Academy of Sciences. All rights reserved.




OCR for page 66
D

Workshop Presentations

A DEVELOPING GENERATION OF OBSERVATION AND MODELING STRATEGIES

James G. Anderson
Harvard University

In the next few decades, several problems of considerable societal interest will emerge with a central theme of the balance between societal objectives and scientific curiosity-driven research, both of which are very important for the future. Three problems of key importance in the area of observations and modeling are:

- Migration of nitrate, sulfate, heavy metals, and organic soot from urban or regional areas into the broader environment, and the associated public health issues;
- Forecasting climate change and testing the forecast in a way that is acceptable to a much broader range of individuals involved both in science and in public policy; and
- Ultraviolet dosage, which is in many ways a statement of what it is that society really cares about, particularly in connection with the ozone question.

The way in which we attack these problems (the observations, the modeling, and the way public understanding evolves) raises crucial issues. For example, Charles Kolb's presentation provides a beautiful introduction to the issue of undersampling and the requirement for significant advances in core technology that must underpin the scientific case leading to effective public policy.

The issue of carbon sources and sinks is strategically tied back to the question of nitrate, sulfate, heavy metal, and organic soot emission. If we can attack one problem and solve it, we will be attacking both. The question of how nitrate will affect human health is closely tied to carbon sources and sinks, which in turn are linked to climate. Consequently, answering the carbon-nitrate-sulfate question requires very high spatial resolution of fluxes, isotopes, and reactive intermediates.
There are particular problems with region-specific studies (Box 1). For example, the Indian subcontinent as it links into the tropical region during the monsoon season is as different from the other seasons as any two regions on Earth. In addition, the way in which these systems couple from the regional to the global scale is extremely important for the issue of prediction. Describing a system and understanding it well enough to predict really separate the strategies of observations and modeling.
In addition to seasonal signatures, we want to understand vertical fluxes and urban-regional source sinks driven by human activity. We understand chemical transformations very well, as illustrated by the Los Angeles basin. We know exactly what reactions are taking place, but the lack of specificity on the location and strength of sources, and the way those couple into the chemical transformation, serves to prevent a link between science and public policy on that question.

The strategy for analyzing NOx across the globe by comparing satellite observations, models, and in situ aircraft observations hinges on the fact that the NO to NO2 ratio goes up dramatically with increasing height in the troposphere. For example, NO2 measurements from the European Space Agency's Global Ozone Monitoring Experiment (GOME) satellite provide a way of analyzing NO2 in the boundary layer and just above the boundary layer. Therefore it is possible to test the differences between the GOME satellite measurements and global chemical models. Those differences tell us where we have to attack the problem with a more sophisticated set of measurements. In particular, the large differences between the modeled and satellite-observed concentration fields establish the priorities for aircraft field campaigns, which are the only means by which we can adjudicate these differences.
Now consider the third question, that of UV dosage. Here we have an emerging marriage between the atmospheric dynamics, chemistry, and medical communities, because malignant melanoma, as we'll see in a moment, is a huge and increasing health issue. In fact, skin cancers are the only cancers worldwide that are increasing in a statistically important way in the face of developing medical methods. So it is this union between the dynamics and the loss of ozone at midlatitudes (Figure 1) that provides the fundamental information that most of the ozone loss is taking place in the very lowest part of the stratosphere. This leads to several questions:

- Which mechanisms are responsible for the continuing erosion of ozone over midlatitudes of the Northern Hemisphere?
- Will rapid loss of ozone over the arctic in late winter worsen? Are these large losses coupled to midlatitudes?
- How will the catalytic loss of ozone respond to changes in boundary conditions on water and temperature forced by increasing CO2, CH4, and so forth?
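As an aside on the NOx strategy above: the claim that the NO-to-NO2 ratio rises with altitude follows from the photostationary-state (Leighton) relationship, [NO]/[NO2] ≈ j(NO2) / (k·[O3]), where k is the rate constant of the NO + O3 reaction. Because k falls steeply at the colder temperatures aloft, the balance shifts toward NO. The short sketch below illustrates the effect with round-number inputs; the photolysis rate, ozone abundances, and altitude conditions are illustrative assumptions rather than measured values, while the rate-constant expression follows the standard JPL kinetics recommendation.

```python
import math

def leighton_ratio(T_kelvin, o3_molec_cm3, j_no2=8.0e-3):
    """Photostationary-state NO/NO2 ratio = j(NO2) / (k * [O3]).

    k for NO + O3 -> NO2 + O2 uses the Arrhenius form
    k = 3.0e-12 * exp(-1500/T) cm^3 molec^-1 s^-1 (JPL recommendation).
    j_no2 (s^-1, a typical midday value) is an illustrative assumption.
    """
    k = 3.0e-12 * math.exp(-1500.0 / T_kelvin)
    return j_no2 / (k * o3_molec_cm3)

# Illustrative conditions (assumed, not taken from this chapter):
#   boundary layer:    298 K, ~40 ppb O3 in surface air  -> ~1.0e12 molec/cm^3
#   upper troposphere: 220 K, ~75 ppb O3 in thinner air  -> ~6.5e11 molec/cm^3
surface = leighton_ratio(298.0, 1.0e12)
upper = leighton_ratio(220.0, 6.5e11)

print(f"NO/NO2 near the surface:      {surface:.2f}")
print(f"NO/NO2 in upper troposphere:  {upper:.2f}")
print(f"increase with height:         {upper / surface:.1f}x")
```

Even with these rough inputs the ratio grows by nearly an order of magnitude between the boundary layer and the upper troposphere, which is the leverage the GOME-versus-model comparison exploits.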
How we respond in the future requires an understanding of the mechanisms that control the long-term erosion of ozone at midlatitudes. We know why ozone is destroyed over the Antarctic and over the arctic, but there is also a seasonally dependent midlatitude ozone loss. The months of March, April, and May define a key period when schools let out, final exams are over, and the younger population gets a large episodic dose of ultraviolet radiation. This March-April-May period shows the largest long-term erosion over the past 20 years, approaching 10% ozone loss per decade. However, the strategy one takes depends, to an extent, on whether this is a societal or a scientific issue.
Let's take the position that we don't know the mechanisms that control ozone erosion over midlatitudes (which at this juncture is true) and assume that the erosion of ozone will continue as it has in the past decades. Then we're going to look at the coupling with the large ozone losses in the arctic triggered by conversion of inorganic chlorine to free-radical form on ice particles or cold liquid aerosols each winter in the polar vortex.

FIGURE 1 Midlatitude ozone loss.

We come to the conclusion, in such an analysis, that it is the dynamical structure of the atmosphere that underlies the fundamental cause of this midlatitude ozone loss, not chemistry directly.
From the societal perspective, the raw numbers are large and important for basal cell carcinoma: 800,000 cases per year. Until we understand the mechanism of midlatitude ozone loss, a simple extrapolation may be inaccurate, but it provides an important reference for discussion (Figure 2). By simple extrapolation there would be an increase from 800,000 cases of basal cell carcinoma in the United States annually to nearly 1.9 million by 2060. The logarithmic dependence of the cross section results in a 2% increase in UV at these optical depths for a 1% decrease in ozone. The biological amplification factor emerges from the medical community, and it is a change in human morbidity. For malignant melanoma, the numbers are very much smaller, but the fractional death rate, as opposed to morbidity, is much greater.
Consider the pattern of ozone over the Northern Hemisphere, and ask if this simply results from large ozone concentrations over the arctic migrating back into midlatitudes. That would correspond to chlorine- and bromine-catalyzed

ozone destruction over the winter arctic merging back into midlatitudes. It's a completely reasonable, simple, understandable hypothesis, and it emerges from the fact that in the early 1970s we had this dome of ozone that represented the sequestration of ozone moving from the low latitudes into the high latitudes. As the ozone moves northward and downward, it's sequestered in this dome, and that's the way the world has worked for millions and millions of years. By the late 1990s, as the ozone layer began to thin in many winters, it was emulating the Antarctic in a dramatic way. Does this low-ozone air created in the winter vortex flow back into midlatitudes, causing the observed minimum in the March-April-May period? An analysis of data from the last National Aeronautics and Space Administration (NASA) arctic mission indicates clearly that this does not occur. There are indeed large ozone losses in the vortex, but all of the large-scale flow is from the tropics northward and downward. Also, because we know the seasonal phase of CO2 and water over the tropical tropopause, we know that there is no communication backward from the polar regions to midlatitudes in these key months of ozone loss. Therefore, it isn't simply ozone-depleted arctic air moving back into midlatitudes.

FIGURE 2 Hypothetical trend for basal cell carcinoma if the trends in midlatitude ozone erosion continue unchanged.

Does long-term ozone erosion result from chemical loss of ozone at midlatitudes in the lower stratosphere? Susan Solomon made the very reasonable suggestion that the penetration of cirrus clouds and cold aerosols into the lower stratosphere initiates the conversion of inorganic chlorine to free-radical form,

and she carried out a number of modeling studies that support this.1 However, evidence garnered from hundreds of crossings of the tropopause by the ER-2 aircraft, which provides very high-resolution simultaneous measurements of tropopause position, water vapor and temperature, percentage of observations with ice saturation, and ClO concentrations, demonstrates that cirrus clouds and cold aerosols capable of providing the heterogeneous site for inorganic chlorine to free-radical conversion do not exist. The absence of observed ClO is the most compelling point. It would take about 50-200 parts per trillion (ppt) of ClO to drive the ozone loss that's observed, but the experimental measurements showed less than 2-4 ppt of ClO, all the way up to 4 km above the tropopause. Consequently, we do not believe that in situ loss of ozone is responsible for the long-term trend in midlatitude ozone erosion, even though that would be the most reasonable explanation.
This brings us back to the fundamental unsolved question of the coupling between the tropics and high latitudes. As the temperature of the ocean surface warms in response to increasing CO2 forcing, how does it affect the boundary condition on the entrance of water vapor into the stratosphere? The tropical tropopause constitutes a valve that desiccates the stratosphere and strictly controls water vapor entering the stratosphere from the troposphere. Large-scale ascent operating above the tropical tropopause delivers this mixing ratio of water vapor into the high latitudes. For temperatures running from 192 to 200 K and a given water vapor curve of 6 parts per million (ppm), the trigger point for formation of high ClO is about 195 K. We have verified this experimentally using ER-2 observations with trajectory calculations demonstrating the temperature history of air parcels moving within the vortex.
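The sense of this water-vapor control can be illustrated with a short calculation. Using the Murphy-Koop expression for the saturation vapor pressure over ice, one can solve for the frost point at a stratospheric pressure level for two water mixing ratios: more water raises the temperature at which ice, and hence heterogeneous chlorine activation, becomes possible. The 50-hPa level and the 8-ppm comparison value are illustrative assumptions; the 6-ppm entry value and the roughly 195 K activation threshold come from the text (activation on cold liquid aerosols begins a few kelvin above the frost point, which is why the quoted threshold sits above these frost-point numbers).

```python
import math

def p_ice_sat(T):
    """Saturation vapor pressure over ice in Pa (Murphy-Koop formulation)."""
    return math.exp(9.550426 - 5723.265 / T + 3.53068 * math.log(T)
                    - 0.00728332 * T)

def frost_point(h2o_ppm, pressure_pa=5000.0):
    """Temperature (K) at which the given H2O mixing ratio saturates over ice.

    Solved by bisection; the 50-hPa (5000 Pa) default is an illustrative
    choice of lower-stratospheric pressure level.
    """
    target = h2o_ppm * 1.0e-6 * pressure_pa  # H2O partial pressure, Pa
    lo, hi = 150.0, 250.0
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if p_ice_sat(mid) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

t6 = frost_point(6.0)   # ~6 ppm entry value quoted in the text
t8 = frost_point(8.0)   # assumed moistened stratosphere, for comparison
print(f"frost point at 6 ppm: {t6:.1f} K")
print(f"frost point at 8 ppm: {t8:.1f} K (threshold shifts up {t8 - t6:.1f} K)")
```

A one-third increase in stratospheric water raises the frost point by well over a kelvin in this sketch, which is the direction of the threshold shift argued in the next paragraph.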
Consequently, the amount of water vapor, as it increases in the stratosphere in response to increasing temperatures at the tropical tropopause, would instigate a shift in the threshold temperature required for ClO formation to temperatures above 195 K, thereby aiding the dramatic loss of ozone in the arctic winter vortex. At the same time, the increase in water vapor in the vortex induces radiative cooling that drops the temperature of the wintertime lower stratosphere at high latitudes, thus exacerbating the destruction of ozone by chlorine radicals. The water vapor is the crucial quantity, and the structure of the tropics turns out to be the centerpiece for understanding all of these issues linking climate and trends in UV dosage.
What is needed is to couple the boundary layer in the tropics all the way up through the tropical transition layer into the stratosphere (Figure 3). The geographic coverage is crucial. The eastern tropical Pacific over Central America is dramatically different from the western tropical Pacific. It is the structure of convection in the tropics, the control of water vapor in the middle-upper troposphere, and the formation of high-altitude cirrus in the region between 13- and 18-km altitude that must be understood before predictions of the impact of climate change can be made.

1Solomon, S.; Borrmann, S.; Garcia, R.R.; Portmann, R.; Thomason, L.; Poole, L.R.; Winker, D.; McCormick, M.P. J. Geophys. Res. 1997, 102(D17), 21411-21429.

FIGURE 3 Deep convection in the tropics.

There are important suggestions in the paleorecord. Consider the Eocene, 50 million years ago. In central Wyoming, there were turtles, alligators, and palm trees, a combination of plant and animal life that extended into central northern Canada. It was an era defined by deep ocean temperatures running 10 K above

present and warm polar sea surface temperatures. The Northern Hemisphere continental interiors were warm throughout the year. There was no glaciation. How this occurred is an important question that carries crucial messages for today, including how we attack the problem scientifically.
The mechanisms responsible for Eocene climate have been suggested to be enhanced meridional heat fluxes. The ocean is always involved here, with reorganization of the atmospheric circulation, enhanced greenhouse warming due to high carbon dioxide, and reductions in global topography. Yet none of the models have been able to capture the gentle difference in temperature between the tropics and high latitudes. This was explained by Sloan and Pollard in 1998, when they introduced polar stratospheric clouds into the model.2 These are the site for heterogeneous reaction, but they also are profoundly important for trapping infrared radiation. They introduced methane in our favorite reaction to produce the oxidation leading to the formation of water. They proposed that the methane came from swamps and wetlands.
The Eocene went on uninterrupted for 10 million years, but the chemical lifetime of methane is about 7 years. Consequently, we don't believe that this is the explanation. We believe that polar stratospheric clouds are trapping high-latitude radiation in the infrared, but we believe the mechanism comes from the following: If CO2 enters the system or heat moves northward, the gradient between the tropics and the poles begins to soften, leading to reduced excitation of gravity waves and planetary waves driving from the troposphere up into the stratosphere. As the flux of upwelling gravity waves and planetary-scale waves is reduced, so is the wave drag effect, which is the dominant pump that lifts material up in the tropics and pushes it down at high latitudes.
If the effectiveness of that pump is reduced, the system relaxes back over the tropics so that the boundary condition on water vapor increases, allowing significantly more water to get into the system. We believe that this is the crucial climate state, and it is at the heart of our understanding of the current climate and also of UV dosage.
This brings us back to analyzing the meridional cross section in three dimensions from the equator to the pole. A long-duration balloon that could remain in the lower stratosphere would allow us to sample vertically by lowering a package. This is similar to stratospheric experiments from the 1980s, but the subtlety of the connection of the dynamics with the radiative and chemical properties demands an entirely different look. We know that we can scan 10 km back and forth, but now we have some tremendous help from fuel cells. The technology of fuel cells allows us to use solar energy for balloon flights lasting several months. Solar energy drives a very efficient propeller to position the balloon at whatever latitude we want to scan. This energy technology has produced major breakthroughs. Even though other approaches are important, we think that the subtlety of this connected system will

2Sloan, L.C.; Pollard, D. Geophys. Res. Lett. 1998, 25(18), 3517-3520.

emerge only out of observations of tracers, particles, and the velocity components associated with this when done in a very sensitive way from an immobile platform.

FIGURE 4 Atmospheric radiative forcing trends.

We must still address the question of testing climate forecasts. Atmospheric CO2 levels are rapidly increasing, as documented in the Intergovernmental Panel on Climate Change (IPCC) report.3 A number of different scenarios have been

3Intergovernmental Panel on Climate Change (IPCC), Climate Change 2001: The Scientific Basis, Houghton, J.T.; Ding, Y.; Griggs, D.J.; Noguer, M.; van der Linden, P.J.; Xiaosu, D., Eds., Cambridge University Press: Cambridge, New York, 2001 (http://www.grida.no/climate/ipcc_tar/).

developed, and it is clear that we're hugging the upper boundary on the release of CO2 that drives the forcing. We are projecting nearly 4 W/m2 by the middle of the twenty-first century (Figure 4). This is emerging out of the natural variability, and it will be extremely important.
The final point is the question of societal objectives. If you look at the question of climate forecast from societal objectives, what we need is an operational forecast that is tested and trusted. Yet this nation has no operational climate forecast. We have no test of the veracity of that forecast, and until we do, we cannot deliver what is needed to the public. The backbone of the climate forecast, of course, is the operational model that links the short-term El Niño scale to the longer term. The observing system is the key challenge for testing the veracity of calculations. Carbon sources and sinks have been discussed. I believe that upper ocean observations, climate data records at the surface, and benchmark observations that establish the long-term evolution of the climate in an absolute sense constitute the centerpiece of what must be done.

DIESEL ENGINES FOR CLEAN CARS?

Thomas W. Asmus
DaimlerChrysler Corporation

While the Diesel engine has been growing in popularity in the light-duty vehicle segments in many parts of the world, it has essentially stalled in the United States in these segments. In large part this results from the absence of financial incentives, given the low levels of fuel taxation in the United States, plus an image problem based largely on experiences of two decades ago. Since that time, Diesel engine technology has improved in many ways, and this manifests itself as increased performance and fuel economy accompanied by reduced emissions, odor, and noise.
In large part this has been made possible not by any scientific breakthroughs but rather by machine design and manufacturing technology advancements, particularly in fuel-injection system components. In the light-duty vehicle segments today the typical fuel economy expectation of a Diesel engine is 40% greater than that of the gasoline engine at equal vehicle performance on a fuel volume-normalized basis. Because Diesel fuel has roughly 15% more energy per gallon than gasoline, the benefit is roughly 25% on an energy-normalized basis.
The U.S. regulatory mandates on criteria emissions have been within reach for gasoline- and Diesel-powered vehicles to the present, but in 2007 the mandates will be beyond reach for the Diesel with any kind of sensible emissions abatement scheme. This, combined with a very weak business case based on low fuel taxation, effectively discourages U.S. industry investment in Diesel, the only practical high-fuel-economy alternative to the more conventional and well-known

gasoline engine technology. While the cost of a modern, high-speed Diesel engine is substantially more than its gasoline counterpart, it is the most cost-attractive alternative to conventional gasoline.

Evolution of the Diesel Engine

Diesel engine combustion is a highly mixing-limited, stratified-charge process, and herein lies its formidable efficiency advantage over homogeneous-charge gasoline as well as its challenges with respect to NOx and particulate matter (PM). Whereas the homogeneous-charge engine relies on inlet throttling for load control, the Diesel relies only on injected fuel quantity. Early attempts at managing this mixing-limited process often involved using an outside source of compressed air to assist the fuel atomization process. Later, much attention focused on manipulating in-cylinder air flows to hasten the mixing process. Meanwhile, machine design and manufacturing specialists found practical means to increase fuel-injection pressures and to better manage machining processes for better control of fuel-injection precision. Ultimately there was less reliance on in-cylinder air flow manipulation to support the mixing process; hence higher engine speeds were enabled by faster combustion, while NOx and PM emissions were reduced. With this, the pre-combustion chamber Diesel became obsolete, and direct injection (open combustion chamber design) became the standard for Diesel engines of almost all sizes and duty cycles. With this came reductions in combustion chamber surface area and also in-cylinder turbulence, both of which contribute to reduced heat losses. Typically a 15% increase in thermal efficiency accompanied this shift. High-pressure common-rail fuel systems are now state of the art; they provide substantially improved combustion manipulation ability compared to all progenitors, and turbocharging has become standard on all small high-speed automotive Diesel engines.
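The fuel-economy figures quoted earlier (a 40% volume-normalized advantage, with Diesel fuel carrying roughly 15% more energy per gallon) can be cross-checked with two lines of arithmetic. The quoted "roughly 25%" is the simple difference of the two percentages; the exact ratio works out slightly lower:

```python
# Volume-normalized fuel-economy advantage of the Diesel (miles per gallon): +40%
volume_advantage = 1.40
# Energy content of Diesel fuel relative to gasoline (per gallon): +15%
energy_content = 1.15

# Energy-normalized advantage = miles per unit of fuel *energy*, not volume
energy_advantage = volume_advantage / energy_content

print(f"additive approximation: {0.40 - 0.15:.0%}")        # the chapter's rough figure
print(f"exact ratio:            {energy_advantage - 1:.1%}")
```

The exact ratio, 1.40/1.15, gives about a 22% energy-normalized benefit, consistent with the chapter's round-number "roughly 25%."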
(Turbocharging Diesel engines is highly beneficial to vehicle fuel efficiency, since smaller engines with lower friction can be used. Any fuel efficiency benefit derived from turbocharging gasoline engines is somewhat more conditional.) Relative to conventional gasoline engines, these engines produce significantly higher torque density and near-competitive power density. At least a portion of Europeans' enthusiasm for Diesel power is their highly desirable drivability and performance characteristics.

Engine-Out Emissions

The fuel efficiency advantage of the Diesel over conventional gasoline is based primarily on the means of load control via injected fuel quantity while the air flow is essentially unrestricted (i.e., there is no inlet throttling); therefore the gas exchange (or pumping) loss is minimal. At light loads, therefore, the overall air-fuel ratio is sufficiently high that homogeneous-charge flame propagation is not possible. Hence, stratified-charge, diffusion-limited combustion is the princi-

FIGURE 1 Two examples of amphiphilic siderophores produced by marine heterotrophic bacteria. Redrawn from Martinez et al. (2000).

tration of CO2 in the deep oceans. All the steps in the nitrogen cycle are catalyzed by metalloenzymes (Figure 2). For example, all forms of nitrogenase (the enzyme responsible for N2 fixation) contain a large number of iron atoms. Iron, molybdenum, and copper are also involved in the various enzymes that carry out denitrification. There is presently wide speculation that iron availability limits the overall

FIGURE 2 A diagram of the nitrogen cycle with catalyzing enzymes and metal requirements of each step. NOTE: AMO = ammonium mono-oxygenase; HAO = hydroxylamine oxidoreductase; NAR = membrane-bound respiratory nitrate reductase; NAP = periplasmic respiratory nitrate reductase; NR = assimilatory nitrate reductase; NIR = respiratory nitrite reductase; NiR = assimilatory nitrite reductase; NIT = nitrogenase; NOR = nitric oxide reductase; N2OR = nitrous oxide reductase.

rate of nitrogen fixation in the oceans.8,9 There is also some evidence that, in some suboxic waters, the concentration of available copper may be too low for the activity of the copper enzyme nitrous oxide reductase, resulting in N2O accumulation and release to the atmosphere.10 Understanding the nitrogen cycle of the oceans and the release of some important greenhouse gases such as N2O to the atmosphere thus requires that we elucidate the acquisition of metals such as iron and copper and their biochemical utilization by various types of marine microbes.
As exemplified above, the major goals of environmental bioinorganic chemistry are to elucidate the structures, mechanisms, and interactions of important "natural" metalloenzymes and metal-binding compounds in the environment and to assess their effects on major biogeochemical cycles such as those of carbon and nitrogen. By providing an understanding of key chemical processes in the biogeochemical cycles of elements, such a molecular approach to the study of global processes should help unravel the interdependence of life and geochemistry on planet Earth and their coevolution through geological time.

8Falkowski, P.G. Nature 1997, 387, 272-275.
9Berman-Frank, I.; Cullen, J.T.; Shaked, Y.; Sherrell, R.M.; Falkowski, P.G. Limnology and Oceanography 2001, 46, 1249-1260.
10Granger, J.; Ward, B. Limnology and Oceanography 2002, 48, 313-318.
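The enzyme-metal pairings summarized in the note to Figure 2 can be collected into a small lookup table. Only the iron in nitrogenase, the iron/molybdenum/copper involvement in denitrification, and the copper in nitrous oxide reductase are stated in the text; the remaining metal assignments are common textbook values included here as assumptions.

```python
# Nitrogen-cycle enzymes and their metal requirements.
# Entries marked "(assumed)" go beyond what the chapter states explicitly.
NITROGEN_CYCLE_ENZYMES = {
    "NIT":  ("nitrogenase (N2 fixation)",                    ["Fe", "Mo"]),  # Fe from text; Mo assumed
    "AMO":  ("ammonium mono-oxygenase",                      ["Cu"]),        # (assumed)
    "HAO":  ("hydroxylamine oxidoreductase",                 ["Fe"]),        # (assumed)
    "NAR":  ("membrane-bound respiratory nitrate reductase", ["Mo", "Fe"]),  # (assumed)
    "NAP":  ("periplasmic respiratory nitrate reductase",    ["Mo", "Fe"]),  # (assumed)
    "NR":   ("assimilatory nitrate reductase",               ["Mo"]),        # (assumed)
    "NIR":  ("respiratory nitrite reductase",                ["Cu", "Fe"]),  # Cu- or heme-type (assumed)
    "NiR":  ("assimilatory nitrite reductase",               ["Fe"]),        # (assumed)
    "NOR":  ("nitric oxide reductase",                       ["Fe"]),        # (assumed)
    "N2OR": ("nitrous oxide reductase",                      ["Cu"]),        # Cu from text
}

def enzymes_requiring(metal):
    """Return the abbreviations of all enzymes that use the given metal."""
    return [abbr for abbr, (_, metals) in NITROGEN_CYCLE_ENZYMES.items()
            if metal in metals]

# The text's point about suboxic waters: low Cu starves N2OR, so N2O accumulates.
print("Cu-dependent steps:", enzymes_requiring("Cu"))
```

A query like `enzymes_requiring("Cu")` makes the text's argument concrete: nitrous oxide reductase is among the copper-dependent steps, so copper scarcity bottlenecks N2O consumption.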

ENVIRONMENTALLY SOUND AGRICULTURAL CHEMISTRY: FROM PROCESS TECHNOLOGY TO BIOTECHNOLOGY

Michael K. Stern
Monsanto Company

Agricultural practices are undergoing a transformation driven partially by advances at the interface of chemistry and biotechnology. This paper outlines some of the new technologies that are playing an integral role in catalyzing that change. Topics of discussion include new chemical process technology and chemical catalysis that allow for more efficient production of herbicides, as well as transgenic crops and their benefits to agriculture and the environment.
Glyphosate is the active ingredient in Roundup herbicide. The process Monsanto uses to manufacture glyphosate relies heavily on chemical catalysis (Figure 1). Improvements in our catalyst have driven profound environmental and economic benefits in the manufacturing of glyphosate. One of the main themes around environmental chemistry moving into the future will be a renewed focus on the development of novel homogeneous and heterogeneous catalysts. One of the beautiful things about catalysts is their ability to positively impact the economics of a process, often without the requirement for major capital investments.

FIGURE 1 The current glyphosate process.

FIGURE 2 Second-generation catalyst.

A key intermediate in the manufacturing of glyphosate is disodium iminodiacetic acid (DSIDA). Originally an HCN-based route was the only process used to manufacture this product. Some of the advantages of this technology are (1) it is proven, (2) it uses readily available raw materials, and (3) the process typically gives good yields. However, there are a lot of challenges associated with this chemistry. For instance, a considerable amount of waste is produced, and HCN is a difficult raw material to handle due to its toxicity. Accordingly, Monsanto wanted to explore other technologies when we were faced with the need to expand our DSIDA capacity in the early 1990s.
We ultimately settled on a novel catalytic route for the production of DSIDA. The initial catalyst was a Raney copper composition that allowed us to convert diethanolamine to DSIDA in a really interesting dehydrogenation reaction. This is an endothermic reaction that gives off hydrogen as a by-product. We were able to get the facility to work with this catalyst, but there were a lot of operational issues associated with this technology. Raney copper is a very malleable, soft metal, which resulted in catalyst stability issues. Ultimately we needed to go ahead and find something that was better.
Copper is essentially the only metal that catalyzes this reaction. The technical challenge was to find a way to stabilize copper under the reaction conditions. We spent a lot of time looking at whether you could put copper directly on carbon. It turns out that you really can't do that very well. Copper likes to move around, particularly under the reaction conditions. So a new catalyst technology was developed using platinum as an anchor that was then coated with copper. This resulted in a very stable catalyst (Figure 2). The new catalyst technology had significant environmental benefits.
These included the use of less toxic raw materials and the elimination of nearly all of the waste produced by the older technology.

Let me switch gears to another catalytic reaction. This is the reaction where we take glyphosate intermediate and convert it to glyphosate using a carbon catalyst. This appears to be a very simple reaction, but there are several technical issues related to the production of by-products. The problem is that the by-products go ahead and react with your desired product to make other undesired by-products. What we needed was a catalyst that could do two reactions at the same time. The first reaction converts glyphosate intermediate to glyphosate, and the other is to react away the undesirable by-products. We were successful in developing this type of catalyst, which resulted in significant environmental and commercial benefits (Table 1).
The major benefits associated with this technology result from the more efficient use of water in the process. With the implementation of this new catalyst technology we have been able to reduce the amount of water flowing through our process by nearly 300 million gallons a year. This resulted in a concomitant reduction in flows to our biotreatment systems that reduces the amount of biosolids we send to landfill. Overall this was a very successful project for Monsanto.
I'd like to change focus from catalysis to another theme in green chemistry: that of atom efficiency. If you noticed, in the glyphosate process we're not completely atom efficient. This is due to the fact that putting on the phosphonomethyl group is challenging. The issue is that if you don't protect the primary amine, you end up doing two phosphonomethyl reactions that yield the undesirable product glyphosine. The current solution to this problem is to protect the primary amine with a carboxymethyl group, which ultimately gets removed in the last step of the process. The challenge was to find a more atom-efficient protecting group. A team investigated this and developed a whole new technology based on novel platinum catalysts.
This technology allows for the selective demethylation of N-methylglyphosate to produce glyphosate acid directly. This would save one carbon atom when compared to the traditional route. However, it was also discovered that you could use other protecting groups besides methyl. In fact it was possible to use an isopropyl group, which under the reaction conditions could be removed to generate acetone and glyphosate. The acetone can be recycled back into the process, resulting in an extremely atom-efficient system (Figure 3).

TABLE 1 Environmental Benefits of New Catalyst and Process Technology

Annual Reductions Projected by 2002
Resources
  Steam (BTUs/yr)                           880,000,000,000
  Demineralized water (gal/yr)                  380,000,000
Waste
  Flow to biosystems (gal/yr)                   800,000,000
  SARA 313 deep well injection (lb/yr)               52,000
  Biosludge (lb/yr)                               8,000,000
  Land-filled solid waste (lb/yr)                 1,380,000
  SARA 313 air emissions (lb/yr)                     17,600
  Carbon dioxide production (lb/yr)             100,000,000

NOTE: SARA = Superfund Amendments and Reauthorization Act.

FIGURE 3 N-isopropyl glyphosate: an atom-efficient intermediate.

As mentioned above, glyphosate is the active ingredient in Roundup herbicide. Roundup plays an important part in the new wave of agricultural products derived from biotechnology. This new technology has many economic and environmental benefits. Expansion of the global acreage planted with Roundup Ready crops has resulted in a reduction of the use of pesticides by nearly 50 million pounds per year. This can affect groundwater positively by reducing agricultural chemical contamination in watersheds where a large percentage of Roundup Ready crops are planted. When crops such as cotton and corn are protected against insect pests through biotechnology, we also see a benefit to nontarget organisms. So in summary, biotechnology has delivered significant environmental benefits. Many of these benefits are consistent with the EPA's guidelines and focus.
In conclusion, the chemical industry is going to be even more dramatically transformed in the future, and key advances will be made at the interface of chemistry and biology. Discovery and development of new environmentally beneficial catalyst and process technologies will be critical for the chemical industry to thrive in the United States. The new technologies will need to be relevant and have a positive impact on the earnings and competitiveness of the chemical industry.
Advances in catalysis will be a key driver in the development of cleaner and more efficient chemical processes. Finally, breakthrough discoveries are likely to be the products of interdisciplinary work teams.
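The atom-efficiency theme above can be made concrete with a simple calculation. The sketch below is illustrative only: glyphosate's molecular weight of 169.1 g/mol is real, but the combined reactant weight of 260.0 g/mol is a hypothetical placeholder, not Monsanto process data. It computes percent atom economy and shows how crediting a recycled co-product such as acetone (58.1 g/mol) improves the figure.

```python
def atom_economy(product_mw, reactant_mws, recycled_mws=()):
    """Percent atom economy: molecular weight of the desired product
    divided by the total weight of reactants consumed, with any
    recycled co-products credited back against the inputs."""
    net_input = sum(reactant_mws) - sum(recycled_mws)
    return 100.0 * product_mw / net_input

# Illustrative numbers only: 169.1 g/mol for glyphosate; 260.0 g/mol
# is a made-up combined reactant weight for the sake of the example.
base = atom_economy(169.1, [260.0])
credited = atom_economy(169.1, [260.0], recycled_mws=[58.1])  # acetone

print(round(base, 1), round(credited, 1))
```

The second call shows the arithmetic behind the text's point: a protecting group that leaves as a recyclable molecule raises the effective atom economy of the whole process.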

STABLE ISOTOPES AND THE FUTURE OF ENVIRONMENTAL-CHEMICAL RESEARCH

Mark H. Thiemens
University of California, San Diego

In 1947, Harold Urey and, simultaneously, Bigeleisen and Mayer developed the formalism for determination of the position of equilibria in isotope exchange reactions. These papers, for the first time, calculated at high precision the position of isotope exchange equilibria as a function of temperature. In the same year, Nier reported the development of the double-collector isotope ratio mass spectrometer, which allowed for measurements of isotope ratios at a precision sufficient to measure the modest isotope ratio changes associated with the temperature dependencies of exchange reactions. In this sense, 1947 represents the birth of stable isotope chemistry. Subsequently, an enormous range of applications has emerged utilizing isotope ratio measurements as a probe of natural processes, including studies of atmospheric chemical processes, paleoceanography and climate, stable isotope geochemistry, and planetary sciences.

Although the utilization of stable isotopes as a means to resolve environmental processes has had many applications for more than a half-century, there have been some limitations on the extent to which specific processes, both chemical and physical, might be resolved. These limits arise because, with only a single isotope ratio, there is a certain lack of specificity associated with the measurements.

In 1983, Thiemens and Heidenreich reported a new isotope effect. This particular effect was unique in that the isotope ratios (e.g., of oxygen) alter on a basis other than mass. For example, in the case of ozone formation, the isotopomers ¹⁶O¹⁶O¹⁷O and ¹⁶O¹⁶O¹⁸O form at essentially equal rates that exceed those associated with ¹⁶O¹⁶O¹⁶O. These reactions proceed in part on the basis of isotopic symmetry, with the asymmetric species forming at a rate greater than the purely symmetric species.
Marcus and colleagues at Caltech have developed a chemical formalism that accounts for a large proportion of the laboratory experiments. There remain some fundamental issues, however, with respect to a fully developed quantum-level theory. While additional theoretical formalisms for the isotopic fractionation event are still needed, the mass-independent isotopic fractionation process has provided a new and definitive mechanism by which an extraordinary range of environmental

processes may be resolved. The inclusion of a second isotope ratio measurement adds a sensitive probe of processes that may not be afforded by concentration or single isotope ratio measurements. As such, the use of mass-independent isotope compositions of environmental molecular species has provided a new probe to understand natural processes and to characterize anthropogenic impacts. There will be a wide range of new and significant applications in the future that will provide a powerful complement to other measurement techniques and modeling efforts. These include studies of the atmosphere, hydrosphere, and geosphere, as well as paleoenvironments and global environmental change. Such studies will be of critical importance in evaluating and predicting global environmental sustainability.

Present and Future Applications

There exist numerous recent review articles on the subject of mass-independent isotope effects, and details are available in these articles.1,2,3 A key point emerging from these observations is that the most important chemical issues in the environment today, and as expected in the future, may be studied utilizing mass-independent isotopic compositions. In the context of this report, this represents a future frontier area of chemical-environmental research.

There exist a number of gases that possess mass-independent isotopic compositions, and with future measurements the frontiers of environmental chemistry may be extended. The following are specific, though not inclusive, examples.

Stratospheric and Mesospheric CO2

It was first observed by Thiemens et al.4 that stratospheric CO2 possesses a large and variable mass-independent isotopic composition.
This composition was suggested as deriving from isotopic exchange with O(¹D), the product of ozone photolysis.5,6 As later confirmed by rocket-borne collection of stratospheric and mesospheric air, this unique isotopic signature provides an ideal tracer of the odd-oxygen chemistry of the Earth's upper atmosphere, one of the most important upper atmospheric processes. There are, however, several features that require further measurement (laboratory and atmospheric) and theoretical consideration.

1. Thiemens, M. H. Science 1999, 283, 341.
2. Weston, R. E. Chem. Rev. 1999, 99, 2115.
3. Thiemens, M. H.; Savarino, J.; Farquhar, J.; Bao, H. Acc. Chem. Res. 2001, 34, 645.
4. Thiemens, M. H.; Jackson, T. L.; Mauersberger, K.; Schuler, B.; Morton, J. Geophys. Res. Lett. 1991, 18, 669.
5. Yung, Y. L.; DeMore, W. B.; Pinto, J. P. Geophys. Res. Lett. 1991, 18, 13.
6. Yung, Y. L.; Lee, A. Y. T.; Irion, F. W.; DeMore, W. B.; Chen, J. J. Geophys. Res. 1997, 102, 10857.
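The "mass-independent" compositions discussed throughout this section are conventionally quantified with standard delta notation and the three-isotope anomaly Δ¹⁷O. A minimal sketch of that bookkeeping follows; the sample values are made-up illustrative numbers, and 0.52 is the commonly used slope of the terrestrial mass-dependent fractionation line.

```python
def delta_permil(r_sample, r_standard):
    """Delta value in per mil: deviation of a sample isotope ratio
    (e.g., 17O/16O) from a reference standard's ratio."""
    return (r_sample / r_standard - 1.0) * 1000.0

def cap_delta_17(d17, d18, slope=0.52):
    """Three-isotope anomaly: departure from the mass-dependent line
    d17O = slope * d18O. A nonzero value flags a mass-independent
    process such as ozone formation."""
    return d17 - slope * d18

# Hypothetical sample: d17O = 90 per mil, d18O = 100 per mil.
anomaly = cap_delta_17(90.0, 100.0)
print(anomaly)  # purely mass-dependent material would give ~0
```

This is why a second isotope ratio adds specificity: a single delta value cannot distinguish mass-dependent from mass-independent fractionation, but Δ¹⁷O can.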

This represents a future goal in need of pursuit, because both climate and chemical budgeting considerations are affected. As discussed in an earlier report,7 upper atmospheric CO2 possesses a mass-independent isotopic composition, while tropospheric CO2 is strictly mass dependent (as a result of equilibrium isotopic exchange with water). This renders CO2 an ideal tracer of stratosphere-troposphere mixing. Quantification of this process is also of significance for understanding global chemical budgets and lifetimes of several greenhouse species. Given that significant uncertainties remain, future measurements will be crucial in quantification of stratospheric and tropospheric mixing and in development of remediation policies associated with global climate change.

Greenhouse Gas Characterization

As discussed in the review article,8 several greenhouse gases have been observed to possess mass-independent isotopic compositions. These include O3, CO, and N2O. In each instance, this measurement has provided significant insight unattainable by other measurement techniques. The case of atmospheric N2O is particularly interesting. Nitrous oxide is a greenhouse gas with a warming capacity nearly 200 times that of CO2 on a per-molecule basis, and it serves as a major sink for stratospheric ozone via photochemical destruction. In spite of decades of research, the N2O budget is still inadequately understood. Most recently, isotopic measurements of a new variety have proven to be particularly valuable. These observations utilize high-precision measurements of the isotopomeric fragments of N2O in a mass spectrometer.9,10,11,12 From such measurements, the internal N2O isotopomeric distribution may be determined. It is now recognized that the isotopomeric distributions of

¹⁵N¹⁴N¹⁶O, ¹⁵N¹⁴N¹⁸O, ¹⁴N¹⁵N¹⁶O, and ¹⁴N¹⁵N¹⁸O

7. Thiemens, M. H.; Jackson, T.; Zipf, E. C.; Erdman, P. W.; Van Egmond, C. Science 1995, 270, 969.
8. Thiemens, M. H. Science 1999, 283, 341.
9. Toyoda, S.; Yoshida, N. Anal. Chem. 1999, 71, 4711-4718.
10. Brenninkmeijer, C. A. M.; Rockmann, T. Rapid Commun. Mass Spectrom. 1999, 13, 2028.
11. Rockmann, T.; Kaiser, J.; Crowley, J. N.; Brenninkmeijer, C. A. M.; Crutzen, P. Geophys. Res. Lett. 2001, 28, 503.
12. Toyoda, S.; Yoshida, N.; Urabe, T.; Aoki, S.; Nakazawa, T.; Sugawara, S.; Honda, H. J. Geophys. Res. 2001, 106, 7515.

are highly characteristic of specific processes, such as photolysis, and of individual point sources (biologic, abiologic). In the future, this particular variety of measurement will provide a new level of detail in understanding the global atmospheric cycles of N2O as well as of other gases. There is a clear need for expansion of such measurements in a variety of global environments and for obtaining fundamental physical-chemical information simultaneously. With such concomitant developments, new details of N2O atmospheric processes may be obtained.

Atmospheric Aerosol Sulfate and Nitrate

Atmospheric sulfate aerosols are known to exert a significant influence on Earth's surficial processes. They mediate climate, both as cloud condensation nuclei and as light-scattering agents. Sulfate is also a pernicious respirable molecule with well-known consequences for human health. It is estimated that there are more than 60,000 deaths a year from cardiovascular disease associated with aerosol inhalation, and recent studies have demonstrated a link to cellular damage. Additionally, following wet and dry deposition, sulfate destroys biota, alters biodiversity, and causes pervasive structural damage. The segregation of sulfur between gas- and aqueous-phase oxidative processes has implications for the degree of indirect effect of sulfate aerosols on climate because of its dependence on aerosol number densities. Sulfate derived from gas-phase oxidation results in new particle formation; for aqueous-phase oxidation, however, the sulfate is generated on a previously existing particle and does not contribute to the total aerosol number. It has recently been shown that atmospheric sulfate possesses a significant and variable oxygen mass-independent isotopic composition.
From a series of laboratory and atmospheric measurements it has been demonstrated that these measurements provide a highly sensitive mechanism by which the relative proportions of hetero- and homogeneous oxidative pathways may be quantified. This represents a significant observational advance, and future measurements on a global scale will dramatically enhance understanding of this important atmospheric species. It has also been shown that sulfate oxygen isotopic measurements of aerosols collected during the Indian Ocean Experiment (INDOEX) revealed that the Intertropical Convergence Zone (ITCZ) is a source of new aerosol particles, which has significant consequences. First, this is a previously unrecognized process unaccounted for in any global climate model. Second, this is a source of large (micron-sized) particles, and the mechanism associated with their formation is unknown; gas-to-particle conversion processes produce submicron- rather than micron-sized particles. The likely mechanism for this process may be surface catalysis, possibly on carbonaceous particles. Should this be confirmed, there would be significant consequences. Climate models assume that sulfate particles are white and reflective of visible light. If sulfur is catalytically oxidized on

carbon surfaces, the sign of sulfate radiative forcing may be the reverse of what has been assumed, in which case there could be significant error in climate models. It is therefore of considerable importance that the nature and magnitude of this process be resolved. Fully understanding the reaction pathways can be accomplished only by an intensive combination of isotopic measurements, climate models, and laboratory studies of the relevant surface catalytic reactions. Such a program typifies the future needs of environmental chemical research.

As in the case of sulfate, nitrate aerosols also possess large, mass-independent isotopic compositions. Nitrate concentrations may double in the next half-century, with severe environmental consequences, including alteration of biodiversity, enhanced algal blooms, and loss of agricultural productivity. As in the case of sulfate, the large mass-independent isotopic signature has afforded new insights into a major global cycle: the nitrogen cycle. There remains a large range of studies to be pursued, which will develop new understanding of the chemistry of the Earth's environment. Once more, this may be accomplished by combining laboratory physical-chemical measurements, field observations, and modeling efforts. The ultimate consequence is a significant advancement in understanding global interactions of the chemical environment.

Summary

The utilization of mass-independent isotopic measurements of a wide variety of atmospheric, hydrospheric, and geologic species has advanced understanding of a wide range of environmental processes. The future development of the utilization and understanding of this new technique clearly will have numerous applications that should, and will, be advanced. Issues in climate change, health, agriculture, biodiversity, and water quality all may be addressed.
Simultaneous with the acquisition of new environmental insight will be enhanced understanding of fundamental chemical physics.
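The pathway apportionment described for sulfate can be sketched as a simple two-endmember isotope mass balance. The endmember anomaly values below are hypothetical placeholders, not measured data; the point is only the algebra of solving for the fraction of product formed via the heterogeneous (aqueous- or surface-phase) pathway.

```python
def pathway_fraction(anomaly_obs, anomaly_het, anomaly_hom):
    """Two-endmember mass balance: the observed isotope anomaly is a
    linear mixture of the heterogeneous- and homogeneous-pathway
    endmember anomalies. Returns the heterogeneous fraction f, from
    anomaly_obs = f * anomaly_het + (1 - f) * anomaly_hom."""
    return (anomaly_obs - anomaly_hom) / (anomaly_het - anomaly_hom)

# Hypothetical endmembers (per mil): heterogeneous oxidation inherits
# a large ozone-derived anomaly; homogeneous gas-phase oxidation does
# not. An observed anomaly of 0.9 then implies a mostly aqueous origin.
f_het = pathway_fraction(anomaly_obs=0.9, anomaly_het=1.5, anomaly_hom=0.0)
print(round(f_het, 3))
```

The same linear-mixing algebra applies to the stratosphere-troposphere mixing problem mentioned earlier, with the stratospheric and tropospheric CO2 anomalies as the two endmembers.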