Fifth Annual Symposium on Frontiers of Engineering: National Academy of Engineering

ENERGY FOR THE FUTURE AND ITS ENVIRONMENTAL IMPACT
Deregulating the Electric Grid: Engineering Challenges

THOMAS J. OVERBYE
Department of Electrical and Computer Engineering
University of Illinois at Urbana-Champaign
Urbana, Illinois

Most people seldom give much thought to electric power. And why should they? The electrical system was designed as the ultimate in plug-and-play convenience, and the regulated monopoly structure of the industry has meant they had no choice but to buy their power from the local utility. Furthermore, our society's tremendous dependence on electric power has been transparent to most, except, of course, during the occasional blackout when it becomes all too apparent. There has, however, been the issue of the monthly bill, and in particular the question of why electric rates differ so widely across the country. There had to be a better way! So began the process of deregulating the electrical grid, with no one quite sure where it would end up. This paper describes several of the engineering challenges associated with this deregulation.

BACKGROUND

Prior to deregulation, electric utilities were vertically integrated "natural" monopolies serving captive markets under a regulatory compact. That is, in a particular service territory the electric utility did everything from owning and operating the generation and the transmission grid to providing the wires that actually connected to the customer and reading the customer's meter. It was a cost-plus business: the utility and regulators determined the allowable expenses, which were used to set the rates that customers (previously known as ratepayers) had to pay. In exchange for this monopoly franchise, the utility accepted an obligation to serve all existing and future customers on a nondiscriminatory basis. From an engineering standpoint this vertical monopoly provided a stable basis for building a reliable system.
In an era of economies of scale, large power
plants, and the high-voltage transmission system needed to move the power from these plants to the customers, could be engineered, built, and operated with the assurance that legitimate costs could be passed on to the ratepayers. Control was centralized, with the transmission grid shared by a relatively small group of utilities, more colleagues than competitors, all operating under the same paradigm of vertical monopolies.

Starting in the 1970s with the OPEC oil embargo, things began to change, and slowly the grid was opened to competition. Key recent events were the passage by the U.S. Congress of the Energy Policy Act of 1992, which required opening the transmission system to competition, and the issuance in 1996 by the U.S. Federal Energy Regulatory Commission (FERC) of Orders 888 (Promoting Wholesale Competition Through Open Access, Nondiscriminatory Transmission Services by Public Utilities) and 889 (Open Access Same-Time Information Systems). The aim of these changes is simple: to provide nondiscriminatory access to the high-voltage transmission system so as to open the grid to true competition in the generation market, with the eventual goal of providing choice to the customers. However, the engineering challenges in doing so can be significant. This paper addresses three of them: 1) bulk electricity market development, 2) market power assessment in electricity markets, and 3) power system data aggregation and visualization.

BULK ELECTRICITY MARKET DEVELOPMENT

FERC Orders 888 and 889 provided the broad guidelines for restructuring the U.S. power industry. How best to achieve this restructuring was left to individual state governments. Given the divergent political views of the states and their differences in average electric rates, it should not come as too much of a surprise that restructuring is progressing at vastly different rates across the country.
Those states with the highest electric rates have been the first to restructure, while those with low rates are finding it difficult to see any advantage to changing the status quo. In this section I will examine some of the generic issues associated with an important step in restructuring, the development of a bulk electricity market.

The foundation for such a market is the high-voltage electric transmission grid. The interconnected electric transmission grid in North America is one of the largest and most complex man-made objects ever created. This grid, which encompasses the entire North American continent, consists of four large 60-Hz ac synchronous subsystems: 1) the Eastern Interconnect, which supplies electric power to most users east of the Rocky Mountains; 2) the Western Interconnect, which supplies power to most users west of the Rockies and portions of northern Mexico; 3) the Texas Interconnect, which supplies most of Texas; and 4) the Quebec Interconnect. These four subsystems are in turn connected to each other by dc transmission lines and back-to-back converters that allow for limited power transfers between the subsystems. Altogether the grid consists of billions of individual components, tens of millions of miles of wire, and thousands of individual generators. The high degree of interconnection is beneficial, since it allows for economy and emergency transfers of electric power between regions, and hence the establishment of large electricity markets. A high degree of connectivity also has a detrimental side effect: failures in one location can propagate through the system at almost the speed of light. Large-scale blackouts can quickly affect tens of millions of people, with losses reaching billions of dollars, as was illustrated by the widespread outages of the Western Interconnect in 1994 and 1996 (Taylor, 1999).

The engineering challenge of open access is to set up an open electricity market on what is really just one huge electrical circuit while simultaneously maintaining reliability. The transmission grid has several aspects that make setting up effective markets challenging. First, there is no mechanism to store electrical energy efficiently; total electrical generation must equal total load plus losses at all times. This continual matching must occur even as the load on the grid is constantly varying, with daily changes in demand of over 100% not uncommon. Second, with few exceptions, there are no mechanisms to directly control the flow of electricity on the tens of thousands of individual high-voltage transmission lines and transformers in the grid. Rather, the electric flow through the grid is dictated by the impedances of the transmission lines and the locations where electric power is injected by the generators and removed by the loads. Consequently, there is no electrical analog to a gas industry control valve, a busy signal in the telecommunications industry, or a holding pattern in the airline industry.
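Because flows are set by line impedances rather than by dispatchers, even a tiny network exhibits this behavior. The sketch below is a hypothetical three-bus dc power-flow calculation (the line reactances and the 100-MW transfer are invented for illustration, not taken from any real system); it shows how a transfer between two buses splits between the direct line and the parallel path through a third bus:

```python
import numpy as np

# Hypothetical 3-bus network; line reactances in per unit (illustrative values).
# Lines: 1-2 (x=0.1), 2-3 (x=0.1), 1-3 (x=0.2); bus 3 is the angle reference.
b12, b23, b13 = 1 / 0.1, 1 / 0.1, 1 / 0.2   # line susceptances

# Dc power-flow B matrix for buses 1 and 2 (bus 3 eliminated as the slack bus).
B = np.array([[b12 + b13, -b12],
              [-b12, b12 + b23]])

# Inject 100 MW at bus 1; the matching 100-MW withdrawal is at bus 3.
P = np.array([100.0, 0.0])
theta = np.linalg.solve(B, P)               # bus voltage angles

flow_13 = theta[0] * b13                    # direct path
flow_12 = (theta[0] - theta[1]) * b12       # loop path, first leg
flow_23 = theta[1] * b23                    # loop path, second leg
print(round(flow_13, 1), round(flow_12, 1), round(flow_23, 1))  # 50.0 50.0 50.0
```

Only half of the 100-MW transfer uses the direct line; the other half "loops" through bus 2, loading lines that are not on the contract path, exactly the loop-flow effect described in the text.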
Thus, to transfer, say, 1,000 MW of power from Tennessee to Illinois, the actual power flow would "loop" around on a large number of transmission lines in the Eastern Interconnect; this effect is known as "loop flow." Third, the transmission grid has capabilities to transfer power that are finite but often difficult to quantify. These values have been defined by the North American Electric Reliability Council (NERC) as the available transfer capability (ATC) (NERC, 1996). ATC values depend on a number of constraints, including the need to avoid exceeding transmission line/transformer thermal limits, voltage magnitude limits, transient stability limits, and oscillatory stability limits. When an element is loaded to its limit it is said to be congested, and any additional power transfers that would increase the loading on that element are not allowed. Because of loop flow, however, congestion on even a single element can have a major systemwide impact. Fourth, the real power losses associated with moving power in a market are nonlinear, varying as the square of the individual transmission line currents; thus, there is no unique way to assign the losses to particular market participants.

FERC Orders 888 and 889 mandated a functional unbundling of system services to allow separate pricing of, and hence competition for, these so-called ancillary services. Ancillary services include
power scheduling and dispatch, load following, operating reserves, energy imbalance, real power loss replacement, and voltage control (Hirst and Kirby, 1996). Previously these services had been bundled and supplied by the local utility. How far these services can really be unbundled and supplied by external markets remains an open question.

While space does not permit a discussion of how these constraints are dealt with in individual markets, one consequence merits special attention: extreme price volatility. Just about all electricity markets have experienced substantial price variation, with the price spike of June 1998 in the Midwest a prime example (FERC, 1998). During this incident, which lasted for several days, spot market prices for electricity experienced almost unheard-of volatility, soaring roughly three-hundredfold from typical values of about $25 per megawatt-hour (MWh) up to $7,500 per MWh. While the causes of this volatility are complex, they ultimately arise from several of the underlying characteristics of the electrical grid. During the June 1998 price spike, load levels were at or near record levels in the Midwest. As the load went up, and with no way to store electricity, generation became an increasingly valuable commodity. Generation was in short supply in the Midwest but was available elsewhere on the grid. However, because of congestion due to thermal limits on just two elements, a transmission line in northwest Wisconsin and a transformer in southeast Ohio, no additional power could be transferred into the Midwest from either the West or the East. This situation allowed the remaining suppliers of power to raise prices rapidly to levels never before seen. Efficiently managing electricity markets in which congestion on a single element can impact thousands of other elements, as well as power transfers, continues to be a challenge.
This leads to a second key engineering challenge: developing markets in which market power is not abused by the market participants.

MARKET POWER ASSESSMENT IN ELECTRICITY MARKETS

One of the key goals of deregulation is obtaining lower prices through the advent of competition. However, as the grid is deregulated, there are significant concerns that the benefits gained from breaking up the vertical market power of a traditional utility may be lost through the establishment of horizontal market power, particularly in generation markets. Market power is the antithesis of competition: it is the ability of a particular seller or group of sellers to maintain prices profitably above competitive levels for a significant period of time. When an entity exercises market power, it ceases to be a price taker and becomes a price maker. Market power analysis typically involves three steps: 1) identification of the products and services; 2) identification of the geographic market in which each product competes; and 3) evaluation of market concentration in that geographic market, using an index such as the Herfindahl-Hirschman index (Scherer, 1980).
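As a concrete illustration of step three, the Herfindahl-Hirschman index (HHI) is simply the sum of the squared percentage market shares of all sellers. The sketch below uses invented generation-market shares; under commonly used antitrust screening thresholds, values above roughly 1,800 have been treated as indicating a highly concentrated market:

```python
def hhi(shares_percent):
    """Herfindahl-Hirschman index: the sum of squared percentage market shares."""
    return sum(s ** 2 for s in shares_percent)

# Hypothetical shares: an unconstrained regional generation market with many
# sellers versus a congestion-created "load pocket" served by only two sellers.
regional = [20, 15, 15, 10, 10, 10, 10, 5, 5]
load_pocket = [60, 40]

print(hhi(regional))     # 1300 -- moderately concentrated
print(hhi(load_pocket))  # 5200 -- highly concentrated
```

The same set of generators can thus look competitive when the whole interconnect is the geographic market yet highly concentrated once congestion shrinks the market to a load pocket, which is why step two dominates the analysis for electricity.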
For electricity markets step two is by far the most difficult, since the geographic market depends on the actual loading of the transmission grid (Overbye et al., 1999a). Even in large electrical networks that contain a large number of market participants, congestion in the transmission grid can create "load pockets" in which, for example, there are only a small number of participants in the generation market. Experimental evidence suggests that in such situations exploitation of market power by the participants could be expected (Zimmerman et al., 1999). The presence of load pockets should not be surprising, since the transmission grid was originally designed to meet the needs of a vertically integrated utility moving power from its generators to its load. Market power is an issue that needs to be considered by regulators when examining utility mergers and third-party generation acquisitions.

POWER SYSTEM DATA AGGREGATION AND VISUALIZATION

As the electricity industry becomes increasingly competitive, knowledge concerning the capacity and constraints of the electric system will become a commodity of great value. Understanding the rapidly changing electricity market before others do can give an important competitive advantage. The problem is that this knowledge is often contained in a tidal wave of data. The calculation of ATC for the Mid-America Interconnected Network (MAIN) is a prime example (Januzik et al., 1997). MAIN is one of the NERC regional reliability councils and covers most of Illinois, Wisconsin, the eastern part of Missouri, and the upper peninsula of Michigan. Thirty times each week MAIN calculates ATC values for over 200 buy/sell directions using a 14,000-bus power system model containing about 20,000 transmission lines. For each direction about 1,300 contingencies must be considered.
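The scale implied by these numbers can be checked with a back-of-envelope calculation. The storage assumptions below are mine, not MAIN's: each solved case is taken to record voltage magnitude and angle at every bus plus real and reactive flow on every line, stored as 4-byte floating-point values.

```python
runs_per_week = 30        # ATC study runs each week
directions = 200          # buy/sell directions per run
contingencies = 1_300     # contingencies evaluated per direction
buses = 14_000
lines = 20_000
bytes_per_value = 4       # assumed single-precision storage

# Per solved case: |V| and angle at each bus, plus P and Q flow on each line.
values_per_case = 2 * buses + 2 * lines
cases_per_week = runs_per_week * directions * contingencies
total_bytes = cases_per_week * values_per_case * bytes_per_value

print(total_bytes / 1e9)  # 2121.6 -- roughly 2,000 GB per week
```

Under these assumptions the estimate lands at about 2,000 GB per week, the same order of magnitude as the weekly figure cited in the text.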
Thus, assuming one is interested in the bus voltage magnitudes and angles and the real/reactive flows on each transmission line, the weekly data output of MAIN's ATC studies is about 1,800 gigabytes. And MAIN is just one of ten regions, and ATC is just one of the values needed to participate in electricity markets. The result is that transmission providers and market participants are being overwhelmed by ATC data, yet they have gained little insight into the mechanisms affecting their ability to obtain transmission service. Without this insight, it is nearly impossible for participants to make informed business decisions regarding the interaction between their desired transactions and the constraints imposed by the transmission system. For example, when electricity market players need transmission service they must decide whether to pay a premium for nonrecallable transmission service or try to get by with less expensive recallable service. With either type of service they must also develop a feel for the likelihood of curtailments. These situations require business decisions about managing the risk associated with the transmission system. The successful market participants will be the ones who
understand the underlying transmission system and can thus make fully informed decisions. Newer visualization techniques, such as animation of power system flow values, contouring of transmission line flow values, data aggregation techniques, and interactive 3D visualization (Overbye et al., 1999b), have started to make a dent in this mountain of data, but more extensive methods are definitely needed.

REFERENCES

FERC (U.S. Federal Energy Regulatory Commission). 1998. Causes of wholesale electric pricing abnormalities in the Midwest. Staff report to the Federal Energy Regulatory Commission, June 1998.

Hirst, E., and B. Kirby. 1996. Electric Power Ancillary Services. Oak Ridge, Tenn.: Oak Ridge National Laboratory.

Januzik, L. R., R. F. Paliza, R. P. Klump, and C. M. Marzinzik. 1997. MAIN regional ATC calculation effort. Proceedings of the American Power Conference, Chicago, April 1–3, 1997. Chicago: Illinois Institute of Technology.

NERC (North American Electric Reliability Council). 1996. Available Transfer Capability Definitions and Determination. Princeton, N.J.: NERC.

Overbye, T. J., G. Gross, P. W. Sauer, and M. J. Laufenberg. 1999a. Market power evaluation in power systems with congestion. Pp. 61–69 in Game Theory Applications in Electric Power Markets. New York: Institute of Electrical and Electronics Engineers.

Overbye, T. J., K. P. Klump, and J. D. Weber. 1999b. A virtual environment for interactive visualization of power system economic and security information. Paper presented at the IEEE Power Engineering Society 1999 Summer Meeting, Edmonton, Alberta, Canada, July 18–22, 1999.

Scherer, F. M. 1980. Industrial Market Structure and Economic Performance. Chicago: Rand McNally College Publishing Co.

Taylor, C. W. 1999. Improving grid behavior. IEEE Spectrum 36(6):40–45.

Zimmerman, R. D., J. C. Bernard, R. J. Thomas, and W. Schulze. 1999.
Energy auctions and market power: An experimental examination. Paper presented at the 32nd Hawaii International Conference on System Sciences, Maui, January 5–8, 1999.
The Future of Nuclear Energy

PER F. PETERSON
Department of Nuclear Engineering
University of California
Berkeley, California

The discovery of nuclear fission in 1938 fundamentally altered the trajectory of the twentieth century. As this century closes, the future role of nuclear energy—both fission and fusion—remains unclear, but it will fall somewhere between limited specialty applications, such as isotope production and underwater propulsion, and the large-scale commodity production of electricity and hydrogen. The next century should see nuclear energy's long-term role become defined as fossil fuel dominance eventually erodes, as the environmental and economic costs of energy alternatives are explored in larger-scale deployments, and as research in advanced fission and fusion energy provides more attractive commercial products. Simultaneously, civilian nuclear infrastructure will play a critical role in managing nuclear weapons materials declared excess to military needs; in establishing increasingly rigorous norms for accounting for, aggregating, protecting, and disposing of nuclear materials; and in seeing these stringent norms adopted globally, particularly in the former Soviet Union.

NEAR TERM: THE NEXT DECADE

The near term will not see new nuclear power plants built in the United States, and only modest numbers will be constructed elsewhere. Most existing U.S. nuclear plants will continue to run, with a substantial fraction receiving license extensions. Deregulation will play an important role, opening the possibility of the sale of nuclear power plants to dedicated operating companies (Joosten, 1999). The recent sale of the troubled Clinton power plant, which had been shut down for three years to address safety problems, suggests that plants that formerly would have closed due to poor utility management can instead become attractive opportunities for purchase by operating companies with the technical and managerial capability to maintain high levels of reliability and safety, and thus high capacity factors. With license extensions, the momentum provided by the existing nuclear infrastructure—17 percent of current global electricity production—will maintain a foundation of substantial nuclear capability for three or more decades into the 21st century.

Nuclear energy research with the greatest near-term impact will focus on improving the economic performance of existing light water reactor (LWR) nuclear power plants. Interesting research opportunities will include studies to further increase fuel burn-up levels in order to extend the time between outages and reduce spent fuel generation, and to develop and license dry-cask storage systems, transportation systems, and high-level radioactive waste repositories. Coupled with increased fuel burn-up, intriguing research will also be directed toward high burn-up thorium and inert-matrix-based LWR fuel designs that generate smaller amounts of plutonium.

INTERMEDIATE TERM: 10–40 YEARS

During this period increasing oil and gas prices, driven by depletion and potentially by carbon taxes, will result in growing fractions of electrical power generation coming from non-fossil sources, if the world rejects the environmental costs of further increases in coal combustion without carbon sequestration. In the absence of major technical breakthroughs, LWRs will remain the lowest-cost nuclear power option. Uranium prices are projected to remain low during this period, so once-through fuel use will remain the most economically attractive option. The regional competitiveness of new nuclear power generation capacity will depend on resource availability.
The initiation of new plant construction in the United States will be impeded by higher interest rates, due to uncertainty about the performance of new regulatory systems and the risk of construction delays, and by the economies of scale that make LWRs most attractive at sizes greater than 1 GW and thus make the capital investment for the first new LWR quite high. Following construction of a first plant, particularly if construction occurs successfully in under five years, as recently achieved in Japan, interest rates for subsequent plants could be substantially lower. Recent boiling water reactor (BWR) designs with internal pumps or natural circulation, which eliminate bulky external reactor equipment like jet pumps and steam generators, permit extremely compact containment buildings. Because modular construction technology can be readily applied to these BWR containments, the quantities of material and the construction times are reduced substantially. In the absence of subsidies, BWRs can be expected to increasingly dominate the economic competition in the LWR market.

Research activities with the greatest potential to accelerate new reactor construction will focus on further improvements to advanced LWRs. In particular,
research on LWR passive-containment cooling systems for accident response (e.g., Peterson et al., 1998) will have large leverage due to the potential to reduce plant cost by eliminating active cooling systems and diesel-generator power supplies, to reduce the safety envelope volume, to simplify operations and maintenance activities, and to improve reliability and safety.

At the back end of the fuel cycle, progress on the siting, design, and construction of geologic repositories will be important regardless of growth or decline in fission energy production. The evolution of international infrastructure and standards for managing nuclear materials will have major long-term nonproliferation implications, which will be substantially negative if the evolution continues toward increasingly dispersed long-term surface storage rather than toward aggregation and disposal in regional or multinational facilities (Peterson, 1996).

Improved understanding of the health effects of low levels of radiation, where current standards are set by simple linear extrapolation of health effects observed at large doses, may result in reassessment, either up or down, of the safety of nuclear power and the consequences of nuclear accidents. The demonstration of a threshold for radiation effects, postulated by some researchers, would result in a major decrease in the calculated consequences of severe accidents and would also affect design requirements for radioactive waste disposal.

Markets for nuclear energy will also depend strongly on the evolution of other energy technologies. Particularly favorable developments for nuclear energy would include substantial improvements in energy storage technology and in time-of-day pricing and demand-side management.
Storage potentially opens new electricity markets for mobile applications, displaces fossil-fired peaking capacity, and increases the value of nighttime sales from capital-intensive base-load sources. More effective demand management also reduces the market share for expensive fossil-fired peaking capability and shifts demand toward off-peak periods.

LONG TERM: BEYOND 40 YEARS

In the long term, if coal's environmental costs are rejected, traditional fossil energy will play a substantially diminished role in energy production, with a corresponding increase in the contributions of non-carbon-emitting energy sources. In this time range modest or potentially substantial contributions to energy production may come from both fission and fusion energy sources. For growing contributions from fission energy, most economic analyses suggest that uranium prices will become sufficiently high after some five decades to economically warrant the recycling of spent fuel to capture its additional energy content, and a transition to fast-spectrum reactors capable of operating at breeding ratios slightly above one.

Proliferation resistance will be an important goal for future changes to the nuclear fuel cycle. Issues related to proliferation resistance are better understood
now, particularly because it is now widely known that all isotopes of plutonium, as well as the transuranic elements neptunium and americium, must be treated as potentially weapons-usable. Likewise, the United Nations' International Atomic Energy Agency has now determined that geologic repositories for spent fuel will require permanent safeguards monitoring and institutional control (Linsley and Fattah, 1994). While several decades of global spent fuel production could be placed in a small number of multinational repositories to reduce the long-term risks and burdens of safeguards monitoring, long-term commitments to nuclear fission will require a gradual transition to technologies that generate waste streams qualifying unambiguously for permanent safeguards termination.

Here the most promising direction for research may be toward lead and lead/bismuth coolants for reactors. Early U.S. fast reactor research chose to focus on liquid sodium as a coolant, due to its lower corrosivity as well as its low density, which enables high-velocity flow and higher power density, achieving more rapid breeding of new plutonium for the startup of new fast reactors. Currently, high power density is no longer considered a virtue, and sufficient plutonium exists for the startup of large numbers of fast reactors if that is ever desirable. Furthermore, the end of the Cold War brought information that the Russians have solved the corrosion issues associated with lead coolants and use lead/bismuth-cooled reactors in their submarine fleet (Chekunov et al., 1997). Lead, with its high atomic weight, extracts essentially no energy in elastic collisions with neutrons, and thus lead-cooled reactors can have an extremely hard neutron spectrum. This has several beneficial effects.
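The claim that lead extracts essentially no energy in elastic collisions follows from elementary scattering kinematics: for isotropic elastic scattering off a nucleus of mass number A, a neutron loses on average a fraction 2A/(A + 1)^2 of its energy per collision. The quick check below uses isotope choices of my own for illustration:

```python
def mean_elastic_loss(A):
    """Average fractional neutron energy loss per isotropic elastic
    collision with a nucleus of mass number A (classical kinematics)."""
    return 2 * A / (A + 1) ** 2

# Compare a light moderator, sodium, and lead.
for name, A in [("hydrogen-1", 1), ("sodium-23", 23), ("lead-208", 208)]:
    print(f"{name}: {mean_elastic_loss(A):.1%} energy lost per collision")
```

A neutron loses half its energy on average against hydrogen, roughly 8 percent against sodium, and only about 1 percent against lead, which is why a lead-cooled core can sustain the exceptionally hard spectrum described here.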
Little spectrum hardening occurs due to void generation, while neutron leakage increases substantially, making it much easier to obtain negative void reactivity coefficients, a task that is difficult with sodium. The exceptionally hard spectrum also allows effective burning of the transuranic elements neptunium and americium, which are of concern from a waste safeguards termination perspective. Conversion ratios above one are possible without blankets, so the fuel self-breeds enough plutonium internally to maintain constant reactivity; the only limitation on core life then comes from radiation damage, potentially allowing core lifetimes approaching 15 to 20 years, three or four times greater than those of current LWRs. Because the cores can be homogeneous, multiple recycling of the fuel can be accomplished by high-temperature volatility-based methods that remove only fission products and are incapable of separating uranium and plutonium, making them highly proliferation resistant and capable of consuming LWR spent fuel inventories. Because lead is chemically inert with the water required to generate steam for power production, and because the vapor pressure of lead is extremely low, lead-cooled reactors do not require the massive containment structures of LWRs and thus have the potential to be less expensive than current LWR technology, particularly if refueling occurs at greater than 10-year intervals. In whatever form, improvements in fast-spectrum reactor technology and the ability to produce waste streams qualifying unambiguously for permanent safeguards termination will be important if fission is to make long-term contributions to global energy production.

Fusion faces important scientific and engineering hurdles on the way to becoming an economical energy source. However, the fuel supply for fusion is effectively infinite, its waste generation and accident consequences are far smaller than those of fission plants, and the land resources and environmental impact required by fusion could be much smaller than for renewables. Fusion energy therefore deserves substantial attention and continued development.

For magnetic fusion energy (MFE), confinement concepts can be categorized on a scale leading from plasmas externally controlled by large magnets to simpler self-organized plasma configurations that create confining magnetic fields primarily by internal plasma currents. The diverse range of potential plasma configurations, and the risks and benefits associated with each, provides the primary dilemma in prioritizing MFE research. Recent reviews of the U.S. fusion program (SEAB, 1999) have recognized this dilemma and have recommended a balanced research portfolio that continues efforts with externally controlled plasmas (e.g., tokamaks), where the path to success is well understood but development costs are high, and with self-organized plasmas, where greater technical uncertainty exists but development costs, and potential power plant costs, may be very low.

For inertial fusion energy (IFE), major progress has been made with the ongoing construction of the National Ignition Facility, a large laser system that is anticipated to ignite fusion targets around 2006, and with the design of heavy-ion accelerator and laser driver systems that could operate at the 5-Hz rate required for fusion power plants.
National development efforts for MFE and IFE power sources include extensive technology development components. Particularly interesting are developments toward the use of high-temperature liquids to shield fusion chamber structures and remove fusion energy (Moir, 1995). By minimizing activation and waste generation and by providing high power density, liquid protection would have strong, positive economic benefits. Combined with a simple self-organized plasma configuration, or with inertial fusion using an innovative, inexpensive driver system, liquid protection would offer the potential for fusion energy to become, with further innovation, less expensive than fission and competitive with current natural gas prices. Such success would have enormous implications for future human welfare.

The careful management and use of nuclear technologies and materials will remain a permanent responsibility for this and future generations, with large long-term leverage on global economic, environmental, and security conditions. Research, coupled with continued strengthening of the international regime for the management of nuclear materials, can influence this leverage, provided that bright and motivated people continue to enter the field and work to maximize the benefits that nuclear technologies can potentially bring.
REFERENCES

Chekunov, V. V., D. V. Pankratov, Y. G. Pashkin, G. I. Toshinsky, B. F. Gromov, B. A. Shmatko, Y. S. Belomitcev, V. S. Stepanov, E. I. Yefimov, M. P. Leonchuk, P. N. Martinov, and Y. I. Orlov. 1997. Use of lead-bismuth coolant in nuclear reactors and accelerator-driven systems. Nuclear Engineering and Design 173(1–3):207–217.
Joosten, J. 1999. U.S. electric market restructuring: Implications for nuclear plant operation and safety. Nuclear News 42(6):41–48.
Linsley, G., and A. Fattah. 1994. The interface between nuclear safeguards and radioactive waste disposal: Emerging issues. IAEA Bulletin 36(2):22–26.
Moir, R. W. 1995. The logic behind thick, liquid-walled, fusion concepts. Fusion Engineering and Design 29:34–42.
Peterson, P. F. 1996. Long-term safeguards for plutonium in geologic repositories. Science and Global Security 6(1):1–29.
Peterson, P. F., V. E. Schrock, and R. Greif. 1998. Scaling for integral simulation of mixing in large, stratified volumes. Nuclear Engineering and Design 186(1–2):213–224.
SEAB (Secretary of Energy Advisory Board). 1999. Realizing the Promise of Fusion Energy. Final Report of the Task Force on Fusion Energy. Washington, D.C.: U.S. Department of Energy. [Online]. Available: http://www.hr.doe.gov/seab [December 21, 1999].
Renewable Energy Technologies: Today and Tomorrow

JAMES M. CHAVEZ
Sandia National Laboratories
Albuquerque, New Mexico

JANE H. DAVIDSON
Department of Mechanical Engineering
University of Minnesota
Minneapolis, Minnesota

ABSTRACT

Renewable energy sources have served humankind for thousands of years, and they will continue to be vital resources in the future. Because they are sustainable, solar, wind, biomass, and geothermal systems can meet many of our energy needs with minimal impact on the environment. Yet to date, they supply less than 1 percent of worldwide energy needs and only 2 percent of electricity generation in the United States. Experts believe that these technologies have not become major sources of energy because conventional sources cost less and have perceived advantages over renewables. However, renewables are enjoying renewed appeal and opportunities because of issues such as global climate change, carbon emissions, environmental concerns, utility restructuring, and growth in energy demand in developing nations. In fact, some studies (e.g., by the World Energy Council, Shell Corp., and the United Nations) project that renewable energy technologies will contribute 20 to 50 percent of the world's energy supplies by the year 2040. Such rapid growth in the use of renewable energy requires that the technologies become more cost-effective. Improvements in manufacturing processes, efficiency of operation and maintenance, and technical advances in materials, processes, and storage will be needed. In this paper, we discuss renewable energy technologies, their current status, and their potential to have a major impact in the future.

INTRODUCTION

Renewable energy technologies have improved dramatically in the last 25 years. Efficiencies are higher, reliability is better, and costs are lower. However, because the cost of power from conventional energy sources continues to decline, these technologies have not made a significant impact on the marketplace to date. Renewable energy (excluding hydro) currently generates less than 1 percent of energy needs worldwide and accounts for only 2 percent of the electricity generated in the United States, as shown in Figures 1 and 2. However, the combination of continuing growth of energy consumption (more than 3 percent annually) and the threat of increased emissions of greenhouse gases (see Figure 3 for world carbon emissions) demands that we consider a more sustainable energy mix. No "silver bullet" exists to solve the world's energy needs and mitigate the environmental impact of energy production. Instead, a portfolio of energy sources, including the renewables (biomass, geothermal, wind, and solar), will be needed. All have cost and technological issues that must be addressed if they are to be widely commercialized without a substantial increase in the cost of electricity and without government subsidies. Even the relatively mature renewable technologies require technological advances (efficiency and reliability improvements) that will drive costs down. In this paper, emerging renewable technologies are addressed from the perspective of electricity generation (hydropower is considered a mature technology). The following overview includes a brief introduction to the technologies and current areas of research to improve them.

FIGURE 1 World consumption for energy generation. SOURCE: Data courtesy of the Energy Information Administration (1999).
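The more-than-3-percent annual growth in energy consumption compounds quickly. A minimal sketch (an illustrative calculation, not from the chapter) shows the doubling time implied by constant compound growth:

```python
import math

def doubling_time(annual_growth_rate):
    """Years for consumption to double at a constant compound growth rate."""
    return math.log(2) / math.log(1 + annual_growth_rate)

# At 3 percent annual growth, world energy consumption
# doubles in roughly 23 years.
print(round(doubling_time(0.03), 1))
```

This is the usual rule-of-70 arithmetic; it illustrates why even modest growth rates drive the demand projections cited above.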
FIGURE 2 Electricity generation in the United States by source and from renewable energy sources for 1997. SOURCE: Data courtesy of the Energy Information Administration (1998).
FIGURE 3 World carbon emissions by source. SOURCE: Data courtesy of the Energy Information Administration (1999).

OVERVIEW OF RENEWABLE ENERGY TECHNOLOGIES

Biomass power is generated by combustion of plant-derived material: wood and agricultural residues; cultivated trees or herbaceous plants such as sorghum or sugar cane; municipal solid waste; and, in less developed nations, dung. The energy density of biomass is approximately two-thirds that of bituminous coals, but it offers other advantages. It has lower sulfur and ash content than coal, is easier to gasify, and is considered neutral with respect to greenhouse gases: the carbon dioxide emitted during combustion is offset by that absorbed during photosynthesis. Its potential environmental impacts, however, include land and water use. Biomass power accounted for approximately 10,500 MW, or 1.2 percent, of U.S. electric generating capacity in the years 1993 to 1997. Direct combustion of municipal solid waste, which is not truly a renewable resource, accounts for 3,400 MW of that total. U.S. demonstrations of biomass gasifiers capable of connecting to gas turbines are under way in Hawaii and Vermont. Currently, six U.S. power plants are co-firing coal and wood residue products on a commercial basis. According to the U.S. Department of Energy, domestic biomass generation capacity could reach 20–30 GW by the year 2020. The emerging biomass technologies are production of biofuels (including ethanol, methanol, and biodiesel from vegetable oils), production of gasified biomass for use in gas turbines, and co-firing of biomass with coal to reduce sulfur emissions. Technology needs include development of high-growth-rate biomass
crops, lower-cost production of biofuels, and combined-cycle gasifier plants running on biomass.

Geothermal energy is obtained from the vast heat resources beneath the Earth's surface. The low-temperature heat contained in shallow ground can be used to heat or cool homes and commercial buildings, either directly or with ground-coupled heat pumps. In active geothermal zones, the temperature gradient is about four times the nominal value of 30°C per kilometer, and high-pressure steam or water is used for electric power generation. Geothermal power plants release on average only 5 percent of the carbon dioxide emitted by a fossil fuel plant; however, the environmental impact of exploiting geothermal resources is controversial. Most of the U.S. hydrothermal systems with obvious surface resources have already been explored. Twenty-one countries generate 8,000 MW of electricity from geothermal resources, and 11,300 thermal megawatts are used for applications such as aquaculture, greenhouse operations, and industrial processing. Domestic power plants generate 2,800 MW at 5 to 7.5 cents per kilowatt-hour. Power production is possible from hydrothermal reservoirs, geopressured reservoirs that contain methane under high pressure, hot dry rock, and magma, but currently only hydrothermal reservoirs are used. This technology needs models and instrumentation for exploring and characterizing new reservoirs, cost-effective deep-well drilling, more efficient binary conversion cycles, and high-temperature corrosion-resistant materials and coatings. Exploitation of hot dry rock, magma, and geopressured aquifers remains a formidable challenge.

Wind is the most cost-competitive of the renewable energy technologies.
Advanced turbines, primarily horizontal-axis designs, have replaced the sails and blades of traditional farm windmills. The focus is on large power plants with individual turbines as large as 1,000 kW. Since the 1980s, cost has dropped, efficiency has increased, and reliability has improved. Modern wind turbines produce electricity for 5 to 6 cents/kWh. Bird kills, visual impact, and noise are perceived as environmental issues, and the intermittent and site-specific nature of the wind poses challenges. The world's installed capacity exceeds 10 GW, with large installations in Europe, the United States, and India. Forecasts indicate a growth rate of over 2 GW per year. In the United States, the 15,000 wind turbines in California have supplied more than 1 percent of that state's electricity since 1980. In the last 2 years an additional 900 MW was installed in the United States, bringing the national total to about 2.5 GW. In fact, one estimate suggests wind could provide 20 percent of U.S. electricity needs with current technology. The new "Wind Powering America" initiative has goals of 5 GW by 2005, 10 GW by 2010, and 40 GW by 2020 (about 5 percent of U.S. electricity consumption).
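Installed capacity translates into delivered energy through the capacity factor, the fraction of the year a turbine effectively runs at rated power. A minimal sketch, assuming an illustrative 25 percent capacity factor (a value not given in the chapter):

```python
HOURS_PER_YEAR = 8760

def annual_energy_twh(capacity_gw, capacity_factor):
    """Annual energy (TWh) delivered by installed capacity (GW)
    at a given capacity factor."""
    return capacity_gw * capacity_factor * HOURS_PER_YEAR / 1000.0

# The ~2.5 GW of U.S. wind capacity cited above, at an assumed
# 25 percent capacity factor:
print(round(annual_energy_twh(2.5, 0.25), 1))  # -> 5.5 (TWh per year)
```

The same arithmetic applied to the 40 GW goal for 2020 gives roughly 88 TWh per year, which makes the "about 5 percent of U.S. electricity consumption" figure plausible against late-1990s U.S. generation.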
Technological advances to provide cheap, reliable power focus on power electronics, materials, structural design and manufacturing, and transmission with a continuously fluctuating input power. Sandia National Laboratories engineer Paul Veers summarizes the challenges this way: "The airfoils must work in a dynamic stall environment. The structure must withstand billions of stress cycles with low cost materials.... A control system should smooth all power fluctuations.... It must be accomplished in an environmentally non-intrusive way" (Veers, 1996).

Solar technologies divide broadly into thermal and direct-conversion technologies. The thermal technologies include systems used for domestic heating and cooling, process heat, water and air treatment, thermochemical processing, and power generation. This discussion addresses only electricity production.

Photovoltaics uses semiconductor technology to convert sunlight directly into electricity, with no heat engine. The solar cell, or photovoltaic cell, was discovered in 1954 by Bell Telephone researchers examining the sensitivity of a silicon wafer to sunlight. Photovoltaic products are commercially available for lighting, communications, remote-site electrification, traffic signs, water pumping, and vehicle battery charging, as well as for grid-interactive electricity generation. In 1995 global shipments of photovoltaics were nearly 90 MW, with grid-connected systems making up 4 percent of the total. Most commercial sales are small 1- to 2-kW (peak) standalone modules for lighting and remote sites. High cost limits their use in grid-connected applications: photovoltaic power is estimated at 50 cents/kWh, 10 times the cost of power from a fossil fuel plant. Costs must be reduced to about $3 per peak watt for photovoltaics to gain significant market share. Efforts to reduce cost focus on increased efficiencies and advanced manufacturing techniques.
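The connection between a price in dollars per peak watt and an energy cost in cents per kilowatt-hour can be sketched with a simple levelized-cost estimate. The capacity factor and fixed charge rate below are illustrative assumptions, not figures from the chapter:

```python
HOURS_PER_YEAR = 8760

def pv_cents_per_kwh(dollars_per_peak_watt,
                     capacity_factor=0.20,
                     fixed_charge_rate=0.10):
    """Rough levelized energy cost (cents/kWh) for a PV system.

    dollars_per_peak_watt: installed cost per rated (peak) watt
    capacity_factor: average output / rated output (resource dependent)
    fixed_charge_rate: fraction of capital recovered per year
                       (stands in for financing cost and system lifetime)
    """
    annual_kwh_per_peak_watt = capacity_factor * HOURS_PER_YEAR / 1000.0
    annual_cost_cents = dollars_per_peak_watt * fixed_charge_rate * 100.0
    return annual_cost_cents / annual_kwh_per_peak_watt

# At the $3/peak-watt target, with these assumed financing terms,
# energy costs come out near 17 cents/kWh.
print(round(pv_cents_per_kwh(3.0), 1))
```

Running the same sketch backward, the 50 cents/kWh estimate quoted above corresponds to roughly $9 per peak watt under these assumptions, which is consistent with why a $3/peak-watt target is cited as the threshold for significant market share.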
Theoretical conversion efficiencies of photovoltaic systems depend on the semiconductor materials used in the cells and on the ambient temperature. The materials currently used to make photovoltaic cells fall into three broad categories: 1) expensive, efficient monocrystalline silicon; 2) less efficient but much lower cost polycrystalline silicon; and 3) the lowest cost and poorest performer, amorphous silicon. Conversion efficiencies of commercial polycrystalline silicon cells are 10 to 15 percent. The primary development areas are now the use of monocrystalline silicon with solar concentrators and thin-film cells made by depositing a 5- to 20-micron film of silicon onto an inexpensive substrate; the estimated efficiency of these cells exceeds 20 percent. Work is ongoing with other materials, including amorphous silicon (a-Si), copper indium diselenide (CuInSe2, or CIS) and related materials, and cadmium telluride (CdTe).

Concentrating solar power systems use mirrors to concentrate sunlight to produce temperatures high enough to drive modern, efficient heat engines and produce electrical power. All concentrating solar power technologies rely on
four basic systems: collector (mirror), receiver (absorber), transport-storage, and power conversion. A concentrating solar power station concentrates large amounts of sunlight from the collector onto a small area, the receiver, to produce high-temperature heat, which in turn is converted into electricity in a conventional heat engine. Three types of concentrating solar systems have been developed, characterized by the shape of the mirrored surface that collects and concentrates the sunlight: parabolic troughs, power towers, and dish/engine systems. All concentrating solar power systems can be paired with a fossil-fuel-driven heat source that can heat the working fluid, charge storage, or drive the power conversion system during periods of low sunlight. Depending on the system, concentrating solar power generation can be designed to produce from tens to hundreds of megawatts of electricity. Its power can be dispatchable, because these systems have cost-effective thermal storage and can be hybridized (coupled with a conventional power plant). Consequently, these plants can produce power before sunrise or after sunset (even 24-hour operation if desired). Concentrating solar power stations are best suited to be either peak-load or intermediate-load power stations. All concentrating solar technologies have been validated and demonstrated, with trough-electric systems the most mature of the three. In fact, nine trough systems were installed in California in the 1980s, and they are still operating reliably. Power tower technologies have been demonstrated at the 10-MW Solar One and Solar Two pilot plants in California, and dish/Stirling systems have been and are being demonstrated in the Middle East, the U.S. desert southwest, and the Mediterranean region.
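The reason sunlight must be concentrated at all follows from blackbody radiation: an ideal receiver heats up until it re-radiates energy as fast as it absorbs it, and that stagnation temperature rises with the concentration ratio. A sketch under idealized assumptions (perfect absorber, 1,000 W/m² direct insolation; the concentration ratios are typical illustrative values, not figures from the chapter):

```python
SIGMA = 5.670e-8      # Stefan-Boltzmann constant, W/(m^2 K^4)
INSOLATION = 1000.0   # assumed direct-normal insolation, W/m^2

def stagnation_temperature_k(concentration_ratio):
    """Ideal receiver temperature at which blackbody re-radiation
    balances the absorbed concentrated flux."""
    return (concentration_ratio * INSOLATION / SIGMA) ** 0.25

# Assumed typical geometric concentration ratios for the three systems:
for name, c in [("parabolic trough", 80),
                ("power tower", 1000),
                ("dish/engine", 3000)]:
    print(f"{name}: ~{stagnation_temperature_k(c):.0f} K")
```

The ordering matches the three system types described above: line-focus troughs reach moderate temperatures, while point-focus towers and dishes reach the much higher temperatures that efficient heat engines exploit. Real receiver temperatures are far below these ideal stagnation limits because useful heat is being extracted.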
Although these technologies have been under development for the last 20 years, additional work is needed to continue to reduce costs and ensure reliability. The major areas of technology development are manufacturing of the collectors and improving system reliability.

CONCLUSIONS

Perhaps the most dramatic effect on the renewable energy industry is the push toward cleaner technologies as global climate change has become the focus of international attention. Forecasts indicate that in the near future we can expect more industry support and accelerated research and development on renewable energy sources. For renewable energy to take its place in the power picture of the new millennium, we need 1) research and development to improve the efficiency and reliability of these systems, 2) policy changes that address the barriers to the use of renewable energy, and 3) acceptance by government and the public that renewable energy is needed to help mitigate environmental problems and contribute to the world's energy supply. Biomass, geothermal, wind, and solar resources have been used for thousands of years. With changes in the
environment and improvements in technology, they will become a larger part of the power supplied in the future.

REFERENCES

Energy Information Administration. 1998. Annual Energy Outlook 1999 with Projections to 2020. DOE/EIA-0383(99). Washington, D.C.: U.S. Department of Energy.
Energy Information Administration. 1999. International Energy Outlook 1999 with Projections to 2020. DOE/EIA-0484(99). Washington, D.C.: U.S. Department of Energy.
Veers, P. S. 1996. Foreword to special issue on wind energy. ASME Journal of Solar Energy Engineering 118(4):197.

ADDITIONAL RECOMMENDED READINGS

Arvizu, D. E., and T. E. Drennen. 1997. Technology Progress for Sustainable Development. Report SAND097-0555C. Albuquerque, N.M.: Sandia National Laboratories.
Dorf, R. C., ed. 1996. The Engineering Handbook. New York: CRC Press.
Eliasson, B. 1998. Renewable Energy: Hydro, Wind, Solar, Biomass, Geothermal, Ocean: Status and Prospects. Baden-Dattwil, Switzerland: ABB Corporate Research Ltd.
Energy Information Administration. 1998. Renewable Energy Annual 1998 with Data for 1997. DOE/EIA-0603(98)/1. Washington, D.C.: U.S. Department of Energy.
EUREC Agency. 1996. The Future for Renewable Energy: Prospects and Directions. London: James and James Ltd.
Kreith, F., and R. E. West, eds. 1997. CRC Handbook of Energy Efficiency. New York: CRC Press.
Lay, K. L. 1998. Changes and innovation: The evolving energy industry. Presented at the 17th Congress of the World Energy Council, Houston, Tex., September 14, 1998.
Shell International Limited. 1996. The Evolution of the World's Energy Systems. London: Shell International.
Zweibel, K. 1995. Thin films: Past, present, future. Progress in Photovoltaics: Research and Applications 3(5):279–293.