5
Enhancing Analytical Capabilities

The final discussion panel was organized to take stock of the current suite of analytical approaches for responding to policymaking informational needs, reflecting on the previous day’s discussions. Panelists explored how existing models and analytic approaches could be improved, suggested specific areas that might deserve more focus, discussed new approaches worthy of further consideration, and highlighted opportunities for government agencies and other institutions to enhance their policy-analytic capabilities. Panelists were Ed Rubin, Carnegie Mellon University; John “Skip” Laitner, American Council for an Energy Efficient Economy (ACEEE); Nebojsa Nakicenovic, International Institute for Applied Systems Analysis; David Montgomery, Charles River Associates (CRA) International; Brian Murray, Duke University; Bryan Hubbell, U.S. Environmental Protection Agency; and Ray Kopp, Resources for the Future. John Weyant moderated the subsequent discussion and guided participants to think strategically about how existing resources in the analytical community could be used more efficiently, and also what might be done with a modest amount of additional resources. This chapter summarizes the major themes raised by the panelists and carried over into the group discussion.

Modeling Technological Change

How technological change is treated over the long run is a central issue in modeling. Bill Nordhaus said that the Monte Carlo runs on his latest DICE model showed that technological change—both in the general sense as it affects the economy, and among different carbon-saving technologies—was the major uncertain variable. He stated that the representation of technological change is the single most unsatisfactory element in models and outlined three approaches to representing it: exogenous change, technological learning, and the Romer model. Exogenous technical change, in which projections are based on historical trends, was the industry standard until the late 1990s, but the obvious problem is that as soon as one introduces changes in prices, particularly the price of carbon relative to other input prices, technological change becomes induced.

The learning-by-doing approach can be attractive for modelers because it is simple and requires little data. However, Nordhaus and several other participants pointed to problems with this approach. The econometric literature suggests that the use of simple bivariate coefficients can lead to an upward bias in the learning coefficient (e.g., Berndt, 1991). It is even more problematic if used in an optimization model, because it ultimately leads to overadoption of learning technologies (those assumed to improve with experience over time) relative to non-learning technologies. Ed Rubin seconded this observation and pointed out that historical analysis has shown that costs can often go up considerably before they go down. Graham Pugh of DOE noted that learning curves tend to focus on applied R&D but leave out potential game-changing solutions that require basic research. Richard Newell noted that, when learning curves are plugged into an optimization model, the model does not account for the opportunity costs of learning in one sector versus another—learning in a renewable energy technology may mean learning less in nuclear or clean-coal power, for example. Marilyn Brown added that some energy technologies will need next-generation approaches, making it unrealistic to expect them to stay on a steady learning curve. Nebojsa Nakicenovic added that these learning curves are used mechanistically, even though we do not understand the details and processes behind them. Figure 5 shows that technology cost curves are not all uniform. Analyses generally utilize curves that indicate large improvements over time. However, several current technologies have instead seen limited improvement despite continuous investment. Therefore, Nakicenovic and Nordhaus both recommended doing sensitivity analyses with the learning turned off, to help bound expectations.

FIGURE 5. Technological uncertainties: learning rates (push) and market growth (pull). The figure plots a cost index ($/kW) against the number of doublings of installed capacity, contrasting nuclear reactors in France (1977-2000), whose costs rose, with photovoltaics in Japan (1976-1995), whose costs fell; across 115 case studies the mean learning rate is about -20% per doubling, with 50% and 90% uncertainty intervals shown. SOURCE: Nebojsa Nakicenovic, International Institute for Applied Systems Analysis, presentation given at the Workshop on Assessing Economic Impacts of Greenhouse Gas Mitigation, National Academies, Washington, D.C., October 2-3, 2008.

Finally, Nordhaus described a third approach, the Romer model (Romer, 1990), which in his view is the right kind of model. It has conceptual and data problems that need serious work, but it has an explicit link between R&D and other inputs and the technology outputs. David Montgomery agreed that a Romer-type model may be the appropriate one to use, but offered three concerns. First, the process of basic research is not very clear or predictable. Second, we do not entirely know how efficient the market for innovations is. Third, it is not easy with this model to determine where the levers might be to influence the rate or direction of technological progress in order to reduce GHG emissions, particularly given that the influence of prices on R&D decisions is not well understood. Nakicenovic added that the lack of data may be the major constraint to using a Romer-type model. He pointed to a recent report that Germany and some other countries in Europe are expecting to increase energy R&D efforts after more than a decade of drastic declines; trying to understand what drives this apparent inducement of technological change will be key to modeling. Richard Newell seconded the notion that empirical data is a critical limitation and suggested that there is a major need for more theoretical work on ways to model technological change; he and colleagues are currently working on some aspects of this, such as understanding market imperfections and the effects of spillovers. Skip Laitner concurred that the Romer model was an appropriate starting point but cautioned that modelers still tend to have an outdated view of technology and ought to improve their understanding of 21st century technologies. Inja Paik noted that the OECD has done a considerable amount of work examining national innovation systems: how countries organize R&D resources to generate knowledge, and how the diffusion of knowledge eventually contributes to their GDP. This work seems to have implications for the Romer model approach. Newell added that it continues to be difficult to evaluate prospective benefits from R&D investments without being able to model these in much more detail than is currently possible.
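The learning-by-doing formulation discussed above can be made concrete with a short sketch. This is an illustrative calculation, not any particular model's implementation: the function names and the initial cost index are assumptions, while the 20% rate per doubling is the mean learning rate Nakicenovic cited across 115 case studies.

```python
import math

def learning_curve_cost(initial_cost: float, learning_rate: float,
                        doublings: float) -> float:
    """Unit cost after a given number of capacity doublings.

    Each doubling multiplies cost by (1 - learning_rate), i.e.
    cost = initial_cost * (1 - learning_rate) ** doublings.
    """
    return initial_cost * (1.0 - learning_rate) ** doublings

def learning_exponent(learning_rate: float) -> float:
    """Exponent b in the equivalent cumulative-capacity form
    cost = c0 * (Q / Q0) ** (-b), with b = -log2(1 - learning_rate)."""
    return -math.log2(1.0 - learning_rate)

if __name__ == "__main__":
    c0, lr = 1.0, 0.20          # cost index 1.0, 20% per doubling (assumed start)
    for n in (0, 5, 10):
        print(n, round(learning_curve_cost(c0, lr, n), 3))
    # prints: 0 1.0 / 5 0.328 / 10 0.107
```

The sketch also makes the over-adoption problem visible: because cost falls deterministically with cumulative deployment, an optimizer can always "buy down" the curve by deploying more of a learning technology, regardless of whether the underlying processes would actually deliver those gains.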

Complementary Components

As Marilyn Brown remarked, it seems to take a suite of models to examine the complexities of the numerous policy interactions. Several participants discussed early ongoing efforts to link models and address some of these issues. Three issues were continually cited as critical to the ultimate success of mitigation efforts: offsets, transportation, and air quality.

Offsets

Brian Murray remarked that modeling has shown that offsets are critical to the cost of mitigation policies, but questioned how well offset programs were being modeled. A 2005 EPA analysis suggested that the largest source of offsets in the U.S. domestic market will come from agriculture and forestry (EPA, 2005). Analysts have attempted to incorporate some realism into their estimates, recognizing that certain activities may be slow to come to fruition. But there are several issues that bear watching, to improve understanding of the institutional realities of registering projects, establishing baselines, and certifying projects. He also underscored that opening up to the international market can reduce costs significantly (Figure 6); much of this potential lies in reducing deforestation in tropical countries. Tim Profeta echoed the need to understand the availability of offsets, since this is a major determinant of cost, but he also advised caution when modeling international offsets, which require an international infrastructure that is not yet ready to deliver. He also suggested that more work could be done to understand, and then communicate, the effect of delayed availability of domestic (U.S.) offsets.

FIGURE 6. EPA estimates of GHG offset supply functions. SOURCE: EPA, 2005.

Brian Murray explained that the recent surge in biofuel production, and the targets that have been set in the United States out to 2030, have fundamentally altered the forest and agriculture models; these changes are significant enough to create problems for the mathematical programming framework underpinning the models. Bob Shackleton echoed the need for better information on the availability and costs of offsets from a variety of sources of non-CO2 GHGs; he stated that EPA's cost curves, which most modelers use, are a good foundation but are insufficient. According to CBO's recent analysis, 40 percent of GHG emission reductions were attributed to offsets, which lowered the carbon price by 30 percent, an issue that will be critical to moderating policy costs in the early years (CBO, 2008).
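The directional effect Shackleton and CBO describe, that offset availability moderates the allowance price, can be illustrated with a deliberately simple sketch. The linear marginal abatement cost curves, slopes, and reduction target below are hypothetical numbers chosen for illustration only; they are not drawn from the EPA or CBO analyses.

```python
# Illustrative sketch of why offset supply lowers the market-clearing
# allowance price. Assume each source supplies abatement linearly in
# price: abatement = slope * price. All numbers are hypothetical.

def clearing_price(required_reduction: float, *supply_slopes: float) -> float:
    """Price at which total abatement from all sources meets the target."""
    return required_reduction / sum(supply_slopes)

target = 1000.0            # MtCO2e of required reductions (hypothetical)
domestic_slope = 20.0      # MtCO2e abated per $/tCO2e in capped sectors
offset_slope = 15.0        # MtCO2e abated per $/tCO2e from offsets

p_no_offsets = clearing_price(target, domestic_slope)
p_with_offsets = clearing_price(target, domestic_slope, offset_slope)
print(p_no_offsets, p_with_offsets)   # offsets lower the clearing price
```

Because offsets add a second, often cheaper, abatement supply curve, the same reduction target clears at a lower price; restricting or delaying offset availability pushes the price back up, which is why Profeta's point about delayed domestic availability matters for early-year costs.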

Transportation Sector

Reflecting on various workshop participants' comments that transportation is likely to be treated differently from other energy-consuming sectors, Bob Marlay noted that transportation still seems to be stovepiped, perhaps more than other sectors. While much important work has been done in looking at the various components (roads, vehicles, batteries), there might be more value in viewing the sector as a system, particularly given that the sector is one of the main leverage points in achieving an emissions-free economy. Brian Murray wondered if modelers could improve how they capture the transportation sector in their models; in general, he pointed out, models tend to focus on fuel economy standards but do not generally reflect responses to a carbon price.

Air Quality

Bryan Hubbell explained that there are potential interactions and efficiencies to be had in integrating air quality and climate modeling. Given the magnitude and immediacy of air-pollution-related health effects, it is also important not to forget that there are existing air quality goals that we will continue pursuing as we begin to address climate change issues. Timing and spatial location will be important considerations, because when considering GHG emission reductions, there are potential co-benefits to reducing criteria air pollutants, depending on location. EPA's Office of Air Quality Planning and Standards (OAQPS) normally deals with sector-level models but is investigating ways to link these to macro models by developing models that communicate with one another, sending carbon prices down or sending technology/production constraints up (Figure 7). Its industrial sectors integrated solutions model (ISIS) will be linked directly to ADAGE, but also linked through MARKAL, which will act as a bridge between ADAGE and the sector-specific outputs for technology and emissions, along with the air quality impacts fed back from ADAGE's outputs. EPA has also been developing a control strategy tool (CoST), a database of control strategies for criteria pollutants and toxics, along with cost curves and associated emission reductions. OAQPS is working on adding GHG control technologies and is also cooperating with the Office of Atmospheric Programs to develop closer linkages between benefits assessments and large-scale CGE modeling.

FIGURE 7. Multiscale assessment: point sources. The diagram links models across scales (IPM for EGUs; ISIS for cement, pulp and paper; MARKAL; SGM/MiniCAM for energy; EMPAX-CGE; ADAGE/IGEM; and the Multi-Market Model) to emissions modeling/inventories, showing both existing links and links under development. SOURCE: Bryan Hubbell, U.S. Environmental Protection Agency, presentation given at the Workshop on Assessing Economic Impacts of Greenhouse Gas Mitigation, National Academies, Washington, D.C., October 2-3, 2008.

Data and Functional Needs

Ray Kopp described a model as having three major components underpinning it: theory, data, and functional relationships. The theory, which gives rise to the structure of the model and provides consistency, coherence, and explanatory power, has made good progress over the years, likely because it is rewarded within the academic community. On the data side, there has also been progress; government funding has tended to be available to support the compilation of databases, such as GTAP, that are valuable to economists and modelers. When one examines the functional relationships (e.g., utility functions, production functions), however, there has been substantially less progress. In many cases, modelers are left to use functions that might be 30 or 40 years old. In the 1970s, when many of these functions were being developed, there was funding to support the research, and the work was getting published and thus rewarded. This is not the case today, so modelers are using old econometric estimates or attempting to apply estimates from one sector to another. There is a need for more empirical study to support all of these parameters, to provide insight into the important elasticities associated with factors like technological progress.

Kopp emphasized that there is also a need for more attention to terrestrial carbon, and forest carbon in particular. Policymakers will be raising questions about supply curves for forest carbon, how a forest carbon market would affect food or biofuel markets, and how a global carbon market could help incentivize forest management and land-use decisions. This highlights the importance of spatial analysis, and also the challenge of linking spatially based land-use models with larger-scale macroeconomic models, which tend to treat space abstractly. A great deal of methodological work needs to be done to take high-quality land-use models and link them so that large-scale macroeconomic models can reference them.

Ed Rubin emphasized that a fundamental challenge continues to be how to employ models that are behaviorally realistic. He suggested that this requires beginning with observations, then equations, and that over decades these will be refined. It is also crucial to engage a broader spectrum of disciplines beyond economics, and Rubin urged that sustained institutional support is necessary to reward interdisciplinary activity. He remarked that the ability to create and implement analytical models and theoretical constructs far outstrips the availability of empirical data to rigorously test these constructs; creative experiments and historical data analysis, such as the work he and colleagues have done on technological learning curves, will help verify functions. He also envisioned a hierarchy of models, noting that no one model is best suited to answer the diverse array of questions that policymakers and other interested parties will ask.

Bryan Hubbell pointed out that EPA has done a lot of thinking about energy efficiency, and specifically about why existing opportunities are not being adopted. There are clearly behavioral issues, but these need to be parsed out, and he also suggested that consideration be given to limitations on the human capital side, like education, training, and other workforce needs. Skip Laitner remarked that the technology and behavioral aspects of modeling have been ignored for too long. He offered four areas for improvement: (1) technology characterization on the supply and demand sides; (2) capital flows that better distinguish between energy and nonenergy investments and highlight important differences between, for example, information and communication technologies versus metal foundries or papermaking; (3) modeling assumptions about consumers and firms that reflect actual behavior and shifting preferences—price elasticities are at such a high level of aggregation that they can miss critical information, such as the degree to which consumers are informed or motivated, or the influence of habits and necessity on responses to prices; and (4) economic accounting of investments in technologies, to highlight the significant returns on certain investments. He stressed that prices matter, but they are not all that matters, and more could be done to tease out these other points to help inform policymakers. He further noted that CGE representation may be an inappropriate characterization of technology, overestimating the costs of adopting a technology. Industries have several different elasticities of substitution, far more than are represented in most models. There is also a need for investigation into behaviors such as how substitutions evolve in response to improved information and technological advances.
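Laitner's point about elasticities of substitution can be illustrated with the constant-elasticity-of-substitution (CES) functional form commonly used in CGE production structures. This is a textbook two-input sketch, not any specific model's implementation; the share parameter and prices are assumed values.

```python
# Hedged sketch of a two-input CES production structure. sigma is the
# elasticity of substitution between capital (K) and energy (E); the
# share parameter and prices below are illustrative assumptions.

def cost_min_input_ratio(share: float, p_energy: float, p_capital: float,
                         sigma: float) -> float:
    """Cost-minimizing K/E ratio for a two-input CES production function.

    From equating marginal product per dollar across inputs:
    K/E = ((share / (1 - share)) * (p_energy / p_capital)) ** sigma.
    """
    return ((share / (1.0 - share)) * (p_energy / p_capital)) ** sigma

# Doubling the energy price shifts the input mix far more when sigma is
# high; a single economy-wide sigma can therefore mask very different
# industry-level responses, which is Laitner's complaint.
for sigma in (0.3, 1.0, 2.0):
    before = cost_min_input_ratio(0.5, 1.0, 1.0, sigma)
    after = cost_min_input_ratio(0.5, 2.0, 1.0, sigma)
    print(sigma, round(after / before, 2))   # substitution response = 2 ** sigma
```

The same arithmetic shows why a single, possibly outdated elasticity estimate matters so much: the response to a carbon price scales as a power of sigma, so a 30-year-old econometric estimate propagates directly into projected policy costs.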

Richard Newell remarked that much of the analytical work done to date has been complicated by the absence of a carbon price, making it difficult to model behavior without direct empirical evidence of how actors will respond. A carbon price should thus also help improve analysts' ability to model behavior into the future.

Nebojsa Nakicenovic commented that integrated assessment modeling has made huge progress over the last 20 years and has had success in integrating economics with technological perspectives, demographics, and other human dimensions, and then linking all of this to climate models. Where it has been less successful is in folding in impacts and possible adaptation measures. He outlined three areas in need of improvement: (1) dealing with uncertainty, about both technologies and policies—there are no tools to adequately consider low-probability but highly consequential events; (2) analyzing failures, which would provide valuable insight into ways to support R&D efforts—specifically, how does one measure the success of R&D, particularly at the early deployment stage?; and (3) heterogeneity of decisionmakers: regionally and sectorally, agents will behave differently, but this heterogeneity is not well reflected in most models.

Finally, as was stressed during the discussion of current analytical capabilities, workshop participants pointed out that there is a pressing need for more regional and household-level data. Regional data is essential to successfully integrating air-quality and land-use models. Recognizing that end users are increasingly requesting detailed outputs (e.g., state-level employment impacts), participants emphasized that the quality and confidence level of such outputs will depend on improved data sets. Participants also noted that international data sets can be of poor quality and difficult to obtain, but they are nonetheless crucial to global modeling efforts.
Institutions and Innovation

David Montgomery remarked that existing models are effective for modeling idealized policies, but these do not reflect the real world. He outlined three shortcomings: failure to consider institutions; grossly underestimating the costs of inefficient nonmarket policy initiatives that are already coming into effect; and not adequately addressing the R&D and innovation process. Processes such as institutional change and innovation are not represented structurally in the models, nor are they generally predictable or controllable, but as one models out through 2050 or 2100, these processes are almost the entire story. Montgomery contended that modeling global costs requires understanding how the institutional settings in different countries will limit the efficiency of policies and the feasibility of achieving emissions reductions; the rule of law and the existence of economic and political freedom are factors that will have an impact but are not modeled. On a related point, modeling policies in the United States, Canada, and the European Union requires also being able to model the perverse incentives and unintended consequences of command-and-control regulations, technology mandates, and targeted subsidies. The field of regulatory economics, however, does have a long and solid history of analyzing the implications of regulatory programs and perverse incentives, and so a dialogue between the modeling community and those who study institutions may be beneficial. Montgomery also advised that no model will ever capture all of the ways that a smart economic agent can find to circumvent regulations, which can raise costs and diminish a program's effectiveness. He pointed to a large body of literature (e.g., Cohen and Noll, 1991) dedicated to characterizing the history of R&D and demonstration projects and the role of government; models do not provide the kind of insight that would inform the design of R&D policies.

Bill Nordhaus stressed that intellectual property rights will be another key component, and reiterated that there are limited instruments (e.g., an aging patent system) to address this. He then raised the question of whether climate change is somehow different from other sectors, or whether it is possible to look to sectors such as health or telecommunications. Marilyn Brown cited a CCTP report (Brown et al., 2006) that analyzed the disciplines most critical to the key technology areas for climate solutions. She emphasized that there were no new disciplines on that list, and she suggested that climate change is perhaps marginally different from other topics of study but does not necessarily require different disciplines or fundamentally different approaches, merely more sustained efforts at interdisciplinary work. Nordhaus also noted that more consideration must be given to the complementarities between public and private R&D, both in terms of synergy and in terms of whether public R&D may crowd out some private R&D.

John Weyant suggested that there may be lessons in looking at other innovation systems, such as the National Institutes of Health (NIH). Weyant also pointed out that there are certain gaps in the innovation chain that go unfilled because they fall between basic science research and venture capital opportunities, since venture capitalists tend not to take large technological risks. On the subject of risk, Nakicenovic mentioned that IIASA has used its mathematical MESSAGE model to treat uncertainty explicitly with regard to technology investment risk. An assumption was that investors were willing to pay a risk premium to hedge against some of that risk; as the risk premium approaches 5 percent, the dynamics of the entire system fundamentally change. Investments in the lower-cost options start to happen earlier, there is more diversity in terms of technologies, and the more costly and risky technologies get introduced as well. Weyant also mentioned that in the field of robotics, most big innovations are based on a series of old patents, with one or two new patents building on these to lead to breakthroughs. He related this notion to the common conception of innovation as one of bold pathbreaking changes, which overlooks the minor breakthroughs that might bridge the gap enough to make new technologies commercially viable.
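Nakicenovic's observation that the system's dynamics change as the risk premium approaches 5 percent can be sketched with a stylized mean-variance comparison. This is emphatically not IIASA's MESSAGE formulation: the cost figures, standard deviations, and the linear premium rule below are all hypothetical, chosen only to show how a premium on cost uncertainty can re-rank investments.

```python
# Stylized sketch (not the MESSAGE model) of how a risk premium on
# uncertain technology costs can change the investment ranking.
# Expected costs and standard deviations are hypothetical.

def risk_adjusted_cost(expected_cost: float, cost_std: float,
                       premium: float) -> float:
    """Expected cost plus a premium proportional to cost uncertainty."""
    return expected_cost + premium * cost_std

# (expected cost $/MWh, std dev): a mature option vs. a risky novel one.
mature = ("mature", 60.0, 5.0)
novel = ("novel", 55.0, 120.0)

for premium in (0.0, 0.05):
    ranked = sorted([mature, novel],
                    key=lambda t: risk_adjusted_cost(t[1], t[2], premium))
    print(premium, [name for name, *_ in ranked])
```

With no premium the nominally cheaper but riskier option wins; at a 5 percent premium the ranking flips toward the mature option, illustrating why, in Nakicenovic's account, risk-aware investment shifts money earlier into lower-cost options and broadens the technology portfolio.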
Bryan Hubbell relayed the anecdote of research funding for land grant universities working on genetically modified crops in the late 1980s and early 1990s: this basic research was designed to maximize spillovers and thus public benefit. However, as public funding tightened and public/private cooperatives emerged, the nature of the research changed to one that would minimize spillovers and thus allow private entities to capture the rents. He questioned whether this situation could be managed differently with regard to paradigm-changing energy and climate technologies. Adele Morris also noted the important linkage between R&D and international participation, specifically that there is potential for international spillovers, and this information could inform international negotiations with an eye toward maximizing the global impact of such spillovers. Skip Laitner mentioned that Moore's law is not a physical law but an extrapolation that has become a self-fulfilling prophecy, one that continues to be driven by business models. Graham Pugh concurred and noted that there may be lessons from the semiconductor industry's experience with the research collaborative Sematech, a pre-competitive R&D consortium through which leading semiconductor companies pooled resources and worked collaboratively. From the industry's point of view, the costs were too great to be borne by any one company, and this collaborative effort allowed the companies to move toward the production frontier in a pre-competitive model, driven by the perceived need for constant innovation.

Communicating Results

To conclude the workshop, participants discussed how to take ideas forward and improve communication channels between policymakers and the analytical community. As Francisco de la Chesnaye and others remarked, it is incumbent on analysts to spend more time comparing and synthesizing similar analyses to better communicate their insights. A participant questioned whether reduced-form models that could be operated by congressional staff or other lay people might be useful, but Dick Goettle replied that after 20 seconds a user would begin asking the detailed questions that only the more complicated models can answer. Computer time is cheap and there are good models available, so he advised that the full-form models be run. Ed Rubin's simple advice to analysts was to "get the sign right," a reference to the need to better communicate where and why there are negative-cost opportunities to be had; this message seems to get lost when discussing the overall costs to the economy.

David Montgomery expressed concern that, because they do not take full account of the institutional impediments and inefficiencies that may drive up costs, the models are all projecting costs at the low end of what may in fact occur. Skip Laitner agreed that the results are likely a lower bound on costs, but he also argued that there are benefits in terms of productivity gains, efficiency gains, spillover innovation, and other aspects that are likewise not fully accounted for; thus, he commented, more work needs to be done on characterizing full costs and full benefits. Richard Newell offered the caution that a distinction is needed between hypothetical opportunities and those that can be captured given existing conditions (including institutional impediments and regulations). John Weyant described the current situation in California, where analysts are trying to reframe the notion of "cost-effective" from zero- or negative-cost options to least-cost options for achieving whatever objectives policymakers think they need to achieve. In other words, there is recognition that addressing climate change entails additional costs, and while there is uncertainty about those costs, this should not be a deterrent to taking early action. One additional challenge he described is the cost shock that could pose a political risk: policies could cost much more than anticipated, and rate shock for consumers could undermine further progress.