
The National Academies of Sciences, Engineering, and Medicine
500 Fifth St. N.W. | Washington, D.C. 20001

Copyright © National Academy of Sciences. All rights reserved.


2 Policymakers’ Informational Needs

To orient the workshop’s discussions toward meeting the needs of the policy community, the initial session was structured to outline the pressing issues that policymakers hope can be informed by analysis. Panelists identified the range of specific informational needs expressed, and questions asked or likely to be asked in the future, by policymakers regarding the design, impacts, and outcomes of climate change policies and related energy policies. Panelists included Robert Shackleton, Congressional Budget Office (CBO); Howard Gruenspecht, U.S. Energy Information Administration (EIA); Francisco de la Chesnaye, Electric Power Research Institute (EPRI); Nat Keohane, Environmental Defense Fund; and Tim Profeta, Duke University.

Bob Shackleton opened the discussion by first noting that CBO pays a lot of attention to what the research community is doing and how this activity affects policy formation, and he made the point that it was science that initiated the climate policy process decades ago. He noted, as did several other participants, that research provided valuable insights into the consequences of freely allocating carbon allowances, for example, which led policymakers to consider auctions for carbon. Research has also led to an evolution in thinking about the flexibility of specific policies (i.e., in general, greater flexibility seems to yield greater economic efficiency).

This chapter summarizes the subsequent discussion, highlighting some of the main themes that emerged, including communicating results; representing behavior and technology adoption; suboptimal scenarios and quantitative targets; modeling policy interactions; and impacts, distribution, and equity considerations.
Communicating with Policymakers

Howard Gruenspecht remarked that the bandwidth for communication with policymakers is narrow, and thus it is useful for analysts to focus on key insights that are not dependent on particular parameters unique to an analytical framework. Like the research Bob Shackleton mentioned, which influenced policymakers’ consideration of flexible approaches (Richels et al., 1996), Gruenspecht said that these kinds of insights are more valuable than numbers that come out of one particular framework. He also emphasized that policymakers’ needs and wants should be differentiated, and that the onus is on analysts to strike a balance between delivering what is asked of them and instructing policymakers about the types of questions they should be asking.

Addressing the issue Bob Marlay raised about confusing analyses of the Lieberman-Warner bill, Francisco de la Chesnaye stated that, to those familiar with the models, the disparities were largely and easily explained by differences in certain assumptions.1 Nevertheless, he stressed that analysts must do a better job of comparing analyses and communicating the insights from these comparisons. As an example, he presented a figure (Figure 1, top) that displayed several analyses of the Lieberman-Warner bill, followed by a figure (Figure 1, bottom) in which the analyses were all controlled for specific constraints. Figure 1 (bottom) shows a much tighter distribution, which leads to a more useful bounding of estimates

1 A distinction should be made between baseline assumptions (e.g., population growth), which are relatively uniform among the models, and assumptions about substitution elasticities, market clearing, foresight, and other issues that largely determine the effects of a given policy.

on factors like the range of costs per ton of CO2. He noted that making these sorts of comparisons will require more cooperation among agencies and institutions, but that it could be done and would be of great value.

[FIGURE 1. Top: Estimates of allowance costs ($/ton CO2e, 2010-2055) of the Lieberman-Warner bill from various analyses (CRA, banking; MIT, 15% offsets, CCS subs; EPA, ADAGE constrained N,B,CCS; EPA, ADAGE constrained N,B,CCS, NG cartel; EPA, IGEM alt ref; EIA, core; EPA, ADAGE alt ref). Annotations note that the range of estimates is $46 to $89 in 2030 and $121 to $195 in 2050, and that costs, in both $/t CO2 and GDP impacts, will double without deployed technologies, access to offsets, etc. Bottom: Screened marginal costs of the bill, highlighting the importance of modeling assumptions. NOTE: Results controlled for specified constraints on technology, assumptions on offsets, different reference cases, and bill interpretation. SOURCE: EPRI, 2008. Reprinted with permission of EPRI.]

Nat Keohane drew a distinction between analyses of specific bills and what he called policy experiments, whereby modelers manipulate aspects of a proposed bill in an attempt to demonstrate, for example, the effects of introducing higher CAFE standards. These experiments may increase confusion and are also not particularly useful, given that existing and proposed bills cannot be modified so easily.

He also emphasized, as did other participants, that analyses should be transparent to the lay reader. However, participants did not explore what was meant by “transparent” or how transparency might be accomplished. John Conti of EIA responded that EIA’s model is quite transparent, with hundreds of pages of documentation easily accessible, but this does not mean that a lay reader would understand it. Keohane suggested that analysts be prepared to explain key assumptions to lay readers such as congressional staff. He noted that Stanford University’s Energy Modeling Forum (EMF) exercises, which engage many from the analytical community, are valuable but are still not transparent to non-analysts, or as he later put it, “The mechanics understand what is going on under the hood, and so the next step is to educate the driver.”

Tim Profeta suggested that modelers design sensitivities (e.g., discount rates) and certain assumptions with more stakeholders at the table, as a way to make them more transparent. Adele Morris of the Brookings Institution suggested that the media and broader public should also be made more aware of these underlying assumptions, and she offered the example of policymakers citing the benefits to be gained from a stringent scenario alongside the costs arising from a lax scenario, which is of course misleading.

Finally, Peter Evans of GE pointed out that although several corporations have developed their own ways to analyze and plan for climate change impacts, they are also avid consumers of formal model outputs. Thus, when thinking about how to report outputs, modelers should bear in mind that the business community, in addition to the policy community, is quite interested in analytical work on designing models to estimate economic impacts.
Representing Behavior and Technology Adoption

The representation of decisionmaker behavior in models was a subject of discussion throughout the workshop. Howard Gruenspecht raised several related issues and stressed that, in general, analysts need to focus more on behavior. He cautioned that simply taking underlying preferences as given was likely not a solid assumption, and that there had been too much attention to cost engineering and not enough attention to the behavior embedded in modeling assumptions. Other participants echoed this notion that economists have focused on costs whereas policymakers are perhaps more interested in feasibility. In other words, policymakers want to know how quickly energy efficiency would have to improve, and with what degree of certainty they might expect to see that happen.

Some models also assume the adoption of technologies that may not be universally accepted, and this has major ramifications for many of the modeling results. As has been the case with nuclear power in the United States, community and public acceptance in general will influence the siting and construction of many technologies that models assume will be adopted based on cost alone. Several participants raised questions about how nuclear power and carbon capture and sequestration (CCS) were handled in models, because these two technologies have a large influence on the costs of programs but are not certain to be widely accepted. One suggestion was that analysts should separate these types of technologies out and communicate the results with and without them (e.g., estimate costs of 3 percent of GDP without such technologies, and 1.5 percent if adopted). Such reporting of results could help bound expectations of costs, as well as signal to policymakers that they may need to expend additional effort to generate support for, or reduce opposition to, technologies that are otherwise cost-effective.
In this regard, Peter Evans of GE noted that there appears to be a need for some sort of “grand bargain” around coal, given its importance in economic projections, and wondered how modeling results might influence public campaigns or other ways to shape national or even international interest. Dallas Burtraw of Resources for the Future (RFF) seconded this idea, and Kerry King of the University of Texas remarked that his research team is beginning to conduct survey sampling in Texas to understand the level of community acceptance of CCS, as Texas is likely to be a test bed for the new technology in the coming years.

Suboptimal Scenarios and Quantitative Targets

Bob Shackleton described what he called the “unimportance of a quantitative target” with regard to climate policy analyses: there are so many uncertainties in program implementation and climate response that even if a specific target is achieved, such as stabilizing CO2 concentration, there is always a possibility that costs will greatly exceed what was projected, or that temperature change will be higher or lower than expected. He commented that analyses have often focused on quantitative targets and first-best scenarios, but increasingly decisionmakers are requesting suboptimal (i.e., realistic) scenarios, and more attention needs to be paid to how researchers communicate the uncertainty of their results (e.g., by bounding estimates). He further pointed out that many projections are based on concerted global action, but individual countries, particularly the large energy consumers, can substantially shift potential temperature outcomes, and thus analysts ought to pay more attention to communicating how these individual actions can shift the outcome.

Howard Gruenspecht agreed that there has been too much emphasis on first-best scenarios, and he emphasized, as did several subsequent participants, that policymakers are interested in the nth-best scenario, not only the first or second. As noted earlier, he drew a distinction between policymakers’ interests (their “wants”) and their needs, and he encouraged economic analysts to explain the limits of existing analytical tools, but also to make efforts in their own analyses to go beyond ideal scenarios. Gruenspecht argued that analyses generally take a cost-engineering approach, but as with the question of technology acceptance, policymakers are just as interested in what might be feasible or expedient, even if it is not the most economically efficient.
John Weyant also urged analysts to keep in mind that the models provide insights, not numbers. Policymakers have sometimes had a tendency to give modelers a number, e.g., 2 percent of GDP, and then ask modelers to “fill in the rest,” but this may not be feasible or desirable. As an example, he noted that one analysis of California’s bill to regulate GHGs (AB32) concluded that a 100 percent regulatory approach could yield the same results as a cap-and-trade program (in terms of GDP), but this sort of analysis left out two of the most important benefits of a cap-and-trade approach: its ability to handle uncertainty and its flexibility in handling heterogeneous costs among sectors.

Modeling Policy Interactions

Bob Shackleton pointed out that complicated interactions will inevitably occur between climate policies and policies developed to address a variety of other issues. This reality signals a need to analyze complementary measures, especially in the transportation sector (e.g., a low-carbon fuel standard). He urged more attention to understanding how policies interact, because the reality is that a suite of policies and approaches will be utilized. He noted that some analysts had begun to consider interactions among price floors, price ceilings, and banking/borrowing of credits (e.g., Murray et al., 2008), but that more work needs to be done in this area. He also reminded participants that policies would not be static and would be revised in light of new information, further underscoring the importance of analyzing their interactions.

Participants identified several gaps between what policymakers were asking and what analysts were considering with regard to policy interactions. Nat Keohane categorized these as policy gaps and methodological gaps. Policy gaps include complementary measures (e.g., a low-carbon fuel standard), trade measures, and revenue recycling.
Methodological gaps include price volatility or cost containment, energy efficiency measures as captured in models, and baselines for renewables, especially in the electricity sector. Several other participants suggested that price volatility is an important concern among policymakers, but it is difficult to capture since models tend to predict a particular price path, which may not be realistic. There was also discussion of how energy efficiency is represented in models. Economists are not necessarily as optimistic as engineers about energy efficiency opportunities (e.g., Paul et al., 2008), and Howard Gruenspecht pointed out that many efficiency improvements are assumed in the models, thus there is a danger of double counting. This factor must be teased out further so as to provide insights to policymakers about how policy can most effectively help drive improvements in energy efficiency.

Tim Profeta raised the issue of state programs that are already underway ahead of federal government action; these will inevitably interact with, and will not necessarily be preempted by, a federal program. Therefore, he urged more consideration of how state programs would interact with a uniform national effort: what are the potential efficiencies or inefficiencies of a patchwork approach? He also pointed out, as did subsequent participants, that transportation is being handled differently than other sectors. In some analyses, costs to electric utilities increase if transportation is included in a cap-and-trade program. Therefore, more work is needed to understand the impacts of alternative mechanisms to handle transportation, or to model the effects of incorporating the transportation sector at a later date. Transportation may also be subject to complementary measures such as a vehicle miles traveled (VMT) reduction program, and so analyses must consider potential complementarities.

Trade was another issue brought up by several participants. On the question of energy-intensive imports, Bob Shackleton noted that accounting for these will be one of the most important and persistent questions to be addressed. Francisco de la Chesnaye urged more forward thinking on international linkages, such as how a U.S. domestic system might link to the international community, especially if a European program would not accept a safety valve (i.e., a price ceiling) for a cap-and-trade program. Tom Kram also noted that his colleagues at the Netherlands Environmental Assessment Agency recently released a study examining effects of border trade; this study assumes that the EU would proceed ambitiously with mitigation programs while the rest of the world lags (den Elzen et al., 2008).

Impacts: Distribution and Equity

Panelists and participants emphasized that policymakers are more interested in relative outcomes than in absolute outcomes in terms of GDP.
Regional impacts are often more important, from a policymaker perspective, than an aggregated impact, and globally, countries might be more concerned with impacts vis-à-vis a neighbor or competitor and thus with relative rather than absolute gains. Bob Shackleton pointed out that this sort of information may not be as relevant to estimating temperature outcomes but is vital to getting policies in place. He also noted that there is an interest in better calculations of benefits, including damages averted—policymakers want a realistic understanding of what the likely outcomes are of various policies, or what the relative outcomes are. This is critical to mitigation policy development, policy stringency, and timing.

Understanding the distribution of costs is vital to policymakers. Shackleton pointed out that policymakers are interested in more detailed analyses on impacts by sector and industry, impacts at a state level, changes in the job market, impacts at various income levels, and international material and trade flows—in short, policymakers care more about equity and distribution than about efficiency. Policymakers are particularly concerned with burdens that might be imposed on low-income households, and how existing policy frameworks might be utilized to offset some of these impacts.

Nat Keohane agreed that distribution and equity are primary concerns because impacts are regional as much as they are partisan. He noted, as did many participants, that not enough models can provide estimates of regional impacts. He further pointed out that equilibrium models are often based on full employment, but this assumption is not realistic, and policymakers are very concerned about impacts on jobs. Keohane also questioned how well models were able to identify “winners and losers” among industries. He stated that information on allocations and industrial performance is likely to need updating.
Dimitri Zenghelis agreed that policymakers would like clarity about the amount of risk faced by energy-intensive industries in particular, and noted that a recent report by the World Resources Institute and the Peterson Institute for International Economics (Houser et al., 2008) suggests that these risks might be exaggerated in the absence of firm analysis.