Thinking Strategically: The Appropriate Use of Metrics for the Climate Change Science Program

1 Introduction

The Climate Change Science Program (CCSP) and its predecessor, the U.S. Global Change Research Program (USGCRP), have sponsored climate research and observations for nearly 15 years. Significant scientific discoveries and beneficial applications have resulted from these programs, but their overall progress has not been measured systematically. Metrics—simple qualitative or quantitative measures of performance with respect to a stated goal—offer a tool for gauging such progress, improving program performance, and demonstrating program successes to Congress, the Office of Management and Budget (OMB), and the public.

Metrics have long been used by industry to gauge the progress of research and development programs and to guide strategic planning. More recently, they have been used by universities to help make decisions on hiring and promoting faculty, and by federal agencies to improve program performance and to increase public accountability. The latter was largely motivated by the Government Performance and Results Act (GPRA) of 1993, which required federal agencies to set strategic goals and to measure program performance against those goals.1

The GPRA does not apply to multiagency programs such as the CCSP or the USGCRP. However, the same motivating factors exist. CCSP agencies are striving (1) to demonstrate progress in climate change science, (2) to assess the current effectiveness of the program, and (3) to improve overall

1 Public Law 103-62.
program performance.2 Such an evaluation is needed to justify continued taxpayer support, especially in an era of declining budgets.

Studies in industry, academia, and government suggest that metrics can be developed to document progress from past research programs and to evaluate future research performance.3 The challenge is to create meaningful and effective metrics that accomplish the following:

• Convey an accurate view of scientific progress. A metric commonly used to evaluate advances in climate models, for example, is reduction of uncertainty in a projection or forecast.4 However, progress in scientific and technical understanding can both increase and decrease uncertainty estimates.

• Strike a balance among high-risk research, long-term gain, and success in specific applications that are more easily measured.

• Accommodate the long time scales necessary for achieving results in basic research.

The following additional challenges are specific to the CCSP:

• Develop a methodology for creating metrics that can be applied to the entire CCSP. This is especially challenging because of the scope and diversity of the program. Thirteen agencies participate in the program, which encompasses a wide range of natural and social science disciplines, each of which has different approaches to and results from research, and activities ranging from observations, to basic research, to assessments and decision support (Box 1.1).

• Collect consistent data that can be used to assess and manage programs at the interagency level.

2 Presentation to the committee by J. Mahoney, CCSP director, on December 17, 2003.
3 Army Research Laboratory, 1996, Applying the Principles of the Government Performance and Results Act to the Research and Development Function: A Case Study Submitted to the Office of Management and Budget, 27 pp., <http://govinfo.library.unt.edu/npr/library/studies/casearla.pdf>; National Science and Technology Council, 1996, Assessing Fundamental Science, <http://www.nsf.gov/sbe/srs/ostp/assess/start.htm>; General Accounting Office, 1997, Measuring Performance: Strengths and Limitations of Research Indicators, GAO/RCED-97-91, Washington, D.C., 34 pp.; National Academy of Engineering and National Research Council, 1999, Industrial Environmental Performance Metrics: Challenges and Opportunities, National Academy Press, Washington, D.C., 252 pp.; National Research Council, 1999, Evaluating Federal Research Programs: Research and the Government Performance and Results Act, National Academy Press, Washington, D.C., 80 pp.; National Research Council, 2001, Implementing the Government Performance and Results Act for Research: A Status Report, National Academy Press, Washington, D.C., 190 pp.

4 Presentations to the committee by J. Kaye, National Aeronautics and Space Administration, on December 17, 2003, and J. Rothenberg, OMB, on March 4, 2004.
Box 1.1
CCSP Strategic Plan

The CCSP strategic plan represents an attempt to integrate the science requirements of the USGCRP with the requirements laid out in the 2001 Climate Change Research Initiative to reduce uncertainty, improve observing systems, develop science-based resources to support policy making and resource management, and communicate findings to the broader community. The plan identifies five overarching goals that orient the research programs of 13 participating federal agencies around understanding climate change and managing its risks:

1. Improve knowledge of the Earth's past and present climate and environment, including its natural variability, and improve understanding of the causes of observed variability and change.

2. Improve quantification of the forces bringing about changes in the Earth's climate and related systems.

3. Reduce uncertainty in projections of how the Earth's climate and related systems may change in the future.

4. Understand the sensitivity and adaptability of different natural and managed ecosystems and human systems to climate and related global changes.

5. Explore the uses and identify the limits of evolving knowledge to manage risks and opportunities related to climate variability and change.

The plan also identifies four core approaches for working toward these goals:

1. Plan, sponsor, and conduct research on changes in climate and related systems.

2. Enhance observations and data management systems to generate a comprehensive set of variables needed for climate-related research.

3. Develop improved science-based resources to aid decision making.

4. Communicate results to domestic and international scientific and stakeholder communities, stressing openness and transparency.
Finally, the plan describes research needs in seven areas—atmospheric composition, climate variability and change, water cycle, land-use and land-cover change, carbon cycle, ecosystems, and human contributions and responses to environmental change—and specifies more than 200 milestones, products, and payoffs to be produced in these research areas within two to four years.

SOURCE: Climate Change Science Program and Subcommittee on Global Change Research, 2003, Strategic Plan for the U.S. Climate Change Science Program, Washington, D.C., 202 pp.
COMMITTEE CHARGE AND APPROACH

Given the challenges described above, how can progress in global change research be measured? At the request of James Mahoney, director of the Climate Change Science Program and chair of the Subcommittee on Global Change Research, the National Research Council convened an ad hoc committee to explore the issues and to recommend a methodology that agencies can use to demonstrate progress from past global change research investments and to institute meaningful and effective metrics for the future.5 The committee was asked to avoid recommending changes to the CCSP strategic plan. The specific charge to the committee is given in Box 1.2.

The committee approached its charge first by examining what could be learned from previous efforts to develop metrics in federal government agencies, industry, and academia. Information was gathered from a literature review and from briefings by agency program managers, climate change scientists, science historians, and policy experts. Based on this information, the committee identified principles for developing metrics for the CCSP. Special attention was given to issues such as peer review and reduction of uncertainty, which figure prominently in the metrics of each of these sectors as well as in the CCSP strategic plan.

Next, the committee chose case studies drawn from different parts of the CCSP strategic plan. The case studies ranged from collecting the data needed to better understand solar forcing of climate to improving adaptive management of water resources. For each case study, the committee developed example metrics and assessed the difficulty of applying them to other parts of the program. This exercise led to the development of a general set of metrics that could be used for the CCSP.

METRICS AND PERFORMANCE MEASURES

Metrics and performance measures gauge progress with respect to a stated goal.
Therefore, they address the question: Is there demonstrable advancement toward a goal? Metrics and performance measures tend to be simple, focusing on a number, a score, or a yes-or-no answer, but they can also integrate several different measures.6 Because the results of science and technology are both tangible and intangible, the associated metrics and performance measures may be quantitative or qualitative.

5 Presentation to the committee by J. Mahoney, Climate Change Science Program, on December 17, 2003.

6 Werner, B.M., and W.E. Souder, 1997, Measuring R&D performance—State of the art, Research Technology Management, March-April, 34–42.

Box 1.2
Committee Charge

Using the objectives of climate change and global change research as articulated in the CCSP strategic plan, the committee will develop quantitative metrics for documenting progress and evaluating future performance for selected areas of global change and climate change research. In particular, the study will

• Provide a general assessment of how well CCSP objectives lend themselves to quantitative metrics.

• Identify three to five areas of climate change and global change research that can and should be evaluated through quantitative performance measures.

• For these areas, recommend specific metrics for documenting progress, measuring future performance (such as skill scores, correspondence across models, correspondence with observations), and communicating levels of performance.

• Discuss possible limitations of quantitative performance measures for other areas of climate change and global change research.

In developing its recommendations, the committee will attempt to develop processes that can be applied in both the short term (e.g., two to four years) and the longer term, and will strive to avoid possible unintended consequences of performance measurement (e.g., unbalanced research portfolios, reduced innovation). The committee will not itself apply its proposed methodology to evaluate agency research efforts, although it may include in its report a few examples of how its recommended methods could be implemented.

The distinction between quantitative and qualitative is not always sharp, but in general, quantitative outputs (e.g., number of patents or new products) can be evaluated by direct measurement, whereas qualitative outputs (e.g., contributions to the pool of innovation, capabilities and skills of the scientific staff) require judgment to evaluate.
Such judgments are subjective and lend themselves to scoring and, hence, to some manipulation of quantities.7

7 Geisler, E., 2000, The Metrics of Science and Technology, Quorum Books, Westport, Conn., 380 pp.

In this report the term "metrics" is used for what some call "performance measures." As used by government agencies, performance measures include indicators and statistics that are used to assess progress toward pre-established goals. They tend to focus on "regularly collected data on the level and type of program activities, the direct products and services delivered by the program, and the results of those activities."8 Because the results of scientific research are not easily defined in terms of performance, and because a metric implies some ability to be quantitative, "metric" seems the more apt term for use among scientists and by managers evaluating scientific programs. A metric is a "system of measurement that includes the item being measured, the unit of measurement, and the value of the unit."9 Examples of the application of this definition to a quantitative metric (citation analysis) and a qualitative metric (peer review) are given in Table 1.1.

TABLE 1.1 Example Definitions of Quantitative and Qualitative Metrics

Metric             Item Being Measured   Unit of Measurement   Inherent Value
Citation analysis  Scientific output     Citation counts       Impact of the work on the scientific community
Peer review        Scientific outcomes   Subjective analysis   Performance of scientists

Different types of metrics are used throughout industry, academia, and government. For example, OMB differentiates between long-term and annual measures and subdivides these categories into outcome and efficiency measures.10 Academia relies on bibliometrics, which are published outputs such as the number of journal articles or citations. This report focuses on five types of metrics—process, input, output, outcome, and impact—which are defined in Box 1.3.

ORGANIZATION OF THE REPORT

The purpose of this report is to provide a starting point for measuring progress of the CCSP and, by extension, its predecessor the USGCRP. Chapter 2 describes different approaches that industry, academia, and federal agencies have taken to measure research performance. A more complete discussion of the federal laws and policies driving government efforts to measure performance is given in Appendix A.
Chapter 3 lays out principles for developing metrics, based on the experience of industry, academia, and federal agencies. Chapter 4 focuses on the metric most commonly used to measure progress in climate science: uncertainty reduction. Chapter 5 describes the process by which the committee developed metrics and summarizes conclusions from developing metrics in case studies that appear here and in Appendix B. A set of general metrics for assessing the progress of CCSP program elements and for guiding future strategic planning is proposed and tested in Chapter 6. Additional metrics developed elsewhere for science and technology programs in general are presented in Appendix C. Finally, Chapter 7 presents answers to the questions in the committee's charge and discusses implementation issues.

8 General Accounting Office, 2003, Results-Oriented Government: Using GPRA to Address 21st Century Challenges, GAO-03-1166T, Washington, D.C., p. 9.

9 Geisler, E., 2000, The Metrics of Science and Technology, Quorum Books, Westport, Conn., pp. 74–75.

10 Process and output measures are also allowed in some cases. See Office of Management and Budget, 2005, Guidance for Completing the Program Assessment Rating Tool (PART), pp. 9–10, <http://www.whitehouse.gov/omb/part/fy2005/2005_guidance.doc>.

Box 1.3
Categories of Metrics Used in This Report

Metrics can be devised to evaluate the overall process for reaching a goal, or any stage or result of that process (input, output, outcome, impact). Definitions of these categories and example metrics related to the discovery of the Antarctic ozone hole are given below.

• Process—a course of action taken to achieve a goal. Example metrics include the existence of a project champion and the length of time between starting the research and delivering an assessment on stratospheric ozone depletion to policy makers.

• Input—tangible quantities put into a process to achieve a goal. An example input metric is expenditures for (a) theoretical and laboratory studies on ozone production and destruction, (b) development and deployment of sensors to sample the stratosphere, (c) modeling and analysis of data, or (d) meetings and publications.

• Output—products and services delivered. Examples of output metrics include the number of models that take into account new findings on chlorofluorocarbon chemistry and the number of publications and news reports on the cause of stratospheric ozone depletion and its possible consequences.

• Outcome—results that stem from use of the outputs. Unlike output measures, outcomes refer to an event or condition that is external to the program and is of direct importance to the intended beneficiaries (e.g., scientists, agency managers, policy makers, other stakeholders). Examples of outcome metrics are the number of alternative refrigerants introduced to society to reduce the loss of stratospheric ozone and scientific outputs integrated into a new understanding of the causes of the Antarctic ozone hole.

• Impact—the effect that an outcome has on something else. Impact metrics are outcomes that focus on long-term societal, economic, or environmental consequences. Examples of impact metrics include the recovery of stratospheric ozone resulting from implementation of the Montreal Protocol and related policies and the increase in public understanding of the causes and consequences of ozone loss.
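For readers who track program metrics in software, the definition in Table 1.1 and the five categories in Box 1.3 map naturally onto a small record type. The sketch below is purely illustrative: the class names, fields, and the category assignments for the two Table 1.1 examples are assumptions of this sketch, not part of the report.

```python
from dataclasses import dataclass
from enum import Enum


class Category(Enum):
    """The five categories of metrics defined in Box 1.3."""
    PROCESS = "process"
    INPUT = "input"
    OUTPUT = "output"
    OUTCOME = "outcome"
    IMPACT = "impact"


@dataclass
class Metric:
    """One row of Table 1.1: a metric names the item being measured,
    the unit of measurement, and the inherent value of the unit."""
    name: str
    item_measured: str
    unit: str
    inherent_value: str
    # Illustrative only; the report does not assign Box 1.3 categories
    # to the Table 1.1 examples.
    category: Category
    quantitative: bool


# The two examples from Table 1.1 expressed as records.
citation_analysis = Metric(
    name="Citation analysis",
    item_measured="Scientific output",
    unit="Citation counts",
    inherent_value="Impact of the work on the scientific community",
    category=Category.OUTPUT,  # a guess: counts of published outputs
    quantitative=True,
)

peer_review = Metric(
    name="Peer review",
    item_measured="Scientific outcomes",
    unit="Subjective analysis",
    inherent_value="Performance of scientists",
    category=Category.OUTCOME,  # a guess: judgments about results of the work
    quantitative=False,
)
```

One design point this structure makes explicit is the report's distinction between what is measured and how it is valued: the same item (scientific output) can carry different units and different inherent values depending on the metric chosen.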