The National Academies of Sciences, Engineering, and Medicine
500 Fifth St. N.W. | Washington, D.C. 20001

Copyright © National Academy of Sciences. All rights reserved.


Summary

The federal government has long sought effective tools to evaluate the performance and results of publicly funded programs, including research and development (R&D) programs, to ensure the wise use of taxpayers' money. To that end, Congress passed the Government Performance and Results Act in 1993, and the Office of Management and Budget (OMB) designed the Program Assessment Rating Tool (PART) in 2002.

Evaluation of R&D programs has proved challenging for federal agencies. In particular, they have experienced difficulties in complying with the PART requirements to measure the efficiency of their research, to use outcome-based metrics in doing so, and to achieve annual efficiency improvements. In 2006, the U.S. Environmental Protection Agency (EPA) asked the National Academies for independent assistance in developing better assessment tools to comply with PART. The Academies' Committee on Science, Engineering, and Public Policy (COSEPUP) and the National Research Council (NRC) Board on Environmental Studies and Toxicology (BEST) oversaw the appointment of the Committee on Evaluating the Efficiency of Research and Development Programs at the U.S. Environmental Protection Agency and charged it to answer the following questions:

• What efficiency measures are currently used for EPA R&D programs and other federally funded R&D programs?
• Are these efficiency measures sufficient? Are they outcome-based?
• What principles should guide the development of efficiency measures for federally funded R&D programs?
• What efficiency measures should be used for EPA's basic and applied R&D programs?

Through a series of information-gathering steps, including discussions with OMB and EPA and a public workshop attended by representatives of research-intensive agencies[1] and industries, the committee evaluated how EPA and other agencies were attempting to comply with PART. The committee focused its deliberations on several fundamental issues posed by the charge questions, including

1. How—and why—should research be evaluated in terms of efficiency?
2. What is a "sufficient" measure of efficiency?
3. What measures of efficiency are "outcome-based," and should they be?

In its discussion the committee uses the terms inputs, outputs, and outcomes as defined by OMB, except as modified and discussed below:

• Inputs are agency resources—such as funding, facilities, and human capital—that support research.
• Outputs are activities or accomplishments delivered by research programs, such as research findings, papers published, exposure methods developed and validated, and research facilities built or upgraded.
• Outcomes are the benefits resulting from a research program. They can be short-term, such as an improved body of knowledge or a comprehensive science assessment, or long-term, such as lives saved or enhancement of air quality; they may be based on research activities or informed by research but require additional activities by many others. The committee distinguishes these two types of outcomes using the terms intermediate outcomes and ultimate (or end) outcomes.[2]

QUESTION 1

With respect to the question, "How—and why—should research be evaluated in terms of efficiency?", the committee suggests that some of the frustration expressed by federal research-intensive agencies in complying with PART derives from confusion over the concept of "efficiency." From its review of the OMB PART guidance and the efficiency measures used by EPA and other federal agencies, the committee concludes that two conceptually different kinds of efficiency are integral to the execution and evaluation of R&D programs. The committee distinguished between investment efficiency and process efficiency. Investment efficiency focuses on portfolio management, including the need to identify the most promising lines of research for achieving desired outcomes.

[1] The term research-intensive is used to describe agencies for which research is an essential even if not necessarily dominant aspect of the mission. For example, research is important at EPA but is not its primary function, as it is for the National Institutes of Health and the National Science Foundation.
[2] The committee acknowledges that the NRC Committee for the Review of NIOSH Research Program has used the term end outcomes.

It is best evaluated by assessing the program's research activities, from planning to funding to midcourse adjustments, in the framework of its strategic planning architecture. Investment efficiency concerns three questions: Are the right investments being made? Is the research being performed at a high level of quality? Are timely and effective adjustments made in the multi-year course of the work to reflect new scientific information, new methods, and altered priorities? Because these questions cannot be addressed quantitatively, they require judgment based on experience and should be addressed through expert review.

Process efficiency involves inputs and outputs. Its evaluation asks how well research processes are managed. It monitors activities, such as publications, grants reviewed and awarded, and laboratory analyses conducted, whose results can be anticipated and tracked quantitatively against established benchmarks in units such as dollars and hours. Examples include the time required to conduct site assessments, the average cost per measurement or analysis, and the percentage of external grants evaluated by peer review within a given period.

Although both kinds of efficiency are addressed in concept in the PART questions, only the questions regarding process efficiency are labeled by the PART guidance as efficiency per se. Operationally, though, OMB seeks to address these process-efficiency questions using measures of outcomes.

QUESTION 2

In exploring the question, "What is a 'sufficient' measure of efficiency?", the committee assembled a list of relevant issues and examined a number of metrics proposed or used by federal agencies. It found that none of those metrics was capable of evaluating investment efficiency and that many of the ones appropriate for evaluating process efficiency were not sufficient.
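Process-efficiency metrics of the kind just described (average cost per analysis, review turnaround against a benchmark) lend themselves to simple quantitative tracking. A minimal sketch in Python; the data values and the 45-day benchmark are invented for illustration and are not drawn from the report:

```python
from statistics import mean

# Hypothetical process-efficiency records (illustrative values only).
analysis_costs = [1200.0, 950.0, 1100.0]   # dollars per laboratory analysis
review_days = [38, 45, 52, 60, 41]         # days to complete each grant peer review

# Benchmark metrics of the kind an agency might report under PART.
avg_cost_per_analysis = mean(analysis_costs)
target_days = 45                           # assumed benchmark, not from the report
pct_reviews_on_time = 100 * sum(d <= target_days for d in review_days) / len(review_days)

print(f"Average cost per analysis: ${avg_cost_per_analysis:,.2f}")
print(f"Reviews completed within {target_days} days: {pct_reviews_on_time:.0f}%")
```

Tracking such figures period over period against an established benchmark is what distinguishes a process-efficiency metric from a simple activity count.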
Many of the process-efficiency metrics proposed by agencies other than EPA have been accepted by OMB, but several similar metrics proposed by EPA have not been. Metrics that have typically been proposed or used by federal agencies address only a small piece of a research program, and none attempts a comprehensive program evaluation.

QUESTION 3

In addressing the question, "What measures of efficiency are 'outcome-based,' and should they be?", the committee distinguished "ultimate outcomes," such as lives saved or clean air, from "intermediate outcomes," such as timely submission of comprehensive science assessments for scheduled regulatory reviews. While intermediate outcomes can be useful metrics, the committee found that ultimate-outcome-based metrics cannot be used to evaluate the efficiency of research, for three reasons:

• Ultimate outcomes usually cannot be predicted or known in advance.
• Ultimate outcomes may occur long after research is completed.
• Ultimate outcomes usually depend on actions taken by others.

The PART guidance urges agencies to develop outcome-based efficiency metrics,[3] even though the PART questions do not specifically refer to such metrics and no agency has been able to develop them for research programs. PART also requires that assessments be made annually. That is difficult for research managers whose long-term projects may show results only after several years of work.

FINDINGS

The committee identified the following findings:

1. The key to research efficiency is good planning and implementation. EPA and its Office of Research and Development (ORD) have a sound strategic planning architecture that provides a multi-year basis for the annual assessment of progress and milestones for evaluating research programs, including their efficiency.
2. All the metrics examined by the committee that have been proposed by or accepted by OMB to evaluate the efficiency of federal research programs have been based on the inputs and outputs of research-management processes, not on their outcomes.
3. Ultimate-outcome-based efficiency metrics are neither achievable nor valid for this purpose.
4. EPA's difficulties in complying with the PART questions about efficiency (questions 3.4 and 4.3[4]) have grown out of inappropriate OMB requirements for outcome-based efficiency metrics.
5. An "ineffective"[5] PART rating of a research program can have serious adverse consequences for the program or the agency.

[3] For example, the PART guidance (p. 10) states, "Outcome efficiency measures are generally considered the best type of efficiency measure for assessing the program overall."
[4] Question 3.4: "Does the program have procedures (e.g., competitive sourcing/cost comparisons, IT improvements, appropriate incentives) to measure and achieve efficiencies and cost effectiveness in program execution?" Question 4.3: "Does the program demonstrate improved efficiencies or cost effectiveness in achieving program goals each year?"
[5] The OMB PART Web site states that "programs receiving the Ineffective rating are not using tax dollars effectively. Ineffective programs have been unable to achieve results due to a lack of clarity regarding the program's purpose or goals, poor management, or some other significant weakness. Ineffective programs are categorized as Not Performing."

6. Among the metrics proposed to measure process efficiency, several can be recommended for wider use by agencies.
7. The most effective mechanism for evaluating the investment efficiency of R&D programs is an expert-review panel, as recommended in earlier COSEPUP and BEST reports. Expert-review panels are much broader than scientific peer-review panels.

RECOMMENDATIONS

Recommendation 1

To comply with questions 3.4 and 4.3 of PART, EPA and other agencies should apply quantitative efficiency metrics only to the process efficiency of research programs. Process efficiency can be measured in terms of inputs, outputs, and some intermediate outcomes; it does not require ultimate outcomes.

For compliance with PART, evaluation of the efficiency of a research program should not be based on ultimate outcomes, for the reasons listed above under Question 3. Although the PART guidance encourages the use of outcome-based metrics, it also describes the difficulty of applying them. As stated earlier, the committee has concluded that, for most research programs, ultimate-outcome-based efficiency measures are neither achievable nor valid.

Given the inability to evaluate the efficiency of research on the basis of ultimate outcomes, the committee recommends that OMB and other oversight bodies focus on evaluating the process efficiency of research, including core research or basic research—how program managers exercise skill and prudence in using and conserving resources. For evaluating process efficiency, quantitative methods can be used by expert-review panels and others to track and review the use of resources in light of goals embedded in strategic and multi-year plans.
Earned Value Management (EVM) is a quantitative tool that can track aspects of research programs against milestones.[6]

Moreover, to facilitate the evaluation process, the committee recommends modifying OMB's framework of results to include the category of intermediate outcomes, as distinguished from ultimate outcomes. Intermediate outcomes include such results as an improved body of knowledge available for decision-making, comprehensive science assessments, and the dissemination of newly developed tools and models. Those results, which might be visualized as intermediate between outputs and ultimate outcomes, might enhance the evaluation process by adding individually trackable items and a larger body of knowledge for decision-making.

[6] EVM measures the degree to which research outputs conform to scheduled costs along a timeline. It is used by agencies and other organizations in many management settings, such as construction projects and facilities operations, in which the outcome (such as a new laboratory or optimal use of facilities) is well known in advance and progress can be plotted against milestones.

Recommendation 2

EPA and other agencies should use expert-review panels to evaluate the investment efficiency of research programs. The process should begin by evaluating the relevance, quality, and performance[7] of the research.

Investment efficiency is used in this report to indicate whether an agency is "doing the right research and doing it well." The term is meant as a gauge of portfolio management, measuring whether a program manager is investing in research that is relevant to the agency's mission and long-term plans and is being performed at a high level of quality. Evaluating quality and relevance requires expert judgment based on experience; no quantitative measures can fully capture these key items. The best mechanism for measuring investment efficiency is the expert-review panel. Investment efficiency may also include studies that guide the next set of research projects or the stepwise development of analytic tools or other products.

EPA should continue to obtain primary input for PART compliance through expert review, under the aegis of its Board of Scientific Counselors or its Science Advisory Board. Expert review provides an independent forum for the evaluation of research and complements the efforts of program managers in reviewing research activities and judging them against multi-year plans and anticipated outcomes. The expert-review panel can use intermediate outcomes to focus on key steps in the progress of any research program and to fill gaps in the spectrum of research results between outputs and ultimate outcomes. The panel's review of quality, relevance, and performance will include judgments on process efficiency and investment efficiency that should be appropriate and sufficient for the annual PART process.
The qualitative emphasis of expert review should not diminish the importance of quantitative metrics, which expert-review panels should use whenever possible to evaluate the efficiency of research processes. Examples of such processes are administration, construction, grant administration, and facility operation, in which many activities can be measured quantitatively and linked to milestones. Process efficiency should be evaluated in the context of the expert review, but only after the relevance, quality, and effectiveness of a research program have been evaluated.

[7] Performance is described in terms of both effectiveness, meaning the ability to achieve useful results, and efficiency, meaning the ability to achieve research quality, relevance, and effectiveness with little waste.
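The report describes EVM as measuring how well outputs conform to scheduled costs along a timeline. The standard EVM indicators can be sketched as follows; the function name and dollar figures are invented for illustration and are not drawn from the report:

```python
def evm_indicators(pv: float, ev: float, ac: float) -> dict:
    """Standard EVM comparisons of planned value (PV), earned value (EV),
    and actual cost (AC) at a status date."""
    return {
        "schedule_variance": ev - pv,  # negative: behind schedule
        "cost_variance": ev - ac,      # negative: over budget
        "spi": ev / pv,                # schedule performance index (<1: behind)
        "cpi": ev / ac,                # cost performance index (<1: over budget)
    }

# Hypothetical example: $400k of milestone work planned to date, $360k of it
# actually completed (earned), at an actual cost of $450k.
ind = evm_indicators(pv=400_000, ev=360_000, ac=450_000)
print(ind["spi"], ind["cpi"])  # 0.9 0.8: behind schedule and over budget
```

As the footnote above notes, such indicators are meaningful only where the planned outcome is well known in advance and progress can be plotted against milestones, which is why EVM suits construction and facility operation better than basic research.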

Recommendation 3

The efficiency of research programs at EPA should be evaluated according to the same overall standards used at other agencies.

Some of the metrics proposed by EPA to comply with questions 3.4 and 4.3 of PART, such as the number of publications per full-time equivalent, have been rejected by OMB but accepted when proposed by other agencies. OMB has encouraged EPA to apply the common technique of earned value management (EVM), yet no other agency has used EVM to measure basic research.

In the committee's view, some agencies have addressed the PART questions with approaches that are often not aligned with their long-term strategies or missions. Many of the approaches refer only to individual portions of programs, quantify activities that are not research activities, or review processes that are not central to an agency's R&D programs. In short, many federal agencies have addressed the relevant PART questions with responses that are not, in the wording of the charge, "sufficient."

The committee calls on EPA and other agencies to address PART through the consistent government-wide standards and practices addressed in its recommendations above.

ADDITIONAL RECOMMENDATION FOR THE OFFICE OF MANAGEMENT AND BUDGET

OMB should have oversight and training programs for budget examiners to ensure consistent and equitable implementation of PART in the many agencies that have substantial R&D programs.

Evaluating different agencies by different standards is undesirable because results are not comparable and ratings may not be equitable. OMB budget examiners bear primary responsibility for working with agencies on PART compliance and for interpreting PART questions for them. Although the examiners cannot be expected to bring scientific expertise to their discussions with program managers, they should bring an understanding of the research process as it is performed in the context of federal agencies.
OMB decisions about whether to accept or reject metrics for evaluating the efficiency of research programs have been inconsistent. Rejecting a given metric when proposed by one agency and accepting it when proposed by another can unfairly damage the reputation of the first agency and diminish the credibility of the evaluation process itself. Because the framework of PART is virtually the same for all agencies and because the principles of scientific inquiry do not vary among disciplines, the implementation of PART should be both consistent and equitable across all federal research programs.

GENERAL CONCLUSIONS

The committee concluded that at the time of this study no agency had found a method of evaluating the efficiency of research based on the ultimate outcomes of that research. Most of the methods proposed by agencies to measure efficiency addressed only particular aspects of research processes, not the research itself. In the committee's terminology, this means that agencies are focusing on process efficiency, not on investment efficiency.

The committee also concluded that sound evaluation of research should not overemphasize efficiency, as reflected in the charge questions. The primary goal of research is knowledge, and the development of new knowledge depends on so many conditions that its efficiency must be evaluated in the context of quality, relevance, and effectiveness in addressing current priorities and anticipating future R&D questions. The criterion of relevance, including the timely application of outputs from R&D in ORD and in certain program offices to the regulatory process, is particularly important at an agency like EPA.