Evaluating Research Efficiency in the U.S. Environmental Protection Agency

5
Findings, Principles, and Recommendations

In this chapter, the committee draws together its preceding discussions in the form of findings, principles, and recommendations. The findings constitute a brief summary of major points discussed in Chapters 1-4. The principles are intended for use by both the Environmental Protection Agency (EPA) and other research-intensive federal agencies. The recommendations are intended specifically for EPA, although other agencies, including the Office of Management and Budget (OMB), may find them useful.1

To introduce this chapter, it is useful to begin with the two central issues on which the committee focused many of its discussions. The first is the emphasis on efficiency in Program Assessment Rating Tool (PART) reviews. In the planning, execution, and review of research programs, efficiency should normally be subordinate to the criteria of relevance, quality, and effectiveness, for reasons explained in Chapter 3. However, all federal programs should use efficient spending practices, and the committee suggests which aspects of efficiency can be measured in research programs and how that might best be done.

Two kinds of efficiency should be differentiated. The first, process efficiency, uses primarily quantitative metrics to evaluate management processes whose results are known, for which benchmarks can be defined, and whose progress can be measured against milestones. The second, investment efficiency, measures how well a program's resources have been invested and how well they are being managed. Evaluating investment efficiency involves qualitative measures, primarily the judgment and experience of expert-review panels, and may also draw on quantitative data. Investment efficiency is the responsibility of the portfolio manager, who identifies the most promising lines of research for achieving desired outcomes.
1 It should be emphasized again that these recommendations apply only to R&D programs, not to the much broader universe of federal programs to which PART is applied.
The second central issue is the charge question of whether the metrics used by federal agencies to measure the efficiency of research are "sufficient" and "outcome-based." In approaching sufficiency, the committee gathered examples of methods used by agencies and organized them in nine categories. It found that most of the methods were insufficient for evaluating programs' process efficiency, either because they addressed only a portion of a program or because they addressed issues other than research, and that all were insufficient for evaluating investment efficiency because they did not include the use of expert review. In responding to the question of whether the metrics used are outcome-based, the committee determined that ultimate-outcome-based evaluations of the efficiency of research are neither achievable nor valid. The issue is discussed in Chapter 3.

Those two basic conclusions constitute the background of the major findings of this report. Findings 2, 4, 5, 6, and 7 are linked to specific charge questions, as indicated; findings 1 and 3 are more general.

FINDINGS

The key to research efficiency is good planning and implementation. EPA and its Office of Research and Development (ORD) have a sound strategic planning architecture that provides a multi-year basis for the annual assessment of progress and milestones for evaluating research programs, including their efficiency.

All the metrics examined by the committee that have been proposed by or accepted by OMB to evaluate the efficiency of federal research programs have been based on the inputs and outputs of research-management processes, not on their outcomes. Ultimate-outcome-based efficiency metrics are neither achievable nor valid for this purpose.

EPA's difficulties in complying with PART questions about efficiency (questions 3.4 and 4.3)2 have grown out of inappropriate OMB requirements for outcome-based efficiency metrics.
An "ineffective" (OMB 2007a)3 PART rating of a research program can have serious adverse consequences for the program or the agency.

2 Question 3.4 is "Does the program have procedures (e.g. competitive sourcing/cost comparisons, IT improvements, appropriate incentives) to measure and achieve efficiencies and cost effectiveness in program execution?" Question 4.3 is "Does the program demonstrate improved efficiencies or cost effectiveness in achieving program goals each year?"

3 OMB (2007a) states that "programs receiving the Ineffective rating are not using tax dollars effectively. Ineffective programs have been unable to achieve results due to a lack of clarity regarding the program's purpose or goals, poor management, or some other significant weakness. Ineffective programs are categorized as Not Performing."
Among the metrics proposed to measure process efficiency, several can be recommended for wider use by agencies (see recommendation 1). The most effective mechanism for evaluating the investment efficiency of R&D programs is an expert-review panel, as recommended in earlier reports of the Committee on Science, Engineering, and Public Policy and the Board on Environmental Studies and Toxicology. Expert-review panels are much broader than scientific peer-review panels.

PRINCIPLES

The foregoing findings led to a series of principles that the committee used to address the overall process of evaluating research programs in the context of agency long-term plans and missions. A central thesis of this report is that these evaluation principles can and should be applied to all federally supported research programs and can also be applied to research in other contexts. The committee hopes that these principles will be adopted by EPA and other research-intensive agencies in assessing their R&D programs.

Principle 1

Research programs supported by the federal government should be evaluated regularly to ensure the wise use of taxpayers' money.

The purpose of OMB's PART is to ensure that the government is spending taxpayers' money wisely. This committee's recommendations are designed to further that aim. More broadly, the committee agrees that the research programs of federal agencies should be evaluated regularly, as are other programs of the federal government. During the evaluations, efforts should be made to evaluate the efficiency of the research programs of agencies. The development of tools for doing that is still in an early stage, and agencies continue to negotiate their practices internally and with OMB.
EPA's multi-year plans, which provide an agency-wide structure to review progress and to revise annually, constitute a useful framework for organizing evaluations that serve as input into the PART process.

Principle 2

Despite the wide variability of research activities among agencies, all agencies should evaluate their research efforts according to the same criteria: relevance, quality, and performance.

Those criteria are defined in this report as follows:
Relevance is a measure of how well research supports an agency's mission.

Quality is a measure of the novelty, soundness, accuracy, and reproducibility of research.

Performance is described in terms of both effectiveness (the ability to achieve useful results) and efficiency (the ability to achieve quality, relevance, and effectiveness in a timely fashion and with little waste).

The research performed by federal agencies varies widely by primary mission responsibility. The missions of the largest research-intensive agencies include defense, energy, health, space, agriculture, and the environment. Their research efforts share assumptions, approaches, and investigative procedures, so they should be evaluated by the same criteria. Research that is designed appropriately for a mission (relevance), is implemented in accordance with sound research principles (quality), and produces useful results (effectiveness) should be managed and performed as efficiently as possible. That is, research of unquestionable quality, relevance, and efficiency is effective only if the information it produces is in a usable form. The committee emphasizes that research effectiveness, in the context of PART, is achieved only to the degree that the program manager makes the most effective use of resources by allocating them to the most appropriate lines of investigation. This integrated view is a reasonable starting point for the evaluation of research programs.

Principle 3

The process efficiency of research should not be evaluated using outcome-based metrics.

PART encourages the use of outcome-based metrics to evaluate the efficiency of federal programs.
For many or perhaps most programs, especially those with clearly defined and predictable outcomes, such as countable services, that is an appropriate and practical approach that makes it possible to see how well inputs (resources) have been managed and applied to produce outputs. But OMB recognizes the difficulty of using outcome-based metrics to measure the efficiency of core or basic-research programs. According to PART guidance (OMB 2007b), agencies should define appropriate output and outcome measures for all R&D programs, but agencies should not expect fundamental basic research to be able to identify outcomes and measure performance in the same way that applied research or development are able to. Highlighting the results of basic research is important, but it should not come at the expense of risk-taking and innovation. For some basic research programs,
OMB may accept the use of qualitative outcome measures and quantitative process metrics (OMB 2007b, p. 76). The committee agrees with that view, as elaborated below, and finds that ultimate-outcome-based efficiency metrics are neither achievable nor valid, as explained in Chapter 3. In some instances, however, it may be useful to reframe the results of research to include the category of intermediate outcomes, the subject of Chapter 4. That category of results may include new tools, models, and knowledge for use in decision-making. Because intermediate outcomes are available sooner than ultimate outcomes, they may provide more practical and accessible metrics for agencies, expert-review panels, and oversight bodies.

Principle 4

The efficiency of R&D programs can be evaluated on the basis of two metrics: investment efficiency and process efficiency.

In the committee's view, the construct presented by PART has proved unworkable for research-intensive agencies, partly because of their difficulty in evaluating the "efficiency" of research. In lieu of that construct, the committee suggests that any evaluation of a research program be framed around two questions: Is the program making the right investments? Is it managing those investments well?

This report has used the term investment efficiency for the first evaluation metric. Investment efficiency is determined by examining a program in light of its relevance, quality, and performance; in other words, by asking whether the agency has invested in the right research portfolio and managed it wisely. Those criteria are most relevant to research outcomes. The issue of efficiency is not the central concern in asking whether a program is making the right investments.
But it is implicit: the portfolio manager must make wise research investments if the program is to be effective and efficient, and once resources, which are always finite, have been invested, they must be used to optimize results. The totality of those activities might be called portfolio management, a more familiar term that suggests linking research activities with strategic and multi-year plans. Sound portfolio management is the surest route to desired outcomes. The elements of investment efficiency are addressed in most agency procedures developed under the Government Performance and Results Act (GPRA) and in PART questions, although not in those addressing efficiency (that is, questions 3.4 and 4.3).

Moreover, it is essential to correct the misunderstanding embodied in the following statement in the PART guidance: "Programs must document performance against previously defined output and outcome metrics"
(OMB 2007b, p. 76). A consistent theme of the present report is that for many research programs there can be no "outcome metrics"; that is true especially of core research, as discussed in Chapter 3.

Distinct from investment efficiency is process efficiency, which has to do with how well research investments are managed. Process efficiency involves activities whose results are well known in advance and can be tracked against established benchmarks in such quantities as dollars and hours. Process efficiency is secondary to investment efficiency in that it adds value only after a comprehensive evaluation of relevance, quality, and effectiveness. Process efficiency most commonly addresses outputs, which are the near-term results of research. It can also, like investment efficiency, make use of intermediate outcomes, which can be identified earlier than ultimate outcomes and thus provide valuable data points for reviewers.

Principle 5

Investment efficiency is best evaluated by expert-review panels that use primarily qualitative measures tied to long-term plans.

PART questions 3.4 and 4.3 seem to require evaluation of the efficiency of research in isolation from review of relevance and quality and thus to emphasize cost and time. Agencies find that this approach may place programs at risk, because failure to satisfy PART on efficiency-related questions can increase the chances of an unacceptable rating for the total R&D program. As discussed in Chapter 3, quantitative metrics in the context of quality and relevance are important in measuring process efficiency but by themselves cannot assess the value of a research program or identify ways to improve it. A more appropriate approach is to adapt the technique of expert review, already recommended by the National Research Council for compliance with GPRA. Indeed, OMB (2007b, p. 76) specifically recommends, in its written instructions to agencies, that agency managers "make the processes they use to satisfy the Government Performance and Results Act (GPRA) consistent with the goals and measures they use to satisfy these [PART] R&D criteria."

One advantage of using an expert-review panel is its ability to evaluate both investment efficiency and process efficiency. It can determine the kind of research that is most appropriate for advancing the mission of an agency and the best management strategies for optimizing the results of the research with the resources available. An expert-review panel can also identify emerging issues and their place in the research portfolio. Those would be developing fields (for example, nanotechnology a decade ago) identified by the agency for their potential importance but not yet mature enough for inclusion in a strategic plan. Identification of new fields might be thought of as an intermediate outcome, because their value can be anticipated as a result of continuing core or problem-driven research and
through the process of long-term planning. Because they may not seem urgent enough to have a place in a current strategic plan, emerging issues often fall victim to the budget-cutter's knife, even though an early start on a new topic can bring long-term efficiencies and strengthen research capabilities.

Principle 6

Process efficiency, which may be evaluated by using both expert review and quantitative metrics, should be treated as a minor component of research evaluation.

PART question 3.4, the question that addresses efficiency most explicitly, asks of every federal program whether it has procedures "to measure and achieve efficiencies and cost effectiveness in program execution", including "at least one efficiency measure that uses a baseline and targets" (EPA 2007b, p. 41). Research programs, especially programs of core or basic research, are unlikely to be able to respond "yes" to that question, because research managers cannot set baselines and targets for investigations whose outcomes are unknown. Such programs are therefore less likely to receive an acceptable rating under PART. In addition, failure on the PART efficiency questions precludes a "green" score on the Budget-Performance Integration initiative of the President's Management Agenda.4

Isolating efficiency as an evaluation criterion can produce a picture that is at best incomplete and at worst misleading. It is easy to see how an effort to reduce the time or money spent on a project in order to increase efficiency might also reduce its quality unless the effort is part of a comprehensive evaluation.
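To make the "baseline and targets" requirement of question 3.4 concrete, the following sketch shows how such an efficiency measure might be tracked for a routine management activity. The activity (days to process a grant award) and all figures are hypothetical, invented purely for illustration; they do not come from the report or from EPA data.

```python
# Hypothetical process-efficiency measure of the kind PART question 3.4
# requires: a baseline value plus annual improvement targets, compared
# with observed results. All numbers are invented for this sketch.

baseline_days = 120                           # baseline-year average processing time
targets = {2005: 110, 2006: 100, 2007: 90}    # annual targets (days)
actuals = {2005: 112, 2006: 98, 2007: 95}     # observed annual averages (days)

for year, target in targets.items():
    actual = actuals[year]
    status = "met" if actual <= target else "missed"
    pct_vs_baseline = 100 * (baseline_days - actual) / baseline_days
    print(f"{year}: target {target}d, actual {actual}d ({status}); "
          f"{pct_vs_baseline:.0f}% faster than baseline")
```

A measure like this fits processes whose expected results are known in advance, which is exactly the committee's point: it can be stated for grant administration, but not for investigations whose outcomes are unknown.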
To evaluate applied research, especially in a regulatory agency such as EPA, it is essential to understand the strategic and multi-year plans of the regulatory offices, the anticipated contributions of knowledge from research to plans and decisions, and the rather frequent modifications of plans caused by intervening judicial, legislative, budgetary, or societal events and altered priorities. Some of those intervening events may be driven by new scientific findings.

The efficiency of research-management processes should certainly be evaluated. Such processes include activities like grant administration, facility maintenance or construction, and repeated events, such as air-quality sampling. Process efficiency can be evaluated with quantitative management tools, such as earned-value management (EVM). But such evaluations should be integrated with the work of expert-review panels if they are to contribute to the larger task of program evaluation.

4 PART guidance states, "The President's Management Agenda (PMA) Budget and Performance Integration (BPI) Initiative requires agencies to develop efficiency measures to achieve Green status" (OMB 2007b, p. 9).
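The EVM arithmetic referred to above can be sketched in a few lines. The two ratios shown (CPI and SPI) are the standard EVM indices; the function name and the milestone figures below are hypothetical, chosen only to illustrate the calculation, not drawn from the report.

```python
# Minimal sketch of earned-value management (EVM) arithmetic. EVM compares
# planned value (PV, budgeted cost of scheduled work), earned value (EV,
# budgeted cost of work actually completed), and actual cost (AC).

def evm_indices(planned_value, earned_value, actual_cost):
    """Return the two standard EVM ratios.

    CPI (cost performance index)     = EV / AC; > 1.0 means under budget.
    SPI (schedule performance index) = EV / PV; > 1.0 means ahead of schedule.
    """
    cpi = earned_value / actual_cost
    spi = earned_value / planned_value
    return cpi, spi

# Hypothetical facility-construction milestone: $500k of work scheduled,
# $450k worth completed, $475k actually spent.
cpi, spi = evm_indices(planned_value=500_000, earned_value=450_000,
                       actual_cost=475_000)
print(f"CPI = {cpi:.2f}, SPI = {spi:.2f}")  # both below 1: over budget, behind schedule
```

This is the kind of calculation that works for construction or facility operation, where baselines exist; as the report notes, EPA has not found a way to apply it to research activities themselves.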
In summary, efficiency measurements should not dominate or override the overall evaluation of a research program. Parts of a program may not be amenable to quantitative metrics, and the absence of such metrics should not be cause for a low rating that harms the reputation of the program or the agency.

RECOMMENDATIONS

The following recommendations flow from the committee's conclusion that undue emphasis has been placed on the single criterion of efficiency. That emphasis, which is often seen for non-R&D activities throughout the main body of the PART instructions, is not explicit in the PART Investment Criteria (OMB 2007b). Rather, it has emerged during agency reviews, appeal rulings, and outside evaluations of the PART process, despite its inappropriateness for the evaluation of research programs. The issue is important because unsatisfactory responses to the two PART efficiency-focused questions have apparently contributed to a low rating for an entire program (for example, EPA's Ecological Research Program) and to later budget cuts (Inside EPA's Risk Policy Report 2007).5 Evaluation of research should begin not with efficiency but with the criteria of relevance, quality, and effectiveness, and should address efficiency only after those criteria have been reviewed.

Recommendation 1

To comply with PART, EPA and other agencies should apply quantitative efficiency metrics only to measure the process efficiency of research programs.

Process efficiency can be measured in terms of inputs, outputs, and some intermediate outcomes, but not in terms of ultimate outcomes. For compliance with PART, evaluation of the efficiency of a research program should not be based on ultimate outcomes. Ultimate outcomes can seldom be known until considerable time has passed after the conclusion of the research.
Although PART documents encourage the use of outcome-based metrics, they also describe the difficulty of applying them. Given that restriction, the committee recommends that OMB and other oversight bodies focus not on investment efficiency but on process efficiency when addressing questions 3.4 and 4.3, that is, on the ways in which program managers exercise skill and prudence in conserving resources. For evaluating process efficiency, quantitative methods can be used by expert-review panels and others to track and review the use of resources in light of goals embedded in strategic and

5 According to Inside EPA's Risk Policy Report, "previous PART reviews criticized ERP [the Ecological Research Program] for not fully demonstrating the results of programmatic and research efforts – and resulted in ERP funding cuts" (Inside EPA's Risk Policy Report 2007).
multi-year plans. Moreover, to facilitate the evaluation process, the committee recommends including intermediate outcomes, as distinguished from ultimate outcomes. Intermediate outcomes include such results as an improved body of knowledge available for decision-making, comprehensive science assessments, and the dissemination of newly developed tools and models. The PART R&D investment-criteria document (OMB 2007b; see also Appendix G) should be revised to make explicit that quantitative efficiency metrics should be applied only to process efficiency.

Recommendation 2

EPA and other agencies should use expert-review panels to evaluate the investment efficiency of research programs. The process should begin by evaluating the relevance, quality, and performance6 of the research.

OMB should make an exception when evaluating R&D programs under PART to permit evaluation of investment efficiency as well as process efficiency. This approach will make possible a more complete and useful evaluation. Investment efficiency is used in this report to indicate whether an agency is "doing the right research and doing it well." The term is a gauge of portfolio management: it measures whether a program manager is investing in research that is relevant to the agency's mission and long-term plans, whether the research is being performed at a high level of quality, and whether timely and effective adjustments are being made in the multi-year course of the work to reflect new scientific information, new methods, and altered priorities. Those questions cannot be answered quantitatively; they require judgment based on experience. The best mechanism for measuring investment efficiency is the expert-review panel. The concept of investment efficiency may also be applied to studies that guide the next set of research projects and to the stepwise development of analytic tools or other products.
EPA should continue to obtain primary input for PART compliance by using expert review under the aegis of its Board of Scientific Counselors (BOSC) and Science Advisory Board (SAB). Expert review provides a forum for evaluation of research outcomes and complements the efforts of program managers to adjust research activities according to multi-year plans and anticipated outcomes. To enhance the process, consideration should be given to intermediate outcomes. As outputs and intermediate outcomes are achieved, the expert-review panel can use them to adjust and evaluate the expected ultimate outcomes (see the logic model in Chapter 4).

6 Performance is described in terms of both effectiveness (the ability to achieve useful results) and efficiency (the ability to achieve research quality, relevance, and effectiveness with little waste).
The qualitative emphasis of expert review should not obscure the importance of quantitative metrics, which expert-review panels should use whenever possible to evaluate process efficiency, that is, whenever activities can be measured quantitatively and linked to milestones, for example, administration, construction, grant administration, and facility operation.

In evaluating research at EPA, both EPA and OMB should place greater emphasis on identifying emerging and cross-cutting issues. ORD needs to be responsive to short-term R&D requests from the program offices, but it must also have an organized process for identifying future research needs. BOSC and SAB should assign appropriate weight in their evaluations to forward-looking exercises that sustain the agency's place at the cutting edge of mission-relevant research. Expert-review panels and oversight bodies should recognize that research managers need the flexibility to adapt to input changes beyond the agency's control, especially budgeting adjustments. The most rigorous planning cannot foresee the steps that might be required to maintain efficiency in the face of recurrent unanticipated change.

Recommendation 3

The efficiency of research programs at EPA should be evaluated according to the same overall standards used at other agencies.

EPA has failed to identify a means of evaluating the efficiency of its research programs that complies with PART to the satisfaction of OMB. Some of the metrics it has proposed, such as the number of publications per full-time equivalent (FTE), have been rejected, although OMB has accepted similar metrics for other agencies. OMB has encouraged EPA to apply the common management technique of EVM, which measures the degree to which research outputs conform to scheduled costs along a timeline, but EPA has not found a way to apply EVM to research activities themselves.
No other agency has been asked to use EVM for research activities, and none has done so. Agencies have addressed the PART questions with different approaches, which are often not in alignment with their long-term strategies or missions. Many of the approaches refer only to portions of programs, quantify activities that are not research activities, or review processes that are not central to R&D programs. In short, many federal agencies have addressed PART with responses that are not, in the wording of the charge, "sufficient."

ADDITIONAL RECOMMENDATION FOR THE OFFICE OF MANAGEMENT AND BUDGET

OMB should have oversight and training programs for budget examiners to ensure consistent and equitable implementation of PART in the many agencies that have substantial R&D programs.
Evaluating different agencies by different standards is undesirable because the results are not comparable. OMB budget examiners bear primary responsibility for working with agencies on PART compliance and for interpreting PART questions for the agencies. Although not all examiners can be expected to bring scientific training to their discussions with program managers, they must bring an understanding of the research process as it is performed in the context of federal agencies, as discussed in Chapters 1-3.7

OMB decisions about whether to accept or reject metrics for evaluating the efficiency of research programs have been inconsistent. A decision to reject the metrics of one agency while accepting similar metrics at another can unfairly damage the reputation of the first agency and diminish the credibility of the evaluation process itself. Because the framework of PART is virtually the same for all agencies and because the principles of scientific inquiry are virtually the same in all disciplines, the implementation of PART should be both consistent and equitable across all federal research programs. It should be noted that complete consistency is unlikely to be achieved in the vast and varied universe of government R&D programs, which fund extramural basic research, mission-driven intramural laboratories, basic-research laboratories, construction projects, facilities operations, prototype development, and many other operations. Indeed, it is difficult even to define consistent approaches that would be helpful to both the agencies and OMB. But there is ample room for examiners to provide clearer, more explicit directions, to understand the particular functioning of R&D programs, and to discern cases in which exceptions to broad requirements are appropriate.

7 Some examiners do have training and experience in science or engineering, but this is not a requirement for the position.

REFERENCES

Inside EPA's Risk Policy Report. 2007. Improved OMB Rating May Help Funding for EPA Ecological Research. Inside EPA's Risk Policy Report 14(39). September 25, 2007.

OMB (Office of Management and Budget). 2007a. ExpectMore.gov. Office of Management and Budget [online]. Available: http://www.whitehouse.gov/omb/expectmore/ [accessed Nov. 7, 2007].

OMB (Office of Management and Budget). 2007b. Program Assessment Rating Tool Guidance No. 2007-02. Guidance for Completing 2007 PARTs. Memorandum to OMB Program Associate Directors, OMB Program Deputy Associate Directors, Agency Budget and Performance Integration Leads, and Agency Program Assessment Rating Tool Contacts, from Diana Espinosa, Deputy Assistant Director for Management, Office of Management and Budget, Executive Office of the President, Washington, DC. January 29, 2007. Attachment: Guide to the Program Assessment Rating Tool (PART). January 2007 [online]. Available: http://stinet.dtic.mil/cgi-bin/GetTRDoc?AD=ADA471562&Location=U2&doc=GetTRDoc.pdf [accessed Nov. 7, 2007].