3 Are the Efficiency Metrics Used by Federal Research and Development Programs Sufficient and Outcome-Based?
Pages 38-51



From page 38...
... ATTEMPTING TO EVALUATE EFFICIENCY IN TERMS OF ULTIMATE OUTCOMES In its guidance for undertaking Program Assessment Rating Tool (PART) evaluations, the Office of Management and Budget (OMB)
From page 39...
... When it is finally adopted, state agencies usually perform the implementation chores with their own corresponding risk-management strategies and programs. Even then, no ultimate outcomes appear until people, businesses, or other government units take action in response to the programs and their accompanying rules and incentives.
From page 40...
... actions, and the research would have been effective even though it did not produce reviewable ultimate outcomes. Thus, ultimate outcomes of research are not useful criteria for measuring research efficiency, and ultimate-outcome-based metrics proposed by federal agencies to evaluate research efficiency cannot be sufficient.
From page 41...
... PROCESS EFFICIENCY AND INVESTMENT EFFICIENCY In the committee's view, the situation described in the preceding sections presents a conundrum, as follows: • Demonstration of outcome-based efficiency of research programs is strongly urged for PART compliance. • Ultimate-outcome-based metrics of research efficiency are neither achievable nor valid.
From page 42...
... In contrast to process activities, some major aspects of a research program cannot be evaluated in quantitative terms or against milestones.2 The committee describes such aspects under the heading investment efficiency, the efficiency with which a research program is planned, funded, adjusted, and evaluated. Investment efficiency focuses on portfolio management, including the need to identify the most promising lines of research for achieving desired ultimate outcomes.
From page 43...
... The result, he recalls, was more downtime and poor quality, which required more support by indirect labor, which led directly to customer quality and delivery issues. At the end of the day, direct labor went down, but total costs increased."
From page 44...
... A CRITIQUE OF THE EFFICIENCY METRICS USED BY FEDERAL RESEARCH PROGRAMS In light of those questions, it is appropriate to ask how well the metrics used by federal research programs meet the test of sufficiency. Chapter 2 described types of efficiency metrics that have been proposed or adopted by federal agencies to comply with PART (see Appendix E for details)
From page 45...
... Therefore, using such a metric for a program that supports research in different disciplines can provide misleading results unless the publication rates in each discipline are normalized -- for instance, by relating them to the mean rate for the discipline.6
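The normalization described above amounts to dividing each program's publication rate by the mean rate for its discipline, so that a score near 1.0 means "typical for the field." A minimal sketch, with all discipline names, mean rates, and program rates invented for illustration:

```python
# Hypothetical illustration of discipline-normalized publication rates.
# Discipline means (papers per investigator-year) and program rates are
# invented figures, not data from the report.
discipline_means = {"chemistry": 4.2, "ecology": 1.8}

program_rates = [
    ("chemistry", 5.0),
    ("ecology", 2.1),
]

# Normalize each program's rate against its discipline's mean, so that
# rates from high- and low-publication fields become comparable.
normalized = [(d, rate / discipline_means[d]) for d, rate in program_rates]
for discipline, score in normalized:
    print(f"{discipline}: {score:.2f}")
```

Without this step, a chemistry program would look more "efficient" than an ecology program simply because chemists publish more often, which is the misleading comparison the excerpt warns against.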
From page 46...
... The technique suffers from the same weaknesses as the metric of time to respond to research requests and is even more likely to result in diminished quality because a QA/QC function, such as consumer satisfaction, is rarely incorporated into such programs to measure the quality of responses. Responding to information requests does not account for a substantial portion of an agency's research budget, so it is unlikely to measure the efficiency of a substantial portion of its research program.
From page 47...
... Thus, it can be a sufficient metric of process efficiency so long as the underlying planning process incorporates the criteria of quality, relevance, and performance. On the basis of those evaluations, the committee concludes that there may be some utility in certain proposed metrics for evaluating the process efficiency of research programs, particularly reduction in time or cost, on the basis of milestones, and reduction in overhead rate.
From page 48...
... and portfolio health metrics (such as percentage of portfolio in short-, medium-, and long-term projects) than lower-performing companies, which are less likely to focus on business outcome metrics (such as margin growth or incremental market share)
From page 49...
... First, the inputs and outputs of a program can be evaluated in the context of process efficiency by using quantitative metrics, such as dollars or hours. Process-efficiency metrics cannot be applied to ultimate outcomes, but they can and should be applied to such capital-intensive R&D activities as construction, facility operation, and administration.
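The quantitative evaluation described above can be illustrated with a planned-versus-actual cost comparison per milestone, the kind of dollars-and-hours metric that fits capital-intensive activities such as construction or facility operation. All milestone names and figures below are invented for the sketch:

```python
# Hypothetical sketch of a process-efficiency check for a capital-intensive
# activity: planned vs. actual cost at each milestone. Figures are invented.
milestones = [
    {"name": "design review", "planned_cost": 100_000, "actual_cost": 95_000},
    {"name": "construction",  "planned_cost": 500_000, "actual_cost": 550_000},
]

# A ratio above 1.0 means the milestone came in under its planned cost.
for m in milestones:
    ratio = m["planned_cost"] / m["actual_cost"]
    print(f'{m["name"]}: {ratio:.2f}')
```

Note that, consistent with the excerpt, this kind of metric says nothing about ultimate outcomes; it measures only how efficiently inputs were converted into planned process steps.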
From page 50...
... Among the reasons is that ultimate outcomes are often removed in time from the research itself and may be influenced and even generated by entities beyond the control of the research program. They have not been sufficient, because most evaluation metrics purporting to measure process efficiency do not evaluate an entire program, do not evaluate the research itself, or fall short for other reasons explained in connection with the nine metrics evaluated above.
From page 51...
... 2007. Guide to the Program Assessment Rating Tool (PART)

