
4 A Model for Evaluating Research and Development Programs

This report has discussed the difficulty of evaluating research programs in terms of results, which are usually described as outputs and ultimate outcomes. However, between outputs and ultimate outcomes are many kinds of “intermediate outcomes” that have their own value as results and can therefore be evaluated.

The following is a sample of the kinds of activities that might be categorized as outputs, intermediate outcomes, and ultimate outcomes:

  • Outputs include peer-reviewed publications, databases, tools, and methods.

  • Intermediate outcomes include an improved body of knowledge available for decision-making, integrated science assessments (previously called criteria documents), and the dissemination of newly developed tools and models.

  • Ultimate outcomes include improved air or water quality, reduced exposure to hazards, restoration of wetland habitats, cleanup of contaminated sediments, and demonstrable improvements in human health.

Those steps can be described in different terms, depending on the agency using them and the scope of the research involved. For the Environmental Protection Agency (EPA) Office of Research and Development (ORD), for example, intermediate outcomes might include the provision of a body of knowledge that can be used by EPA’s customers and the use of that knowledge in planning, management, framing of environmental regulations, and other activities. Intermediate outcomes are bounded on one side by outputs (such as toxicology studies, reports of all kinds, models, and monitoring activities) and on the other side by ultimate outcomes (such as protection and improvement of human health and ecosystems).




As a somewhat idealized example of how EPA (or other agencies) might conceptualize and make use of these terms, the following logic model shows the sequence of research, including inputs, outputs, intermediate outcomes, and ultimate outcomes. These stages in the model are roughly aligned with various events and users as research knowledge is developed. However, it is important to recognize that this model must be flexible to respond to rapid changes in research direction based on unanticipated issues. The shift of personnel and resources to meet a new or newly perceived environmental challenge will inevitably affect the ability to complete planned R&D programs.

In the top row of Figure 4-1, the logic flow begins with process inputs and planning inputs. Process inputs could include budget, staff (including the training needed to keep a research program functioning effectively), and research facilities. Planning inputs could include stakeholder involvement, monitoring data, and peer review. Process and planning inputs are transformed into an array of research activities that generate the research outputs listed in the first ellipse, such as recommendations, reports, and publications. The combination of research and research outputs leads to intermediate outcomes.

A helpful feature of the model is that there are two stages of intermediate outcomes: research outcomes and customer outcomes. The intermediate research outcomes are depicted in the arrow and include an improved body of knowledge available for decision-making, new tools and models disseminated, and knowledge ready for application. The intermediate research outcomes in the arrow are followed by intermediate customer outcomes, in the ellipse, that describe a usable body of knowledge, such as regulations, standards, and technologies. Intermediate customer outcomes also include education and training. They may grow out of integrated science assessments or out of information developed by researchers and help to transform the research outputs into eventual ultimate outcomes. The customers who play a role in the transformation include international, national, state, and local entities and tribes; nongovernment organizations; the scientific and technical communities; business and industry; first responders; decision-makers; and the general public. The customers take their own implementation actions, which are integrated with political, economic, and social forces.

The use of the category of intermediate outcome does not require substantial change in how EPA plans and evaluates its research. The strategic plan of ORD, for example, already defines the office’s mission as to “conduct leading-edge research” and to “foster the sound use of science” (EPA 2001). Those lead naturally into two categories of intermediate outcome: intermediate outcomes from research and intermediate outcomes from users of research.

EPA’s and ORD’s strategic planning architecture fits into the logic diagram as follows: the ellipse under “Research Outputs” contains the annual performance metrics and the annual performance goals (EPA 2007b), the arrow under “Intermediate Outcomes from Research” contains the sub-long-term goals, the ellipse under “Intermediate Outcomes from Users of Research” contains the long-term goals (EPA 2007b), and the box under “Ultimate Outcomes” contains EPA’s overall mission (EPA 2006).

[Figure 4-1 appears here. It is a logic-model flow diagram whose stages, read left to right, are: Inputs (process inputs such as budget, staff, training, and facilities; planning inputs such as stakeholder involvement, monitoring data, and peer review), Research Activities (intramural and extramural), Research Outputs, Intermediate Outcomes from Research (transformation), Intermediate Outcomes from Users of Research (implementation), and Ultimate Outcomes. The last stage lists EPA’s mission to protect human health and the environment through the goals of Clean Air and Addressing Global Climate Change; Clean and Safe Water; Land Preservation and Restoration; Healthy Communities and Ecosystems; and Compliance and Environmental Stewardship (EPA 2006, 2007a,b).]

FIGURE 4-1 EPA research presented as a logic model. Source: Modified from NRC 2007.
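To make the structure of the model easier to follow, here is a minimal sketch of how the stages of Figure 4-1 might be represented in code. It is purely illustrative: the report prescribes no data format, and the LogicModel class and all of its field names are inventions of this sketch.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class LogicModel:
    """Hypothetical encoding of the Figure 4-1 stages (illustrative only)."""
    inputs: List[str] = field(default_factory=list)                 # process and planning inputs
    research_activities: List[str] = field(default_factory=list)    # intramural and extramural research
    outputs: List[str] = field(default_factory=list)                # reports, publications, tools
    intermediate_research_outcomes: List[str] = field(default_factory=list)  # transformation stage
    intermediate_customer_outcomes: List[str] = field(default_factory=list)  # implementation stage
    ultimate_outcomes: List[str] = field(default_factory=list)      # mission-level results

    def stages(self) -> List[Tuple[str, List[str]]]:
        """Return the stages in logic-flow order, for iteration or reporting."""
        return [
            ("Inputs", self.inputs),
            ("Research Activities", self.research_activities),
            ("Research Outputs", self.outputs),
            ("Intermediate Outcomes from Research", self.intermediate_research_outcomes),
            ("Intermediate Outcomes from Users of Research", self.intermediate_customer_outcomes),
            ("Ultimate Outcomes", self.ultimate_outcomes),
        ]
```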

In general, ultimate outcomes are evaluated at the level of the mission, intermediate outcomes at the level of multi-year plans, and outputs at the level of milestones.

Specific examples of outputs, intermediate outcomes, and ultimate outcomes, taken from the Ecological Research Multi-Year Plan (EPA 2003),¹ fit into the framework as follows (one possible encoding of these examples in code is sketched below):

  • Outputs: a draft report on the ecologic condition of western states and a determination of the baseline ecologic condition of western streams.

  • Intermediate outcome from research: a monitoring framework is available for streams and rivers in the western United States that can be used from the local to the national level for statistical assessments of condition and change.

  • Intermediate outcome from customers: the states and tribes use a common monitoring design and appropriate ecologic indicators to determine the status and trends of ecologic resources.

  • Ultimate outcomes: critical ecosystems are protected and restored (EPA objective), healthy communities and ecosystems are maintained (EPA goal), and human health and the environment are protected (EPA mission).

¹Note that p. 14 (EPA 2003) shows a logic diagram of how all the sub-long-term goals connect to feed into the long-term goal.
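As a usage example only, the hypothetical LogicModel class sketched after Figure 4-1 could be populated with the Ecological Research examples above. The strings are paraphrased from the list; nothing here comes from the EPA plan itself.

```python
# Assumes the illustrative LogicModel definition from the earlier sketch.
eco_research = LogicModel(
    outputs=[
        "Draft report on the ecologic condition of western states",
        "Baseline ecologic condition of western streams determined",
    ],
    intermediate_research_outcomes=[
        "Monitoring framework available for western streams and rivers",
    ],
    intermediate_customer_outcomes=[
        "States and tribes use a common monitoring design and ecologic indicators",
    ],
    ultimate_outcomes=[
        "Critical ecosystems are protected and restored (EPA objective)",
        "Healthy communities and ecosystems are maintained (EPA goal)",
        "Human health and the environment are protected (EPA mission)",
    ],
)

# A reviewer could then walk the stages in logic-flow order:
for stage_name, items in eco_research.stages():
    for item in items:
        print(f"{stage_name}: {item}")
```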
Similar logic models might be drawn from EPA’s other multi-year plans, including water-quality monitoring and risk-assessment protocols for protecting children from pesticides.

The use of the model can have several benefits. First, it can help to generate understanding of whether and how specific programs transform the results of research into benefits for society. The benefits (for example, an identifiable improvement in human health) may take time to appear because they depend on events or trends beyond EPA’s influence. The value of a logic model is that it helps reviewers see important intermediate points in development that allow for evaluation and, when necessary, changes of course.

Second, the model can help to “bridge the gap” between outputs and ultimate outcomes. For a project that aims to improve human health through research, for example, there are too many steps and too much time between the research and the ultimate outcomes to permit annual evaluation of the progress or efficiency of a program. The use of intermediate outcomes can add results that are key steps in a program’s progress.

The use of intermediate outcomes can also give a clearer view of the value of negative results. Such results might seem “ineffective and inefficient” to an evaluator, perhaps on the grounds that the project produced no useful practice or product. Making use of intermediate outcomes in the reviewing process, however, may clarify that a negative result is actually “effective and efficient” if it prevents wasted effort by closing an unproductive line of pursuit.

Intermediate outcomes are already suggested by the section of the 2007 PART guidance entitled Categories of Performance Measures (OMB 2007, p. 9). The guidance acknowledges the difficulty of using ultimate outcomes to measure efficiency, and proposes the use of proxies when difficulties arise, as in the following example:

    Programs that cannot define a quantifiable outcome measure—such as programs that focus on process-oriented activities (e.g., data collection, administrative duties or survey work)—may adopt a “proxy” outcome measure. For example, the outcomes of a program that supplies forecasts through a tornado warning system could be the number of lives saved and property damage averted. However, given the difficulty of measuring those outcomes and the necessity of effectively warning people in time to react, prepare, and respond to save lives and property, the number of minutes between the tornado warning issuance and appearance of the tornado is an acceptable proxy outcome measure.
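As a further illustration (not drawn from the OMB guidance), the proxy measure in the quotation could be computed as follows. The timestamps are invented, and the calculation simply averages the warning lead time in minutes.

```python
from datetime import datetime

# Hypothetical (warning issued, tornado appeared) timestamp pairs; invented data.
events = [
    (datetime(2007, 5, 4, 21, 10), datetime(2007, 5, 4, 21, 26)),
    (datetime(2007, 5, 5, 18, 2), datetime(2007, 5, 5, 18, 15)),
]

# Proxy outcome measure: mean minutes between warning issuance and
# tornado appearance (a longer lead time is better).
lead_minutes = [(appeared - issued).total_seconds() / 60 for issued, appeared in events]
print(f"Mean warning lead time: {sum(lead_minutes) / len(lead_minutes):.1f} minutes")
```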
Identification of intermediate steps brings into the PART process an important family of existing results that may lend themselves to qualitative and sometimes quantitative assessment, which can provide useful new data points for reviewers. The terms in which those steps are described depend on the agency, its mission, and the nature and scope of its work.

SUMMARY

Although the task of reviewing research programs is complicated by the limitations of ultimate-outcome-based metrics, the committee suggests as a partial remedy the use of additional results that might be termed intermediate outcomes. This class of results, intermediate between outputs and ultimate outcomes, could enhance the evaluation process by adding trackable items and a larger body of knowledge for decision-making. The additional data points could make it easier for EPA and other agencies to see whether they are meeting the goals they have set for themselves, how well a program supports strategic and multi-year plans, and whether changes in course are appropriate. Using this class of results might also improve the ability to track progress annually.

REFERENCES

EPA (U.S. Environmental Protection Agency). 2001. Strategic Plan. EPA/600/R-01/003. Office of Research and Development, U.S. Environmental Protection Agency, Washington, DC. January 2001 [online]. Available: http://www.epa.gov/osp/strtplan/documents/final.pdf [accessed Nov. 13, 2007].

EPA (U.S. Environmental Protection Agency). 2003. Sub-long-term goals, annual performance goals and annual performance measures for each long-term goal. Appendix 1 of the Ecological Research Multi-Year Plan. Office of Research and Development, U.S. Environmental Protection Agency. May 29, 2003 Final Version [online]. Available: http://www.epa.gov/osp/myp/eco.pdf [accessed Nov. 1, 2007].

EPA (U.S. Environmental Protection Agency). 2006. EPA Strategic Plan 2006-2011: Charting Our Course. U.S. Environmental Protection Agency. September 30, 2006 [online]. Available: http://www.epa.gov/cfo/plan/2006/entire_report.pdf [accessed Nov. 13, 2007].

EPA (U.S. Environmental Protection Agency). 2007a. Research Programs. Office of Research and Development, U.S. Environmental Protection Agency [online]. Available: http://www.epa.gov/ord/htm/researchstrategies.htm [accessed Nov. 13, 2007].

EPA (U.S. Environmental Protection Agency). 2007b. Research Directions: Multi-Year Plans. Office of Science Policy, U.S. Environmental Protection Agency [online]. Available: http://www.epa.gov/osp/myp.htm [accessed Nov. 13, 2007].

OMB (Office of Management and Budget). 2007. Guide to the Program Assessment Rating Tool (PART). Office of Management and Budget. January 2007 [online]. Available: http://stinet.dtic.mil/cgi-bin/GetTRDoc?AD=ADA471562&Location=U2&doc=GetTRDoc.pdf [accessed Nov. 7, 2007].

NRC (National Research Council). 2007. Framework for the Review of Research Programs of the National Institute for Occupational Safety and Health. Aug. 10, 2007.