Evaluating Research Efficiency in the U.S. Environmental Protection Agency

4
A Model for Evaluating Research and Development Programs

This report has discussed the difficulty of evaluating research programs in terms of results, which are usually described as outputs and ultimate outcomes. However, between outputs and ultimate outcomes are many kinds of "intermediate outcomes" that have their own value as results and can therefore be evaluated. The following is a sample of the kinds of activities that might be categorized as outputs, intermediate outcomes, and ultimate outcomes:

- Outputs include peer-reviewed publications, databases, tools, and methods.
- Intermediate outcomes include an improved body of knowledge available for decision-making, integrated science assessments (previously called criteria documents), and the dissemination of newly developed tools and models.
- Ultimate outcomes include improved air or water quality, reduced exposure to hazards, restoration of wetland habitats, cleanup of contaminated sediments, and demonstrable improvements in human health.

Those steps can be described in different terms, depending on the agency using them and the scope of the research involved. For the Environmental Protection Agency (EPA) Office of Research and Development (ORD), for example, results that might fit the category of intermediate outcome might be the provision of a body of knowledge that can be used by EPA's customers, and the use of that knowledge in planning, management, framing of environmental regulations, and other activities. Intermediate outcomes are bounded on one side by outputs (such as toxicology studies, reports of all kinds, models, and monitoring activities) and on the other side by ultimate outcomes (such as protection and improvement of human health and ecosystems).
As a somewhat idealized example of how EPA (or other agencies) might conceptualize and make use of these terms, the following logic model shows the sequence of research, including inputs, outputs, intermediate outcomes, and ultimate outcomes. These stages in the model are roughly aligned with various events and users as research knowledge is developed. However, it is important to recognize that this model must be flexible enough to respond to rapid changes in research direction arising from unanticipated issues. The shift of personnel and resources to meet a new or newly perceived environmental challenge will inevitably affect the ability to complete planned R&D programs.

In the top row of Figure 4-1, the logic flow begins with process inputs and planning inputs. Process inputs could include budget, staff (including the training needed to keep a research program functioning effectively), and research facilities. Planning inputs could include stakeholder involvement, monitoring data, and peer review. Process and planning inputs are transformed into an array of research activities that generate the research outputs listed in the first ellipse, such as recommendations, reports, and publications.

The combination of research and research outputs leads to intermediate outcomes. A helpful feature of the model is that there are two stages of intermediate outcomes: research outcomes and customer outcomes. The intermediate research outcomes are depicted in the arrow and include an improved body of knowledge available for decision-making, new tools and models disseminated, and knowledge ready for application. The intermediate research outcomes in the arrow are followed by intermediate customer outcomes, in the ellipse, that describe a usable body of knowledge, such as regulations, standards, and technologies. Intermediate customer outcomes also include education and training.
They may grow out of integrated science assessments or out of information developed by researchers and help to transform the research outputs into eventual ultimate outcomes. The customers who play a role in the transformation include international, national, state, and local entities and tribes; nongovernment organizations; the scientific and technical communities; business and industry; first responders; decision-makers; and the general public. The customers take their own implementation actions, which are integrated with political, economic, and social forces.

The use of the category of intermediate outcome does not require substantial change in how EPA plans and evaluates its research. The strategic plan of ORD, for example, already defines the office's mission as to "conduct leading-edge research" and to "foster the sound use of science" (EPA 2001). Those lead naturally into two categories of intermediate outcome: intermediate outcomes from research and intermediate outcomes from users of research. EPA's and ORD's strategic planning architecture fits into the logic diagram as follows: the ellipse under "Research Outputs" contains the annual performance measures and the annual performance goals (EPA 2007b), the arrow under "Intermediate Outcomes from Research" contains the sub-long-term goals, the ellipse under "Intermediate Outcomes from Users of Research" contains the long-term goals (EPA 2007b), and the box under "Ultimate Outcomes" contains EPA's overall mission (EPA 2006).

FIGURE 4-1 EPA research presented as a logic model. Source: Modified from NRC 2007.

In general, ultimate outcomes are evaluated at the level of the mission, intermediate outcomes at the level of multi-year plans, and outputs at the level of milestones. Specific examples of outputs, intermediate outcomes, and ultimate outcomes taken from the Ecological Research Multi-Year Plan (EPA 2003)1 fit into the framework as follows:

- Outputs: a draft report on the ecologic condition of western states, and the baseline ecologic condition of western streams determined.
- Intermediate outcome from research: a monitoring framework is available for streams and rivers in the western United States that can be used from the local to the national level for statistical assessments of condition and change.
- Intermediate outcome from customers: the states and tribes use a common monitoring design and appropriate ecologic indicators to determine the status and trends of ecologic resources.
- Ultimate outcomes: critical ecosystems are protected and restored (EPA objective), healthy communities and ecosystems are maintained (EPA goal), and human health and the environment are protected (EPA mission).

Similar logic models might be drawn from EPA's other multi-year plans, including water-quality monitoring and risk-assessment protocols for protecting children from pesticides.

The use of the model can have several benefits. First, it can help to generate understanding of whether and how specific programs transform the results of research into benefits for society. The benefits (for example, an identifiable improvement in human health) may take time to appear because they depend on events or trends beyond EPA's influence. The value of a logic model is to help to see important intermediate points in development that allow for evaluation and, when necessary, changes of course.
Second, the model can help to "bridge the gap" between outputs and ultimate outcomes. For a project that aims to improve human health through research, for example, there are too many steps and too much time between the research and the ultimate outcomes to permit annual evaluation of the progress or efficiency of a program. The use of intermediate outcomes can add results that are key steps in its progress.

The use of intermediate outcomes can also give a clearer view of the value of negative results. Such results might seem "ineffective and inefficient" to an evaluator, perhaps on the grounds that the project produced no useful practice or product. Making use of intermediate outcomes in the reviewing process, however, may clarify that a negative result is actually "effective and efficient" if it prevents wasted effort by closing an unproductive line of pursuit.

Intermediate outcomes are already suggested by the section of the 2007 PART guidance entitled "Categories of Performance Measures" (OMB 2007, p. 9). The guidance acknowledges the difficulty of using ultimate outcomes to measure efficiency and proposes the use of proxies when difficulties arise, as in the following example:

    Programs that cannot define a quantifiable outcome measure—such as programs that focus on process-oriented activities (e.g., data collection, administrative duties or survey work)—may adopt a "proxy" outcome measure. For example, the outcomes of a program that supplies forecasts through a tornado warning system could be the number of lives saved and property damage averted. However, given the difficulty of measuring those outcomes and the necessity of effectively warning people in time to react, prepare, and respond to save lives and property, the number of minutes between the tornado warning issuance and appearance of the tornado is an acceptable proxy outcome measure.

Identification of intermediate steps brings into the PART process an important family of existing results that may lend themselves to qualitative and sometimes quantitative assessment, which can provide useful new data points for reviewers. The terms in which those steps are described depend on the agency, its mission, and the nature and scope of its work.

1 Note that p. 14 (EPA 2003) shows a logic diagram of how all the sub-long-term goals connect to feed into the long-term goal.

SUMMARY

Although the task of reviewing research programs is complicated by the limitations of ultimate-outcome-based metrics, the committee suggests as a partial remedy the use of additional results that might be termed intermediate outcomes.
This class of results, intermediate between outputs and ultimate outcomes, could enhance the evaluation process by adding trackable items and a larger body of knowledge for decision-making. The additional data points could make it easier for EPA and other agencies to see whether they are meeting the goals they have set for themselves, how well a program supports strategic and multi-year plans, and whether changes in course are appropriate. Using this class of results might also improve the ability to track progress annually.

REFERENCES

EPA (U.S. Environmental Protection Agency). 2001. Strategic Plan. EPA/600/R-01/003. Office of Research and Development, U.S. Environmental Protection Agency, Washington, DC. January 2001 [online]. Available: http://www.epa.gov/osp/strtplan/documents/final.pdf [accessed Nov. 13, 2007].
EPA (U.S. Environmental Protection Agency). 2003. Sub-long-term goals, annual performance goals and annual performance measures for each long-term goal. Appendix 1 of the Ecological Research Multi-Year Plan. Office of Research and Development, U.S. Environmental Protection Agency. May 29, 2003, Final Version [online]. Available: http://www.epa.gov/osp/myp/eco.pdf [accessed Nov. 1, 2007].

EPA (U.S. Environmental Protection Agency). 2006. EPA Strategic Plan 2006-2011: Charting Our Course. U.S. Environmental Protection Agency. September 30, 2006 [online]. Available: http://www.epa.gov/cfo/plan/2006/entire_report.pdf [accessed Nov. 13, 2007].

EPA (U.S. Environmental Protection Agency). 2007a. Research Programs. Office of Research and Development, U.S. Environmental Protection Agency [online]. Available: http://www.epa.gov/ord/htm/researchstrategies.htm [accessed Nov. 13, 2007].

EPA (U.S. Environmental Protection Agency). 2007b. Research Directions: Multi-Year Plans. Office of Science Policy, U.S. Environmental Protection Agency [online]. Available: http://www.epa.gov/osp/myp.htm [accessed Nov. 13, 2007].

OMB (Office of Management and Budget). 2007. Guide to the Program Assessment Rating Tool (PART). Office of Management and Budget. January 2007 [online]. Available: http://stinet.dtic.mil/cgi-bin/GetTRDoc?AD=ADA471562&Location=U2&doc=GetTRDoc.pdf [accessed Nov. 7, 2007].

NRC (National Research Council). 2007. Framework for the Review of Research Programs of the National Institute for Occupational Safety and Health. Aug. 10, 2007.