Chapter 3
Measuring and Evaluating Federally Funded Research

Measuring Research

The unique characteristics of research activities, particularly those whose ultimate practical outcomes cannot be known, present challenges to research agencies seeking to implement GPRA, but COSEPUP believes that research programs, no matter what their character and goals, can be evaluated meaningfully on a regular basis in accordance with the spirit and intent of GPRA. To accomplish that evaluation, methods must be chosen to match the character of the research. Results of applied research can often be evaluated in quantitative terms according to specific timelines; basic research in science and engineering cannot always be evaluated in quantitative terms but can be assessed against carefully designed measures that serve as guides to research direction, funding allocations, and policy decisions.

In applied research programs of mission agencies, specific practical outcomes can be documented and progress evaluated annually. For example, if the Department of Energy (DOE) adopted the goal of producing cheaper solar energy, it could measure the results of research designed to decrease the cost of solar cells. In this situation, an applied research program can be evaluated against specific measurable milestones annually. Other programs that could be evaluated in similar fashion are efforts to build an optical computer, breed drought-resistant or saline-tolerant crops, assemble a prototype for a walking robot, devise a prototype DNA-sequencing machine, use vitrification for storage of nuclear and hazardous waste, and adapt fiber-optic laser surgery for treatment of prostatic cancer.
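The kind of annual milestone check described above can be illustrated with a brief sketch. All figures and milestone values here are invented for illustration; they are not actual DOE data or targets:

```python
# Hypothetical annual milestone evaluation for an applied research program,
# e.g. tracking the cost per watt of solar cells against yearly targets.
# All numbers are illustrative only, not actual DOE figures.

def evaluate_milestones(measured, targets):
    """Compare measured values against per-year milestone targets.

    Returns a dict mapping program year -> True if the measured cost
    met (did not exceed) that year's target, False otherwise.
    """
    return {year: measured[year] <= targets[year]
            for year in targets if year in measured}

# Illustrative data: dollars per watt, by program year.
targets = {1: 4.00, 2: 3.50, 3: 3.00}
measured = {1: 3.90, 2: 3.60, 3: 2.95}

results = evaluate_milestones(measured, targets)
print(results)  # {1: True, 2: False, 3: True} -- year 2 missed its target
```

A review panel would of course interpret a missed milestone in context (for instance, whether a promising redirection of the research caused the slip), but the point is that applied programs admit this kind of quantitative annual comparison, whereas basic research generally does not.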
Basic research programs can be evaluated meaningfully on a regular basis, but as explained in Chapter 2, the ultimate outcomes of research into fundamental processes are seldom predictable or quantifiable in advance. It is normal and necessary for basic research investigators to modify their goals, change course, and test competing hypotheses as they move closer to the fundamental understandings that justify public investment in their work. Therefore, it is necessary to evaluate the performance of basic research programs by using measures not of practical outcomes but of performance, such as the generation of new knowledge, the quality of research, the attainment of leadership in the field, and the development of human resources. Historical evidence shows us unmistakably that, by any measure, the benefit of leadership in science and engineering to the United States is extremely high. Many agree on this point.10
History also shows us how often basic research leads to outcomes that were unexpected or whose emergence took place many years or even decades after the basic research was performed. For example, pre-World War II basic research on atomic structure contributed, after decades of work, to today's Global Positioning System, an outcome of great practical and economic value. Attempts to evaluate a year's worth of that early research would have contained no hint of this particular outcome, but annual evaluations would have demonstrated the continuing high quality of the research being performed and continuing U.S. leadership in the field, a result that is traditionally followed by great practical, intellectual, and economic benefits.

Investing in Basic Research: Atomic Physics

Federal investments in basic research can sustain long-term work that can lead to technologies unimagined when the research was initiated. It was impossible to guess the far-reaching ramifications of I.I. Rabi's research on molecular-beam magnetic resonance in the late 1930s or Norman Ramsey's invention of the separated oscillatory-field resonance method in 1949. Yet the research of Rabi and Ramsey constitutes the scientific basis for modern-day atomic clocks (accurate to within 1 second in 100,000 years) and the Global Positioning System (GPS). With the declassification of GPS in 1993, this grandchild of atomic physics has become an innovation of great economic and practical importance. Installed in automobiles, GPS can tell drivers not only where they are but how to get to their destination. Thanks to GPS, soldiers stranded behind enemy lines can be rescued with surgical precision; backpackers, firefighters, and people in sailboats, crop-dusters, and automobiles can all be confident of their exact location. The worldwide market for positioning systems is expected to surpass $30 billion in the next decade.
Annual evaluations of quality and leadership give a strong indication of the likelihood of important long-term practical outcomes of basic research, but a historical review can provide a reality check.
Not every specific basic research program can be expected to have a practical outcome, so the backward look must extend over a diverse array of programs. Also, because the interval between basic research progress and practical outcomes can be decades, the view back must also be long; it should not consist of asking for the practical outcomes of research conducted in the previous year. Federal agencies support a great number of long-term investigations that have extremely valuable outcomes that are unknown at the start of the investigations. These projects include explorations of the evolution of the universe, of the chemistry of photosynthesis, of the dynamics of plate tectonics, of the composition of Earth's core, and of how language is acquired. The appropriate measure of each such program is the quality, relevance, and leadership of the research.

Using Expert Review to Evaluate Research Programs

Because of the nature of the research process, assessing its results requires an evaluation technique of breadth and flexibility. During the course of this study, COSEPUP assessed a number of methods used to evaluate research, including economic-impact studies, citation analyses, and patent analyses. Each of those methodologies might have merit, but COSEPUP concluded that they do not provide the rigor of expert review (although, when appropriate, they should be used by experts to complement their review). For example, economic-impact studies conducted annually are useful for applied research but inappropriate for basic research, although they can be useful in the retrospective review of the practical outcomes of basic research; citation analyses require expert evaluation of the content, quality, and relevance of citations; and patent analyses can also provide useful information, especially in applied research programs, but require expert evaluation of patent quality and relevance.
COSEPUP recognizes the legitimate concerns that have been raised about expert review (such as conflict of interest, independence, and elitism) but believes that, when
implemented with careful planning and design, various kinds of expert review are the most rigorous and effective tools for evaluating basic and applied research.

The best-known form of expert review is peer review, developed from the premise that a scientist's or engineer's peers have the essential knowledge and perspective to judge the quality of research and are the best qualified to do so. Peer review is commonly used to make many kinds of judgments: about the careers of individual researchers, about the value of their publications, about the standing of research institutions, and about the allocation of funds to individuals and to fields of research (COSEPUP, 1982).

A second form of expert review is relevance review, in which a panel is composed of potential users of the results of research, experts in fields related to the field of research, and scientists or engineers from the field itself. The goal of relevance review is to judge whether an agency's research programs are relevant to its mission. Expert researchers are essential to this process because of their perspective on the field and their knowledge of other research projects in the field or in similar fields.

Relevance review should not be confined to applied research, in which desired outcomes are defined; it should also consider basic research projects funded by federal agencies. Although the ultimate practical outcomes of basic research cannot be predicted, it is important to ascertain whether a given line of research is likely to contribute to an agency's mission. For example, if a goal of DOE is to produce cheaper solar energy, it is consistent with the agency's mission to understand the physical properties that determine the ability of materials to convert solar radiation into electrical energy. A careful relevance review could indicate the most promising directions for future research, both basic and applied.
A third form of expert review is benchmarking, which evaluates the relative international standing of U.S. research efforts. International benchmarking by panels of international experts
evaluates the relative leadership among nations in fields of science and engineering. Benchmarking exercises have already been conducted by COSEPUP (in mathematics, materials science and engineering, and immunology) and by the National Science Foundation (in mathematics). Those exercises have demonstrated that benchmarking can be an effective means of determining leadership in a field. Although the principal reliance is on the judgment of experts, quantitative measures can also be used for confirmation.

Leadership positions in fields of science and engineering are the result of substantial infrastructures of people and facilities built over several years; they generally do not shift annually. Thus, international benchmarking reviews conducted every few years can provide adequate information. Agencies can still report annually on the U.S. leadership position by noting major discoveries or other changes that have occurred in the preceding year. Important changes can occur when programs are being dismantled or reduced; the impact of such reductions on U.S. leadership positions should be noted in annual reports.

Assembling a panel of people who have sufficient breadth and depth to make sound assessments is the responsibility of agency management. The competence and dedication of review managers can substantially enhance the value of reviews. Expert review is not effective without proper planning and guidance, and it should always be viewed as a management tool rather than as a substitute for vision, planning, and decisionmaking.

Enhancing the Expert Review Process

Because of the great variation in the structure and mission of federal agencies that support research, the ways in which agencies review their research will inevitably differ. Each agency must develop the approach that serves best as a management and reporting vehicle. However, additional actions can enhance the
implementation of GPRA to the mutual benefit of agencies and the communities that provide or depend on agency funding.

It is common and useful for multiple agencies to approach similar fields of research from different perspectives. Indeed, such pluralism is a major strength of the U.S. research enterprise. However, better communication among agencies would enhance opportunities for collaboration, help prevent important questions from being overlooked, and reduce instances of inefficient duplication of effort. According to the comments in our workshops, present coordination mechanisms need strengthening.

The review process could be made more effective through the greater involvement of the research community at large. COSEPUP members, on the basis of their own experience and of the workshops and research conducted for this report, have been struck by the small number of researchers who are aware of the intent of GPRA and of its relevance and importance both to their work and to the procedures of the federal agencies that support research.

The researchers who work in agency, university, and industrial laboratories are the people who perform and best understand the research funded by the federal government. The research community should be involved in developing the processes that agencies will use to measure and evaluate the results of research, and the agencies should encourage comment from the research community. Members of the research community also must be part of the expert-review process of measuring and evaluating the results of research programs. The research community is essential to measuring and evaluating the quality, leadership, and, in some cases, relevance of research programs.

Summary

COSEPUP believes that the results of federal research programs can be evaluated meaningfully on a regular basis in accordance with the spirit and intent of GPRA. However, the methods
of evaluation must be chosen to match the character of research and its objectives. Furthermore, the committee believes that expert review is the most effective mechanism for evaluating the quality, leadership, and relevance of research (especially basic research) performed and funded by federal agencies. Ultimately, decisions regarding the selection and funding of research programs must be made by agency managers informed by expert review.

Note

10. See Landau, Ralph. "Technology, Economics, and Public Policy." In Landau, Ralph, and Dale W. Jorgenson, eds. Technology and Economic Policy. Cambridge, MA: Ballinger Publishing Co., 1986; Carnegie Commission on Science, Technology, and Government. Enabling the Future: Linking Science and Technology to Societal Goals. New York, NY: Carnegie Commission, 1992; Nadiri, M. Ishaq. "Innovations and Technological Spillovers." Working Paper No. 4423. Cambridge, MA: National Bureau of Economic Research, August 1993.