
5 Evaluation of Center Programs
Pages 106-123



From page 106...
... Research programs may also undergo retrospective evaluation. Each ongoing as well as proposed center program must justify itself in the annual planning and budgeting process.
From page 107...
... considered undertaking an impact evaluation. They obtained funding through the Office of Evaluation in the Office of the Director of NIH, worked closely with the program evaluation staff in the office of the NICHD director, compared notes with center program staff in other institutes, and consulted with evaluation experts in academia.
From page 108...
... That is, they were not a result of a regular preplanned process for periodic evaluation of the center program in question. Many were apparently initiated in response to a perception on the part of the institute director ...

¹In some cases, evaluations leading to major program changes were not available on the institute website at the time this report was written; for example, the May 2000 report of the Midstream Evaluation Committee for the Comprehensive Sickle Cell Centers cited in the RFA for Comprehensive Sickle Cell Centers released in December 2000 (RFA-HL-01-015).
From page 109...
... Report of the Demographic and Behavioral Sciences Branch Population Centers Review. National Heart, Lung, and Blood Institute · Committee to Redefine the Specialized Centers of Research Programs, 2001.
From page 110...
... CHALLENGES IN EVALUATION OF CENTER PROGRAMS The program evaluations noted in Box 5-1 and summarized in Appendix F were all carried out by groups of highly reputable individuals, predominantly accomplished scientists, written up in a formal report, and posted on an institute website. Despite the shortcomings and limitations of the reports listed in the previous section, the committee commends the authors for taking on the task of evaluation at all.
From page 111...
... Several center program evaluations cited the fact that the centers in question had succeeded in attracting additional research and infrastructure support from other institutes, agencies, foundations, and industry, as well as from their own parent institution. All those sources of support no doubt claim the very same publications and outcomes of center activities as results of their largesse.
From page 112...
... A very common reason given for starting center programs is a perceived need for multidisciplinary collaboration. Centers are seen as a way to attract established scientists from many disciplines to a common problem area and a common locale, where their increased interaction will promote the desired interdisciplinary studies.
From page 113...
... The potential reviewers of the center program may all be leading scientists in individual centers or be affiliated with an institution that is the recipient of a center award. In the case of new center programs trying to implement a new research thrust (moving discoveries about disease etiology toward new diagnostics or treatments, for example)
From page 114...
... . In January 1992 NSF's program evaluation staff convened a workshop to devise and sharpen methods for evaluating outcomes of research center programs.
From page 115...
... These included:
· As outcomes to be measured are made more and more specific, and hence more easily measured, they also become less generalizable to other centers and center programs. A common collection of outcome measures might be possible, but the elements might have to be weighted differently depending on the program being evaluated.
From page 116...
... ERCs annually collect and report data on several performance outputs, such as number of publications, student enrollments, patents, and interactions with industrial partners. These data, along with site visits by external reviewers, are used by NSF in periodic reviews used to determine continued funding and midcourse changes, as needed, in research priorities and administrative arrangements.
From page 117...
... Committee member Myron Weisfeldt recounted an unpublished comparison of center grants and multiproject P01 grants in which he participated as a member of the NHLBI's Cardiology Advisory Committee in the late 1980s. The study compared NHLBI's Specialized Centers of Research Excellence in Acute Myocardial Infarction to P01-supported research on the same topic.
From page 118...
... Indicators Under this heading the committee includes numerous, often quantitative, measures of center program activities. Sometimes, but not always, they are generated in the course of program or center operations and therefore do not impose a major additional burden on either center or program staff.
From page 119...
... Goal: Increased attention to program's area of focus by centers' home institutions, scientific community, and general public. Indicators: Increased institutional support for center operations (space, faculty and staff, recognition on institutional organizational charts and publications)
From page 120...
... Nearly every program evaluation combines these indicators with first-hand observations or other site-specific efforts to gather relevant information. Given the prominence of such intangibles as synergy and facilitation in descriptions of centers, any assessment of a center program should strongly consider including site visits to centers; interviews with center staff and other members of the institutions in which the centers are embedded; and systematic mail or phone surveys of program and center staff and, especially in the case of center infrastructure or core grants, of the independent investigators whom the centers were designed to support.
From page 121...
... NIH does not have formal regular procedures or criteria for evaluating center programs. From time to time, institutes conduct internal program reviews or appoint external review panels, but these ad hoc assessments are usually done in response to a perception that the program is no longer effective or appropriate rather than part of a regular evaluation process.
From page 122...
... d) A program evaluation plan should be developed as part of the design and implementation of new center programs, and data on indicators used in the evaluation plan should be collected regularly and systematically.
From page 123...
... 1992. Draft Report of the NSF/Program Evaluation Staff Workshop on Methods for Evaluating Programs of Research Centers, January 1992.

