2. Productivity
Pages 41-74



From page 41...
... The committee's more explicitly reported concerns with such measures as success in obtaining research grants, citation counts that ignore differences among and possibly within disciplines, and studies that fail to consider work environments suggest that the real problem lies not in the measures of productivity per se that have been used, but in how the measures have been used -- that is, in the designs of assessments of training support programs.
From page 42...
... The most recent burst of research activity relevant to the assessment of productivity began when Martin and Irvine (1983)
From page 43...
... SUGGESTED OUTLINE FOR PLANNING STUDIES OF PRODUCTIVITY OR QUALITY
The proposed UNCSD process serves as a useful framework in which to present some thoughts about planning studies focusing on the assessment of productivity. The following list draws heavily and directly on Moravcsik's report:
From page 44...
... Such discrepancies must, therefore, be given careful attention in planning studies of program outcomes. Since the mid-1970s, NIH training programs in general have focused specifically and exclusively on research training.
From page 45...
... (1) if, in planning a study of the effectiveness of a training program, it was decided that pursuit of a research career in the private sector was a favorable outcome but that assessing the performance of former trainees who followed that path was not feasible, those trainees could be explicitly excluded from potential comparison cohorts; (2) if research administration is deemed a favorable outcome, research administrators could be excluded from comparisons in which research publications were used as indicators and included where other measures of productivity, more suitable to their employment, were used.
From page 46...
... For example, if teaching undergraduate students is judged to be an acceptable outcome of research training, the productivity of an individual whose primary activity is teaching will not be appropriately assessed by counting that individual's production of research papers -- but consideration might be given to using the production of review papers as one of several measures of performance in the educational domain. However, for some outcomes regarded as suitable expressions of the goals of an enterprise, no suitable approach to assessment ("measurement") is available to evaluators.
From page 47...
... In simplest terms, publication counts are no longer acceptable as a measure of productivity unless at least the following potential sources of error or misinterpretation are controlled or accounted for:
o differences among disciplines of cohort members,
o differences among journals in terms of measured influence (see the section on journals, page 131),
o differences in "quality" or "impact" as measured by citations or peer assessment (or journal influence)
From page 48...
... , numbers of publications by faculty and staff in universities and hospitals were shown to be very highly correlated with NIH funding (r = .90 to .95); and there were no economies or diseconomies of scale in the funding of research grants. Funding and publication relationships may appear to break down, however, when small aggregates of researchers or disciplines are assessed and especially when basic and clinical research publications are intermixed.
From page 49...
... (CHI), determines journal influence weights by the weighted number of citations each journal receives over a given period of time.
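The weighted-citation idea can be illustrated with a short iterative computation: a citation from a highly weighted journal counts for more than one from a little-cited journal, and the weights are recomputed until they stabilize. This is a minimal eigenvector-style sketch of that idea, not CHI's actual algorithm; the citation counts are hypothetical.

```python
import numpy as np

def influence_weights(C, n_iter=200):
    """Sketch of iterative journal influence weighting.
    C[i][j] = citations from journal i to journal j (hypothetical counts)."""
    C = np.asarray(C, dtype=float)
    # Each citing journal distributes one unit of influence across the
    # journals it cites, in proportion to its citation counts.
    row_sums = C.sum(axis=1, keepdims=True)
    P = np.divide(C, row_sums, out=np.zeros_like(C), where=row_sums > 0)
    w = np.full(C.shape[0], 1.0 / C.shape[0])  # start from equal weights
    for _ in range(n_iter):
        w = 0.5 * w + 0.5 * (w @ P)  # damped update avoids oscillation
        w = w / w.sum()              # keep weights normalized
    return w
```

For example, with three journals in which journal 0 receives the most citations, `influence_weights([[0, 1, 1], [2, 0, 1], [2, 1, 0]])` assigns journal 0 the highest weight.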
From page 50...
... This rather preliminary study, which was focused on change in publication behavior following the introduction of minimal criteria for promotion, warranted no conclusions; but it suggested to this writer the possibility that some measures of these types might be useful in considering criteria suitable for assessing the productivity of individuals whose careers, though academic, are not directly focused on the production of original research.
Activity Indexes: In recent years the utility of a new approach to using publication counts, the "activity index," has been demonstrated, particularly in studies conducted by CHI for NIH. Activity indexes are ratios that make use of publication counts in a relational context, thus allowing comparisons to be made among groups while allowing each group to be described within its own context.
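The relational character of an activity index can be made concrete with a small sketch. One common formulation, assumed here for illustration (the excerpt does not give CHI's exact formula), divides a group's share of its own output in a field by a reference set's share in that field; argument names are illustrative.

```python
def activity_index(group_field, group_total, ref_field, ref_total):
    """Activity index as a relational publication-count ratio: the share of a
    group's output falling in a field, divided by the reference set's share
    in that field. Values above 1 mean the group is relatively more active
    in the field than the reference set."""
    return (group_field / group_total) / (ref_field / ref_total)
```

For instance, a group whose members publish 30 of their 100 papers in a field where the reference population publishes 100 of its 1,000 papers has an activity index of about 3, regardless of the groups' very different sizes.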
From page 51...
... citations were made solely for professional reasons -- that is, in a literature review for "completeness" or because the current work was based at least in part on the cited work, the cited work confirms or supports the work in the citing paper, or the cited work is criticized or refuted (at one of three levels)
From page 52...
... As with publication counts, the day has long since passed when simple citation counts could be regarded as acceptable measures of performance. Even average numbers of citations per paper are useful measures only when all of the precautions cited for paper counts are observed -- that is, controls are exercised for sources of difference, such as discipline and time (both publication date and citation count period)
From page 53...
... Although the overwhelming majority of studies that have compared subjective ratings with citation counts have yielded strong positive correlations, not all have. Where the correspondence is weak, the data often serve to reveal characteristics of the peer judgments rather than indicating deficiency in the citation evidence (see, for example, Anderson et al., 1978).
From page 54...
... The most widely used are impact factors, influence measures, and relative citation rate and publication impact:
Impact Factors: The Institute for Scientific Information, publisher of the Science Citation Index, also publishes Journal Impact Factors, based on the citations received in a target year by the papers published in the 2 preceding years, divided by the number of papers published in those years. These measures, while correcting for journal size, do not correct for characteristic differences in referencing and citation practice and, therefore, reflect different dimensions of citation behavior in different disciplines.
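The two-year impact-factor arithmetic summarized above can be sketched as follows; all counts here are hypothetical illustration data, not ISI's actual figures.

```python
def impact_factor(cites_received, papers_published, target_year):
    """Conventional two-year journal impact factor: citations received in the
    target year by papers published in the two preceding years, divided by
    the number of papers published in those two years.
    cites_received[y]   = citations in target_year to the journal's papers of year y
    papers_published[y] = papers the journal published in year y"""
    prior = (target_year - 1, target_year - 2)
    cites = sum(cites_received.get(y, 0) for y in prior)
    papers = sum(papers_published.get(y, 0) for y in prior)
    return cites / papers if papers else 0.0
```

For example, a journal that published 40 papers in 2023 and 60 in 2022, and whose papers from those years drew 120 and 80 citations respectively in 2024, has a 2024 impact factor of 200/100 = 2.0; note that the measure corrects for journal size but not for field-specific citation practices.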
From page 55...
... Total influence scores are, however, the scores most highly correlated with subjective judgmental ratings of university program quality (see Anderson et al., 1978). The influence measures offer a clear advantage over impact factors: the measures are determined within each of the fields of science, thus correcting for differences in citation practices and providing comparability across fields of science.
From page 56...
... It is also something of an anomaly that we treat peer assessment as one among several different types of criteria that might be used to assess productivity, when, in fact, almost all likely criteria are, at bottom, different representations of peer judgment. For the most part the different measures represent collections of judgments that are separated in time, in focus, and in method of combination.
From page 57...
... Citations and Peer Assessment: Most studies that have involved both peer judgment and bibliometrics have been aimed at validating the utility of the bibliometric measures.
From page 58...
... The other, in sharply contrasting peer group performance, applied 16 "measures" -- 4 based on peer ratings and 12 on records of program composition, support, and faculty publication performance -- to 32 disciplines in 200 doctoral degree-granting institutions.3 In the four biological science areas of primary concern to biomedical research, total journal influence ratings of faculty publications accounted for 50-70 percent of the variation in subjective judgmental ratings of faculty scholarly quality and 40-60 percent of the variation in ratings of program educational effectiveness. Notably, no attempt was made to combine the different types of information; rather, each of the items was reported for each institution.
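"Accounted for 50-70 percent of the variation" is a statement about R-squared, the squared correlation coefficient; the corresponding correlations between influence ratings and subjective ratings can be recovered directly:

```python
# Share of variance explained is the squared correlation coefficient,
# so the reported 50-70 percent figures imply correlations of roughly .71-.84.
for r_squared in (0.50, 0.70):
    r = r_squared ** 0.5
    print(f"R^2 = {r_squared:.2f} corresponds to r = {r:.2f}")
```

This places the bibliometric-peer agreement in these fields near the strong positive correlations reported elsewhere in the chapter.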
From page 59...
... , the investigator would be at least as well served with publication data as with peer judgments.
GRANTS AND GRANT APPLICATIONS
NIH and the Veterans' Administration are probably the only two federal agencies that maintain databases suitable for analyses of scientists' grant application and award behavior.
From page 60...
... This is not to say that grant applications and awards are not an important source of information about, for example, the success of training programs. It only cautions against using them exclusively, and without consideration of such limitations as disciplinary differences and the availability of funding.
From page 61...
... HONORS AND AWARDS
It is intuitively desirable to be able to give "credit" for having won honors or awards. While information about them is generally available from the individuals who have received them, access to the individuals is rarely available in connection with federally supported studies because of the continuing effort of the Office of Management and Budget to restrict data gathering.
From page 62...
... But the current "state of the science" of many areas of research in the biomedical sciences, which has produced rapidly expanding opportunities for producing scientific advances that have significant commercial potential, is surely not without effect. Among scientists employed in the commercial sector, patents and salary are probably the two best potential measures of productivity available.
From page 63...
... ;
o comparisons among technical fields (classification problems in attempting to relate patenting to rates of technical innovation, citation rates of "significant" patents, links between patents and scientific literature, technical profiles of industrial firms);
o comparisons among industrial firms (relations between R&D and patenting, skewed distribution of value of patents and propensity to patent, inverse relation between propensity to patent and size of R&D programs)
From page 64...
... ;
o the propensity to patent the results of innovative activities: in particular, sector-specific factors related to the effectiveness of patenting as a barrier to imitation, compared to alternatives; firm-specific factors related to perceptions of the costs and benefits of patenting; and country-specific factors relating to the costs and benefits of patenting; and
o the judgment of technological peers on the innovative performance of specific firms and countries, and on the relative rate of technological advance in specific fields: in particular, the degree to which these judgments are consistent with the patterns shown by patent statistics.
Finally, Pavitt calls for improved classification schemes, such that established patent classes can be matched more effectively, on the one hand, to standard industrial and trade classifications and, on the other, to technically coherent fields of development.
From page 65...
... that bibliometric measures are most appropriately employed in group comparisons in which aggregates of publications are large -- just how large depends on how closely comparison groups can be matched. Correspondingly, peer assessments are most appropriately employed when peers are equally informed about all of the assessment targets and when self-serving competitive interests are absent.
From page 66...
... 1983. Scientists' publication productivity. Social Studies of Science 13(2):298-329.
From page 67...
... 1983. Validity of citation criteria for assessing the influence of scientific publications: New evidence with peer assessment.
From page 68...
... Bibliometric Assessment of Biomedical Research Publications (NIH Program Evaluation Report)
From page 69...
... Schubert, One more version of the facts and figures on publication output and relative citation impact of 107 countries, 1978-1980. Scientometrics 11(1-2)
From page 70...
... 1986. Annotated Bibliography of Publications Dealing with Qualitative and Quantitative Indicators of the Quality of Science (A technical memorandum of the quality indicators project)
From page 71...
... The study was significant in employing publication and citation measures as correlates of peer assessments of productivity and in recognizing the importance of investigating differences among subdisciplines and of taking into account variations in background, social, and psychological characteristics as correlates and potential predictors of eventual professional accomplishment and status. The study was also noteworthy in its use of computer-implemented quantitative methods to describe and compare the most productive with other members of the profession.
From page 72...
... : Narin cited 140 papers in providing a brief historical account of the development of techniques of measuring publications and citations, in reviewing a number of empirical investigations of the validity of bibliometric analyses, and in presenting details of the characteristics of and differences among scientific fields and subdisciplines. (The Annual Review of Information Science and Technology published a bibliography entitled "Bibliometrics" by Narin and Moll (1977)
From page 73...
... Educ.
Funding of Research
Information Exchange
National Comparisons
Paradigm Characteristics
Performance of research
Productivity
Productivity - age
Professional Associations
Publication practices
Recognition and reward
Social stratification
Structure of the literature
Structure of literature - Specialty groups
Citation rates
Journal influence
University Ratings *
From page 74...
... While no country exceeds the United States in number of papers listed, the total number of foreign papers, not including Canada and the United Kingdom, was nearly twice the number of United States publications.

