II. Methodology
Pages 13-32



From page 13...
... -- Pirsig, Zen and the Art of Motorcycle Maintenance

Both the planning committee and our own study committee have given careful consideration to the types of measures to be employed in the assessment of research-doctorate programs. The committees recognized that any of the measures that might be used is open to criticism and that no single measure could be expected to provide an entirely satisfactory index of the quality of graduate education.
From page 14...
... a load of irrelevant superfluities, "extra baggage" unrelated to the outcomes under study. By the use of a number of such measures, each contributing a different facet of information, we can limit the effect of irrelevancies and develop a more rounded and truer picture of program outcomes.2 Although the use of multiple measures alleviates the criticisms directed at a single dimension or measure, it certainly will not satisfy those who believe that the quality of graduate programs cannot be represented by quantitative estimates no matter how many dimensions they may be intended to represent.
From page 15...
... 07  Fraction of FY1975-79 program graduates who, at the time they completed requirements for the doctorate, reported that they had made definite commitments for postgraduation employment in Ph.D.-granting universities.

Reputational Survey Results4

08  Mean rating of the scholarly quality of program faculty.
From page 16...
... However, not all of the measures may be viewed as "global indices of quality." Some, such as those relating to program size, are best characterized as "program descriptors" which, although not dimensions of quality per se, are thought to have a significant influence on the effectiveness of programs. Other measures, such as those relating to university library size and support for research and training, describe some of the resources generally recognized as being important in maintaining a vibrant program in graduate education.
From page 17...
... Unfortunately, reliable information on the subsequent employment and career achievements of the graduates of individual programs is not available. In the absence of this directly relevant information, the committee has relied on four indirect measures derived from data compiled in the NRC's Survey of Earned Doctorates.6 Although each measure has serious limitations (described below)
From page 18...
... recipients were also used in determining the identity of program graduates. It is estimated that this matching process provided information on the graduate training and employment plans of more than 90 percent of the FY1975-79 graduates from the mathematical and physical science programs.
From page 19...
... It also should be noted parenthetically that unemployment rates for doctoral recipients are quite low and that nearly all of the graduates seeking jobs find positions soon after completing their doctoral programs.9 Furthermore, first employment after graduation is by no means a measure of career achievement, which is what one would like to have if reliable data were available. Measure 07, a variant of measure 06, constitutes the fraction of FY1975-79 program graduates who indicated that they had made firm commitments for employment in Ph.D.-granting universities and who provided the names of their prospective employers.
From page 20...
... The evaluators were selected from the faculty lists furnished by the study coordinators at the 228 universities covered in the assessment. These evaluators constituted approximately 13 percent of the total faculty population -- 13,661 faculty members -- in the mathematical and physical science programs being evaluated (see Table 2.3).
From page 21...
... Also added was a question on the evaluator's field of specialization -- thereby making it possible to compare program evaluations in different specialty areas within a particular discipline. A total of 1,155 faculty members in the mathematical and physical sciences -- 65 percent of those asked to participate -- completed and returned survey forms (see Table 2.3).
From page 22...
... Indeed, this dissatisfaction was an important factor in the Conference Board's decision to undertake a multidimensional assessment, and some faculty members included in the sample made known to the committee their strong objections to the reputational survey.

TABLE 2.3  Survey Response by Discipline and Characteristics of Evaluator

                                 Total Program   Survey    Total
                                 Faculty         Sample    Respondents
                                 N               N         N       %
Discipline of Evaluator
  Chemistry                      3,339             435     301     69
  Computer Sciences                923             174     108     62
  Geosciences                    1,419             273     177     65
  Mathematics                    3,784             348     223     64
  Physics                        3,399             369     211     57
  Statistics/Biostatistics         797             189     135     71
Faculty Rank
  Professor                      8,133           1,090     711     65
  Associate Professor            3,225             471     293     62
  Assistant Professor            2,120             216     143     66
  Other                            183              11       8     73
Evaluator Selection
  Nominated by Institution       3,751           1,461     971     66
  Other                          9,910             327     184     56
Survey Form
  With Faculty Names             N/A*
From page 23...
... The evaluators were asked to judge programs in terms of scholarly quality of program faculty, effectiveness of the program in educating research scholars/scientists, and change in program quality in the last five years. The mean ratings of a program on these three survey items constitute measures 08, 09, and 10.
From page 24...
... For all four survey measures, standard errors of the mean ratings are reported; they tend to be larger for the lesser known programs. The frequency of response to each of the survey items is discussed in Chapter IX.
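To make the survey statistics concrete, here is a minimal sketch (in Python, with hypothetical ratings; this is an illustration, not the committee's actual computation) of how a mean rating such as measure 08 and its standard error would be obtained from individual evaluators' responses:

```python
# Illustrative sketch only: computing a program's mean rating and the
# standard error of that mean from individual evaluator ratings on one
# survey item (e.g., measure 08, scholarly quality of program faculty).
# The ratings below are hypothetical.
import math

def mean_rating_with_se(ratings):
    """Return (mean, standard error of the mean) for a list of ratings."""
    n = len(ratings)
    mean = sum(ratings) / n
    # Sample variance with the usual n - 1 correction.
    variance = sum((r - mean) ** 2 for r in ratings) / (n - 1)
    return mean, math.sqrt(variance / n)

# Eight hypothetical evaluator ratings on a 0-5 scale:
m, se = mean_rating_with_se([4, 5, 4, 3, 4, 5, 4, 4])
print(f"mean = {m:.2f}, standard error = {se:.2f}")
```

Because the standard error of the mean shrinks as the number of respondents grows, programs rated by fewer evaluators -- typically the lesser known ones -- show the larger standard errors noted above.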
From page 25...
... Data from another NRC survey suggest that the actual fraction of scientists employed outside academia may be significantly higher. The committee recognized that the inclusion of nonacademic evaluators would furnish information valuable for assessing nontraditional dimensions of doctoral education and would provide an important new measure not assessed in earlier studies.
From page 26...
... Since these awards have been made on the basis of peer judgment, this measure is considered to reflect the perceived research competence of program faculty. It should be noted, however, that significant amounts of support for research in the mathematical and physical sciences come from other federal agencies as well; it was not feasible to compile data from these other sources.
From page 27...
... PUBLICATION RECORDS Data from the 1978 and the 1979 Science Citation Index have been compiled22 on published articles associated with research-doctorate programs. Publication counts were associated with programs on the basis of the discipline of the journal in which an article appeared and the institution with which the author was affiliated.
From page 28...
... Although consideration was given to reporting the number of published articles per faculty member, the committee concluded that since the measure included articles by individuals other than program faculty members, the aggregate number of articles would be a more reliable measure of overall program quality. It should be noted that if a university had more than one program being evaluated in the same discipline, it was not possible to distinguish the relative contribution of each program.
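As an illustration of the aggregation rule described above -- crediting each article to a program by the discipline of the journal and the institution of the author, then reporting aggregate counts -- here is a minimal sketch; the records and field names are invented for illustration:

```python
# Illustrative sketch only: aggregate publication counts per program,
# keyed by (journal discipline, author institution). All records below
# are hypothetical.
from collections import Counter

articles = [
    ("Chemistry", "University A"),
    ("Chemistry", "University A"),
    ("Physics", "University A"),
    ("Chemistry", "University B"),
]

publication_counts = Counter(articles)
for (discipline, institution), count in publication_counts.items():
    print(f"{discipline} program at {institution}: {count} articles")
```

Note that this key cannot separate two programs in the same discipline at the same university, which is exactly the limitation the text points out.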
From page 29...
... While the reporting of values in standardized form is convenient for comparing a particular program's standing on different measures, it may be misleading in interpreting actual differences in the values reported for two or more programs.26

26 Since the scale used to compute measure 16 -- the estimated "influence" of published articles -- is entirely arbitrary, only standardized values are reported for this measure.
From page 30...
... Thus, the reader is urged to take note of the raw values before attempting to interpret differences in the standardized values given for two or more programs. The initial table in each chapter also presents estimated standard errors of mean ratings derived from the four survey items (measures 08-11).
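To see why raw values matter, here is a generic sketch of standardization (scaled to mean 0 and standard deviation 1; the report's own scaling convention is not given in this excerpt, so the scale here is an assumption for illustration):

```python
# Illustrative sketch only: standardizing raw program values on one
# measure to mean 0 and standard deviation 1. The report's actual
# standardized scale may differ; the raw values below are hypothetical.
import statistics

def standardize(raw_values):
    """Express each raw value relative to the mean and spread of all values."""
    mean = statistics.mean(raw_values)
    sd = statistics.stdev(raw_values)
    return [(v - mean) / sd for v in raw_values]

# Five hypothetical raw values; one outlier stretches the scale, so small
# raw differences can look negligible in standardized form (and vice versa).
raw = [12, 15, 16, 18, 40]
print([round(z, 2) for z in standardize(raw)])
```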
From page 31...
... -- i.e., the observed difference in mean ratings is too large to be plausibly attributable to sampling error.30 The final chapter of this report gives an overview of the evaluation process in the six mathematical and physical science disciplines and includes a summary of general findings. Particular attention is given to some of the extraneous factors that may influence program ratings of individual evaluators and thereby distort the survey results.
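The sampling-error comparison alluded to above can be sketched as follows; the two-standard-error cutoff is the conventional rule of thumb and is stated here as an assumption, since the report's exact criterion (footnote 30) is not reproduced in this excerpt:

```python
# Illustrative sketch only: treat the difference between two programs'
# mean ratings as unlikely to be sampling error when it exceeds roughly
# twice the standard error of the difference. The cutoff k = 2 is a
# conventional assumption, not necessarily the report's exact rule.
import math

def difference_exceeds_sampling_error(mean_a, se_a, mean_b, se_b, k=2.0):
    """True if |mean_a - mean_b| > k times the SE of the difference."""
    se_diff = math.sqrt(se_a ** 2 + se_b ** 2)
    return abs(mean_a - mean_b) > k * se_diff

# Hypothetical mean ratings on measure 08 with their standard errors:
print(difference_exceeds_sampling_error(4.1, 0.12, 3.6, 0.15))  # True
```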

