
Assessing Research-Doctorate Programs: A Methodology Study (2003)

Chapter: Appendix C: Meetings and Participants

Suggested Citation:"Appendix C: Meetings and Participants." National Research Council. 2003. Assessing Research-Doctorate Programs: A Methodology Study. Washington, DC: The National Academies Press. doi: 10.17226/10859.


Appendix C
Meetings and Participants

1. Program Initiation: Planning Meeting and Participants, June 1999
2. Meeting Schedule of Parent Committee and Four Panels
   - Agendas of All Committee and Panel Meetings

PLANNING MEETING PROGRAM

Tuesday, June 22, 1999

8:30 AM  Welcome
         M. R. C. Greenwood, University of California, Santa Cruz

8:45 AM  Introductory Remarks

9:00 AM  A. Why Do Another Research-Doctorate Study?

The reactions to the past two NRC studies of research-doctorate programs were both positive and negative. Some institutions were critical of the objective and/or subjective measures used to characterize their programs, but at the same time they found the data from the studies and the rankings of their programs useful in conducting their own analyses and assessments. The rationale for doing another study lies in finding better measures to describe doctoral programs and in collecting data on these measures that will better serve institutions and doctoral education. Some general issues that should guide the design and implementation of the next study are:

· The value derived by educational institutions from the results of past studies.
· The relevance of the data from past studies to the mission of educational programs.
· The use of objective measures to support the reputational ratings and vice versa.
· Balancing the measurement of the quality of the research faculty and the effectiveness of the educational program.

This opening session will address these and other general issues.

Presenters: Jules LaPidus, Council of Graduate Schools
            Stanley Ikenberry, American Council on Education
            Joseph Bordogna, National Science Foundation

10:45 AM  B. Value and Purpose of Program Assessments: Users and Insight

The audience for research-doctorate reports has expanded over time. While the reports are used primarily by universities in planning their academic programs, researchers have used the ratings and the objective measures to analyze different aspects of doctoral education, government agencies have used the data to develop programs and allocate resources, and students use the rankings to select programs for graduate study.
Understanding how the study is used and what measures are of interest to different groups would help guide the design of the next study.

Presenters: Lawrence Martin, State University of New York, Stony Brook
            Lesley Lydell, University of Minnesota
            Gary Walters, Ohio State Board of Regents

1:00 PM  C. Assessing Quality: Validity and Importance of Reputational Measures

Some of the criticisms of past research-doctorate studies have been directed at the overemphasis placed on reputational measures and at inconsistencies between these measures and objective measures. Neither changing the reputational measures for the last study nor using a different methodology for the Survey of Graduate Faculty was considered, in order to maintain consistency with the 1982 study. This session will try to place in perspective the role of reputational measures in assessing the quality of programs and to determine ways in which this measure can be enhanced.

Presenters: Brendan Maher, Harvard University
            Jonathan Cole, Columbia University

2:00 PM  D. Assessing Quality Through Objective Measures

In addition to reputational measures, data on publications, research grants, and awards have also been used in past studies to assess the quality of programs. The measures for assessing faculty quality have been improved over successive studies, but they are still overshadowed by the reputational measures. How can these measures be improved to better represent the quality of research-doctorate programs, and how can they, together with the reputational measures, be analyzed and presented to give a more informed estimate of program quality? The introduction of new measures will also be a major focus of the next study.

Presenters: Stephen Stigler, University of Chicago
            Hugh Graham, Vanderbilt University

3:15 PM  E. Finding the Right Field Taxonomy

Since the basis for the study is the identification of programs within an institution, independent of the academic unit that houses the program, it is important to find a taxonomy and a means for identifying programs that are consistent across all institutions. Not having a well-defined taxonomy results in incomplete faculty rosters, misinterpretations of program content, incomplete field coverage, and, in general, the possible assessment of a program that has no relationship to the actual program. For the last study this was particularly true for the biological sciences, since the descriptive names did not match programs at their institutions. In addition to the problems with the biological sciences, there were instances at some institutions where a field had multiple programs that were identified separately or jointly, depending on the Institutional Coordinator's interpretation of the taxonomy. There were also instances when the study field name did not fit the terminology used at the institution, and a program was submitted that was inconsistent with other programs in the field.
For the next study it may be appropriate to revisit the taxonomy used in the biological sciences and in some other broad fields.

Presenters: Thomas Fox, Harvard University
            Norman Bradburn, National Opinion Research Center

4:30 PM  F. Expanding the Focus to Industry and Government

Finding ways to factor the assessment of employment sectors outside academe into the study was of interest to the last study committee, but it did not have the time or the resources to find appropriate measures. One measure might be the identification of programs that industry or government looks to for recruiting graduates; another could be a measure of industry/university research cooperation. Another topic of interest might be an exploration of new measures that would better suit the needs of the non-academic sector.

Presenter: Stephen Lukasik, Independent Consultant

5:30 PM  Adjournment for the Day

Wednesday, June 23, 1999

8:30 AM  G. Incorporating Interdisciplinary Programs, Emerging Fields, and Other Fields

The interdisciplinary sciences are playing an increasingly important role in graduate education, and new fields are developing that in ten years may be larger than some that are now part of the study. Finding ways to identify these programs and collect consistent information across institutions would enhance the next study. In addition, some disciplines not included in the last study, because they did not meet the degree-production conditions specified by the study committee, have asked to be considered for the next study. With the understanding that interdisciplinary, emerging, or smaller fields may not have the critical mass to provide valid reputational measures, is it possible to include them and still obtain meaningful evaluations?

Presenter: Debra Stewart, North Carolina State University

9:30 AM  H. Outcomes and Process of Graduate Education

One of the main deficiencies of past studies has been the inability to measure the effectiveness of graduate education. The effectiveness question on the National Survey of Graduate Faculty does not provide useful information, since very few individuals in the survey have direct knowledge of the graduate programs at a range of institutions. Finding objective measures that will provide this information is an important goal for the next study. Another aspect of graduate education, aside from the scholarship of the faculty and the outcomes of graduates, is the set of activities within graduate programs that can greatly enhance their quality, such as counseling, teaching instruction, and internship programs. Is it possible to measure these activities?

Presenters: Joseph Cerny, University of California, Berkeley
            John Wiley, University of Wisconsin

10:45 AM  I. Matching Measures to Program Missions

Within a given field, research-doctorate programs do not all have the same mission or educational philosophy.
The purpose of some may be the education of future faculty at institutions similar to their own, while others may focus their educational programs on serving local industries or government facilities. The mission of programs at an institution is also tied to that of its peer institutions. Measuring dissimilar programs in a field against the same standards may not provide useful information. Can measures be found that match the mission of a program and, in particular, can reputational measures be customized to correctly reflect that mission?

Presenters: John Vaughn, Association of American Universities
            Cora Marrett, University of Massachusetts

1:00 PM  J. Customizing Measures to Field Characteristics

Some of the measures used in past studies did not provide relevant or sufficient information to characterize programs in specific fields. This was especially true for the Arts and Humanities in the 1995 study. It is not essential that uniform measures be used across all fields. Finding the appropriate measures will be a critical element in the next study.

Agricultural and Nutritional Sciences
Presenter: Patricia Swan, Iowa State University

Arts and Humanities
Presenter: John D'Arms, American Council of Learned Societies

Biological Sciences
Presenter: Robert Thach, Washington University

Engineering
Presenter: Leonard Peters, Virginia Polytechnic Institute and State University

Physical Sciences and Mathematics
Presenter: Ronald Douglas, Texas A&M

Social and Behavioral Sciences
Presenter: Brian Foster, University of Nebraska

3:00 PM  Adjournment

Study of Research-Doctorate Programs in the United States
Office of Scientific and Research Personnel
National Research Council

Planning Meeting
June 22-23, 1999
Washington, D.C.

PARTICIPANTS

Richard Anderson, Somat Engineering, Inc.
Marilyn Baker, National Research Council
Joseph Bordogna, National Science Foundation
Norman Bradburn, National Opinion Research Center
Joseph Cerny, University of California, Berkeley
Jonathan Cole, Columbia University
E. William Colglazier, National Research Council
Olga Collazos, Intern, National Research Council
John D'Arms, American Council of Learned Societies
Donna Dean, National Institutes of Health
Nancy Diamond, Goucher College
Ronald Douglas, Texas A&M
Brian Foster, University of Nebraska
Thomas Fox, Harvard University
Hugh Graham, Vanderbilt University
M.R.C. Greenwood, University of California, Santa Cruz
Jong-on Hahm, National Research Council
Peter Henderson, National Research Council
Stanley Ikenberry, American Council on Education
Ruth Kirschstein, National Institutes of Health
Charlotte Kuh, National Research Council
Jules LaPidus, Council of Graduate Schools
Stephen Lukasik, Independent Consultant
Lesley Lydell, University of Minnesota
Brendan Maher, Harvard University
Cora Marrett, University of Massachusetts
Lawrence Martin, State University of New York, Stony Brook
David Meyer, University of California, Los Angeles

Maresi Nerad, University of California, Berkeley
Leonard Peters, Virginia Polytechnic Institute and State University
George Reinhart, National Research Council
Debra Stewart, North Carolina State University
Stephen Stigler, University of Chicago
Jennifer Sutton, National Research Council
Patricia Swan, Iowa State University
Peter Syverson, Council of Graduate Schools
Orlando Taylor, Howard University
Robert Thach, Washington University
John Vaughn, Association of American Universities
Jim Voytuk, National Research Council
Gary Walters, Ohio State Board of Regents
John Wiley, University of Wisconsin

SCHEDULE FOR COMMITTEE AND PANEL ACTIVITIES

Date                          Committee/Panel                                               Place
April 15-16, 2002             1st Full Committee Meeting                                    Washington, D.C.
June 6-7, 2002                1st Panel on Student Processes and Outcomes meeting           Washington, D.C.
June 17, 2002                 Panel on Quantitative Measures                                New York University Torch Club, NYC
June 20-21, 2002              Panel on Taxonomy and Interdisciplinarity                     Washington, D.C., 5th St. Bldg
July 22, 2002                 Panel on Reputational Measures and Data Presentation          Washington, D.C., 5th St. Bldg
August 1-2, 2002              2nd Full Committee Meeting                                    Woods Hole Study Center, Woods Hole, MA
September 5-6, 2002           2nd Panel on Student Processes and Outcomes meeting           Washington, D.C.
September 11-12, 2002         2nd Panel on Taxonomy & Interdisciplinarity meeting           Washington, D.C.
September 18, 2002            2nd Panel on Reputational Measures and Data Presentation meeting   Washington, D.C.
September 19, 2002            2nd Panel on the Review of Quantitative Measures meeting      New York University Torch Club, NYC
September 30-October 1, 2002  3rd Full Committee Meeting                                    Washington, D.C.
March 26-28, 2003             4th Full Committee Meeting                                    Beckman Center, Irvine, CA
July 31-August 1, 2003        5th Full Committee Meeting                                    Woods Hole Study Center, Woods Hole, MA

Committee to Examine the Methodology for the Assessment of Research-Doctorate Programs
First Meeting: April 15-16, 2002
Washington, D.C.

Agenda

Monday, April 15 (Green 104)

9:15-9:45 AM  Bias Discussion - C. Kuh

10:00-11:00 AM  Key Issues: Sponsors
  Betsey Kuhn - United States Department of Agriculture
  Judith Ramaley - National Science Foundation
  Wendy Baldwin - National Institutes of Health

11:00 AM-12:00 PM  Key Issues: Conference Board of Associated Research Councils
  Bruce Alberts - National Research Council
  David Ward - American Council on Education

1:00-2:00 PM  Key Issues: Higher Education Organizations
  Debra Stewart - Council of Graduate Schools
  Peter Magrath - National Association of State Universities and Land-Grant Colleges
  Nils Hasselmo - Association of American Universities

2:00-3:30 PM  Key Issues: Other Interested Groups
  Phyllis Franklin - Modern Language Association
  Sidney Golub - Federation of American Societies for Experimental Biology
  Frank Huband - American Society for Engineering Education
  Robert Townsend - American Historical Association
  Howard Silver - Consortium of Social Science Associations

EXECUTIVE SESSION
3:45-5:00 PM  Committee discussion of key issues and study organization

Tuesday, April 16

EXECUTIVE SESSION
8:00-10:00 AM  Study Organization and NRC Report Review
10:00 AM-1:00 PM  Panel Tasks

Panel on Student Processes and Outcomes
First panel meeting: June 6-7, 2002
Washington, D.C.

Agenda

Committee Statement of Task and Charge to Student Processes and Outcomes Panel

Issues

Sample Survey Instruments
  The National Doctoral Program Survey
  Survey on Doctoral Education and Career Preparation
  Ph.D.'s Ten Years Later
  National Survey of Student Engagement
  Graduate Student Exit Questionnaires

Articles
  "National Survey of Student Engagement: Conceptual Framework and Overview of Psychometric Properties," George D. Kuh
  Re-envisioning the Ph.D., "What Concerns Do We Have?" Jody D. Nyquist and Bettina J. Woodford
  "The National Doctoral Program Survey: Executive Summary," National Association of Graduate-Professional Students

Panel on Review of Quantitative Measures
First panel meeting: June 17, 2002
New York, NY

Agenda

Monday, June 17, 2002
9:00-9:30 AM       Introduction and Bias Discussion
9:30-10:15 AM      Faculty, Student, and Institutional Characteristics
10:45 AM-12:00 PM  Measures of Productivity
12:00-2:30 PM      Field-Specific Data
2:30-3:30 PM       Data Sources and Data Collection Issues
4:00-5:00 PM       Wrap-up and Issues for Investigation

Panel on Taxonomy and Interdisciplinarity
First panel meeting: June 20-21, 2002
Washington, D.C.

Agenda

Thursday, June 20, 2002
9:00-9:30 AM       Bias Discussion
9:30-10:30 AM      Study Fields Selection: Taxonomy
10:45 AM-12:00 PM  Study Fields Selection: New Fields
12:45-3:30 PM      Program-Specific Issues
3:45-5:00 PM       Interdisciplinarity

Friday, June 21, 2002
8:00-9:45 AM       Small Field/Program Issues
10:00 AM-12:00 PM  Wrap-up and Items for Additional Investigation

Panel on Reputational Measures and Presentation of Data
July 22, 2002
Washington, D.C.

Agenda

9:00-10:00 AM      Introductions and Bias Discussion
10:00-10:45 AM     Measuring the Scholarly Reputation of Programs
11:00 AM-12:30 PM  Alternative Approaches to Measuring Reputation
12:30-1:30 PM      Working Lunch: Recommendations for Pilot Testing
1:30-2:45 PM       Data Presentation
3:00-4:00 PM       Data Presentation Alternatives
4:00-5:00 PM       Recommendations to Full Committee

Committee to Examine the Methodology for the Assessment of Research-Doctorate Programs
Second meeting: August 1-2, 2002
Woods Hole, Massachusetts

Agenda

August 1
CLOSED SESSION ALL DAY
8:15-8:30 AM       Minutes and Summary of Last Meeting (Ostriker)
9:00-10:00 AM      Panel on Student Educational Processes and Outcomes (Lorden)
10:15-11:15 AM     Panel to Review Quantitative Measures (Stimpson)
11:15 AM-12:15 PM  Panel on Taxonomy and Interdisciplinarity (Solomon)
1:15-2:15 PM       Panel on Reputational Measures and Data Presentation (Cole, Holland)
2:15-3:15 PM       Open Issues (examples: GRE scores, report format, nonacademic constituencies) (Ostriker)
3:30-5:00 PM       Outreach, participants for next meeting (Ostriker)

August 2
OPEN SESSION
8:15-9:15 AM       Non-academic employers (Ostriker)
                   Guest: Paula Stephan, Georgia State University
CLOSED SESSION
9:15-10:15 AM      Pilot site strategy (Ostriker)
10:30 AM-12:00 PM  Draft Report Outline (Ostriker)

Panel on Student Processes and Outcomes
September 5-6, 2002
Washington, D.C.

Agenda

The entire meeting will be held in Executive Session, since its primary business is to develop recommendations for the full committee.

Thursday, September 5

9:00-9:30 AM  Minutes and discussion of comments on panel recommendations from the full committee

9:30-10:30 AM  Who are the audiences for this information? What do they need to know?

11:00 AM-12:00 PM  Programmatic data
Much of the descriptive data that the Panel has discussed has also been mentioned by the Panel on Quantitative Measures. Are there particular measures of effectiveness of the graduate program that we want to be sure are included?

1:00-2:30 PM  Rationale for surveying students
A. Current students
   1. How many years past enrollment? Why?
B. Recent graduates
   1. How many years past graduation? Why?
   2. What does such a survey tell us about the current program?
C. Verification of program-provided information

3:00-5:00 PM  Pilot sites. What we want to learn from them.
Each pilot site is a different kind of institution. Do we want to customize questions according to "mission" (determined either empirically or ex ante)? For example, do we want to ask students from programs whose graduates go predominantly to academic employment different questions from those whose graduates go primarily to industrial employment?

Friday, September 6

8:30-10:00 AM  Prioritization of respondents, questions. What questions are key indicators of the quality and effectiveness of a Ph.D. program? Should they be customized by field?

10:15 AM-12:00 PM  Summary of Recommendations and Rationales

Panel on Taxonomy and Interdisciplinarity
September 11-12, 2002
Washington, D.C.

Agenda

Because the entire purpose of the meeting is to draft recommendations for consideration by the full Committee, the entire meeting will be held in executive session.

Wednesday, September 11, 2002

9:00-9:30 AM  Goals of this Meeting. Summary of the Full Committee Discussion in Woods Hole

9:30-10:30 AM  Principles for Including and Excluding Programs
Should there be a distinction between listing a program and ranking it? Is there any reason to list programs that don't grant degrees? How can we identify them?

10:45 AM-12:00 PM  Identifying Programs in Professional Schools
The Panel has made the distinction between programs that primarily educate practitioners and those that primarily educate researchers. Does this distinction permit us to identify programs in professional schools that should appear in the final study? Which programs should be included? (Staff will prepare a list of all Ph.D. degrees granted in professional schools.)

1:00-3:00 PM  Revisiting the Taxonomy
Given the morning's discussion, how comfortable is the Panel with the taxonomy it organized at its last meeting? What should be changed? What recommendations does the Panel have about treatment of faculty who teach in more than one program? Are we comfortable with how we have addressed interdisciplinarity? Do we need to address issues of multi-university centers or facilities?

3:45-4:30 PM  Structuring Pilot Site Trials
The pilot site trials will tell us how well the taxonomy fits each institution. If there are problems with fit, how do we design consistent rules for adjustment?

Thursday, September 12

9:00-10:00 AM  Additional Sources to Test the Taxonomy
The AAU has agreed to test a draft taxonomy with its chief academic officers. Are there other organizations we should ask? What kind of feedback should we request?

10:15 AM-12:00 PM  Recommendations for the full committee

Panel on Reputational Measures and Presentation of Data
September 18, 2002
Washington, D.C.

Agenda

Because the entire purpose of the meeting is to draft recommendations for consideration by the full Committee, the entire meeting will be held in executive session.

Wednesday, September 18, 2002

8:30-9:00 AM  Goals of this Meeting. Minutes of the last meeting. Summary of the Full Committee Discussion in Woods Hole

9:00-10:30 AM  Possible Approaches to a Reputational Measure
At the first panel meeting there was agreement that program reputation should be measured and that efforts should be made to better inform the raters of program characteristics. However, the procedures for conducting a reputational survey were not formulated, and some open questions are: Who should be surveyed? What program information should be available to the raters? What is the format of the survey form? What questions should be asked? Should multiple indicators be used to describe program quality?

11:00 AM-12:00 PM  Special Issues
In addition to the above issues, there are some special concerns, such as: Can meaningful measures of reputation be generated for the lower half of the ratings? Should all programs be rated? How can niche programs or programs in subfields be rated?

12:00-1:00 PM  Working Lunch: Measuring Reputation in the Non-Academic Sector
For some fields, such as those in Engineering, a large number of program graduates find employment in industry and government. Can ways be found to assess the quality of these programs from the viewpoint of their non-academic "customers"?

1:00-3:00 PM  Presentation of Reputational Data
The panel and full committee agreed that no single ordinal ranking reflects the quality of programs in a field, and other methods should be found to represent reputational data. Several methods have been proposed, including random halves, bootstrap, and a Bayesian approach.
Some of these methods are illustrated in this agenda book using data from English programs from the 1995 study. Other approaches are also described in a memo from Paul Holland, included under tab IV, Committee Discussion.

3:15-4:00 PM  Pilot Site Trials
While testing different approaches to reputational ratings of a program is limited by the nine pilot institutions and the number of Ph.D. programs they offer, it might be possible to develop some trials that will assist in answering some procedural questions.

4:00-5:00 PM  Recommendations to the Full Committee

Panel on Review of Quantitative Measures
September 19, 2002
New York, NY

Agenda

Because the entire purpose of the meeting is to draft recommendations for consideration by the full Committee, the entire meeting will be held in executive session.

Thursday, September 19, 2002

9:00-9:30 AM  Goals of this Meeting. Minutes of the last meeting. Summary of the Full Committee Discussion in Woods Hole

9:30-10:15 AM  Mission, Institution, and Broad Field Data
The full Committee and the Panel on Student Outcomes and Processes thought that there are some relevant data at the institutional level (e.g., endowment; student health benefits, housing, and availability of childcare; and unionization of graduate students). What data should be collected at the institutional level? At the field level (e.g., humanities, social sciences, etc.)?

10:15-11:00 AM  Programmatic Data
The Committee at its last meeting encouraged the Panel to develop a large number of program characteristics that are useful and updateable. Also, the Panel on Student Outcomes and Processes referred a number of program measures to this Panel, since they would be collected through a questionnaire for program administrators.

11:15 AM-12:00 PM  Special Issues
Measures that need special attention are: How to measure time-to-degree? Completion rates? How should data on minority students be collected and presented? GRE scores: the Committee recommended consideration of a mean or median measure and a measure of variability (interquartile range or variance).

12:00-1:00 PM  Working Lunch
While munching, it might be useful to think about the significance of the measures we are requesting. Are they indicators of quality? Of climate? Of affluence (or lack thereof)? How might we guide students and administrators to make sense of all these data?

1:00-3:00 PM  Measures of Faculty Characteristics
These include publications and citations, but do we want measures of faculty demographics, origins, structure? Whom shall we count as faculty? How do we deal with faculty who teach or supervise dissertations in more than one program?

3:15-4:00 PM  Pilot Site Trials
Should all pilot sites be asked to answer the same questions stated the same way? Do we want to try different questions out on different sites?

4:00-5:00 PM  Recommendations to the Full Committee. Prioritization and Categorization of Measures

Committee to Examine the Methodology for the Assessment of Research-Doctorate Programs
Third Meeting: September 30-October 1, 2002
Washington, D.C.

Agenda

September 30, 2002

EXECUTIVE SESSION
8:45-9:45 AM  General Issues - Jeremiah Ostriker
  Memo from Brendan Maher

PUBLIC SESSION
10:00-11:15 AM  Fields and Disciplines
  · American Society for Theatre Research and Association for Theatre in Higher Education - Thomas Postlewait
  · National Communication Association - Bill Balthrop
  · American Society for Microbiology - Gail Cassell
  · Association to Advance Collegiate Schools of Business - Dan LeClair

11:15 AM-12:00 PM  Diversity in Doctoral Education
  · Hispanic Association of Colleges and Universities - Gumecindo Salas
  · Council of Historically Black Graduate Schools - Irene Johnson
  · National Black Graduate Student Association - Theodore Bunch, Jr.

EXECUTIVE SESSION
12:30-1:30 PM  Report of the Panel on Taxonomy and Interdisciplinarity
  Walter Cohen and Frank Solomon, Panel Co-chairs
1:30-2:30 PM  Report of the Panel on Student Processes and Outcomes
  Joan Lorden, Panel Chair
2:30-3:30 PM  Report of the Panel on Quantitative Measures
  Catharine Stimpson, Panel Chair
3:30-4:30 PM  Report of the Panel on Reputational Measures and Data Presentation
  Jonathan Cole and Paul Holland, Panel Co-chairs

Tuesday, October 1, 2002

EXECUTIVE SESSION
8:30-10:00 AM      Outreach
10:15-10:45 AM     Pilot Site Strategy
10:45 AM-12:00 PM  Further discussion of issues arising from the Panel reports

Committee to Examine the Methodology for the Assessment of Research-Doctorate Programs
March 26-28, 2003
Irvine, California

Agenda

The entire meeting will be held in Executive Session.

Wednesday, March 26, 2003 (Newport Room)
2:10-2:20 PM  Minutes of Sept. 30-Oct. 1, 2002, Meeting
2:20-2:45 PM  Bias Discussion
2:45-5:00 PM  Findings from the Pilot Trials (Charlotte Kuh, Jim Voytuk)

Thursday, March 27, 2003
8:15-10:30 AM  Discussion of Preliminary Findings and Recommendations (Newport Room)
10:45 AM-12:00 PM  Writing Groups:
1. Student Outcomes/Quantitative Measures (Crystal Cove Room)
2. Reputational Measures (Laguna Room)
3. Taxonomy (Emerald Bay Room)
4. Extra Breakout Room
1:00-5:00 PM  Writing Groups (breakout rooms)

Friday, March 28, 2003
8:15-11:30 AM  Reconvene in the Newport Room
· Discussion of report text
· Remaining tasks
· Next steps

Assessment of Research-Doctorate Programs
J. Erik Jonsson Woods Hole Study Center
July 31-August 1, 2003, Meeting

Agenda

Thursday, July 31, 2003
8:45 AM  Minutes of March 26-27, 2003, Meeting
9:00 AM  NRC Review Process; Next Steps
9:30 AM  Discussion on "Response to Report Review," Chapter by Chapter

Friday, August 1, 2003
8:30 AM  Findings since the Last Meeting
a. Relating Qualitative to Quantitative Measures
b. Student Questionnaires
c. Outside Raters
· Fields for Outside Raters
· What Kind of People Are We Looking For?
10:00 AM  The Next Committee
· Proposal Major Points
· Possible Committee Members
12:00 PM  Adjourn
