6 Conclusions and Recommendations

Institutions that have developed programs and support structures to enable faculty in engineering and other disciplines to improve their teaching skills have found that instructional enhancement programs on campuses often have high enrollments and are sometimes oversubscribed. However, because these activities are optional and because institutional resources and capacity are limited, participation among engineering faculty is relatively low and uneven.

Faculty in all disciplines must continually prioritize their time, both to meet their most urgent obligations and to direct their efforts in ways that improve their prospects for career advancement. The prevailing perception is that research is the most important factor in faculty promotion and tenure decisions, because research contributions drive the market for faculty hiring and advancement. In addition, methods for assessing research accomplishments, though imperfect, are well established, whereas metrics for assessing teaching, learning, and instructional effectiveness are not as well defined or well established.

A thoughtfully designed and agreed-upon method of evaluating teaching effectiveness, grounded in research on effective teaching and learning (NRC, 1999), would give administrators and faculty members the wherewithal to use quantitative metrics in promotion and tenure decisions. Such metrics would also give individual faculty members an incentive to invest time and effort in developing their instructional skills, because that investment would be reflected favorably in advancement decisions. Metrics for evaluating instructional effectiveness should be developed with the understanding that all faculty and the administration will have significant input into the design of the evaluation system, as well as feedback on its results.
The assumptions, principles, and expected outcomes of the evaluation method should be made explicit, and repeated frequently, both to those who will be subject to evaluation and to those who will administer it. The model in which the department chair serves as both the first-line administrative evaluator and the primary faculty development officer is not tenable and is generally not recommended (Arreola, 1997). For that reason, the gathering and use of information for administrative (tenure and promotion) evaluations should be decoupled from information gathered for professional development, even though the types of information for each purpose are likely to overlap significantly. Decoupling fosters an atmosphere in which faculty can engage in professional development activities free of concern that identifying their weaknesses could count against them in administrative evaluations. Agreed-upon metrics would also give accrediting agencies an added means of assessing instruction.

Information for evaluations of teaching should not be limited to student ratings, which address only those aspects of teaching that students can observe. Other methods of evaluation, such as peer reviews of the quality of instructional design and content (along with self-evaluations and
evaluations by department heads of those same items), can lead to a fuller understanding and a more useful assessment of instructional effectiveness. Specific metrics and procedures are outlined in Chapters 4 and 5 of this report.

The following recommendations provide guidelines and specific actions to assist institutions and other stakeholders in developing and deploying metrics for instructional evaluation that will be widely accepted and relevant to engineering faculty.

Recommendations for Institutional Action

Institutions, engineering deans, and department heads should:

1. Use multidimensional metrics that draw on different constituencies to evaluate the content, organization, and delivery of course material and the assessment of student learning.

2. Take the lead in gaining widespread acceptance of metrics for evaluating scholarly instruction in engineering. Their links to faculty and institutional administrators give them the authority to engage in meaningful dialogue within the college of engineering and throughout the larger institution.

3. Seek to ensure a sufficient number of evaluators with the knowledge, skills, and experience to provide rigorous, meaningful assessments of instructional effectiveness (in much the same way that institutions seek to ensure the development of the skills and knowledge required for excellent disciplinary research).

4. Seek out and take advantage of external resources, such as associations, societies, and programs focused on teaching excellence (e.g., the Carnegie Academy for the Scholarship of Teaching and Learning, the Higher Education Academy [U.K.], and the Professional and Organizational Development Network), as well as on-campus teaching and learning resource centers and organizations focused on engineering education (e.g., the International Society for Engineering Education [IGIP] and the Foundation Engineering Education Coalition's website on Active/Cooperative Learning: Best Practices in Engineering Education, http://clte.asu.edu/active/main.htm).

Recommendations for External Stakeholders

Leaders of the engineering profession (including the National Academy of Engineering, the American Society for Engineering Education, ABET, Inc., the American Association of Engineering Societies, the Engineering Deans' Council, and the various engineering disciplinary societies) should:

1. Continue to promote programs and provide support for individuals and institutions working to accelerate the development and implementation of metrics for evaluating instructional effectiveness.

2. Seek to create and nurture models of metrics for evaluating instructional effectiveness. Each institution, of course, will have particular needs and demands; however, nationally known examples of well-informed, well-supported, and carefully developed instructional evaluation programs will benefit the entire field.