3 ASSUMPTIONS

The basic, critical assumption that underlies this report is that a well-developed, meaningful mechanism for evaluating instructional effectiveness will improve both teaching and learning. This assumption is based on the common understanding that faculty (like most individuals) respond in accordance with how well their efforts are rewarded. As stated earlier, the perception is that the current system for evaluating faculty for promotion and tenure is heavily weighted in favor of research (scholarly and creative activities), with relatively low weight given to teaching. This imbalance reflects that in "the market" in higher education, effective teaching, unlike research, is not rewarded with advancement and prestige. Another reason for the imbalance might be that the methods used to evaluate teaching effectiveness are not well developed or widely understood and, in most cases, have not been adopted at the institutional level. Under these circumstances, administrators may be understandably reluctant to give significant weight to an assessment whose validity and accuracy may be uncertain, or even suspect.

Another significant underlying assumption is that all faculty members are capable of improving their teaching. Just as researchers must constantly update their knowledge and methodologies, instructors should continue to "update" their teaching practices based on both developments in learning and pedagogy and feedback on their teaching skills. A closely related assumption is that many faculty members are intrinsically motivated to improve their teaching. Therefore, they may welcome feedback, both formative and summative, if they believe it will improve their teaching effectiveness.
Of course, the committee is aware that priorities among demands on faculty for research, service, and personal life, as well as teaching, differ among types of universities, from university to university within a type, from department to department, from individual to individual, and even from time to time. Some people may question whether all, or even most, engineering educators have an intrinsic desire to improve their teaching. Certainly, the responses of some faculty members to teaching evaluations seem to exhibit more cynicism than intrinsic motivation. However, faculty members are typically high achievers and are concerned with how they rank in comparison with peers being similarly evaluated. Therefore, we assume that when faculty members feel that the information they receive from teaching evaluations is appropriately informative, they will use that information to improve their teaching. The crucial factor, then, is that faculty members must believe that an evaluation system is appropriately informative. Although it may appear that some faculty would not welcome feedback on their teaching, it is likely they are reacting within the context of current promotion, tenure, and evaluation systems. Any performance evaluation must be perceived to be accurate and fair in order for the individuals being evaluated to welcome the experience and to try to improve their performance by changing their teaching practice. Of course, even if a system is perceived to be "unfair," it
may still lead to changes in behavior, provided the outcome of the evaluation is sufficiently threatening. However, we are more interested in developing an evaluation system that motivates changes because it is fair and informative, rather than because it is threatening. While the issue of accuracy of such instruments is broadly understood and does not warrant in-depth description in this text, the issue of fairness will be defined more thoroughly.

The perception of fairness cannot be separated from the egocentrism of the person being evaluated. A study by Paese, Lind, and Kanfer (1988) found that pre-decision input from those who will be judged in the evaluation process leads them to judge the system procedurally fair. However, many other investigators have demonstrated that, even for those who have had input into developing the process, perceptions of fairness are linked, consciously or not, to an individual's interests and needs (Van Prooijen, 2007). A sense of fairness is thus significantly affected by whether an individual believes he or she may benefit from an action or, even more important, whether he or she will be disadvantaged by it. All individuals, even those who had input into the development of an evaluation process, may initially or eventually consider the system unfair, depending upon how the system influences decisions that affect them.

With respect to implementing a more effective and valuable assessment program, we might adapt to instruction a practice commonly used to increase competence in the evaluation of research proposals and journal articles. That is, we can systematically engage graduate students and junior faculty in evaluating the various types and aspects of teaching effectiveness. Their reviews of teaching are then evaluated by senior faculty as a way of providing valuable feedback and constructive criticism on the quality and comprehensiveness of the reviews.
The time and effort of graduate students and junior faculty pay off by raising the level of their understanding of the research, teaching, and reporting process as a whole. At the same time, their efforts ensure that future cadres of effective reviewers and researchers will be available. Similar efforts could be made to increase competency in instructional evaluation by enlisting senior faculty with expertise in teaching, along with graduate students and junior faculty, to increase their capabilities as evaluators of instructional effectiveness. Such an investment would apply the approach used to foster continuous improvement in research techniques, through advising and mentoring of graduate students and junior faculty, not only to ensure that more, and more capable, individuals have experience assessing instructional effectiveness, but also to create a large cadre of faculty with exposure to the concepts of instructional design and delivery and a better understanding of the fields of instructional research.

Our final assumption is that administrators and campus reviewers will do their jobs fairly and objectively, including making appropriate assignments, communicating university and program expectations, and using the data collected from evaluations to make fair and accurate judgments of performance, both to encourage professional development and to inform job-advancement decisions. This assumption entails a great deal of trust and requires some further explanation. The ultimate goal of evaluating teaching is to provide feedback to individuals (in both formative and summative formats) as a basis for gauging their effectiveness in meeting institutional and program expectations and then continuously improving their teaching performance to satisfy their intrinsic desire for excellence. To accomplish this goal, the
individuals being evaluated must depend upon a team of people to gather and analyze data in a way that they trust will produce accurate and fair results. As Lencioni (2002) points out, no team can function effectively without trust. In university settings, administrators cannot create an environment of trust by themselves, but they can be crucial players in maintaining trust. Some of the things administrators and campus reviewers should do to engender trust in the teaching evaluation process are listed below:

1. They must assign faculty to teach only in areas in which they have, or can readily develop, the expertise to teach at an appropriate level.

2. They must ensure that an evaluation of an individual's teaching performance is considered in the correct context, such as expected outcomes for student learning, the level of students in the course, whether the course is required or elective, the size of the classes, the nature of the available facilities, and the past experience of the instructor in this teaching situation.

3. They must use complex social data, such as teaching evaluations, in accordance with well-documented social science practices that have established appropriate interpretations and limitations for deriving results.

4. They must show that they are using the evaluation process to develop and advance faculty members fairly.