Suggested Citation:"4 What To Measure." National Academy of Engineering. 2009. Developing Metrics for Assessing Engineering Instruction: What Gets Measured Is What Gets Improved. Washington, DC: The National Academies Press. doi: 10.17226/12636.
4 What To Measure

The assumptions and governing principles discussed in Chapters 2 and 3 provide a framework for developing detailed procedures for determining what to measure in evaluating teaching and how to measure it. The faculty must be engaged not only in determining what to measure, but also in how to “weight” each measure. Thus faculty values and priorities must be taken into account, as well as the mission and goals of the larger institution.

Any evaluation system is predicated on a set of values. That is, a set of desirable conditions is defined, and then measurements are made to determine whether those conditions have been met. However, the determination of what constitutes a desirable condition depends on the values held by those developing the evaluation system. Thus, in designing a faculty evaluation system, the “desirable” conditions to be met must be expressed in terms of the “value” that faculty place on teaching, research productivity, service, and other faculty activities. For example, if research productivity is to be valued more than teaching effectiveness, then a greater weight must be placed on the metric resulting from the measurement of research productivity than on the metric resulting from the measurement of teaching performance. Combining the weighted measures of the various faculty roles produces an overall evaluation metric that reflects the “faculty value system” and is thus more likely to be seen by the faculty as a valid system.

The process involves at least four major steps:

1. Define and clarify the underlying terms and assumptions on which the evaluation system is based.

2. Define the value system of the faculty by systematically engaging faculty in defining the following conceptions (discussed at greater length in later sections of the report):
   • the forms of teaching in engineering education
   • the characteristics (or performance elements) of effective teaching in engineering
   • the value, or “weight,” of various characteristics (or performance elements) in the overall evaluation of teaching performance
   • the appropriate sources of information to be included in the evaluation

3. Integrate faculty values and institutional values to ensure that engineering faculty will be able to compete fairly for institutional promotions and tenure.

4. Develop and/or select appropriate tools for measuring the performance elements of effective teaching as determined by the faculty.
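As a minimal illustration of combining weighted role measures into an overall evaluation metric (the weights reflect the traditional 40/40/20 split discussed later in this chapter; the 0-4 performance scores are hypothetical, not values prescribed by the report):

```python
# Hypothetical role weights (traditional 40/40/20 split) and hypothetical
# 0-4 scale performance scores; the overall evaluation metric is simply
# the weight-adjusted sum of the role scores.
role_weights = {"teaching": 0.40, "research": 0.40, "service": 0.20}
role_scores = {"teaching": 3.5, "research": 3.0, "service": 4.0}

overall = sum(role_weights[r] * role_scores[r] for r in role_weights)
print(round(overall, 2))  # → 3.4
```

Changing the weights changes the overall metric even when the underlying scores are identical, which is why the report insists that the weighting reflect an agreed faculty value system.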

The remainder of this chapter describes Steps 1 and 2, which address the broad question of what to measure. Steps 3 and 4, which relate to how to measure, are addressed in Chapter 5.

STEP 1: BASIC TERMS AND UNDERLYING ASSUMPTIONS

The purpose of this step in the development process, which takes place before the faculty become involved, is to define the basic terms, such as measurement and evaluation, and to clarify the underlying assumptions of the evaluation, such as that the goal is to design an evaluation system that will be objective and fair.

Definitions of Terms

In the physical sciences, the term measurement is generally defined as the numerical estimation and expression of the magnitude of one quantity relative to another (Michell, 1997). However, this definition makes sense only for measuring physical, observable objects or phenomena. When measurement is used in the context of an evaluation of teaching, it takes on a somewhat different meaning, because the “things” being measured do not have readily observable, direct, physical manifestations. For example, an evaluation may be intended to measure the impact of a faculty member’s teaching on students’ cognitive skills and/or attitudes. Although there may be some direct external evidence of these, such as student performance on examinations, the measurement will likely involve gathering certain types of data (e.g., student ratings, peer opinion questionnaires) as a basis for inferring a measurement of an internal cognitive or affective condition.

The terms measurement and evaluation are not synonymous. A measurement should be as objective and reliable as possible. Whereas measurement involves assigning a number to an observable phenomenon according to a rule, evaluation is defined as the interpretation of measurement data by means of a specific value construct to determine the degree to which the data represent a desirable condition (Arreola, 2007).
Thus the result of an evaluation is a judgment, which, by definition, is always subjective. A specialized field of psychology, called psychometrics, has been developed to perform the kinds of measurements used in evaluations. Psychometrics is discussed in greater detail in the next chapter, on how to measure the performance elements of teaching.

The Assumption of Objectivity

When an institution undertakes to develop a faculty evaluation system, the goal is to ensure that the system is as objective as possible. However, total objectivity in a faculty evaluation system is an illusion, because evaluation, by definition, involves judgment, which means that subjectivity is an integral component of the evaluative process. In fact, the term objective evaluation is an oxymoron. Even though the measurement tools used in a faculty evaluation system (e.g., student ratings, peer observation checklists) may achieve high levels of objectivity, the evaluation process is, by definition, subjective. The underlying rationale for wanting an “objective” faculty evaluation system is to ensure fairness and to reduce or eliminate bias. Ideally, in a fair, unbiased evaluation system, anyone examining a set of measurement data will arrive at the same evaluative judgment. In other words, such an evaluation system would produce consistent outcomes in any situation.

Definition of Controlled Subjectivity

Since a completely “objective” evaluation is not possible, the goal must be to achieve consistent results from a necessarily subjective process. That is, we must design a process that yields the same evaluative judgment from a given data set, regardless of who considers the data. Psychometric methods can be used to create tools for measuring faculty performance (e.g., observation checklists, student- and peer-rating forms) in a way that produces reliable data (i.e., measurements) that are as objective as possible. Because subjectivity in the evaluation itself is unavoidable, however, the goal should be to limit or control its impact. To accomplish this we use a process called controlled subjectivity, which is defined as the consistent application of a predetermined set of values in the interpretation of measurement data to arrive at an evaluative judgment (Arreola, 2007). In other words, subjectivity in an evaluation system can be controlled when an a priori agreement has been reached on the context and (subjective) value system that will be used to interpret the objective data. Thus, even though the evaluation process involves subjectivity, we can still ensure consistency in outcomes, thereby approximating a hypothetical (although oxymoronic) “objective” evaluation system.

STEP 2. DETERMINING THE VALUE SYSTEM

Every evaluation rests upon an implicitly assumed value or set of values.
An evaluation provides a systematic observation (measurement) of the performance of interest and a judgment as to whether that performance conforms to the assumed values. If there is a good match, the performance is judged desirable and is generally given a positive or “good” evaluation. If there is a discrepancy, the performance is judged undesirable and is generally given a negative or “poor” evaluation.

As was noted earlier, the evaluation process implies the existence and application of a contextual system, or structure, of values associated with the characteristic(s) being measured. Thus, before an evaluation system can be developed, the values of those who intend to use it must be defined, and they should be carefully developed to reflect the values of the institution where they will be applied. For a faculty evaluation system to reflect the values of the institution correctly, we must not only determine those values and have them clearly in mind, but also express them in such a way that they can be applied consistently to all individuals subject to the evaluation process.

The “Faculty Role” Model

The value system of a faculty evaluation for a unit in a larger institution must be in basic agreement with the larger value system of the institution. The first step, therefore, must be to ascertain the institution’s “faculty role” model, that is, the various professional roles faculty are expected to play and how much weight is given to performance in each role in the overall evaluation of the faculty, especially as that evaluation affects decisions about tenure and promotion. The faculty role model, often described in a faculty handbook or other personnel manual, generally specifies the traditional roles of teaching, research, and service. Recently, however, many institutions have adopted a more comprehensive faculty role model comprising teaching, scholarly and creative activities, and service, with service described in more detail as service to the institution, the profession, and the community. Whichever faculty role model the institution has adopted must be the starting point in the development of a faculty evaluation system.

In an evaluation system, the institution’s mission, goals, priorities, and values may be expressed as “weights” assigned to the performance of each role. Traditionally, the faculty role model was weighted as follows: teaching, 40 percent; research, 40 percent; and service, 20 percent. However, the consensus of workshop participants was that faculty often perceive the “actual weighting” as skewed toward research, departing from the nominal weightings in the model. Today, many institutions are adopting a more flexible faculty role model in which the research component has been expanded to include scholarly and creative activities (e.g., consulting and practice, generalization and codification of knowledge to give deeper insights, serving on national boards and agencies, translating basic research results into practical products or services, and even creative new approaches to education), and the weights have been adjusted to reflect the complexity of faculty work assignments. Thus some current faculty role models may look more like the one shown in Table 4.1.
TABLE 4.1 Faculty Role Model with Value Ranges

  Minimum Weight   Faculty Responsibilities        Maximum Weight
  20%              Teaching                        60%
  30%              Scholarly/Creative Activities   70%
  10%              Service                         15%

As Table 4.1 shows, research has been redefined as scholarly/creative activities, and the weights are expressed as ranges rather than fixed values. In this example, the weight assigned to teaching in the evaluation ranges from 20 percent to 60 percent. The range-of-values approach is useful in that it reflects the diversity of faculty assignments in the institution, or even in a single department.

An instructional unit must base its faculty evaluation system on whichever type of faculty role model the institution has adopted. Thus, if the model includes ranges, the unit must weight its evaluation of teaching in a way that corresponds to, or falls within, the ranges adopted by the institution. In short, the faculty evaluation system of the unit must adhere to the governing principle, described in Chapter 1, of being compatible with the mission, goals, and values of the larger institution. In the event that an institution has not adopted a faculty role model that specifies weights or weight ranges, a unit might develop its own weighting scheme. The unit might then be in a position to take the lead in working with the institutional administration to clarify the values, and thus the operational weights, for evaluations of faculty for determining promotions and tenure.

Faculty Participation

Faculty must be systematically involved in determining and defining the faculty role model as it relates to the institutional mission and values, since this process is a necessary first step. Because the evaluation of teaching requires gathering various measures and then interpreting them by means of a value construct, determining and specifying the institutional values is a continuous process. Although it is advisable to establish a coordinating committee or task force to carry out this process, it is also critical that the larger faculty be engaged in the discussions to determine their values about the professional execution of their teaching roles.

Faculty may be engaged in many ways. One approach that has been found effective is to schedule a series of dedicated departmental or college faculty meetings in which faculty members are asked to discuss and come to a consensus on the following issues:

• Agreement on a value, or range of values, assigned to the teaching role in the overall evaluation of a faculty member. Even if values are already specified in the institution’s faculty role model, it is important that the engineering faculty clarify the value system for engineering in terms of its congruence (or non-congruence) with the institutional faculty value system.
◦ The result might be expressed in a statement similar to the following example: In the College of Engineering, the weight assigned to teaching in the faculty evaluation system must reflect the type and amount of teaching a faculty member is required to do in a given academic year and may take on a value within the range of 20 percent to 60 percent in the overall evaluation.

• Agreement on a list of types of teaching situations that should be included in the evaluation (e.g., standard classroom teaching, large lectures, online teaching, laboratory teaching, project courses, and/or mentoring).

◦ The result might be expressed in a statement similar to the following example: When one is evaluating teaching, only data from the following teaching environments shall be considered: standard classroom teaching, large lectures, laboratory courses, online courses, project courses, and assigned mentoring. Mentoring graduate student research, which can be categorized as “creative or scholarly activity,” and serving as an advisor to student organizations, which can be categorized as “service,” shall not be considered evidence of teaching effectiveness for the purposes of a formal evaluation.

• Agreement on the characteristics or performance elements (e.g., organization of material, clarity in lecturing, timely replies to e-mail in online courses) that faculty consider necessary for teaching excellence in each type of teaching situation.

◦ The result of this effort might be expressed in a substantial report. The underlying problem in the evaluation of teaching has been that the professoriate has not reached a consensus on a definition of what constitutes an excellent teacher. Although considerable research has been done on teacher characteristics and performances that positively influence learning, no universally accepted definition or list of qualities can be found in the lexicon of higher education. If there were such a definition or list, the evaluation of teaching would be relatively easy. Many faculty members and academic administrators consider the main component of teaching excellence to be content expertise. Others argue that teaching excellence is an ephemeral characteristic that cannot be measured but results in long-term, positive effects on students’ lives, of which the instructor may never be aware. The differences between these two opinions (and many others) may never be resolved to everyone’s satisfaction.

Nevertheless, the process of designing an effective learning experience is, to some extent, familiar to engineers, who are adept with, or at least familiar with, design processes and the iterations necessary to deliver a product. Designing and delivering an excellent course or learning experience can be thought of in much the same way. First, the instructor must identify the requirements (e.g., the learning outcomes for the course, the students’ learning needs, the competencies in knowledge and skills defined by the profession). The instructor must have sufficient expertise in the disciplinary content, as well as in the learning process, to ensure that all students learn.
He or she must also establish and refine learning outcomes for students and create learning experiences that are likely to achieve the desired results. Once the instructor has designed the course, he or she must deliver the course (i.e., implement the design) and continually evaluate not only student learning outcomes, but also the success of the design. A well-designed course may not have the desired effects if other components (e.g., course management) are not handled well. Like all engineering designs, the evaluation of an engineer’s work requires input from both customers (i.e., students) and experts in the field (e.g., peers).

• Agreement on the most qualified or appropriate sources of information on various characteristics or performance elements in each teaching situation, and on how much weight should be placed on that information.

◦ The result should be the identification of multiple data sources. At the very least, students, peers, and department chairs (or other supervisors) should have input into an evaluation. However, it is important to determine which of these (or other) sources should provide information on the performance of specific elements of teaching in each identified environment, as well as how that information should be weighted. Table 4.2 shows an example of how a unit might determine sources of information and how those data sources should be weighted. In this example, input from students counts for 25 percent, from peers 45 percent, from the department chair or supervisor 20 percent, and from the subject of the evaluation 10 percent. The “X’s” indicate the performance elements on which each source should provide information; blank cells indicate that no data are to be gathered. The table also indicates the previously determined range (20 percent to 60 percent) for weighting teaching in the overall faculty evaluation.

TABLE 4.2 Example of Data Sources and Weights

TEACHING (minimum weight 20%, maximum weight 60%)

                                    Sources of Measurement Data
  Performance Component¹      Students   Peers   Dept. Chair/     Self
                              (25%)      (45%)   Supervisor (20%) (10%)
  Content expertise²                     X                        X
  Instructional design³       X          X                        X
  Instructional delivery⁴     X          X                        X
  Instructional assessment⁵   X          X       X                X
  Course management⁶          X                  X

1 The performance components addressed in this table are commonly discussed topics. Additional source material on these items can be found in the following report: National Research Council. 1999. How People Learn: Brain, Mind, Experience, and School. Washington, DC: National Academy Press.
2 Instructors must be knowledgeable in their specific fields of engineering. However, considerable research has shown that content expertise, although necessary, is not sufficient to ensure teaching excellence. The concept of pedagogical content knowledge [as described in Shulman, L. 1987. Knowledge and teaching: Foundations of the new reform. Harvard Educational Review 57:1-22] describes the connection between discipline content knowledge and pedagogic knowledge that leads to improved teaching and learning.
3 Instructional design requires planning a logical, organized course that aligns objectives/outcomes, learning experiences (content and delivery), and assessments based on sound principles from the learning sciences.
4 For effective delivery (implementation), the instructor must use a variety of methods, activities, and contexts to achieve a robust understanding of the material, as well as relevant, varied examples and activities that provide meaningful engagement and practice, all aligned with outcomes and assessment methods.
5 Assessment requires that the instructor design and use valid, reliable methods of (1) measuring student learning of the established objectives and (2) providing meaningful feedback to students.
6 Course management is judged on how well the learning environment is configured, including the equipment, resources, scheduling, and procedures necessary for student learning.
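As a sketch of how the source weights in Table 4.2 (students 25 percent, peers 45 percent, department chair or supervisor 20 percent, self 10 percent) might be applied, the following combines per-source ratings into a single teaching score. The ratings themselves are hypothetical, and in practice each would be an aggregate over only those performance components that source rates:

```python
# Source weights from Table 4.2; ratings are hypothetical 0-4 scale scores.
SOURCE_WEIGHTS = {"students": 0.25, "peers": 0.45, "chair": 0.20, "self": 0.10}

def teaching_score(ratings: dict) -> float:
    """Weighted average of per-source ratings on a common 0-4 scale."""
    missing = set(SOURCE_WEIGHTS) - set(ratings)
    if missing:
        raise ValueError(f"missing ratings from: {sorted(missing)}")
    return sum(SOURCE_WEIGHTS[s] * ratings[s] for s in SOURCE_WEIGHTS)

score = teaching_score({"students": 3.6, "peers": 3.2, "chair": 3.8, "self": 3.5})
print(round(score, 2))  # → 3.45
```

Note that because peers carry the largest weight in this example, the composite sits closer to the peer rating than to any other single input.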

Note that these decisions, made in consultation with faculty, may be entirely subjective. Nevertheless, because this value system will remain constant for all faculty members whose teaching is being evaluated, the subjectivity will be controlled, thus guaranteeing the consistency and comparability of outcomes. Lengthy discussions and vigorous debate may be necessary for faculty to come to agreement on these parameters. However, agreement is necessary for faculty to feel confident that the evaluation system reflects and respects their conception of excellence in teaching, as well as their values and priorities in evaluating teaching.

Once the tasks listed in this section have been completed, the process can move to the next stage: determining how to measure the performance elements of teaching and how to combine these measures into an overall evaluation of teaching.
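A predetermined value construct of the kind this chapter describes can be sketched as a fixed mapping from a measured score to an evaluative judgment; because the mapping is agreed a priori, every evaluator reaches the same conclusion from the same data, which is the essence of controlled subjectivity. The thresholds and labels below are hypothetical illustrations, not values from the report:

```python
# Hypothetical predetermined value construct: score thresholds agreed in
# advance map a measurement to an evaluative judgment, so the (subjective)
# interpretation is applied identically to every faculty member.
VALUE_CONSTRUCT = [
    (3.5, "exceeds expectations"),
    (2.5, "meets expectations"),
    (0.0, "below expectations"),
]

def judge(score: float) -> str:
    """Interpret a 0-4 scale measurement via the predetermined construct."""
    for threshold, judgment in VALUE_CONSTRUCT:
        if score >= threshold:
            return judgment
    raise ValueError("score below all thresholds")

print(judge(3.45))  # → meets expectations
```

The judgment is still subjective (the thresholds embody values), but it is controlled: the same data always yield the same outcome regardless of who applies the construct.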
