3
Surveying Promising Practices
PROMISING PRACTICES FOR FACULTY AND INSTITUTIONS AND PREDICTING SUCCESS IN COLLEGE SCIENCE
Moderator Melvin George (University of Missouri) introduced three panelists to discuss a range of promising practices. Each panelist was asked to address the following questions:
- How would you categorize the range of promising practices that have emerged over the past 20 years? Consider practices that are discipline-specific as well as those that are interdisciplinary.
- What types of categories do you find are most useful in sorting out the range of efforts that have emerged? Why did you choose to aggregate certain practices within a category?
- As you chose exemplars for your categories, what criteria did you use to identify something as a promising practice?
Jeffrey Froyd (Texas A&M University) began by describing a framework that he developed to categorize promising undergraduate teaching practices in science, technology, engineering, and mathematics (STEM).1 The framework begins with a set of decisions that faculty members must make in designing a course:
1. For more detail about this framework, see the workshop paper by Froyd (see http://www.nationalacademies.org/bose/Froyd_Promising_Practices_CommissionedPaper.pdf).
- Expectations decision: How will I articulate and communicate my expectations for student learning?
- Student organization decision: How will students be organized as they participate in learning activities?
- Content organization decision: How will I organize the content for my course? What overarching ideas will I use?
- Feedback decision: How will I provide feedback to my students on their performance and growth?
- Gathering evidence for grading decision: How will I collect evidence on which I will base the grades I assign?
- In-classroom learning activities decision: In what learning activities will students engage during class?
- Out-of-classroom learning activities decision: In what learning activities will students engage outside class?
- Student-faculty interaction decision: How will I promote student-faculty interaction?
The next component of Froyd’s framework relates to two types of standards against which faculty members are likely to evaluate a promising practice: (1) implementation standards and (2) student performance standards. Implementation standards include the relevance of the promising practice to the course, resource constraints, faculty comfort level, and the theoretical foundation for the promising practice. Student performance standards relate to the available evidence on the effectiveness of the promising practice, which may include comparison studies or implementation studies.
Froyd then identified eight promising practices related to teaching in the STEM disciplines and analyzed each in terms of his implementation and student performance standards (see Table 3-1).
TABLE 3-1 Summary of Promising Practices

Jeanne Narum (Project Kaleidoscope) identified three characteristics of institutional-level promising practices in STEM, noting that they (1) connect to larger goals for what students should know and be able to do upon graduation, (2) focus on the entire learning experience of the student, and (3) are kaleidoscopic (Narum, 2008). She explained that promising practices can focus on student learning goals at the institutional level, the level of the science discipline, and the societal level. To illustrate these points, Narum described examples of institutional transformation at the University of Maryland, Baltimore County; Drury University; and the University of Arizona. As she explained, each institution set specific learning goals, designed learning experiences based on the goals, and assessed the effectiveness of the learning experiences. Narum also provided examples of other institutions engaged in promising practices related to assessment and pedagogies of engagement. In closing, Narum said that the best institutional practices arise when administrators and faculty share a common vision of how the pieces of the undergraduate learning environment in STEM fit together and a commitment to work together as an institution to realize that vision.
Philip Sadler (Harvard University) focused on lessons from precollege science education. He described a large-scale survey that he and his colleagues conducted of students in introductory biology, chemistry, and physics courses at 57 randomly chosen postsecondary institutions. The focus of the study was on certain aspects of high school STEM education (e.g., advanced placement courses, the sequencing of high school science courses) that predict students’ success or failure in their college science courses. Sadler reported that 10 percent of students in introductory science courses had previously taken an advanced placement (AP) course in the same subject in high school, and those students performed only slightly better in their introductory college courses than non-AP students. Moreover, AP students who took introductory (101-level) courses did better in 102-level courses than AP students who began with 102-level courses. These findings led Sadler to recommend against AP courses for most high school students.
Next, Sadler discussed the effect of high school science-course taking on students’ performance in introductory college science courses. Overall, students who took more mathematics in high school performed better in all of their science courses than students who took fewer mathematics courses. Moreover, students who took multiple high school courses in a given science discipline performed better in college science courses in that
discipline. However, Sadler and his colleagues found no cross-disciplinary effects, meaning that students who took multiple chemistry courses did not perform significantly better in college biology; students who took multiple high school physics courses did not perform better in college chemistry; and so on. Sadler also reported that the use of technology in high school science classes did not predict success in college science; however, experience in solving quantitative problems, analyzing data, and making graphs in high school did seem to predict success in college science courses.
SMALL-GROUP DISCUSSIONS AND FINAL THOUGHTS
In small groups, participants identified what they considered to be the most important promising practices in undergraduate STEM education. The following list emerged from the small-group reports:
- Teaching epistemology explicitly and coherently.
- Using formative assessment techniques and feedback loops to change practice.
- Providing professional development in pedagogy, particularly for graduate students.
- Allowing students to “do” science, such as learning in labs and problem solving.
- Providing structured group learning experiences.
- Ensuring that institutions are focused on learning outcomes.
- Mapping course sequences to create a coherent learning experience for students.
- Promoting active, engaged learning.
- Developing learning objectives and aligning assessments with those objectives.
- Encouraging metacognition.
- Providing undergraduate research experiences.
To close the workshop, steering committee members reflected on the main themes that were covered throughout the day. Susan Singer focused on the question of evidence and observed that the workshop addressed multiple levels of evidence. Explaining that assessment and evidence are not synonymous, she pointed out that classroom assessment to inform teaching generates one type of evidence that workshop participants discussed. Another type of evidence is affective change, and she observed that some people gather evidence to convince their colleagues to change their practice. Singer said the workshop clearly showed that scholars in some disciplines have given careful thought to the meaning of evidence and have begun to gather it to build a general knowledge base.
Melvin George began his reflections by asking, “Why do we need any evidence at all?” He noted that one reason for gathering evidence is to discover what works in science education, but he said that evidence alone does not cause faculty members to change their behavior. Suggesting that the problem might lie with ineffectual theories of change rather than a lack of evidence, George proposed that it might be more productive to direct more attention and resources to making change happen.
David Mogk (Montana State University) observed that the participants discussed a continuum of promising practices ranging from individual classroom activities to courses to curricula to departments to institutional transformation. Discussing the day’s themes, Mogk described a desire to identify promising practices that promote mastery of content and skills while addressing barriers to learning, and he recalled discussions about the difficulty of articulating and assessing some of those skills. He identified the use of technology as a promising practice that cuts across disciplines and suggested a need to examine the cognitive underpinnings of how people learn in each domain. Mogk called for better alignment of learning goals, teaching and learning activities, and assessment tools.
William Wood reflected on the issue of domain-specific versus generic best practices. He noted that many of the practices discussed during the workshop seem universally applicable across disciplines and even across different levels, such as the classroom, department, and institution as a whole. He also suggested that university faculty might apply some of these principles when encouraging their colleagues to transform their teaching practice. Rather than transmitting the evidence in a didactic manner and expecting colleagues to change, Wood proposed taking a more constructivist approach to build their understanding of promising practices.
Kenneth Heller remarked on the different grain sizes of the promising practices that the participants discussed. He noted that the different goals and different kinds of evidence associated with each grain size present a challenge to generating useful evidence about promising practices. He agreed with previous speakers that evidence is important but not sufficient to drive change. Heller concluded by quoting Voltaire as a cautionary message about gathering more evidence instead of putting existing research into practice: “The best is the enemy of the good.”