In these content analyses, attention must be paid to whether the curriculum serves the diverse needs of students of all ability levels, all language backgrounds, and all cultural roots. Most of the analyses remained at the level of attention to the use of names or pictures portraying occupations by race and gender (AAAS, 1999b; Adams et al., 2000; U.S. Department of Education, 1999). The cognitive dimensions of these students’ needs, including remediation, support for reading difficulties, and frequent assessment and review, are discussed less clearly.

Support for diversity in content analyses represents the biggest challenge of all. Scientific approaches have relied mostly on our limited understanding of individual learning and age-dependent cognitive processes. Moreover, efforts to understand these processes have focused at the level of the individual (the “immunology” of learning), while the impact of population forces (that is, the extrapolation of individual processes to a higher level) on learning is poorly understood (for example, girls, as a group, in the 7th and 8th grades are inadequately encouraged to excel in mathematics). Population-level processes can enhance or inhibit learning. These processes may be the biggest obstacle to learning, and curriculum implementations that do not address these forces may fail regardless of the quality of the discipline-based dimensions of the content analysis; hence the need for learner- and teacher-based dimensions in our framework. The grand challenge is that models relying solely on traditional scientific approaches may not succeed if the goal is to promote learning in a society that is highly heterogeneous at many levels. Innovative scientific approaches that attend to the big picture and to the impact of nonlinear effects at all levels must be adopted.

Within the second dimension (Engagement, Timeliness and Support for Diversity, and Assessment), the final criterion concerns how one determines what students know, or assessment. An essential part of examining a curriculum in relation to its effects on students is to consider its various means of assessment; these often reveal a great deal about the underlying philosophy of the program.

The quality of attention to assessment in these content analyses is generally weak. In the Mathematically Correct reviews (Clopton et al., 1998, 1999a, 1999b, 1999c), assessment is referred to only in terms of “support for student mastery.” In the Adams report, which was quite strong in most respects, only two questions are discussed: Does the curriculum include and encourage multiple kinds of assessments (e.g., performance, formative, summative, paper-and-pencil, observations, portfolios, journals, student interviews, projects)? Does the curriculum provide well-aligned summative assessments to judge a student’s attainment? The responses were cursory, such as “This principle is fully met. Well-aligned summative assessments are given at the end of each unit for the teacher’s use.” The exception was found in AAAS’s content analyses, where three differ-


