content strands and disaggregated by effects on subpopulations of students, and the extent to which these effects can be convincingly or causally attributed to the curricular intervention through evaluation studies using well-conceived research designs. Describing curricular effectiveness involves identifying and describing a curriculum, its programmatic theory, and its stated objectives; establishing its relationship to local, state, or national standards; scrutinizing its program contents for comprehensiveness, accuracy, depth, balance, engagement, timeliness, and support for diversity; and examining the quality, fidelity, and character of its implementation components.
Effectiveness can be defined in relation to the selected level of aggregation. A single study can examine whether a curricular program is effective (at some level and in some context) using the standards for "scientifically established as effective" outlined in this report; such a study would be termed a "scientifically valid study." Meeting these standards ensures the quality of the study, but a single, well-done study is not sufficient to certify the quality of a program. Determining whether a program can be called "scientifically established as effective" would require a set of studies using the multiple methodologies described in this report. Finally, across a set of curricula, one can also discern a similarity of approach, such as a "college preparation approach," a "modeling and applications approach," or a "skills-based, practice-oriented approach," and it is conceivable that one could ask whether an approach is effective and, if so, whether it is "scientifically established as effective." The methodological differences among these levels of aggregation are critical to consider, and we address the potential impact of these distinctions in our conclusions.
Efficacy concerns issues of cost, timeliness, and resource availability relative to the measure of effectiveness. Because our charge was limited to an examination of effectiveness, we did not consider efficacy in any detail in this report.
Our framework merged approaches from method-oriented evaluation (Cook and Campbell, 1979; Boruch, 1997), which focus on issues of internal and external validity, attribution of effects, and generalizability, with approaches from theory-driven evaluation, which focus on how these methods interact with practice (Chen, 1990; Weiss, 1997; Rossi et al., 1999). This merger permitted us to consider the content issues of particular concern to mathematicians and mathematics educators; the implementation challenges associated with reform curricula, which require significant changes in practice; the role of professional development and teaching capacity; and the need for rigorous and precise measurement and research design.