Index

A

AAAS. See American Association for the Advancement of Science

Ability to Do Quantitative Thinking (ITED-Q), 107–108

Accuracy, of content analyses, 78–79

Achieved curriculum, 38

Achievement, importance of socioeconomic status to, 110

Advanced mathematics at the research level, 13

Advanced Placement (AP) courses, 52

exams in, 49

Alternative experimental approaches, 64

agent-based models, 64

dynamical systems, 64

game theory, 64

large-scale simulations, 64

American Association for the Advancement of Science (AAAS), 69–70, 89

Project 2061, 74

American Mathematical Association of Two-Year Colleges, 123

An Incremental Development, 21

Analysis of Covariance (ANCOVA), 127–128, 157, 166

Analysis of Variance (ANOVA), 127, 166

Anchor items, 106

ANCOVA. See Analysis of Covariance

ANOVA. See Analysis of Variance

AP. See Advanced Placement courses

ARC Implementation Center study, 100, 105

Askey, Richard, 24, 79–82, 88

Assessment of existing studies, 2–3

case studies, 3, 5

comparative studies, 2–4

content analysis, 2, 5, 90–91

final report, 5

synthesis studies, 3

Assignment. See Random assignment

Attrition, indications of, 51

Authors’ backgrounds

in case studies, 32

in comparative studies, 32

in content analysis, 32

qualifications of, 43

single vs. teams of, 55

by study type, 32

in synthesis studies, 32

Automaticity, associated with mastery of standard algorithms, 160



B

Balance, in content analyses, 83–85

Balanced assessment, of outcome measures, 116

“Between” comparisons, 157

Bias

evaluator, 138

randomization to avoid, 63

reducing, 110

Bonferroni method, 111

C

Calculators, allowing during test taking, 53–54

Case studies, 28, 30, 60, 167–180. See also Comparative studies; Content analyses; Synthesis studies

assessment of, 3, 5

authors’ backgrounds in, 32

comments on, 178–180

criteria for inclusion, 168–169

differential impact on different student populations, 172–175

in establishing curricular effectiveness, 8–9

findings, 171

interactions among curricula and common practices, beliefs, and understandings, 176–177

patterns in findings, 172

professional development, 177–178

school location, by study type, 33

the studies, 169

time management, 178

Case studies methodology, 60, 170–171

backing claims by evidence and argument, 170

defining the case, 170

“minimally methodologically adequate” studies, 97, 101–103, 115, 118–119, 136–137, 150, 155, 164

replicability of design, 170–171

revealing mechanisms at play during implementation of a curriculum, 171

triangulation of evidence from multiple sources, 60

Catalytic programs, 53

Chi-square tests, 128, 157

Claims, backing with evidence and argument, 170

Clarity of objectives, of content analyses, 77–78

Classroom observations, 114

Classroom teachers. See Teachers

CMP. See Connected Mathematics Project

Commercial publishers. See Publishers

Commercially published (non-NSF-funded) curricula, 15, 20–22, 97, 99–100, 105, 120, 142–143, 145, 149, 152–153, 156, 158–159, 162–164, 168, 198

for elementary school, 21, 29, 169

and the filters, studies of, 142

for high school, 22, 29, 169

major textbook publishers, 20–21

market studies not useful in evaluating curricular effectiveness, 28

for middle school, 21, 29, 169

secrecy with which market share data are held, 20

Community factors, 44

Comparative analyses, 7–8

appropriate statistical tests, 7

constraints as to generalizability of study, 7

disaggregated data, 7, 158, 200

in establishing curricular effectiveness, 7–8

extent of implementation fidelity, 7

outcome measures that can be disaggregated, 7

random assignment, 7

Comparative curricula, for content analyses, selection of, 74–75

Comparative research designs, 58–59

Comparative studies, 2–4, 28, 30, 57–58, 96–166

assessment of, 2–4

authors’ backgrounds in, 32

“between” comparisons, 157

comparability of samples, 3

conclusions from, 164–166

defining, 97

description of comparative studies database on critical decision points, 104–164

an evolving methodology, 96

implementation fidelity, 3

“minimally methodologically adequate,” 97, 101–103, 115, 118–119, 136–137, 150, 155, 164

multiple outcome measures, 3, 5

professional development activity, 3

results disaggregated by content strands or by performance by student subgroups, 3

school location, by study type, 33

“within” comparisons, 157

Comparative studies database, description on critical decision points, 104–164

Comparativeness, 132

Comprehensiveness

of content analyses, 78

of outcome measures, 9

Conceptions of mathematics, studies of, 102

Connected Mathematics Project (CMP), 19, 74, 78, 88–89, 99–100, 118–119, 121–122, 133, 172, 175, 177

Connoisseurial assessments, 197

Conservative test scores, 124

Contemporary Mathematics in Context (Core-Plus) (CPMP), 20, 80–81, 88, 100, 107, 123, 129, 175, 177–178

Content, compatible with all students’ abilities, 65

Content analyses, 6–7, 57

disciplinary perspectives, 6

in establishing curricular effectiveness, 6–7

learner-oriented perspectives, 7

resource-oriented perspectives, 7

teacher-oriented perspectives, 7

Content analysis, 28, 30, 41–43, 65–95

assessment of, 2, 5

authors’ backgrounds in, 32

as connoisseurial assessment, 197

dimensions of content analyses, 71–95

the discipline, the learner, and the teacher as dimensions of, 77

inclusion of content and/or pedagogy, 75–76

increasing sophistication of, 95

literature review, 68–71

needing definition, 24

participation in content analyses, 72–74

selection of standards or comparative curricula, 74–75

Content strands, 149–153

Control groups, using comparative curricula with, 166

“Controlled” experiments, 62

Core Content for Assessment, 71

Core-Plus. See Contemporary Mathematics in Context (CPMP)

“Corruptibility of indicators,” 51

CPMP. See Contemporary Mathematics in Context (Core-Plus)

Criteria for inclusion, of case studies, 168–169

Critical decision points in comparative studies, 104–164

alternative hypotheses on effectiveness, 137–139

analysis by test type, 148

choosing statistical tests, 127–132, 199

commercial materials studies and the filters, 142

content strand, 149–153

defining the unit of analysis, 112–114, 128–130, 147

equity analysis, 153–158

experimental or quasi-experimental design, 75, 104–108, 165, 199

filtering studies to increase rigor, 139–142, 199

impact of generalizability on probabilities, 146–147

impact of identification of curricular program on probabilities, 143–145

impact of treatment fidelity on probabilities, 143, 147

impact of units of analysis on probabilities, 140, 146, 165

using the wrong unit, 138

implementation components, 114–127

interactions among content and equity, by grade band, 159–164

NSF studies and the filters, 141–142

random assignment studies not using, 108–112

results and limitations to generalizability resulting from design constraints, 132–134, 140

results of filtering on evaluations of NSF-supported curricula, 142

summary of results by student achievement among program types, 134–137

Cultural factors, 44

Curricula

alignment with systemic factors, 125

ambiguity in use of term, 38

defining, 38–39

in educational practice, 1

guidelines for implementation, 4

Curricula under review, 19–22

commercially published non-NSF-funded curricula, 15, 20–22, 97, 99–100, 105, 120, 142–143, 145, 149, 152–153, 156, 158–159, 162–164, 168, 198

curricula programs supported by the NSF, 19–20, 97, 99–100, 105, 120, 142–144, 146, 149, 151–153, 156, 158–159, 162–164, 171, 180, 198, 202

“hybrid” between NSF-supported and commercially generated curricular programs, 22

Curricular approaches, 37

“college preparation approach,” 37

“modeling and applications approach,” 37

“skills-based, practice-oriented approach,” 37

Curricular effectiveness

alternative hypotheses on, 137–139

complexity and urgency of establishing, 10

defining, 36–37

difficulty determining, 3

efficacy, 37

establishing, 4–9

framework for establishing, 37–38

weaker findings about, 8

Curricular options

decisions that involve multiple groups of decision makers, 96

value of diverse, 9

D

Dahl, Terri, 46

Data gathering, 22–24

Decision makers, 1

expressed needs or preferences of, 43

providing information to, 18

Design principles, guidelines for, 4

Design replicability, 170–171

Dimension One of content analyses, 77–86

accuracy, 78–79

balance, 83–85

clarity of objectives, 77–78

comprehensiveness, 78

mathematical inquiry and mathematical reasoning, 79–82

organization, 82–83

Dimension Three of content analyses, 92–93

pedagogy, 92

professional development, 92

resources, 92–93

Dimension Two of content analyses, 86–91

assessment, 90–91

student engagement, 86–88

timeliness and support for diversity, 88–90

Disaggregating data

from comparative analyses, 7, 158, 200

in common content strands, 50, 147

by gender, 7, 158, 200

by performance levels, 7, 158, 200

by race/ethnicity, 7, 158, 200

by socioeconomic status, 7, 158, 200

Disciplinary perspectives, in content analyses, 6, 77

District curriculum specialists, as decision makers, 1

Diverse curricular options, value of, 9

Diversity, support for in content analyses, 88–90

E

Educator independence, 61

Effect size, in statistical tests, 127–132, 199

Effectiveness. See Curricular effectiveness

Elementary school curricula, 19, 21, 29, 169

Everyday Mathematics, 19, 83, 100, 107, 174, 176, 181

Harcourt Math, 21

Investigations in Number, Data and Space, 19

Math K-5, 21

Math Trailblazers, 19, 100

Eligibility, 111

EM. See Everyday Mathematics

Embedded assessment, 47

Enacted curriculum, 38

Engagement. See Student engagement

Equity analysis, of comparative studies, 153–158

Errors

mathematical, 79

Type I, 62

Establishing curricular effectiveness, 4–9

case studies, 8–9

comparative analyses, 7–8

content analyses, 6–7

scientific, 5, 14, 19

Ethnographic evaluation, 60

Evaluation of curricular effectiveness, 11, 50, 54–64, 190

accumulation of knowledge and the meta-analysis, 61–64

articulation of program theory, 54–56

controversy surrounding, 204–205

cost-efficiency, 11

credibility, 11

educator independence, 61

ethnographic perspectives, 60

including representative samples, 155

informativeness, 11

selection of research design and methodology, 57–60

time elements, 61

validity, 11

Evaluator bias, 138

Everyday Mathematics (EM), 19, 83, 100, 107, 174, 176, 181

example of synthesis studies, 181

Existing studies, assessment of, 2–3

Expectations, standardizing, 156–157

Experimental approaches, 63

alternative, 64

randomization to avoid bias, 63

Experimental vs. quasi-experimental design, 75, 104–108, 165, 199

“Extended students’ thinking,” 176

Exxon Education Foundation, 182

F

Federally funded curricula, 4

Filtering studies by critical decision points to increase rigor, 139–142, 199

results on evaluations of NSF-supported curricula, 142

Findings

in case studies, 171

inconclusive, 3

Fisher, R. A., 62

Formative assessment, 47

Framework for evaluating curricular effectiveness, 36–64

evaluation design, measurement, and evidence, 54–64

guidelines for future evaluations, 4

implementation components, 43–48

intervention strategies, 52–53

measures of student outcomes, 49–51

primary components, 40–51

program components, 40–43

secondary components, 52–54

systemic factors, 52

unanticipated influences, 53–54

G

Gagne-type hierarchical structure, 82

Game theory, 64

Gender, disaggregated data by, 7, 158, 200

Generalizability

associated with mastery of standard algorithms, 160

in comparative analyses, constraints on, 7

impact on probabilities, 146–147

limitations on, 132–134, 140, 200

results and limitations resulting from design constraints, 132–134, 140

of results to future circumstances, 56, 132

Generic controls, 58

Group work, 175

Guidelines for future evaluations, 4

curricular implementation, 4

outcomes of student learning over time, 4

program materials and design principles, 4

Gutstein, Eric, 24

H

Harcourt Brace, 23

Harcourt Math, 21

Hawthorne effect, 138

Heath Mathematics, 174

Hierarchical linear modeling, 128

Hierarchical structure, Gagne-type, 82

High school curricula, 20, 22, 29, 169

Contemporary Mathematics in Context (Core-Plus) (CPMP), 20, 80–81, 88, 100, 107, 123, 129, 175, 177–178

Integrated Mathematics, 22, 66, 87, 180

Interactive Mathematics Program, 20, 91, 100, 108

Larson Series, 22

MATH Connections, 20

Mathematics: Modeling Our World, 20, 86

Systemic Initiative for Montana Mathematics and Science, 20, 84, 177, 182

University of Chicago School Mathematics Project, 97–100, 105, 115, 120, 123–125, 130, 136–137, 142–143, 146–147, 164, 168, 198, 202

High school graduates, with adequate levels of mathematical knowledge, 13

High School Subject Tests—Geometry Form B, 124

Hirsch, Christian, 88

Home schooling, 43

Howe, Roger, 24, 44, 76

“Hybrid” curricula, between NSF-supported and commercially generated curricular programs, 22

I

IAAT. See Iowa Algebraic Aptitude Test

Identification of curricular program, impact on probabilities, 143–145

Illinois Goal Assessment Program, 181

IMP. See Interactive Mathematics Program

Implementation components, 43–48, 114–127

appropriate assignment of students, 44

assessment, 47–48

ensuring adequate professional capacity, 44–46

identification of a set of outcome measures and forms of disaggregation, 120–127, 140

implementation fidelity, 114–118, 139

instructional quality and type, 47

“opportunity to learn,” 47, 124, 194

parental influence and special interest groups, 48

professional development, 118–119, 139

teacher effects, 119–120, 140

Implementation fidelity, 3

in comparative studies, 7, 114–118, 139

Implementation of a curriculum

development of a community of practitioners for, 185–186

factors undercutting, 138

mechanisms at play during, 171

trustworthiness of, 8–9, 56

Indicators, “corruptibility of,” 51

Instructional quality and type, 47

“Integrated Mathematics Project,” 182

Intended curriculum, 38

Interactive Mathematics Program (IMP), 20, 91, 100, 108

International tests, 49

Third International Mathematics and Science Study, 49, 72, 92, 106, 108

Investigations in Number, Data and Space, 19

Iowa Algebraic Aptitude Test (IAAT), 132

Iowa Test of Basic Skills (ITBS), 49, 116, 158

Iowa Tests of Education Development, 107

ITBS. See Iowa Test of Basic Skills

ITED-Q. See Ability to Do Quantitative Thinking

J

Joint Committee on Standards for Educational Evaluation, 109, 193

K

Kentucky Middle Grades Mathematics Teacher Network, 71

L

Large-scale assessments, 49, 121

Large-scale simulations, 64

Larson Series, 22

Learner-oriented perspectives, in content analyses, 7, 77

Lehrer, Richard, 43

Literature of content analysis, 68–71

American Association for the Advancement of Science, 69–70, 74, 89

Core Content for Assessment, 71

Kentucky Middle Grades Mathematics Teacher Network, 71

Mathematically Correct website, 70–71

Middle School Mathematics Comparisons for Singapore Mathematics, Connected Mathematics Program, and Mathematics in Context, 71, 85

Robinson and Robinson, 70

U.S. Department of Education, 68–69

Longitudinal evaluation, 58, 106–107, 195

of individual student learning, 48, 50

M

“Major content strands,” defining, 149

“Major portion,” defining, 39

MANOVA. See Multiple Analysis of Variance

Market share data, held in secrecy, 20

Market studies, not useful in evaluating curricular effectiveness, 28

Matched comparison groups, 59

Math 65, 82

MATH Connections, 20

Math K-5, 21

Math Trailblazers, 19, 100

“Mathematical empowerment,” rhetoric of, 175

Mathematical inquiry and mathematical reasoning, in content analyses, 79–82

Mathematical Science Education Board, 14

Mathematical sciences

careers in, 163

intensive careers in technology fields, 13

Mathematical scientists, 192

Mathematically Correct website, 70–71

reviews on, 90

Mathematics: Modeling Our World (MMOW), 20, 86

Mathematics educators, 192

Mathematics in Context (MiC), 20, 74, 78, 89, 182

example of synthesis studies, 182–183

Mathematics teaching, in U.S., extreme limits of, 47

MathScape, 20

MathThematics (STEM), 20

McCallum, William, 24, 43, 73, 76

McGraw-Hill, 21

Measures of student outcomes, 49–51

international tests, 49

large-scale assessments, 49, 121

national standardized tests, 49

Meta-analysis, accumulation of knowledge and, 61–64

Methodology

call for increasing rigor, 8

in case studies, 170–171

standardizing, 156–157

MiC. See Mathematics in Context

Middle school curricula, 19–20, 21, 29, 169

An Incremental Development, 21

Applications and Connections, 21

Connected Mathematics Project, 19, 74, 78, 88–89, 99–100, 118–119, 121–122, 133, 172, 175, 177

Mathematics in Context, 20, 74, 78, 89, 182

MathScape, 20

MathThematics (STEM), 20

Middle School Mathematics Through Applications Project, 20

Middle School Mathematics Comparisons for Singapore Mathematics, Connected Mathematics Program, and Mathematics in Context, 71, 85

Middle School Mathematics Through Applications Project (MMAP), 20

Milgram, R. James, 24, 73, 76

“Minimally methodologically adequate” studies, 97, 101–103, 115, 118–119, 136–137, 150, 155, 164

MMAP. See Middle School Mathematics Through Applications Project

MMOW. See Mathematics: Modeling Our World

Multiple Analysis of Variance (MANOVA), 127–128, 157, 166

Multiple methodologies, 8, 37, 50, 191

Multiple outcome measures, 3, 5

Multiple regressions, 128

N

NAEP. See National Assessment of Educational Progress (Nation’s Report Card)

National Assessment of Educational Progress (Nation’s Report Card) (NAEP), 13, 49, 106–108

National Center for Education Statistics (NCES), 45, 202

National Commission on Teaching and America’s Future, 46

National Council of Teachers of Mathematics (NCTM), 8, 69, 181

Curriculum and Evaluation Standards for School Mathematics, 69

Principles and Standards for School Mathematics 2000, 71, 197

revised standards written by, 74

standards written by, 12, 52, 98

National decline, blaming curricula for, 188

National policy makers

as decision makers, 1

need for sound evaluation of curricular developments, 11

National Research Council (NRC), 1, 19, 112, 167, 186

National Science Foundation (NSF), 1, 3, 168, 187

Implementation Centers, 23

Request for Proposals, 55, 153, 160–161

National Science Foundation (NSF)-supported mathematics curriculum materials, 7–8, 12, 19–20, 66, 97, 99–100, 105, 120, 142–144, 146, 149, 151–153, 156, 158–159, 162–164, 171, 180, 198, 202

design specifications shared by, 7–8

for elementary school, 19, 29, 169

and the filters, 141–142

for high school, 20, 29, 169

for middle school, 19–20, 29, 169

results of filtering on evaluations of, 142

reviews available on, 203

written primarily by university faculty, 25, 28

National standardized tests, 49, 162, 177

AP exams, 49

Iowa Test of Basic Skills, 49

National Assessment of Educational Progress, 49

not sensitive to curricular approaches, 138, 148

SAT, 49

NCES. See National Center for Education Statistics

NCTM. See National Council of Teachers of Mathematics

No Child Left Behind Act of 2001, 14, 164, 196

NRC. See National Research Council

NSF. See National Science Foundation

O

Open-ended tasks, measures of, 50

“Opportunity to learn,” 47, 124, 194

Organization, of content analyses, 82–83

Orleans-Hanna Algebraic Prognosis Test, 124

Ortiz-Franco, Luis, 24

Outcome measures, 165–166, 259

careful attention to, 126

and forms of disaggregation, 120–127, 140

inadequate, 138

that can be disaggregated in comparative analyses, 7

Outcomes of student learning over time, 4

changes in, 138

P

Parents

as decision makers, 1

expressing their needs or preferences, 43

fears concerning change, 138

influence of, 48

Participation, in content analyses, 72–74

Patterns of results

in case studies, 172

inferences to be drawn from, 15

separating issues of method from, 7

Pearson, 21

Pedagogy, in content analyses, 92

Performance levels, disaggregated data by, 7, 158, 200

Performance monitoring, 43

of students at all levels of achievement, 51, 194

Pilot sites, 140

Preliminary Scholastic Assessment Test (PSAT), 162, 182

Prior knowledge, 139

measuring from school databases, 50

Problem-based mathematics, 175

Problem sets, 56n

Process evaluation, 43

Process variables, 44

Professional capacity, ensuring adequate, 44–46

Professional development

activity, 3

in case studies, 177–178

in comparative studies, 118–119, 139

in content analyses, 92

different types of, 46

Program monitoring, 43

Program theory, articulation of, 54–56

PSAT. See Preliminary Scholastic Assessment Test

Public discourse, 175

Publishers

need for sound evaluation of curricular developments, 11

pressures on, 52

Q

“Quasi-experiments,” 58–59

generic controls, 58

longitudinal studies, 58

matched comparison groups, 59

statistically equated control, 58

R

Race/ethnicity, disaggregated data by, 7, 158, 200

Random assignment, 108

to avoid bias, 63

in comparative analyses, 7

studies not using, 108–112

Randomized experiments, 62

Randomized field trials, 59

Recommendations, 9–10, 185–205

at district and local levels, 10

to federal and state agencies and publishers, 9–10, 201–205

framework and key definitions, 189–190

regarding quality of the evaluations, 188–189

scientifically establishing curricular effectiveness, 191–193

Recommended practices for evaluators, 6, 193–201

case studies, 200–201

comparative studies, 198–200

content analyses, 197–198

curricular validity of measures, 6, 9, 49, 122, 126, 195

documentation of implementation, 6

implementation components, 165, 194

multiple student outcome measures, 6

outcome measures, 194–197

representativeness, 6

Reed Elsevier, 21

Reform Practices, 116–117

“Reform school” evaluation, 111

Reliability, of treatment administration, 108

Remedial mathematics activities, 13

Replicability of design, 170–171

Reporting the data, varied methods of, 50

Research design and methodology, 57–60

case studies, 60

comparative designs, 58–59

comparative studies, 57–58

content analyses, 57

Resource-oriented perspectives, in content analyses, 7, 44, 92–93

Results, disaggregated by content strands or by performance by student subgroups, 3

Reviewer’s expertise, 73

Reviews available, on curricula programs supported by the NSF, 203

Robinson, Eric, 81–82

S

Sample populations, 166

comparability of, 3

size of, 140

SAT, 49

preparation courses for, 52

Saxon materials, 98–100, 112, 143, 147, 164

pedagogical approach, 56, 82, 87, 112, 125

Schifter, Deborah, 24, 76

School boards, as decision makers, 1

School location, by study type, 33–34

rural area, 34

suburban area, 34

wealthy area, 137

School scheduling, importance to administrators, 109

Scientific method, limitations of, 64

Scientific Research in Education, 57, 186–187

Scientific validity, 4, 190, 193

Second International Mathematics Study (SIMS), 127

SES. See Socioeconomic status

Silver Burdett, 112

SIMMS. See Systemic Initiative for Montana Mathematics and Science

SIMS. See Second International Mathematics Study

Single authors, 55

Socioeconomic status (SES), 112, 139, 141, 175

disaggregated data by, 7, 158, 200

importance to achievement, 110

Sophistication of content analysis, increasing, 95

Special interest groups, 48

Standardized tests, 49

Standards, for content analyses, selection of, 74–75

State accountability systems, 49

State adoption boards

as decision makers, 1

expressed needs or preferences of, 43

Statistical significance, 127–132, 199

Statistical tests in comparative studies, 7, 127–132, 199

Analysis of Covariance, 127–128, 157, 166

Analysis of Variance, 127, 166

Chi-square tests, 128, 157

hierarchical linear modeling, 128

Multiple Analysis of Variance, 127–128, 157, 166

multiple regression, 128

t-tests, 127, 157

Statistically equated control, 58

STEM. See MathThematics

Strong-implementing teachers, 116

Student achievement, summary of results among program types, 134–137

Student affect, studies of, 102

Student engagement, in content analyses, 86–88

Student-generated reasoning, 160

Student populations, differential impact on, 172–175

Students. See also Performance monitoring

appropriate assignment of, 44

top-performing, 138

variation in learning by, 48

Study characteristics, 25–30

for categories 1 through 4, 30–35

Study matrix, 24–25

Study types

case studies, 28, 30, 167–180

comparative studies, 2–4

content analysis, 28, 30, 41–43, 65–95

synthesis studies, 28, 30, 180–184

Subtest scores, 195

Supplemental curricular materials, 138

Synthesis studies, 28, 30, 180–184

assessment of, 3

authors’ backgrounds in, 32

examples of, 181–183

Systemic Initiative for Montana Mathematics and Science (SIMMS) Integrated Mathematics: A Modeling Approach Using Technology, 20, 84, 161, 177, 182

T

t-tests, 127, 157

Teacher data, by study type, 34–35

expressed needs or preferences of, 43

volunteer teachers, 35

Teacher effects, 119–120, 140

in comparative studies, 119–120, 140

strong- vs. weak-implementing teachers, 116

Teacher feedback, 114

Teacher-oriented perspectives, in content analyses, 7

Teacher preference

importance to administrators, 109

self-selecting, 138

Teachers

as decision makers, 1

a dimension of content analysis, 77

Teaching techniques, new, 138

Teams of authors, 55

TerraNova, 176

Test taking, allowing calculators during, 53–54

Test type, analysis by, 148

Textbook publishers, 20–21

McGraw-Hill, 21

Pearson, 21

Reed Elsevier, 21

Vivendi, 21

Third International Mathematics and Science Study (TIMSS), 49, 72, 92, 106, 108

Time elements, 61

Time management, 178

Timeliness, in content analyses, 88–90

TIMSS. See Third International Mathematics and Science Study

Traditional curricula, 106, 123

Traditional Practices, 116–117

Treatment fidelity, impact on probabilities, 143, 147

Trustworthiness, of implementation, 8–9, 56

Type I errors, 62

U

UCSMP. See University of Chicago School Mathematics Project

Units of analysis

defining, 112–114, 128–130, 147

impact on probabilities, 140, 146, 165

using the wrong unit, 138

University faculty, authoring curricular programs supported by the NSF, 25, 28

University of Chicago School Mathematics Project (UCSMP), 97–100, 105, 115, 120, 123–125, 130, 136–137, 142–143, 146–147, 164, 168, 198, 202

Integrated Mathematics, 22, 66, 87, 180

U.S. Department of Education, 68–69, 203

Panel on Exemplary Programs in Mathematics, 12

program reviews from, 83

V

Validity, curricular validity of measures, 6, 9, 49, 122, 126, 195

Vivendi, 21

Volunteer teachers, 35

W

Wang, Frank, 55

Weak-implementing teachers, 116

Wierenga, Timothy, 46

“Within” comparisons, 157

Workshops, defining effectiveness, 23–24

Wu, Hung Hsi, 24, 73, 76
