
The National Academies of Sciences, Engineering, and Medicine
500 Fifth St. N.W. | Washington, D.C. 20001

Copyright © National Academy of Sciences. All rights reserved.


1 Introduction

The National Assessment of Educational Progress (NAEP), also known as the nation's report card, has chronicled American students' academic achievement for over a quarter of a century. It has been a valued source of information about the academic performance of students in the United States, providing among the best-available trend data on the achievement of elementary, middle, and secondary students in key subject areas. The NAEP program has set an innovative agenda for conventional and performance-based testing and in doing so has become a leader in American achievement testing.

NAEP's prominence and the important need for stable and accurate measures of academic achievement have prompted a legislative mandate for ongoing evaluation of the program. This mandate, levied by Congress, calls for evaluation of NAEP and an analysis of the extent to which its results are reasonable, valid, and informative to the public (P.L. 103-382). The legislative charge includes evaluation of the national assessment, the state program, and the student performance standards reported by NAEP.

A three-year evaluation of NAEP was recently conducted by the National Research Council. Its Committee on Evaluation of National and State Assessments of Educational Progress recently issued a report entitled Grading the Nation's Report Card: Evaluating NAEP and Transforming the Assessment of Educational Progress (National Academy Press, 1999). The present volume is a companion to the main report and consists of a collection of papers prepared to support the committee's evaluative analyses and deliberations. To assist in its work, the committee commissioned research and syntheses on four key topics: NAEP's assessment development, NAEP's content validity, NAEP's design and

use, and the design of education indicator systems. This work helped to inform the committee's analysis, instigate debate, and push the committee's thinking on key topics and issues. Some of the papers in this volume are more directly relevant to and aligned with the committee's conclusions and recommendations than are others. In every case the papers represent the authors' views, not those of the committee.

The first topic addressed by this volume is the development of assessment materials by NAEP. In Grading the Nation's Report Card, the committee argued that NAEP's assessment development should be guided by a coherent vision of student learning and by the kinds of inferences and conclusions about student performance that are desired in reports of NAEP results. The committee concluded that multiple conditions should be met in assessment development for NAEP: (a) NAEP frameworks and assessments should reflect subject-matter knowledge; research, theory, and practice regarding what students should understand and how they learn; and more comprehensive goals for schooling; (b) assessment instruments and scoring criteria should be designed to capture important differences in the levels and types of student knowledge and understanding, through both large-scale surveys and multiple alternative assessment methods; and (c) NAEP reports should provide descriptions of student performance that enhance the interpretation and usefulness of summary scores. The first two authors, Patricia Ann Kenney and Jim Minstrell, discuss the development of frameworks, items, and reports for NAEP. In Chapter 2, "Families of Items in the NAEP Mathematics Assessment," Kenney presents ideas for and gives examples of families of items in mathematics.
She contends that families of items support fuller understanding and description of students' understanding in mathematics because students' responses can be examined across sets of related items rather than in isolation. In Chapter 3, "Student Thinking and Related Assessment: Creating a Facet-based Learning Environment," Minstrell suggests an approach to examining students' thinking in science and shows how the approach can be used to diagnose student difficulties and tailor instruction to address performance deficits. His paper discusses ways that research on learning and teaching can be used to inform instruction in science and speaks to the development of NAEP assessments.

The second topic area relates to the first and concerns the content validity of NAEP. In its final report the committee observed that many of the changes in NAEP instrumentation over the past 30 years reflect only minimally the changes that have occurred in certain critical areas of knowledge. The committee questioned whether NAEP's consensus-based frameworks and the assessments based on them lead to portrayals of student performance that deeply and accurately reflect student achievement. Stephen G. Sireci and colleagues and Jennifer R. Zieleskiewicz examine the dimensionality and content validity of NAEP assessments. In Chapter 4, "An External Evaluation of the 1996 Grade 8 NAEP Science Framework," authored

with Frederic Robin, Kevin Meara, H. Jane Rogers, and Hariharan Swaminathan, Sireci reports on the content validity of the NAEP science assessment to determine whether inferences derived from its scores can be linked to targeted content and skill domains. Sireci and his colleagues worked with science teachers to review items from the NAEP science assessment and solicit judgments about the knowledge and skills measured by sampled items. They compared teachers' judgments to developers' categorizations of the items. In Chapter 5, "Appraising the Dimensionality of the 1996 Grade 8 NAEP Science Assessment Data," Sireci, Rogers, Swaminathan, Meara, and Robin evaluate the structure of item response data gathered in the 1996 science assessment and compare this structure to that specified in the NAEP framework.

In Chapter 6, "Subject-Matter Experts' Perceptions of the Relevance of the NAEP Long-Term Trend Items in Science and Mathematics," Jennifer R. Zieleskiewicz asks whether NAEP's long-term trend items are up-to-date and relevant measures of student achievement in mathematics and science. She compares experts' ratings on the relevance of these items to relevance ratings for items created under the current frameworks. She presents data on the correspondence between long-term trend NAEP and main NAEP, national standards, and contemporary classroom practices in mathematics and science.

The third topic of this volume is NAEP's design and use. In its report the committee argues that the proliferation of NAEP's multiple independent data collections (national NAEP, state NAEP, and long-term trend NAEP) is confusing, burdensome, and inefficient and sometimes produces conflicting results. The committee recommended that NAEP reduce the number of independent large-scale data collections while maintaining trend lines, periodically updating frameworks, and providing accurate national and state-level estimates of academic achievement. Michael J.
Kolen and Sheila Barron make suggestions for streamlining NAEP's current designs and simplifying the secondary analysis of NAEP data. In Chapter 7, "Issues in Phasing Out Trend NAEP," Kolen considers ways that long-term trend NAEP can be phased out and replaced by the main NAEP assessments while still maintaining the long-term trend line. In Chapter 8, "Issues in Combining State NAEP and Main NAEP," Kolen examines options for combining the main and state NAEP designs. In both papers he focuses on sampling, operational, and measurement concerns and lays out the strengths and weaknesses of varied designs. In Chapter 9, "Difficulties Associated with Secondary Analysis of NAEP Data," Barron outlines difficulties that secondary analysts face in using NAEP data. She discusses the means by which NAEP's sponsors have attempted to address these problems and gives recommendations for improving the usability of NAEP data.

The last two chapters of the volume provide suggestions for the design of education indicator systems. In Grading the Nation's Report Card, the committee argues that the nation's educational progress should be portrayed by a broad array

of education indicators that include but go beyond NAEP's achievement results. The committee recommends that the U.S. Department of Education integrate and supplement the current collections of data on education inputs, practices, and outcomes to provide a more comprehensive picture of education in America. The committee commissioned the last two papers in this volume to help its members think about the development of an indicator system and about the collection of data on curriculum and instructional practice, academic standards, technology use, financial allocations, and other indicators of educational inputs, practices, and outcomes.

In Chapter 10, "Putting Surveys, Studies, and Datasets Together: Linking NCES Surveys to One Another and to Datasets from Other Sources," George Terhanian and Robert Boruch review research and experience on the integration of federal statistics to inform science and society. The authors take lessons from past data linkage efforts to make suggestions for the National Center for Education Statistics (NCES) and the U.S. Department of Education. They suggest policies for making statistical surveys and datasets linkable.

In Chapter 11, "Developing Classroom Process Data for the Improvement of Teaching," James W. Stigler and Michelle Perry argue for the collection of educational practice data. They contend that for achievement data to be informative such data must be accompanied by information about what is going on in classrooms and that it is important to relate changes in student learning outcomes to possible sources of achievement gains and decrements. The authors suggest the kinds of data to be collected as well as methods and costs for collecting them and ways to integrate the data into present NCES activities.

The committee deeply appreciates the time, energy, enthusiasm, and intellect dedicated to the evaluation by the authors.
Their papers stand as important contributions to assessment research and the NAEP program.