1 Introduction

Educational assessments are a major feature of the educational landscape in the United States. They serve many purposes: policy makers and administrators use them to monitor both the progress of schools and systems and the relative success of educational policies, for example, and also to answer questions about individual students for placement and other purposes. These purposes, for which large-scale, standardized assessments are usually used, generate the most public discussion, but assessments are also used by teachers, in both formal and informal ways on a daily basis, to monitor students' learning and to identify specific areas in which further work is needed. Classroom assessments are an important tool for providing feedback to students so they can adjust their learning; they also help teachers to identify student misconceptions and to modify their instruction accordingly.1 Whatever form it takes, classroom assessment is a critical component of effective instruction.

Although both kinds of assessments have a very important role to play, they are not often accorded equal weight by policy makers or in public discussion. Large-scale assessments have become increasingly politicized, at both the local and national levels. Their results have been used in political campaigns and other venues to make points they were not designed to support. Large-scale test results are also widely used to make both formal and informal evaluations of local

1In discussing classroom assessment the committee is thinking of the assessments that are part of ongoing classroom life, such as written or oral weekly quizzes, end-of-semester examinations, portfolios, and comments and grades on homework assignments (NRC, 2001b).
ASSESSMENT IN SUPPORT OF INSTRUCTION AND LEARNING

schools (and thus can influence property values). As states work to comply with the testing provisions of the No Child Left Behind Act, the nation is likely to see both a greater quantity of large-scale tests and heightened attention to their results. For all these reasons, and perhaps simply because they are so much more visible, large-scale tests are far more frequently on the public agenda than their classroom counterparts.

Moreover, the two kinds of tests are seldom aligned in such a way that they can support one another. Indeed, classroom teachers do not always recognize the potential of large-scale assessments, because the assessments their students are given are not directly relevant to their instructional goals, and also in many cases because teachers have not had sufficient training in assessment issues to understand fully how best to use such tests and the data they generate. The feedback from large-scale assessments is often too general for teachers to use in making future curricular and instructional decisions, and it often arrives so long after the assessment that it cannot be applied to current students. At the same time, large-scale assessment programs rarely seem to tap into the insights about students' learning that classroom teachers are in a unique position to offer through their own assessments. Though classroom assessments are often focused on what are known as "formative" purposes (providing immediate feedback that can shed light on student learning), they can also provide "summative" evidence about students that can be used, for example, to classify or place them. In a number of contexts, as will be discussed below, educators have found that classroom assessments, if properly designed, can be used for the broader accountability purposes that are more typical of large-scale assessments.
At present, however, there is an apparently large gulf between the two types of assessments as they are used in the United States; close inspection of this gap reveals an array of interrelated issues. The gap between classroom and large-scale assessments has caught the attention of several National Research Council (NRC) committees, and one result has been a clear consensus that instruction and learning are best supported in educational systems when large-scale and classroom assessments are aligned with each other and with standards, curriculum, instruction, and professional development.2 A three-year study of the implications of new information about learning and cognition for educational assessments resulted in a report, Knowing What Students Know: The Science and Design of Educational Assessment (NRC, 2001c), which lays out several features that would characterize an educational system that achieves this seamless integration.

2By an aligned system, the committee means one in which each of the key elements has been designed both with reference to one another and with reference to overarching system goals. In such a system, the elements work together rather than, as can easily happen in a large, complex enterprise such as a public school system, at cross purposes.
As the committee that wrote that report recognized, the ideal of seamless alignment has proved difficult to achieve in practice. To better understand how the ideal of alignment is conceptualized in practice, three NRC boards (the Mathematical Sciences Education Board, the Committee on Science Education K-12, and the Board on Testing and Assessment) formed a joint steering committee, the Committee on Assessment in Support of Instruction and Learning, to plan a workshop that would bring together leading experts in measurement and assessment with international, state, and local program directors to illustrate some ways in which classroom and large-scale assessments can work together, conceptually and operationally, to better support student learning.

The goal of the workshop was to highlight current efforts to align classroom and large-scale assessments with each other and with instruction, standards, curriculum, and professional development. To accomplish this, the workshop featured discussions of the relative successes and challenges of science and mathematics assessment systems that are attempting to bridge the gap between classroom and large-scale assessments; it also included discussions of research-based visions of effective assessment programs that have not yet been put into practice on a large scale. Featured programs would be selected based on their potential to provide insight into the ways in which more coherent assessments could be designed and implemented. Selected workshop speakers would also explore practices in other countries, alternatives to standardized tests as sources of data for accountability purposes, and opportunities and advances in our understanding of cognition and learning.
The intent of the workshop would not be to evaluate the programs presented, but rather to gain a better understanding of the ways in which the ideals of a coherent assessment system, as described in the research literature and synthesized in a number of NRC reports (1993, 1998, 2000, 2001a, 2001b, 2001c, and 2002), might be implemented in practice.

Planning for the workshop was shaped by a set of specific criteria, discussed below, that might characterize an ideal system. These criteria were distilled from the reports listed above as well as from other relevant research, for example, National Science Education Standards (NRC, 1996); Assessment Standards for School Mathematics (National Council of Teachers of Mathematics, 1995); Configuring Curricula for Instructionally Supportive Assessment (Popham, in press); and Building Tests to Support Instruction and Accountability (Commission on Instructionally Supportive Assessment, 2001).

At the workshop, held January 23-24, 2003, presentations on programs developed in seven states as well as other examples, including some from abroad, stimulated lively discussion. Questions were raised not only about how ideal goals translate into practice, but also about the different kinds of obstacles to success in these efforts. (See Appendix A for the workshop agenda, Appendix B for a list of the workshop participants, and Appendix C for contact information.)
While the committee made no effort to systematically evaluate the success of the programs presented, it did learn much of interest about how those involved see the challenges before them and about some of the strategies they have devised for overcoming them. The purpose of this report is to provide an account of the discussions and to use some of the examples presented as a way of putting flesh on the bones of the concepts that were introduced in Knowing What Students Know. The committee recognizes that no existing program has yet been able to meet all of the ambitious goals it identified. In this report the committee does not intend to signal endorsement of the examples discussed; rather, the intention is to illustrate what different ways of attempting to meet the goals suggested by the criteria can look like.