Performance depends on many factors, only some of which the school can control (Raudenbush and Willms, 1995). Because of differences in student composition, test scores by themselves say little about “school effects,” or the influence of attending a particular school on student performance. However, using statistical techniques to control for student background factors, Raudenbush and Willms (1995) have shown that it is possible to compute at least the upper and lower bounds of school effects. Separately, Sanders (Sanders and Horn, 1995) has developed a statistical method to calculate the effect on student performance of individual teachers within a school. Sanders's method has been used in several districts to determine the “value added” that teachers provide.
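The logic of controlling for student background can be illustrated with a minimal sketch: regress student scores on a background covariate across all schools, then treat each school's mean residual as its adjusted effect. This is only a schematic illustration of covariate adjustment, not the actual Raudenbush–Willms or Sanders models; all data, variable names, and the single covariate are hypothetical.

```python
# Minimal sketch of covariate adjustment for school effects.
# Hypothetical data; real value-added models are far more elaborate.

def ols_fit(x, y):
    """Least-squares fit y = a + b*x, returning (a, b)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    return my - b * mx, b

def adjusted_school_effects(records):
    """records: list of (school, prior_score, test_score).
    Returns each school's mean residual after controlling for prior score."""
    a, b = ols_fit([r[1] for r in records], [r[2] for r in records])
    resid = {}
    for school, prior, score in records:
        resid.setdefault(school, []).append(score - (a + b * prior))
    return {sch: sum(v) / len(v) for sch, v in resid.items()}

# Hypothetical students: school B enrolls higher-achieving students to begin with.
data = [("A", 40, 55), ("A", 50, 63), ("A", 60, 74),
        ("B", 60, 70), ("B", 70, 78), ("B", 80, 88)]
effects = adjusted_school_effects(data)
# School A comes out ahead after adjustment, despite lower raw scores.
```

The point of the sketch is the one the text makes: a school with lower raw scores (A) can show a larger adjusted effect than a school with a more advantaged intake (B).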
Instructional Sensitivity. Tests vary in the extent to which they respond to and inform instructional practice. Many tests, particularly those designed to test a range of standards, are relatively insensitive to instruction; changing teaching practices to reflect standards may not result in higher test scores on such assessments. But even tests that do capture the results of instructional improvement may not be as informative as they could be; the ways the tests are scaled and the results reported often reveal little about why students succeeded or failed.
Determining the instructional sensitivity of assessments requires careful study of classroom practices and their relations to student performance. To carry out such studies, researchers need data on the type of instruction students receive. By showing whether instructional practices related to the standards produce gains in assessment performance while other practices do not, researchers can demonstrate whether an assessment is instructionally sensitive (Cohen and Hill, 1998; Yoon and Resnick, 1998).
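The kind of analysis described above can be sketched in miniature: compare assessment gains in classrooms whose instruction was aligned with the standards against gains in classrooms whose instruction was not. This two-group comparison is a deliberately simplified stand-in for the studies cited (Cohen and Hill, 1998; Yoon and Resnick, 1998), and all classroom data below are hypothetical.

```python
# Hedged sketch of an instructional-sensitivity check: do score gains
# differ between classrooms using standards-aligned practices and others?
# Illustrative only; real studies use richer instructional measures.

def mean(xs):
    return sum(xs) / len(xs)

def sensitivity_gap(classrooms):
    """classrooms: list of (used_aligned_practice: bool, score_gain: float).
    Returns the mean gain difference (aligned minus not aligned)."""
    aligned = [g for used, g in classrooms if used]
    other = [g for used, g in classrooms if not used]
    return mean(aligned) - mean(other)

# Hypothetical classroom-level gains on the same assessment.
obs = [(True, 8.0), (True, 6.5), (True, 7.0),
       (False, 2.0), (False, 3.5), (False, 2.5)]
gap = sensitivity_gap(obs)
```

A clearly positive gap is evidence that the assessment responds to standards-aligned instruction; a gap near zero would suggest the test is instructionally insensitive.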
Multiple Purposes. Tests should be constructed in different ways, depending on the purpose for which they are used. A test intended to inform the public and policy makers about the condition of education is more likely than other types of tests to include a broad range of items designed to provide information about students' mastery of, say, 8th grade mathematics. These tests are typically administered at most once a year, and often the results come back too late for teachers to use them to make adjustments in their instructional programs.
A test intended for instructional guidance, in contrast, is more likely than others to include items that tap a particular topic—say, algebra—in greater depth, so that teachers have an idea of students' specific knowledge and skills, and possible misconceptions. These tests, usually administered by classroom teachers, are given relatively frequently.
The technical quality of a test should be appropriate for its intended use. For measures used for accountability, system monitoring, and program evaluation, the Standards for Educational and Psychological Testing (American Educa-