1 Introduction

The Workforce Investment Act (WIA), enacted by Congress in 1998, requires states to establish a comprehensive accountability system for adult education programs. The WIA mandates that states gather data on several core measures, including the educational gain of adult learners. States and local programs have typically used standardized tests to monitor the progress of adult learners. Yet many states and local programs are also interested in more authentic approaches, such as using performance assessments to measure students' educational gain.

At the request of the U.S. Department of Education (DOEd) and the National Institute for Literacy, the National Research Council (NRC) established the Committee for the Workshop on Alternatives for Assessing Adult Education and Literacy Programs to consider the measurement issues of using performance assessment for accountability purposes. During the course of the study, the committee operated under the following charge:

    The Board on Testing and Assessment (BOTA) of the National Academies proposes to convene a workshop on developing alternative assessments for measuring and reporting learning gains in adult basic education and literacy programs. At the workshop, the characteristics of psychometrically strong performance assessments as outlined in the Standards for Educational and Psychological Testing (1999) will be examined. Factors that affect the usefulness of performance assessments will be analyzed, and issues associated with identifying and managing these factors will be explored. The information gathered, discussed, and summarized at this workshop will aid states in their data collection for the National Reporting System (NRS) that assesses the
impact of adult education instruction, and in their development of performance-based accountability systems.

To respond to this charge, the committee convened a workshop on December 12 and 13, 2001. The report that follows is a summary of the workshop. (The agenda for the workshop appears in Appendix A.)

WORKSHOP ON PERFORMANCE ASSESSMENTS FOR ADULT EDUCATION

In the United States, the nomenclature of adult education includes adult literacy, adult secondary education, and English for speakers of other languages (ESOL) services provided to undereducated and limited English proficient adults. Those receiving adult education services have diverse reasons for seeking additional education. With the passage of the WIA, the assessment of adult education students became mandatory, regardless of their reasons for seeking services. The law does allow states and local programs flexibility in selecting the most appropriate assessment for each student.

The purpose of the NRC's workshop was to explore issues related to efforts to measure learning gains in adult basic education programs, with a focus on performance-based assessments. The two-day workshop consisted of seven panels and used two kinds of formats. In one format, panelists made presentations to the committee and workshop sponsors on information relevant to a particular topic; there were five of these panels. At the end of each day, there was also a panel of discussants, selected for their expertise in either measurement or adult education, who were asked to respond to the workshop presentations. The commentary and feedback of the discussants are found throughout the report.

The opening panel was designed to provide a broad policy context for the two days of discussions.
An overview of assessment in the context of adult education and literacy systems was presented by John Comings, senior research associate, lecturer on education, and director of the National Center for the Study of Adult Learning and Literacy at the Harvard Graduate School of Education. Mike Dean of DOEd's Office of Vocational and Adult Education presented an overview of the WIA and the NRS. Last, Sondra Stein, senior research associate at the National Institute for Literacy and national director of Equipped for the Future (EFF), discussed EFF, a standards-based approach to defining and measuring results in the adult
education and literacy system. The EFF standards for adult literacy and lifelong learning are presented later in the report (see Chapter 6 and Figure 6-1 for more information about EFF).

The topic of the second panel was developing performance assessments. This panel included Pamela Moss, associate professor in the School of Education at the University of Michigan, and Stephen Dunbar, professor of educational measurement and statistics at the University of Iowa. Moss was a member of the joint committee that, in 1999, revised the Standards for Educational and Psychological Testing of the American Educational Research Association (AERA), the American Psychological Association (APA), and the National Council on Measurement in Education (NCME). She presented a brief overview of the purpose and development process for the standards, highlighted their structure and organization, and discussed how they should be used in developing assessments. She also provided an example of using the standards to guide validity research on a K-12 test. Dunbar's presentation identified important psychometric factors to consider in developing performance assessment tasks, including administration, scoring, and security issues as well as technical issues such as maintaining reliability and developing comparable tasks.

The topic of the third panel was lessons learned from other contexts. The speakers, who represented a wide variety of disciplines and settings, shared their experiences in developing and implementing performance assessments in their fields. Their comments covered staff training, quality control, provisions for technical assistance, and cost considerations.
This panel included Judy Alamprese, principal associate at Abt Associates; Eduardo Cascallar, principal research scientist at the American Institutes for Research; Myrna Manly, specialist in numeracy assessment at El Camino College (retired); Leah Bricker, senior program associate with Project 2061 at the American Association for the Advancement of Science; and Marcia Invernizzi and Joanne Meier, professors at the Curry School of Education at the University of Virginia.

A subgroup of this third panel focused on lessons learned from K-12 state assessments. Several states have implemented performance-based assessment systems at the K-12 level for accountability purposes. Representatives from two states presented their states' accountability models. Although neither model is directly aligned with the requirements of the WIA, the committee believed that hearing about these K-12 experiences would be a fruitful exercise. Mark Moody, assistant superintendent for planning, results, and information management at the Maryland Department of Education, discussed the Maryland School Performance Assessment Program (MSPAP). The MSPAP model focuses on using performance-based assessments to hold schools accountable: students' scores are reported only at the school level, and no student-level scores are reported. Another kind of performance assessment used at the K-12 level consists of constructed-response questions, in which students respond to a written prompt or short-answer questions. Kit Viator, administrator for student testing at the Massachusetts Department of Education, discussed the Massachusetts Comprehensive Assessment System (MCAS), which includes both selected-response and constructed-response questions.

The panel of discussants for the first day responded to measurement issues related to developing performance assessments. Panel members, all experts in adult education or assessment, were Cheryl Keenan, director of adult education at the Pennsylvania Department of Education; Jim Impara, director of the Buros Institute of Assessment Consultation and Outreach; and Richard Hill, founder and executive director of the National Center for the Improvement of Educational Assessment.

The fifth panel brought together several measurement experts to provide guidance on applying the Standards to the development and implementation of performance assessments in adult education. The panel members offered suggestions on possible approaches or models for performance assessments, discussed comparability issues inherent in the NRS, and outlined the steps for developing performance assessments. This panel included Mark Reckase, professor of measurement and quantitative methods at Michigan State University; Henry Braun, distinguished presidential appointee and managing director of literacy services at the Educational Testing Service (ETS); and Mari Pearlman, vice president of the Division of Teaching and Learning at ETS.
In the sixth panel, the implications of using performance assessments with the NRS were considered from a variety of perspectives, including those of state directors, local program directors, and test publishers. In the final presentation in this panel, a state director considered the level of readiness of adult education systems for high-stakes assessment. The panel members were Fran Tracy-Mumford, director of adult education for the state of Delaware; Donna Miller-Parker, director at Shoreline Community College in Seattle, Washington; Wendy Yen, vice president of research at K-12 Works, ETS; and Bob Bickerton, director of adult education for Massachusetts.

The final panel of discussants synthesized and responded to the measurement issues raised over both days of the workshop. This panel was composed of well-known statisticians with extensive knowledge about assessment: Ronald Hambleton, distinguished professor at the University of Massachusetts at Amherst; David Thissen, professor of psychology at the University of North Carolina at Chapel Hill; Barbara Plake, W.C. Meierhenry distinguished professor of educational psychology at the University of Nebraska-Lincoln and director of the Oscar and Luella Buros Center for Testing and the Buros Institute of Mental Measurements; and Stephen Sireci, associate professor in research and evaluation methods at the University of Massachusetts at Amherst.

The workshop was structured to permit considerable discussion by presenters and participants. Following each speaker's presentation, substantial time was devoted to open discussion. In preparation for the workshop, speakers were given sets of questions to address during their presentations, and they were asked to supply written copies of their presentations in advance. Members of the workshop steering committee served as moderators for each panel.

After the workshop, the steering committee decided to commission several papers that either expand on or complement presentations heard at the workshop and that would be useful for the sponsors and adult educators. The first paper is a practitioner's guide to developing performance assessments by Mari Pearlman of ETS; it will expand on her workshop presentation, "Performance Assessments for Adult Education: How to Design Performance Tasks." Lawrence Frase of George Mason University is also writing a paper on how advances in technology could address some of the assessment challenges facing the adult education community. Frase's paper will discuss how technology can facilitate computer-based testing, such as adaptive and multi-level testing; new item formats and automated scoring procedures; and professional development and training of teachers.
Finally, the committee and sponsors were interested in learning how performance assessments have been implemented in adult education systems internationally. The third paper will discuss how other countries have developed and used alternative assessments. Committee members thought that information on the operations of international systems with performance-based assessments of numeracy, literacy, and/or language (English Language Learning/ESOL/ESL) would be useful to the sponsors. These papers can be obtained by contacting DOEd's Office of Vocational and Adult Education or the National Institute for Literacy.
ORGANIZATION OF THE REPORT

The purpose of this report is to capture the discussions and major points made during the workshop in order to assist states and local adult education programs in their development and implementation of performance assessments. Speakers alluded to a number of measurement concepts throughout the workshop. To assist readers who are not fully acquainted with measurement issues and may desire additional information about various topics, references to measurement texts and relevant journal articles appear throughout the report. It is important to note that, as a workshop summary, this report is intended only to highlight the key issues identified by stakeholders and participants who attended the workshop; it does not attempt to establish consensus on findings and recommendations.

As described above, the WIA of 1998 mandated that states develop an accountability system for adult education programs and report results on an annual basis. The DOEd established the NRS for states to use to gather and report data from their local programs. With the passage of the WIA, the stakes have risen for state and local adult education programs. The field of adult education is in a period of transition as states establish accountability systems that adhere to the federal requirements of the WIA.

Chapter 2 of this report summarizes the specific measurement and reporting requirements of the WIA and the NRS and delineates the local and state responsibilities for implementing the NRS. The chapter also provides an overview of the population, structure, and resources of states and local programs to respond to these mandates. Chapter 3 discusses the purposes of assessment and test design. Chapter 4 examines the AERA/APA/NCME Standards as they relate to developing and implementing a performance assessment; psychometric factors such as reliability, validity, generalizability, and fairness must be considered in developing quality assessments.
Chapter 5 addresses the process of developing performance assessments for the NRS. Chapter 6 highlights the challenges and constraints associated with implementing the NRS, with a particular focus on performance assessments. Finally, Chapter 7 explores some options and strategies that could be useful for states and local programs in resolving the issues associated with implementing performance assessments to provide data required by the NRS.