
9
Estimating Population Parameters

This chapter reviews the work generally referred to as education statistics. Unlike the basic and applied research reviewed in earlier chapters, this work places less emphasis on the testing, elucidation, or elaboration of scientific or practical theories. Theories in education statistics are often implicit and assume broad agreement among the community of researchers and consumers, such that data collection and reporting of population parameter estimates are publicly defensible. Collection of education statistics is also governed by relatively established professional norms about sound sampling and measurement. In this sense, education statistics are bound by a conservatism about the object and means of measurement similar to that found in program evaluation research (see Chapter 6). Since both education statistics and program evaluation are exposed to much more public scrutiny and accountability than basic or program development research, they tend toward a broadly shared set of common-denominator variables, such as student dropout rates, grade promotion, and achievement test scores.

State of Knowledge

Several agencies within the Department of Education collect information on English-language learners. They include the National Center for Education Statistics (NCES), the Office of Bilingual Education and Minority Languages Affairs (OBEMLA), the Office for Civil Rights (OCR), and the Office of the Under Secretary (OUS) through its Planning and Evaluation Service. In addition, the Bureau of the Census collects decennial and annual data about the English proficiency level of individuals through self-report. The Department of Education



The National Academies | 500 Fifth St. N.W. | Washington, D.C. 20001
Copyright © National Academy of Sciences. All rights reserved.




collects counts of students with limited English proficiency status at the state and district levels, but these figures are not generally considered accurate, since varying definitions and methods of aggregation are used. Because the data collected by OBEMLA, OCR, and OUS are generally counts of students for purposes of project accountability, technical assistance, or compliance, this chapter focuses primarily on the data collection efforts of NCES and the Bureau of the Census, although it also reviews some relevant efforts by other offices.

One of the major charges to NCES is to report on the condition of education in the United States. To address that charge, NCES conducts many sample surveys and some censuses. Although data collected from samples are considered statistics, the agency aims to generalize to the nation, that is, to estimate the true population parameters in order to report on the condition of education nationwide. This is possible through the use of well-designed, large samples and the application of sample weights that allow extrapolation to the entire population (for example, to all K-12 English-language learners in the United States).

Education statistics address a major issue facing national policymakers: the need for reliable and valid information on language-minority and limited-English-proficient students to assess the effectiveness of policies and services at the broadest national level. Yet there are currently various obstacles to the collection and reporting of good data to address this need, resulting in inadequate data at the local, state, and national levels. Obstacles frequently mentioned include the following:

• Inconsistent definitions and decision rules across surveys, which make it difficult to compare results or aggregate data
• Lack of definition of and agreement on common indicators for measuring student status and outcomes for English-language learners
• Lack of data on and inability to monitor English-language learner achievement and other outcomes
• Lack of consensus on how (and by whom) data on the education of English-language learners should be collected and/or funded

This review of the state of knowledge in education statistics begins with a description of efforts to improve NCES data collection, with particular focus on issues of interest to the education of English-language learners. This is followed by three sections reviewing the major data collection efforts: the first summarizes the various national and state surveys and data collection efforts, the second examines limitations of current surveys and population estimate studies, and the third reviews assessment issues.
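The sample-weight extrapolation mentioned above can be sketched in a few lines. This is a generic illustration with invented records; the field names (`weight`, `is_ell`, `score`) and all numbers are hypothetical and not drawn from any actual NCES file.

```python
# Sketch of design-weighted estimation: each sampled record carries a weight
# equal to the number of population members it represents, so weighted sums
# estimate population totals and weighted ratios estimate population means.
# Records, weights, and scores below are invented for illustration.

def weighted_total(records):
    """Estimate a population count: sum of weights of in-scope records."""
    return sum(r["weight"] for r in records if r["is_ell"])

def weighted_mean(records, key):
    """Estimate a population mean of `key` among in-scope records."""
    num = sum(r["weight"] * r[key] for r in records if r["is_ell"])
    den = sum(r["weight"] for r in records if r["is_ell"])
    return num / den

# A toy sample of 4 students, each standing in for many peers.
sample = [
    {"weight": 500.0, "is_ell": True,  "score": 240},
    {"weight": 500.0, "is_ell": False, "score": 260},
    {"weight": 250.0, "is_ell": True,  "score": 230},
    {"weight": 250.0, "is_ell": False, "score": 270},
]

print(weighted_total(sample))          # estimated ELL population count: 750.0
print(weighted_mean(sample, "score"))  # estimated mean score among ELLs
```

Real survey estimation also requires design-based variance calculations, but the core extrapolation step is just this weighted aggregation.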

Efforts to Improve NCES Data Collection

General Improvement Efforts

NCES was created in 1965 and became part of the Office of Educational Research and Improvement (OERI) in 1980 as the new Department of Education was created. The education reform movement of the early 1980s brought education statistics into the limelight. In 1985, OERI asked the National Research Council (NRC) to evaluate NCES' overall program and data quality. A committee was established, and in 1986 the NRC published Creating a Center for Education Statistics: A Time for Action. The major problems identified were poor data quality, lack of credibility, lack of understanding of user needs, lack of statistical standards, lack of timeliness, lack of resources, and lack of a publication policy. The NRC report made several recommendations in these areas. In the following year, NCES published its first Statistical Standards (1987) in response to the NRC report. Over the ensuing years, NCES made many positive strides in responding to the criticisms of the NRC report.

At the same time, and on a parallel track, critical events occurred in the area of indicator systems and education models. In 1987, the Rand Corporation published Shavelson et al.'s Indicator Systems for Monitoring Math and Science Education. Funded by the National Science Foundation (NSF), the report identified a set of critical indicators and various indicator systems for monitoring math and science education at a national level. It also recommended the use of an input-process-output model. This report was an important milestone in the use of indicators for monitoring education systems.

In 1990, the National Forum on Education Statistics published A Guide to Improving the National Education Data System. This report made numerous recommendations for improving the collection of education data. The four domains addressed were background/demographics, educational resources, school processes, and student outcomes. That same year, the historic meeting of President George Bush and the nation's governors at the Education Summit in Charlottesville, Virginia, established the National Education Goals Panel to assess and report on six national education goals, giving further momentum to the interest in quality indicators.

In 1991, NCES published Education Counts: An Indicator System to Monitor the Nation's Educational Health (NCES, 1991a), written by a Special Panel on Education Indicators. The report focuses strongly on equity issues and proposes five issue areas: learning outcomes, quality of educational institutions, readiness for school, societal support for learning, and education and economic productivity. Also in 1991, the National Education Goals Panel published Potential Strategies for Long Term Indicator Development: Reports by Six Technical Planning Subgroups. Among the recommendations made are the development of an early childhood assessment system to measure five dimensions of readiness

(including language use) and development of a national/state/local student record system. Although these developments do not in most cases deal directly with English-language learners, they have some significance for current efforts to improve the capacity of the education data system to address the needs of these students.

Improvement Efforts Related to English-Language Learners

Several related developments in the area of federal monitoring of English-language learner status also occurred at around this time. In 1991, as part of a project funded by NCES and run by the Council of Chief State School Officers (CCSSO), the Education Data Improvement Project, NCES published A Study of the Availability and Overlap of Education Data in Federal Collections (NCES, 1991b). This report focused on the amount of overlap in various education data collections and on differences in definitions of English-language learner status and similar concepts.

In 1994, the CCSSO published a report under contract to NCES entitled The Feasibility of Collecting Comparable National Statistics About Students with Limited English Proficiency (Council of Chief State School Officers, 1994a). As part of a larger Educational Data System Implementation Program, the CCSSO conducted an LEP Student Count Study. The goal of this study was to determine the feasibility of making accurate counts of English-language learners from NCES surveys by using standardized definitions and procedures, and of obtaining consistent data on these students across several agencies within the Department of Education. The report is a descriptive study of current practices that provides an overall picture of what data are available. In addition, it gives recommendations on which statistics should be collected and on who should collect them.

In 1995, NCES and the National Forum published Improving the Capacity of the National Education Data System to Address Equity Issues (National Center for Education Statistics, 1995a). This report examines how most data collection efforts at the federal level are organized from the point of view of equal access to resources and processes for at-risk children. The Office of Compensatory Education (Title I) collects state-level English-language learner counts. OCR collects school-level data on the status of these students by race and sex. The Migrant Education Office collects state-level data on these students by grade, and the Bureau of Indian Affairs (BIA) collects school-level data. Although the OCR data are seen as the most comprehensive, sample coverage is not representative of all districts and schools in the United States.

With regard to coverage of English-language learner issues in NCES' cross-sectional and longitudinal surveys, the Schools and Staffing Survey collects school-level data on these students, and the National Assessment of Educational Progress (NAEP) collects student-level data. The National Educational Longitudinal Study (NELS) collects data on English-language learners, their parents, and their teachers, with significant sections in

the questionnaires addressing home language use and English-language ability.1 Prospects reports on these students' English proficiency status (self-, teacher, and parent reports). The NCES/Forum report notes limitations in the following areas: inconsistent definitions across surveys, resulting in difficulties in comparing results; incomplete coverage of background processes, resources, and outcome issues, as well as English-language learner issues; and gaps in data at the national level that result in undercoverage of important student populations (e.g., English-language learners). The report points out that it is difficult to improve coverage of English-language learners because information about this population is not available on the Common Core of Data (CCD) sampling frame. Its main recommendations relevant to English-language learners are creating a student-based record system with common definitions and reporting metrics; linking current and future surveys; using the CCD as the basic NCES sampling frame; developing new measures of indicators and surveys for research on equity; and reporting state- and national-level data broken out by various school- and student-level characteristics, including English-language learner status.

The most visible and forceful impetus for the Department of Education to consider fully the issue of English-language learner inclusion in education data collection efforts came through legislation. The Improving America's Schools Act of 1994 (P.L. 103-382) states in Title VII (Section 7404(b)):

The Secretary shall, to the extent feasible, ensure that all data collected by the Department shall include the collection and reporting of data on limited English proficient students.

The passage of the Perkins Act (vocational-technical education) (P.L. 98-524) also gave some impetus to NCES efforts to monitor the educational progress of English-language learners and bilingual students. The Perkins Act requires that the Department of Education use "appropriate methodologies" in testing students with disabilities and English-language learners. It requires (Section 421(c)(3)) that the Secretary of Education ensure that

appropriate methodologies are used in assessments of students with limited English proficiency, and students with handicaps, to ensure valid and reliable comparisons with the general student population and across program areas.

Thus, NCES feels an obligation to provide information that is comparable and generalizable so that it is representative of the U.S. population. If data are not representative and comparisons cannot be made between English-language learners and the general student population, NCES is obliged to acknowledge this to its users.

1 The National Educational Longitudinal Study began with a cohort of students who were eighth graders in 1988. There have been follow-ups in 1990, 1992, and 1994.

Inclusion of English-Language Learners in the National Assessment of Educational Progress

Because states have different definitions and criteria for limited English proficiency, and the determination of exclusion has been left to school officials, there are large variations in the proportion of English-language learners included in NAEP. State-by-state comparisons of the inclusion of these students in the Trial State Assessment show large variations (McLaughlin et al., 1995). However, since the Perkins Act requires the Department of Education to ensure that appropriate methodologies are used in assessing English-language learners to allow valid comparisons with the general student population, NCES must develop adequate guidelines for inclusion.

Eligibility rules for English-language learners and students with disabilities in the NAEP program were standardized to ensure greater consistency.2 Beginning with the 1995 NAEP field test, the criteria were revised to be more inclusive. English-language learners are now included in NAEP if they meet any of the following criteria:

• They have received academic instruction primarily in English for at least 3 years.
• They have received academic instruction in English for less than 3 years, if school staff determine that they are capable of participating in the assessment in English.
• Their native language is Spanish and they have received academic instruction in English for less than 3 years, if school staff determine that they are capable of participating in the assessment in Spanish3 (Olson and Goldstein, 1996).

2 Note that these criteria are still experimental. Prior to 1990, NAEP procedures allowed schools to exclude sampled students if they were limited-English-proficient and if local school personnel judged them incapable of meaningful participation in the assessment (Strang and Carlson, 1991). Between 1990 and 1994, NCES instructed schools to exclude students with limited English proficiency from its assessments only if all of the following conditions applied: the student was a native speaker of a language other than English, the student had been enrolled in an English-speaking school for less than 2 years (not including bilingual education programs), and school officials judged the student to be incapable of taking the assessment (Olson and Goldstein, 1996:3).

3 The Spanish-language assessment in mathematics is still considered experimental. By examining the results of the 1996 NAEP, NCES staff will determine whether the assessment results for students tested in Spanish are scalable to the English-language assessment.

Current NAEP experiments help ascertain the validity of using current or other criteria for exclusion. NCES recently funded studies to investigate the incorporation of English-language learners in NAEP and the effects of exclusions and modifications. One of these studies was conducted by the Educational Testing

Service for the NAEP 1995 field tests. The LEP Special Study explored the feasibility and validity of providing NAEP assessments in Spanish and Spanish-English bilingual versions in grades 4 and 8 in preparation for the 1996 NAEP in mathematics. The findings indicated that "in general, the 1995 field test results appear to be encouraging. Some LEP students who would not have participated under previous assessment conditions were able to participate in the field test. However, a preliminary analysis of the test items and student performance indicated that for the LEP samples of students included and assessed under nonstandard conditions (i.e., with accommodations or adaptations) in the field test, the results may not be comparable to those from other students. Further study of the statistical and measurement issues was indicated for 1996" (Olson and Goldstein, 1996). To address the need for further study, a special sample design was developed to examine the effects of inclusion criteria and accommodations for the 1996 NAEP in mathematics.

A number of studies are under way to investigate further issues related to the inclusion of English-language learners. These studies are investigating scaling issues, reporting issues, the appropriateness of inclusion criteria, the construct validity of the assessment for English-language learners, language complexity issues, and inclusion procedures.

The National Center for Research on Evaluation, Standards, and Student Testing at the University of California, Los Angeles, conducted a study for NCES on linguistic modification of NAEP math items. Researchers examined the role of linguistic complexity in students' performance on original and revised NAEP math items (Abedi et al., 1995). The study found no significant improvement in performance for students taking assessments whose linguistic complexity, defined syntactically, had been modified. Students in English as a second language (ESL) math classes scored the lowest and showed only a slight improvement on the simplified items; students in remedial/basic or average classes showed the largest improvement. It should be noted that the study examined only one aspect of linguistic complexity (complex syntax) and did not address other aspects of linguistic organization, such as semantic or lexical complexity.

The American Institutes for Research is currently conducting a follow-up study of fourth grade students who were excluded from the 1994 NAEP Trial State Assessment in reading. The purpose of the study is to determine the assessability of the excluded students and the types of adaptations that would be needed to include them, as well as to obtain more detail on the exclusion decision process. The ultimate aim of the study is to suggest improvements in the directions provided to local sites. The study procedures include analysis of questionnaire data, visits to schools in states that have high percentages of English-language learners, and collection of data from assessments and interviews.

The 1995 field test of NAEP included several feasibility studies aimed at increasing the participation of students with disabilities and those with limited English proficiency. The approaches to be studied included administering tests in Braille, in large print, or in Spanish and varying the testing time and methods.
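The revised 1995 inclusion criteria discussed earlier amount to a three-branch decision rule. The sketch below encodes them as a hypothetical function; the parameter names are invented for illustration and do not correspond to actual NAEP variables, and real inclusion decisions rest on school staff judgment that no code can capture.

```python
# Hypothetical encoding of the three revised NAEP inclusion criteria for
# English-language learners (1995 field test). Parameter names are invented.

def include_in_naep(years_of_english_instruction,
                    capable_in_english=False,
                    native_spanish=False,
                    capable_in_spanish=False):
    """Return True if a student meets any of the three inclusion criteria."""
    # Criterion 1: academic instruction primarily in English for >= 3 years.
    if years_of_english_instruction >= 3:
        return True
    # Criterion 2: < 3 years, but school staff judge the student capable of
    # participating in the assessment in English.
    if capable_in_english:
        return True
    # Criterion 3: native Spanish speaker with < 3 years, judged capable of
    # participating in the Spanish-language assessment.
    if native_spanish and capable_in_spanish:
        return True
    return False

print(include_in_naep(4))                           # True (criterion 1)
print(include_in_naep(1, capable_in_english=True))  # True (criterion 2)
print(include_in_naep(1, native_spanish=True,
                      capable_in_spanish=True))     # True (criterion 3)
print(include_in_naep(1))                           # False (excluded)
```

Writing the rule out this way makes plain why state-to-state variation persisted: the two staff-judgment inputs are exactly the points where local discretion enters.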

NCES contracted with Hakuta and Valdes in 1994 to prepare a study design for evaluating strategies for including English-language learners in NAEP. Hakuta and Valdes suggested using two basic principles: a continuum-of-strategies principle (trying out a number of options, with ongoing attempts to maximize the number of students offered options) and a reality principle (considering only options that are realistic in the context of policy and NAEP). In addition, Hakuta and Valdes designed a student questionnaire and a study that randomly assigns various test conditions to students.

NCES convened a study group in late 1994 for a conference on Inclusion Guidelines and Accommodations for LEP Students in NAEP and published the proceedings (August and McArthur, 1996). The study group recommended research in all areas surrounding testing, translation of materials, and various types of accommodation. However, it cautioned that such research is complex given individual differences in native-language and English literacy and proficiency. Participants stressed the importance of developing a set of consistent guidelines for determining whether to include and how to assess English-language learners using NAEP.

The equitable inclusion of minority students has been a critical policy issue for NCES in the development of its surveys and databases for some time. However, for some advocates, "equity" per se is presently too broadly conceived, and there is a perceived need for a more detailed description of the needs of English-language learners.

Surveys and Data Collection Efforts

National Efforts

NCES administers a large number of surveys and data collection efforts. The major ones are described in Annex 9-1, along with a Census Bureau survey, the Current Population Survey, and two OUS-sponsored surveys (Prospects and the Descriptive Study of Services to LEP Students).

State Efforts

States actively collect data on their English-language learner populations, both through their assessment programs and through their bilingual education programs (including those funded by state education agencies through Title VII). This section provides a brief review of these efforts.

A major effort to improve the quality of education data available at both the state and federal levels since the mid-1980s is the CCSSO Education Data Improvement Project. The project has published several reports on the availability and overlap of education data in federal collections. In addition, it published a report in 1991 entitled Summary of State Practices Concerning the Assessment of

and the Data Collection About LEP Students (Council of Chief State School Officers, 1991). Areas of focus in the report include identification of English-language learners, state-level data collection, reporting, and utilization. Almost all state education agencies reported collecting data about English-language learners, and about half said they had laws or policies regarding the identification of these students and the provision of language assistance programs to meet their needs. The major data collected by state instruments are the number of English-language learners identified and served, language background, and grade retention rate.

CCSSO (1995) recently published an implementation guide for a new standardized data format, called Speedee Xpress. This guide represents an attempt to develop a national standard for state and local data collection efforts and to define commonly used data elements. In addition, use of Speedee Xpress allows states and localities to exchange data electronically. Although most states have accepted the Speedee Xpress standards, only a handful (Utah, Colorado, Nevada, and Delaware) are actively using the system.

To document assessment policies and practices and develop policy recommendations for English-language learners, the Evaluation Assistance Center East at The George Washington University4 surveyed all 50 state assessment directors in 1994 (Hafner et al., 1995). The survey findings provide baseline information on state assessment policies and practices and identify key issues affecting our ability to measure the academic progress of these students. Key findings include the following: (1) there is no common operational definition used by states to identify English-language learners; (2) about 80 percent of states have an assessment policy pertaining to these students; (3) most states allow exemptions for English-language learners (they are not required to take the same assessments as their fluent English-speaking peers), and 33 percent report the actual number of these students assessed in their state; (4) a majority of states allow test modifications for English-language learners, but only 4 states provide assessments for these students in languages other than English; and (5) only 4 states report disaggregated scores for English-language learners, while 24 states report that they do not usually but could report disaggregated scores.

4 The Evaluation Assistance Center East was a Title VII-funded center that provided technical assistance in the area of assessment to Title VII-funded schools.

Although most states collect some data on English-language learners, we currently cannot make state-to-state or state-to-nation comparisons. Speedee Xpress and the Student Data Handbook are steps toward these ends.

Limitations of Surveys and Population Estimate Studies

This section examines further the issue of limited coverage of surveys and population estimate studies and the various other limitations of these efforts,

including issues of data definition, data collection, and coverage of the variables of interest.

Population Coverage

Population coverage refers to whether a survey includes in its sampling frame all the possible attributes of interest of its intended population. This issue has arisen perhaps most prominently with respect to the 1990 Bureau of the Census decennial count of the U.S. population. There have been many criticisms of that census, including a lawsuit claiming that the true population was undercounted, especially minority persons in central cities. It is claimed that many homeless and transient persons were not counted and that the Census Bureau should adjust its estimates accordingly. Although most people agree that there was an undercount, there is disagreement over its extent and over whether and how the Bureau should adjust the estimates.

NCES' Statistical Standards (National Center for Education Statistics, 1992) sets forth department and agency policies on sample and data quality, data collection, and reporting. In its standard for the design of a survey, minimum response rates are set at 90 percent for longitudinal surveys and 85 percent for cross-sectional sample surveys. The target response rate for each critical variable is at least 90 percent.

The exclusion of English-language learners and students with disabilities from NCES' censuses and surveys has come to be seen as an undercoverage problem. In recent years, it has become evident that NCES (as well as other agencies) has gaps in its coverage of these students, as well as of incarcerated students, American Indian students, preschool children, and migrant children. According to Houser (1994), the exclusion of a subpopulation potentially biases the results of a study, and the data collected may not be generalizable to the excluded students.
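Houser's point can be illustrated with a toy calculation (all numbers below are invented for illustration): whenever an excluded subgroup differs from covered students on the outcome of interest, the reported estimate is shifted by the exclusion.

```python
# Toy illustration (invented numbers) of undercoverage bias: if an excluded
# subgroup has a higher dropout rate than covered students, a survey that
# silently drops that subgroup understates the population rate.

covered_rate, covered_share = 0.10, 0.98    # students in the sampling frame
excluded_rate, excluded_share = 0.30, 0.02  # e.g., excluded for language barriers

# True population rate: share-weighted average over both groups.
true_rate = covered_rate * covered_share + excluded_rate * excluded_share

# What the survey reports if the excluded group is simply ignored.
reported_rate = covered_rate

bias = reported_rate - true_rate
print(round(true_rate, 3))  # 0.104
print(round(bias, 3))       # -0.004: the reported rate understates the truth
```

The bias grows with both the size of the excluded group and the gap between the groups, which is why undercoverage of English-language learners matters even when they are a small share of the sample.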
For this reason, NCES carried out the NELS base year follow-back study of students who had initially been deemed ineligible and recalculated the grade 8-10 dropout rate. In addition, the exclusion of students with disabilities and limited English proficiency from NCES' surveys and assessments may bias data on racial and ethnic groups, since minority students are overrepresented among these two subgroups.

In general, students who are excluded from NCES' assessments have also been excluded from background questionnaires linked to the assessments. About 2 percent of all eighth grade students were excluded from the NELS tests and surveys in the base year because of language barriers (Spencer, 1994). NELS did provide Spanish-language questionnaires for students in the first and second follow-ups, as well as Spanish-language surveys for students' parents. However, not many students chose to use those surveys. The Early Childhood Longitudinal Study (National Center for Education Statistics, 1995b) also plans to gather background

OCR for page 275
Page 285 data from English-language learners, even though all of them may not be included in direct assessments during their first few years in the study. Moreover, as mentioned previously, NCES has initiated several efforts, primarily with the NAEP program, to improve its estimates and coverage, including the use of alternative assessments and accommodations for English-language learners and students with disabilities. These activities are intended to have broader applicability to other national and subnational survey and assessment activities. One way to ensure that a survey will cover groups of policy interest adequately is to oversample those groups. For example, in NELS, Hispanic and Asian students were oversampled to provide a sufficient number for subgroup analysis. Although English-language learners were not oversampled, NELS did include some students with low levels of English proficiency (about 300 were identified as such by their teachers). Of the Asian and Hispanic eighth graders included in the study, three-quarters came from language-minority families. About 4 percent of these students showed low English proficiency, 32 percent moderate proficiency, and 66 percent high proficiency (Bradby, 1992). However, no additional money was obtained to oversample American Indian students, so the sample included only a small number of those students—about 200, a number that does not allow extensive analyses of this subgroup. The Prospects (Office of Policy and Planning, 1995) study included a supplemental sample of language-minority students from schools with high concentrations of English-language learners. This sample included 2,036 first grade English-language learners, 1,691 third grade English-language learners, 1,380 first grade language-minority students, and 1,837 third grade language-minority students. It was included to maximize the policy relevance of the study for the first and third grade cohorts. 
Although English-language learners were oversampled, about 25 percent of them were not tested in Spanish or English, a response rate that is unacceptably low by NCES' Statistical Standards. No oversampling was done for the seventh grade cohort, since that grade contains fewer English-language learners in need of services.

Data Definitions

Another problem with surveys and population estimate studies is that they use inconsistent definitions of limited English proficiency. The following examples show the wide variation in the definitions used.

Prospects Study (1995:i-2): "Children whose native language is other than English and whose skills in speaking, reading, or writing English are such that they can derive limited benefit from school instruction in English."
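The practical consequence of such definitional variation can be sketched in a few lines of code: applied to the same records, two operational definitions produce different counts. The student records, field names, and score cutoffs below are invented for illustration and do not come from any of the surveys discussed here.

```python
# Hypothetical records: home language plus oral and reading English
# scores on an invented 1 (low) to 5 (high) scale.
students = [
    {"home_language": "Spanish", "oral_score": 2, "reading_score": 4},
    {"home_language": "Spanish", "oral_score": 4, "reading_score": 2},
    {"home_language": "Khmer",   "oral_score": 1, "reading_score": 1},
    {"home_language": "English", "oral_score": 5, "reading_score": 5},
]

def lep_oral_only(s):
    # Definition A: non-English home language AND limited oral English.
    return s["home_language"] != "English" and s["oral_score"] <= 2

def lep_any_skill(s):
    # Definition B: non-English home language AND limited skill in
    # speaking OR reading (closer in spirit to the Prospects wording).
    return s["home_language"] != "English" and (
        s["oral_score"] <= 2 or s["reading_score"] <= 2
    )

count_a = sum(lep_oral_only(s) for s in students)
count_b = sum(lep_any_skill(s) for s in students)
print(count_a, count_b)  # 2 3 -- same students, different "LEP" counts
```

Any cross-survey comparison of counts of English-language learners therefore has to reconcile the operational definitions before the numbers can be treated as comparable.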

This chapter has pointed out that not all data of interest for our present concerns are complete and of good quality, and that there is some duplication in data collection. It has also noted a lack of linkage among data collection efforts, with some exceptions (e.g., the U.S. Census and CCD, and the National Assessment of Educational Progress [NAEP] and the National Educational Longitudinal Study [NELS]).

Inclusion of English-Language Learners in Assessments

9-7. An intensive research effort is needed to develop strategies for including English-language learners in all federal and state education data collection activities. Such an effort should be broad in several senses. First, it should be responsive to the intent of the laws that explicitly call for full inclusion (the Perkins Act, P.L. 98-524, and the Improving America's Schools Act, P.L. 103-382) by proposing accommodation and other strategies that are feasible in the short term; however, it should also recognize the difficulty of the challenge and develop a long-term strategy that would result in a psychometrically defensible system of inclusion. Second, it should recognize the diversity of the English-language learner population, including the range of languages represented and varying degrees of formal education and familiarity with formal testing. Finally, it should be responsive to the possibility that different issues may arise in different content areas being assessed (see Chapters 3 and 5) and that optimum inclusion strategies may differ depending on the knowledge area being measured.

9-8. Since assessments based on translations into a second language have questionable validity, research is needed to determine the equivalence of these materials.

9-9.
The Department of Education might develop a checklist to help state and local administrators determine which English-language learners should be included in surveys and assessments and which should participate in alternative procedures.

9-10. The effects on test scores of excluding various subgroups, as well as the validity and comparability of achievement measures in different languages, should also be studied.

9-11. In its longitudinal studies program, NCES should oversample English-language learners to obtain adequate numbers.

A major issue for data on English-language learners is population coverage. Our analysis has suggested two problems in this area. The first is inadequate representation of these students in samples for the general population, resulting in general population estimates that are biased. Inadequate representation occurs mainly because the standard subject matter assessments in English are not appropriate

for English-language learners, and there are few alternatives. This problem can be addressed by developing and investigating the validity and reliability of alternative assessments, including assessments in non-English languages and modifications to English-language assessments (see Chapter 5). The second problem arises from the need to conduct studies that disaggregate the data on English-language learners for various purposes. This issue can be addressed by sufficiently oversampling these students. A main obstacle in this case is financial: Who pays for the additional data collection? Our recommendation on this latter issue is presented in Chapter 10.

Research on strategies for including English-language learners in data collection activities would initially proceed through smaller-scale special studies. These studies would develop new assessment techniques that would allow full inclusion of those students and, in the process, would test the validity of alternative test modifications. One such modification would be native-language assessments; others might include modifications in test administration or test items (see Chapter 5). Special studies should also be conducted to develop standard procedures for incorporating English-language learners into assessments. Specifically, such studies should determine which of these students should take the standard English assessments and which should take alternative versions, and what these alternative versions might be. This determination would be based in part on ascertaining the best predictors of student achievement in the various content areas (e.g., time in country, length of time in English instruction, first-language proficiency, English proficiency, linguistic isolation, language of parents). These smaller studies would be followed by field tests of regular assessments such as NAEP to explore the use of accommodations in tests and modified assessments.

Data Coverage

9-12.
A common set of indicators for English-language learners, such as those listed in Annex 9-2, should be developed by NCES in consultation with other offices and advocacy groups. Data on indicators identified as important and not currently being covered should be collected, and those indicators that are being covered should be carefully reviewed for their appropriateness. This framework should be the base against which samples for all Department of Education research with English-language learners are compared. In addition, this information should be made available for use by OBEMLA and the Secretary of Education in preparing their reports to Congress.

Our review of variables of interest for monitoring the progress of English-language learners (Annex 9-2) uncovered a number of areas where data are either unavailable or inadequate. Major gaps identified were in the areas of school readiness; English proficiency; native-language proficiency; length of instruction

in second language; content and topic coverage in ESL and bilingual education programs; type of instruction given in ESL and bilingual education classes; status of programs designed to help English-language learners and language-minority students in postsecondary institutions; and number of English-language learners by grade, school, and district. In addition, information on criteria for participation in and exit from bilingual programs and on provisions for assessing content knowledge and skills is of interest.
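An inclusion checklist of the kind recommended above (9-9) could be prototyped as a simple rule-based screen. The fields, proficiency scales, and thresholds below are hypothetical placeholders, not Department of Education criteria; an operational version would need to be validated against the predictors discussed in this chapter (time in country, length of English instruction, first- and second-language proficiency).

```python
def assessment_placement(student):
    """Return one of: 'standard', 'accommodated', 'native_language',
    or 'individual_review'. All cutoffs here are invented examples."""
    years_us = student["years_in_us_schools"]
    english = student["english_proficiency"]   # invented 1 (low) - 5 (high) scale
    native = student["native_language_proficiency"]

    if english >= 4 or years_us >= 3:
        return "standard"
    if english >= 2:
        # Moderate proficiency: standard content with accommodations
        # (e.g., extra time, glossaries, modified administration).
        return "accommodated"
    if native >= 3:
        return "native_language"
    # Low proficiency in both languages: flag for individual review
    # rather than excluding the student outright.
    return "individual_review"

print(assessment_placement(
    {"years_in_us_schools": 1, "english_proficiency": 2,
     "native_language_proficiency": 5}))   # accommodated
```

The point of such a screen is transparency: every placement decision can be traced to recorded student characteristics, which is what makes the resulting exclusion and accommodation rates auditable across states and surveys.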

Annex 9-1
National Surveys And Data Collection Efforts

Table 9-1 shows the various national-level surveys and data collection efforts as of 1995. The first column in Table 9-1 lists the major cross-sectional surveys.

The Common Core of Data (CCD) is NCES' major elementary-secondary sampling frame for public and private K-12 schools and districts in the country. It is conducted annually and provides basic information and descriptive statistics on public elementary and secondary schools. Three types of information are collected: general descriptive information, student data, and fiscal data.

Schools and Staffing is a cross-sectional survey of teachers and administrators conducted nationwide every 3 to 5 years. Information collected includes teacher demand and shortage, programs and services offered, student characteristics, student-teacher ratios, school climate, demographic characteristics of teachers, and some administrative records on students. In 1993-1994, a new student record component collected data from class rosters of a subsample of teachers (10,326 students). The survey asked for information on the students' English proficiency status, home language, services received, courses taken, grades, and other outcomes.

The Fast Response Survey System is a one-time vehicle for special-topic surveys, with a methodology that allows quick response and data availability. It collects data from state agencies, local education agencies, schools, teachers, and adult literacy programs. One recent survey included a kindergarten teacher survey on children's readiness for school.

The Current Population Survey (CPS) is a monthly household survey conducted by the Census Bureau to provide information on employment, English-language proficiency of adults, frequency of use of non-English languages, and other population characteristics. NCES funds an annual CPS supplement on school enrollment, educational attainment, and other educational items.
The National Household Education Survey is a cross-sectional household-based survey conducted every 2 years on various topics related to parents, children, and education, including preschool activities, participation in adult education, and school readiness.

Lastly, a one-time survey sponsored by OUS and funded by OBEMLA, the Descriptive Study of Services to LEP Students, was conducted in 1991-92. It collected data on the number and characteristics of English-language learners, instructional services, administrative procedures, and instructional staff qualifications from state agencies, districts, schools, and teachers of these students.

In the second column of Table 9-1 are the postsecondary surveys (all cross-sectional). Many of these collect data on minority participation in higher-education programs. The Integrated Postsecondary Education Data System collects data annually from postsecondary institutions on institutional characteristics, enrollment,

completions, salaries, finances, libraries, and staff.

TABLE 9-1 National Surveys and Data Collection Efforts, 1995

Cross-Sectional Surveys: Common Core of Data (CCD); Schools and Staffing; Fast Response Survey System; Current Population Survey (CPS); National Household Education Survey; Descriptive Study of Services to LEP Students.

Postsecondary Surveys: Integrated Postsecondary Education Data System; Recent College Graduates; National Postsecondary Student Aid Survey; National Survey of Postsecondary Faculty; Survey of Earned Doctorates; Postsecondary Education Quick Information System.

Surveys That Include Assessments: National Assessment of Educational Progress (NAEP) (C); National Adult Literacy Survey (C); National Educational Longitudinal Study of 1988 (NELS) (L); High School and Beyond (L); High School Transcript Studies (C); National Longitudinal Survey (NLS)-72 (L); Beginning Postsecondary Student Longitudinal Survey (L); Baccalaureate and Beyond Survey (C and L); Early Childhood Longitudinal Study (L); Prospects (L).

NOTE: C = cross-sectional; L = longitudinal.

Recent College Graduates surveys this population on various topics, including field of study and employment status (especially in teaching). The National Postsecondary Student Aid Survey surveys college students every 3 years on financial aid, income, employment, demographics and costs, and special-population enrollments. The National Survey of Postsecondary Faculty provides data about postsecondary faculty characteristics and is conducted periodically. The Survey of Earned Doctorates is an annual survey filled out by all students who complete a doctorate; topics include demographics, field of study, time spent, financial support, and educational plans. The Postsecondary Education Quick Information System is a new survey system similar to the Fast Response Survey System. It collects

timely data on focused issues, such as financial climate, the status of deaf and hard-of-hearing students, and programs for disadvantaged students.

In the third column of Table 9-1 are the cross-sectional and longitudinal surveys that include assessments. NAEP is the largest of the agency's surveys/assessments and is conducted every 2 years. It consists primarily of cognitive tests and student, teacher, and school administrator questionnaires.

The National Adult Literacy Survey, conducted once in 1992, assessed adults nationwide on demographics and on various types of literacy.

The National Educational Longitudinal Study of 1988 (NELS) is a longitudinal study that began with eighth graders in 1988 and will follow them for 10 years or longer. It consists of student, dropout, parent, teacher, and school administrator questionnaires; high school transcripts; and cognitive tests. In the initial survey, those members of the eighth grade cohort whose English-language ability, as judged by their teachers, would prevent them from participating in an English-language program were excluded from the survey and the assessment. The first follow-up, conducted when these students were generally tenth graders, included those students who had been excluded from the initial survey but whose language ability had improved sufficiently for them to participate in an English-language survey, or who could complete the survey translated into Spanish. Also, additional students were added to the sample to make it representative of all tenth graders. By the second follow-up, when these students were generally twelfth graders, all previously excluded students were included if possible. In addition, transcripts were collected for all sampled students, regardless of previous exclusion status.

High School and Beyond was a longitudinal study carried out with two cohorts, the sophomore and senior classes of 1980. Several follow-ups were conducted through 1992.
The study consisted of student, school, second-language, and parent questionnaires and a student test.

High School Transcript Studies are conducted periodically by NAEP, High School and Beyond, and NELS.

The National Longitudinal Survey (NLS)-72 was a longitudinal study that began in 1972; it followed its cohort for 14 years, with five follow-ups. It consisted of student questionnaires, student tests, and high school transcripts.

The Beginning Postsecondary Student Longitudinal Survey began in 1990. It is a national longitudinal survey of postsecondary students designed to track correlates of their progress in college. It consists of a student survey, a parent survey, and a cognitive test. The Baccalaureate and Beyond Survey is a postsecondary survey (both cross-sectional and longitudinal) consisting of a graduating student survey, a parent survey, and a cognitive test.

The newest longitudinal survey, which is currently in the planning stage, is the Early Childhood Longitudinal Study (National Center for Education Statistics, 1995b). It will be field tested in 1996 and will consist of individual student assessments and parent and teacher checklists. Plans call for the teacher and

parent checklists to be translated into Spanish. Hispanic and Asian students will be oversampled, as will private schools. Oversampling of English-language learners is not planned, although these students will be substantially represented within the Hispanic and Asian oversamples and the Head Start supplements.

An additional study, Prospects, funded by the Planning and Evaluation Service and OBEMLA in the Department of Education, is also listed here. Congressionally mandated, it is a 6-year longitudinal evaluation study of the impact of Chapter 1 (now Title I) programs, and the first longitudinal study designed to measure the effects of Chapter 1 programs on language-minority students and English-language learners. It began in 1991 and collects data annually. In addition to collecting information on student demographics and the educational services provided to English-language learners, it administers achievement tests to students, either the Comprehensive Tests of Basic Skills (CTBS) or its Spanish version (SABE). Three student cohorts, from grades 1, 3, and 7, are being followed over time, and descriptive and achievement data are being collected to examine the sampled students' progress. As the longitudinal data become available, more valuable analyses will be possible.

In 1995, the Department of Education published Prospects: First Year Report on Language Minority and LEP Students (Office of Policy and Planning, 1995). Findings to date indicate that English-language learners who attend public schools are particularly disadvantaged; that many students are not receiving the quality instruction and services they need; that most schools use several criteria, including measures of proficiency in English and in the non-English home language, to determine entry into and exit from these programs; and that English-language learners receive lower academic grades, are judged by their teachers to have lower academic abilities, and score below their classmates on standardized tests of reading and math.
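Several of the studies above oversample subgroups and then reweight back to the population. A minimal sketch of that design logic follows; the frame sizes and sampling rates are invented for illustration, not actual NELS or Early Childhood Longitudinal Study parameters.

```python
# Hypothetical frame: 100,000 students, of whom 5% are language-minority.
# The subgroup is sampled at four times the base rate so that enough
# cases are available for separate subgroup analysis.
FRAME = {"majority": 95_000, "language_minority": 5_000}
SAMPLING_RATE = {"majority": 0.01, "language_minority": 0.04}

def expected_sample(frame, rates):
    """Expected number of sampled students per stratum."""
    return {g: frame[g] * rates[g] for g in frame}

def design_weight(rates):
    """Base weight = inverse of the stratum's selection probability."""
    return {g: 1.0 / rates[g] for g in rates}

sample = expected_sample(FRAME, SAMPLING_RATE)
weights = design_weight(SAMPLING_RATE)

# Oversampling yields enough minority cases for subgroup estimates...
print(sample)  # {'majority': 950.0, 'language_minority': 200.0}

# ...while weighting each case by the inverse of its selection
# probability reproduces the frame counts, so national estimates
# are not distorted by the unequal sampling rates.
weighted_total = {g: sample[g] * weights[g] for g in sample}
print(weighted_total)  # {'majority': 95000.0, 'language_minority': 5000.0}
```

This is why the chapter can recommend oversampling English-language learners (Recommendation 9-11) without compromising the general population estimates: the extra cases enlarge the subgroup sample, and the design weights undo the overrepresentation in any population-level tabulation.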
Annex 9-2
Variables Of Interest For Monitoring English-Language Learner Progress

Readiness for School
• Birth weight
• Immunizations
• Preschool program participation
• Motor development indicator
• Physical well-being indicator
• Social and emotional well-being indicator
• Native-language proficiency
• English proficiency*
• Literacy activities in the home
• General knowledge and skills

Demographics
• Percentage of English-language learners in poverty
• Percentage of English-language learners in linguistically isolated households*
• Race/ethnicity
• Family income
• Parental English proficiency level*
• Home language
• Access to health and human services
• Length of instruction in second language (years)
• Most frequently used language*
• Number of English-language learners
• Student participation in bilingual/ESL programs

Social Support for Education
• Parent involvement in school activities
• Reading materials in the home
• School per pupil expenditure
• Average parental educational attainment
• School cooperation with community agencies

Quality of Educational Institutions and Teaching

School Quality
• Average class size
• Student-teacher ratio
• Instructional time
• Availability of computers
• Course offerings (secondary)
• Amount of homework given
• Availability of bilingual/ESL programs
• Existence of special programs (Title I, reading, math, tutoring)
• Content coverage of ESL/bilingual classes*
• Number of teacher aides

Teacher Quality
• Years of experience
• Educational background/degrees
• Average teacher salary
• Number certified in bilingual education/ESL
• Minority status
• Enrichment activities
• Percentage teaching English-language learners

Learner Outcomes
• English-language proficiency*
• Math achievement
• Reading achievement
• Achievement in other areas
• Grades
• General self-concept
• Grade retention
• Participation in extracurricular activities
• Academic course taking
• Teachers' judgments of student ability

Education and Economic Productivity
• School attendance, tardiness
• Retention in grade
• Dropout status (grades 8-12)
• Degree completion
• Completion of key classes (algebra)
• College completion rates
• Employment record

* May require new items in federal surveys.

References

Abedi, J., C. Lord, and J. Plummer
1995 Language Background Report. Los Angeles: UCLA Graduate School of Education, National Center for Research on Evaluation, Standards, and Student Testing.

August, D., and E. McArthur
1996 Proceedings of the Conference on Inclusion Guidelines and Accommodations for Limited English Proficient Students in the National Assessment of Educational Progress (December 5-6, 1994). Washington, DC: National Center for Education Statistics.

Bradby, D.
1992 Language Characteristics and Academic Achievement: A Look at Asian and Hispanic Eighth Graders in NELS:88. Washington, DC: National Center for Education Statistics.

Council of Chief State School Officers
1991 Summary of State Practices Concerning the Assessment of and the Data Collection about LEP Students. Washington, DC: Council of Chief State School Officers.
1994a The Feasibility of Collecting Comparable National Statistics about Students with Limited English Proficiency. Washington, DC: Council of Chief State School Officers.
1994b LEP Student Count Study. Washington, DC: Council of Chief State School Officers.
1995 Implementation Guide for SPEEDEE/Xpress. Washington, DC: Council of Chief State School Officers.

Hafner, A.
1995 Assessment Practices: Developing and Modifying Statewide Assessments for LEP Students. Paper presented at the annual conference on Large Scale Assessment sponsored by the Council of Chief State School Officers, June. School of Education, California State University, Los Angeles.

Hafner, A., C. Rivera, C. Vincent, and M. LaCelle-Peterson
1995 Participation of LEP Students in Statewide Assessment Programs. Unpublished report, Division of Educational Foundations and Interdivisional Studies, George Washington University.

Hakuta, K., and G. Valdes
1994 A Study Design for the Inclusion of LEP Students in the NAEP State Trial Assessment. Paper prepared for the National Academy of Education Panel on NAEP Trial State Assessment.
Stanford University, Stanford, CA.

Houser, J.
1994 Assessing Students with Disabilities and Limited English Proficiency. Washington, DC: National Center for Education Statistics.

Ingels, S.J.
1995 Sample Exclusion and Undercoverage in NELS:88, Characteristics of Base Year Ineligible Students: Changes in Eligibility Status After Four Years. Washington, DC: National Center for Education Statistics.

McLaughlin, M.W., and L.A. Shepard, with J.A. O'Day
1995 Improving Education through Standards-Based Reform: A Report by the National Academy of Education Panel on Standards-Based Education Reform. Stanford, CA: National Academy of Education.

Meyer, M.M., and S.E. Fienberg, eds.
1992 Assessing Evaluation Studies: The Case of Bilingual Education Strategies. Panel to Review Evaluation Studies of Bilingual Education, Committee on National Statistics, National Research Council. Washington, DC: National Academy Press.

National Center for Education Statistics
1987 Statistical Standards. Washington, DC: National Center for Education Statistics.

1991a Education Counts: An Indicator System to Monitor the Nation's Education Health. Special Study Panel on Education Indicators. Washington, DC: National Center for Education Statistics.
1991b A Study of Availability and Overlap of Education Data in Federal Collections. Washington, DC: National Center for Education Statistics/Council of Chief State School Officers.
1992 Statistical Standards. Washington, DC: National Center for Education Statistics.
1994 Student Data Handbook: Early Childhood, Elementary, and Secondary Education. Washington, DC: National Center for Education Statistics.
1995a Improving the Capacity of the National Education Data System to Address Equity Issues: An Addendum to a Guide to Improving the National Education Data System. Washington, DC: National Center for Education Statistics.
1995b Early Childhood Longitudinal Study: Kindergarten Class of 1998-99 Brochure. Washington, DC: National Center for Education Statistics.

National Education Goals Panel
1991 Potential Strategies for Long Term Indicator Development: Reports by Six Technical Planning Subgroups. Washington, DC: National Education Goals Panel.

National Forum on Education Statistics
1990 A Guide to Improving the National Education Data System: A Report by the National Education Statistics Agenda Committee of the National Forum on Education Statistics. Washington, DC: National Forum on Education Statistics.

National Research Council
1986 Creating a Center for Education Statistics: A Time for Action. Panel to Evaluate the National Center for Education Statistics, Committee on National Statistics. Washington, DC: National Academy Press.
1995 Integrating Federal Statistics on Children: Report of a Workshop. Board on Children and Families, Committee on National Statistics, National Research Council. Washington, DC: National Academy Press.

Office of Policy and Planning
1995 Prospects: The Congressionally Mandated Study of Educational Growth and Opportunity, First Annual Report.
Washington, DC: U.S. Department of Education.

Olson, J.F., and A.A. Goldstein
1996 Increasing the Inclusion of Students with Disabilities and Limited English Proficient Students in NAEP. NCES Focus on NAEP Series 2(1):1-5.

Shavelson, R., L. McDonnell, J. Oakes, N. Carey, and L. Picus
1987 Indicator Systems for Monitoring Math and Science Education. Santa Monica, CA: Rand Corporation.

Spencer, B.D.
1994 A study of eligibility exclusions and sampling: 1992 Trial State Assessment. In G. Bohrnstedt, ed., The Trial State Assessment, Prospects and Realities: Background Studies. Palo Alto, CA: Armadillo.

Strang, E.W., and E. Carlson
1991 Providing Chapter 1 Services to Limited English-Proficient Students: Final Report. Rockville, MD: Westat.