1
Introduction

Each month the American public receives a report on the economic health of the nation. The report provides primary information, such as the unemployment rate, the number of new jobs created, and whether the number of new applications for unemployment benefits is up or down from the previous month. The key indicators that are highlighted each month are only a few from among a vast array of economic data that might be reported. Together they provide a quick and reasonable overview of whether conditions are getting better or worse and what areas of the economy need attention.

Similar key measures could be used to monitor the state of education and other issues in the nation. In 2010, a Key National Indicator System (KNIS) was signed into law (see P.L. 111-148; H.R. 3590-562), and work to prepare for full-scale implementation is ongoing. The system is to be overseen by the Commission on Key National Indicators, an eight-member body with two members appointed by each of the majority and minority leaders of the U.S. House of Representatives and the U.S. Senate. The work to implement the system is to be carried out by the National Academy of Sciences (NAS). A total of $70 million in public financial support is authorized for a KNIS over nine years. The system is intended to “deepen our factual knowledge and understanding of the country’s most pressing issues” pertaining to the economy, the environment, and people (including families, health, education, civic engagement, and culture).1

As part of that effort, the NAS held a workshop in January 2012 to explore possibilities for a set of key indicators that will help policy makers and the public assess the state of education in the country.2 The broad goals for the national indicator system include providing a means for the nation to use a “shared set of facts” in determining “where we’ve been, where we are, and whether we are leaving the country a better place for future generations.”3 The key task in developing education indicators will be to identify a clear and parsimonious set of measures and data that will be easy for nonspecialists to understand but which will also do justice to the complexities of the disparate U.S. education system. These indicators will be drawn from a large, often confusing and sometimes conflicting, body of information about students, teachers, schools, districts, and states.


1See http://www.stateoftheusa.org/ for current information about the KNIS.

2Initial choices about measures to be included in a Key National Indicators System will be made by consensus panels convened by the NAS. The process will include opportunities for public comment.

3See http://www.stateoftheusa.org/.



The National Academies | 500 Fifth St. N.W. | Washington, D.C. 20001
Copyright © National Academy of Sciences. All rights reserved.





KEY NATIONAL EDUCATION INDICATORS

The Steering Committee on Key National Education Indicators was charged with organizing a workshop focused on exploring potential indicators that would reflect current research and address the interests of practitioners, policy makers, parents, and the general public. It was asked to commission one or more experts to develop prospective frameworks that could guide the development and implementation of a set of key education indicators and to identify candidate lists of indicators. These indicators might relate to social, economic, and other determinants of education outcomes, as well as outcomes in other sectors that are in turn affected by education. The steering committee was not asked to oversee the formal selection of a list of key indicators, or to come to any consensus about which were most promising, but rather to explore the possibilities and the primary issues to consider; the formal statement of task is shown in Box 1-1. The steering committee’s role was limited to planning the workshop, and this report has been prepared by a rapporteur as a factual summary of what occurred.

In carrying out this charge, the steering committee reviewed available information about other efforts to report on education indicators, including:

- The Composite Learning Index produced by the Canadian Council on Learning (see http://www.cli-ica.ca/en/about/about-cli/what.aspx);
- The Condition of Education reports produced annually by the National Center for Education Statistics (see http://nces.ed.gov/programs/coe/);
- The Education at a Glance reports produced annually by the OECD (see http://www.oecd.org/document/2/0,3746,en_2649_39263238_48634114_1_1_1_1,00.html);
- Education Counts: An Indicator System to Monitor the Nation’s Educational Health (Special Study Panel on Education Indicators, 1991);
- The European Lifelong Learning Index produced by Bertelsmann Stiftung (Hoskins, Cartwright, and Schoof, 2010);
- Indicator Systems for Monitoring Mathematics and Science Education (Shavelson, McDonnell, Oakes, Carey, and Pikus, 1987);
- The Kids Count data book reports produced annually by the Annie E. Casey Foundation (see http://www.aecf.org/MajorInitiatives/KIDSCOUNT.aspx); and
- The Measuring Up reports produced biennially by the National Center for Public Policy and Higher Education (see http://measuringup2008.highereducation.org/).

They also reviewed the guidance about education indicators offered by Blank (1993), Bradburn and Fuqua (2010), Bryk and Hermanson (1993), Elliott (2009), and the U.S. Government Accountability Office (2011). This review highlighted the plethora of education indicators that are currently available. The steering committee decided that the most effective use of the workshop would be to explore ways to select the most important from among all the available indicators.
To structure the workshop, the steering committee developed a framework (described more fully later in this chapter) that covered the stages of life in which education occurs, and commissioned a set of researchers who would each identify and justify a candidate list of three to five indicators in their own area. The workshop was then an opportunity for a diverse group of researchers and policy makers to consider two questions:

1. Given the state of the field, are there key education indicators so widely recognized that they would form a natural core of the education component of the KNIS?
2. What additional work is needed to support a formal recommendation of education indicators to be included in the KNIS?

This report describes the workshop presentations and discussions and is intended as the first step in the development of a portfolio of key national indicators of progress in education.

BOX 1-1
Statement of Task

An ad hoc steering committee will plan and hold a two-day public workshop on the topic of key national education indicators. The workshop will take place as part of preparatory activities for the implementation of a Key National Indicator System authorized, but not yet funded, to be carried out by the Congressional Commission on Key National Indicators. The workshop will be informed by the related work conducted by the IOM on key national health indicators. The workshop audience will include individuals who will likely be involved with the anticipated work for the Commission on Key National Indicators.

The committee will develop the workshop agenda, select and invite speakers and discussants, and moderate the discussions. The topics to be addressed at the workshop will be defined to help scope the domain for a small set of indicators that describe the state of education in the country. The indicators chosen should reflect current research and should address the interests of practitioners, policy makers, parents, and the public at large.

In preparation for the event, the steering committee will commission one or more papers on prospective frameworks that could guide the development and implementation of a set of key national education indicators to serve the needs of this broad range of audiences. The framework(s) should include a candidate list of key national education indicators to use in the initial implementation. The framework(s) may include, as appropriate, indicators related to the social, economic, and other determinants of education outcomes, as well as outcomes in other sectors that are in turn affected by education.

The participants at the workshop will discuss the proposed indicator framework(s) and other presented material, identify issues that need to be resolved, and consider the design of a one-year process that could resolve the outstanding issues and reach consensus on an initial set of indicators to recommend for implementation. The discussions at the workshop will be described in an individually authored summary document written by a designated rapporteur.

DEFINING INDICATORS

The short answer to the question “What is an indicator?” is that it is a measure used to track progress toward objectives or to monitor the health of an economic, environmental, social, or cultural condition over time. Different sorts of measures are used in different contexts. For example, the unemployment rate, infant mortality rates, and air quality indexes are all indicators. In the field of education, school districts typically collect average scores on a standardized reading assessment for each grade to monitor how well students are meeting basic benchmarks as they progress in reading. Other commonly used education indicators include high school graduation rates, rates of truancy, ratios of teachers to students, and per-pupil expenditures, as well as measures of less quantifiable factors, such as teachers’ and students’ attitudes.

Indicators, literally signals of the state of whatever is being measured, can cover outcomes, the presence or state of particular conditions, or the effectiveness of management approaches (National Research Council, 2011a). They can be used to measure change over time or to compare outcomes, conditions, or measures of effectiveness in different places. Although indicators are usually quantitative, they may be either straightforward measures of a single phenomenon, such as the number or percentage of students who graduate in a given year, or composite measures. A composite indicator is a measure of a more complex phenomenon, such as college readiness, and may incorporate a number of variables that capture aspects of what is being measured. Thus, an indicator is not the same thing as a statistic. As a primer on education indicators explained, statistics “need context, purpose, and meaning if they are going to be considered” indicators (Planty and Carlson, 2010).

A system of indicators, a recent report from the U.S. Government Accountability Office (GAO) notes, is “an organized effort to assemble and disseminate a group of indicators that together tell a story” about a jurisdiction or a particular issue (U.S. Government Accountability Office, 2011, p. 57). A comprehensive key indicator system, such as the KNIS, is designed to collect only a limited number of the most important indicators on a wide range of economic, environmental, and social and cultural issues of interest in the country. It is not intended to provide a comprehensive and in-depth database on specific issues. The selection of a short list of indicators for education, to be part of a comprehensive indicator system, poses challenges both because these few indicators will be expected to distill a complex set of issues into a concise story and because they are intended to assist policy makers and guide action. A brief look at the history of education indicators highlights some of the challenges and the state of the field as preparation for a national system of indicators gets under way.

CONTEXT

Interest in developing a system of education indicators dates at least to the 19th century. An 1867 federal law called for a department of education “for the purpose of collecting such statistics and facts as shall show the condition and progress of education in the several States and Territories” (quoted in Planty and Carlson, 2010, p. 9). Interest in this endeavor has varied since then, however. The early 20th century was a time of blossoming interest in the use of data in the social sciences, and the Russell Sage Foundation, an early leader in social science research, published numerous reports on education and other topics in that period, including one that ranked states according to such indicators as school attendance and school expenditures (Russell Sage Foundation, 1912).

By the middle of the century, a second wave of interest in education data was evident, and reports from the United States Department of Health, Education, and Welfare (HEW), the Bureau of the Census, and others were providing substantial amounts of information about the educational status of the country (Bradburn and Fuqua, 2010). For example, HEW published Toward a Social Report (1969), which charted social progress using indicators covering learning, science, and art; health and illness; social mobility; the physical environment; income and poverty; public order and safety; and civic participation and alienation (Elliott, 2009). The U.S. Census Bureau (1976) produced STATUS, a monthly chart book covering social and economic trends. Its fourth edition featured a special report on education, which included such indicators as participation in education at all levels, expenditures on education, educational disparities, programs for students with special needs, achievement, and public views about education; it also provided cross-national comparisons. The National Education Association produced state comparisons for indicators related to the teaching profession, and the United States Department of Education’s Center for Education Statistics published a variety of statistics (Ginsburg, Noell, and Plisko, 1988). These reports, though valuable, were not part of a sustained effort to monitor key aspects of public education or to measure student achievement (Ginsburg, Noell, and Plisko, 1988).
The publication of A Nation at Risk in 1983 is widely credited with stimulating an earnest and sustained push for close monitoring of the system (Bryk and Hermanson, 1993; Ginsburg, Noell, and Plisko, 1988). In 1984, the Department of Education produced a one-page summary of statistics (the “wall chart”), which made rough comparisons among the states on a set of input and outcome characteristics related to educational performance. The limitations of the data available for the wall chart spurred interest in improving the validity of state-by-state comparisons of student achievement (Ginsburg, Noell, and Plisko, 1988), and soon afterwards there was a push for increased sampling for the National Assessment of Educational Progress (NAEP) that would make it possible to report state-level achievement data.

Interest in education indicators has continued to grow. As one observer noted two decades ago, “Hardly an educational agency or group at the national or state level has not been involved in the business of education indicators” (Smith, 1988, p. 487). In 1984, the National Center for Education Statistics began using indicators in its regular report, the Condition of Education (Bradburn and Fuqua, 2010; Smith, 1988). The National Science Foundation stimulated a number of new initiatives, including funding a 1988 report that proposed a framework for monitoring mathematics and science education that addressed teacher quality, course content, student achievement, and other factors (see National Research Council, 1985, 1988). Then-President George H.W. Bush’s America 2000 plan emphasized accountability, monitoring, and data collection for six national goals for education. State data in these six goal areas became available in 1991 when the National Education Goals Panel produced its first reports.

During the same period, the Hawkins-Stafford Elementary and Secondary School Improvement Amendments of 1988 (P.L. 100-297) authorized the establishment of a Special Study Panel on Education Indicators. This panel was chartered by the Department of Education in 1989 and produced Education Counts in 1991 (Special Study Panel on Education Indicators, 1991). The report lays out a conceptual framework for an ongoing indicator system that would track enduring educational issues; it identifies six critical issue areas that such an indicator system should address:

1. learner outcomes;
2. quality of educational institutions;
3. readiness for school;
4. societal support for learning;
5. education and economic productivity; and
6. equity (measures of resources, demographics, and students at risk).

The report has served as a guiding framework for decisions about the content and format of the Condition of Education reports issued annually by the National Center for Education Statistics (NCES). Of particular importance was the report’s focus on monitoring the outcomes of education rather than on complex causal factors (John Ralph, Program Director, National Center for Education Statistics, personal communication, 2012).

At present, NCES, in the United States Department of Education, and many other organizations collect and publish data on education, much of it in the form of indicators in particular areas. The Condition of Education report documents various sorts of data to provide detailed information in five areas: participation in education; learner outcomes; student effort and educational progress; contexts of elementary and secondary education; and contexts of higher education.4 Editorial Projects in Education, the publisher of Education Week and the Education Counts reports, has also published indicators in many areas, as have other organizations.

4For details, see http://nces.ed.gov/programs/coe/.

International organizations have also focused on indicators. The European Lifelong Learning Indicators (ELLI) were developed by Bertelsmann Stiftung (a private educational foundation) based on domains defined by the United Nations Educational, Scientific, and Cultural Organization (UNESCO) (Hoskins, Cartwright, and Schoof, 2010). These indicators are widely recognized for pushing thinking about ways to measure types of learning that have not traditionally been included in formal measures, though they also address academic learning (the ELLI indicators are discussed further in Chapter 6). The Programme for International Student Assessment (PISA) and the Trends in International Mathematics and Science Study (TIMSS) are other sources of international education data.

The push to establish the KNIS began in the early 2000s, with a request from Congress for the GAO to explore the feasibility of a system of national indicators that could chart the progress of the country in major policy areas (U.S. Government Accountability Office, 2004). This effort led to the passage of the law authorizing the KNIS. The NAS has begun preliminary work with the State of the USA (SUSA) to prepare for implementation
by considering possible indicators in health (see Institute of Medicine, 2009) and education (beginning with this project).

PROJECT GOALS

The steering committee wanted to be sure that the complete trajectory of education across the life span would be covered in the discussion, and also that different aspects of education would be addressed. While acknowledging that other frameworks could serve a similar purpose, it developed a rough framework based on those two goals: see Table 1-1. The framework covers the stages of life by identifying five broad sectors of education: preschool, K-12 education, higher education, other postsecondary education and training, and lifelong or informal learning (learning that occurs outside the formal structures of the education system). It also identifies three aspects of education: institutions, service providers, and resources; individual-level behaviors, engagement, and outcomes; and contextual factors that influence learning. The committee emphasized that this approach is just one way to structure the discussion and that other frameworks may be equally or more useful.

For the workshop, the committee commissioned researchers who focus on education in each of these five general phases of life to think about what information will be most needed to measure the state of education in the country as it adapts to fast-changing technologies and global economic forces. The committee asked the researchers to make presentations in which they addressed the following questions:

1. In your opinion, what are the three key indicators about (a) institutions, service providers, and resources; (b) individual-level behaviors, engagement, and outcomes; and (c) contextual issues for [the specific life stage]?
2. What is the evidence base that justifies the use of these indicators? That is, what is the evidence that they matter? What is the argument for using them?
3. In what direction would we want the indicator to change over time? That is, please talk about whether we would want to see the indicator increase, decrease, or stay the same and how such changes would be interpreted.
4. What are the potential consequences associated with these indicators? That is, if people begin paying attention to them, what consequences (intended and unintended, positive and negative) may result?
5. What are the equity issues to consider for these indicators?
6. What are the relevant data sources for these indicators?

The steering committee also commissioned a set of experts to assist in synthesizing the information presented by the various panelists. This group was asked to reflect on the common themes that emerged from the discussion, paying particular attention to areas of agreement and disagreement. They were asked to address the following questions:

1. How do you envision that a national system of education indicators would be used? In what ways might these indicators support policy?
2. Based on the workshop discussions, what do you think are the most important variables/statistics to include in a national system of education indicators? What are the reasons for including these indicators?
3. What are the most important gaps that have emerged between currently available data sources and the types of data that would be needed to support a national system of education indicators? What additional data are needed?

TABLE 1-1 Framework for Education Indicators Developed to Guide the Workshop

Rows (stages of education/learning): birth to age 5; K-12; higher education; other forms of postsecondary education and training; and lifelong, informal learning.
Columns (for each stage): indicators about institutions, service providers, and resources; indicators about individual-level behaviors, engagement, and outcomes; and indicators about contextual factors.
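Some of the measures discussed in this chapter, such as college readiness, are composite indicators that combine several component variables, and a recurring question for any indicator is whether it can be disaggregated by population group. The sketch below illustrates both ideas with a hypothetical weighted composite; all component names, weights, and data values are invented for illustration and are not drawn from the workshop.

```python
# Hypothetical composite "college readiness" indicator, disaggregated
# by population group. Names, weights, and values are illustrative only.

def normalize(value, lo, hi):
    """Rescale a raw component value onto a 0-1 scale."""
    return (value - lo) / (hi - lo)

def composite(components, weights):
    """Weighted average of components that are already on a 0-1 scale."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9, "weights must sum to 1"
    return sum(weights[name] * value for name, value in components.items())

# The weighting scheme embeds a judgment about what matters most.
WEIGHTS = {"test_score": 0.5, "course_rigor": 0.3, "attendance": 0.2}

# Two hypothetical subgroups; the raw test score is normalized from an
# assumed 400-1600 range, the other components are already proportions.
groups = {
    "Group A": {"test_score": normalize(1150, 400, 1600),
                "course_rigor": 0.70, "attendance": 0.93},
    "Group B": {"test_score": normalize(980, 400, 1600),
                "course_rigor": 0.55, "attendance": 0.88},
}

for name, components in groups.items():
    print(f"{name}: {composite(components, WEIGHTS):.3f}")
```

The gap between the two groups' composite values is the kind of disparity that a disaggregated indicator is meant to surface; the choice of weights, by contrast, is a design decision, which is why composite indicators require a clear underlying framework rather than an ad hoc formula.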
The panelists who suggested indicators were encouraged to strike a balance between ideal and realistic goals. That is, they were instructed to be forward looking in thinking about measures that will assess both current conditions and trends as well as anticipated future conditions, and not to feel constrained by what has been done in the past or by their knowledge of practical obstacles to collecting particular types of data. At the same time, they were asked to consider the extent to which data would be available to support the indicators they suggested.

A rule of thumb suggested by David Breneman, the steering committee chair, was that a good system might begin with approximately one-third indicators based on existing data sources with long-term trend lines; one-third indicators that are not routinely collected but conceivably could be collected; and one-third indicators in a more gray area, those that reflect areas of key importance but present measurement challenges and require further development. As he noted at the beginning of the workshop, “We have some freedom to think in a fairly open way about what we would like to see.” Identifying a short list of indicators that “people should attend to and making them readily available on a website would be a great contribution to education in this country,” he added.

All of the workshop participants were asked to keep several questions in mind as they weighed the indicators discussed over the course of the two days:

1. Is the indicator from a reliable source?
2. Is the indicator reasonably accurate, precise, reliable, valid, and unbiased?
3. Is it available and measured in a consistent way over time? Will it continue to be available?
4. Does it reflect a salient outcome or measure of well-being?
5. Does the indicator have a relatively unambiguous interpretation? Will it be easily understood by the public?
6. Can the indicator be disaggregated in order to report it subnationally, for various population groups, and by specific demographic characteristics?

Together, the participants brought a wide range of ideas to the workshop, suggesting a range of phenomena that might be measured and a range of approaches to data collection and data systems. Many factors should influence the selection of a short list of indicators for this broad national purpose. As Chris Hoenig, senior advisor to the NAS presidents, noted in opening remarks, the overall set of indicators ultimately adopted by SUSA will be vitally important because they will be used to guide goals and decisions about each major sector in the country. He showed the group the preliminary version of the interactive website, which displays the health indicators that have been selected, to illustrate how useful the program can be. He stressed the importance of the logical framework underlying the selected indicators, which will guide thinking about the forces that shape outcomes as well as disparities.

Diana Pullin reinforced this point in her opening remarks, noting that the indicators that are ultimately selected will reflect a particular conception of what it means to be an educated person. There is sometimes a tension in discussions of public education, she added, between the goals of providing a public benefit (a populace that is equipped for citizenship and work, for example) and providing a benefit to individuals (the intellectual tools to pursue a fulfilling life, for example). There is a risk that summative data about this complex enterprise may tend to commodify it, as some have suggested has occurred with the indicators used in published rankings of colleges and high schools.5 Indicators can be used not just to describe what has happened, she noted, with reference to the 2011 GAO report, but also to promote progress, provide transparency, further accountability, promote civic engagement, further economic productivity, and engender conversation in communities and in commerce.

Like the workshop, this report is organized around the five life stages. Chapters 2 through 6 describe the indicators proposed for each stage and the issues they present. All of the indicators suggested for each stage are listed in a table at the beginning of these chapters, so that it is easy to see the range of what was proposed and any areas of overlap.6 The suggestions made by the panelists are summarized, and then the key issues that emerged in discussion are described. Chapter 7 summarizes the remarks of the synthesis panel and the ensuing discussion (the workshop agenda appears in Appendix A, and the participants are listed in Appendix B). The discussion was wide-ranging, and this report was designed to capture the most important themes and issues.

5See http://www.usnews.com/rankings for more information on these rankings.

6The presenters took varying approaches to the assigned task, and this summary report reflects that. Thus, some chapters contain more references and quantitative information than others.