1  Introduction

As in many areas of public and private endeavor, publicly funded programs intended to protect and improve the health of the public are being asked to account in measurable ways for their performance. The 1990s have brought a growing emphasis on accountability for achieving desired outcomes, and methods of performance measurement have emerged as essential tools for operationalizing this quest for accountability. A system of performance measurement promises improved documentation of the contributions of public and private agencies, and can serve as a quality improvement tool by drawing attention to practices shown to contribute to desired outcomes and by identifying areas needing improvement. In fact, many people who are well informed about public health, health policy, health economics, and related matters believe that we cannot expect public funding to increase or even be maintained at current levels without better documentation of the return on program investments.

Measuring performance is not a new idea, but the emphasis on outcomes has changed the way we think about these issues and what needs to be measured. It is no longer enough to ask, "How many people enrolled in a smoking cessation program?" or even "How many people finished the program?" Now, answers are also sought to questions such as "How many people stopped and are still not smoking a year after finishing the program?" Selecting the right questions requires an understanding—still limited in some fields—of the often complex relationships between program activities and health outcomes. Answering the questions requires access to appropriate data. Existing data sources, however, have generally not been created for this purpose and may not be readily adaptable to meet the need.



The National Academies | 500 Fifth St. N.W. | Washington, D.C. 20001
Copyright © National Academy of Sciences. All rights reserved.





In 1995, the U.S. Department of Health and Human Services (DHHS) proposed the establishment of Performance Partnership Grants (PPGs) requiring the application of performance measurement methods to a set of federal block grant programs that provide funding to states for public health, substance abuse, and mental health activities. That proposal made it necessary for DHHS to consider what the appropriate performance measures would be, how they would be used, and whether suitable data were or could be made available to support the process. The department sought assistance in addressing these issues from the Committee on National Statistics of the National Research Council. The Panel on Performance Measures and Data for Public Health Performance Partnership Grants was assembled in fall 1995 to assess the state of the art in performance measurement for program areas covered by the specified block grants and to recommend steps toward improving performance measures and performance measurement for health-related programs.

The work of the panel has resulted in two reports, of which this is the second. In its first report (National Research Council, 1997), the panel discussed specific measures that are feasible to use now in connection with the block grant programs, as well as conceptual and policy issues related to the use of performance measures. In this second report, the panel looks beyond measures for specific program areas to address broader data and information system issues that require attention at the federal, state, and local levels to advance the practice of performance measurement for publicly funded health programs.

Origins of the Study

The immediate impetus for this study was the DHHS proposal to establish PPGs for a specific set of health programs. That PPG proposal was, however, a reflection of a more general interest in performance measurement that is evidenced by parallel developments in public health, health care, and public policy. All of these developments have helped draw attention to the challenges of identifying appropriate measures and obtaining high-quality data.

Performance Partnership Grants

States receive DHHS grant funds in support of various health programs. Seeking a way to increase state flexibility in the use of these funds while enhancing accountability for progress toward program goals, DHHS proposed that formal legislative changes be made for some of these grant programs to mandate the implementation of PPG arrangements between states and the federal government. The program areas covered by the original PPG proposal were chronic diseases; sexually transmitted diseases (STDs), human immunodeficiency virus (HIV) infection, and tuberculosis; immunization; mental health; substance abuse; and three areas of special interest to DHHS—sexual assault, disabilities, and emergency medical services.

The proposal called for DHHS and each state to negotiate an agreement on program objectives for a 3- to 5-year period. Each agreement would also include a set of related performance measures to be used as a basis for monitoring progress toward those objectives. The PPG concept envisioned that DHHS, in consultation with states, public health professionals, private organizations, public agencies, and citizens, would develop a menu of performance measures from which states would select a subset appropriate to their program goals. Because the problems and priorities of states vary, a single set of required measures for use by all states was not considered appropriate. It was originally expected that the PPG mechanism would be formalized through legislation, but this has not happened. Nevertheless, the idea of performance partnerships based on negotiated federal-state agreements regarding program objectives and measures remains viable and is being implemented for certain grant programs (e.g., the Maternal and Child Health Block Grant; see Maternal and Child Health Bureau, 1997).

Other Influences

Perhaps the most direct antecedent to the PPG proposal is the Government Performance and Results Act (GPRA) of 1993, which requires the federal government to measure the performance of all federal programs. This requirement has focused the attention of DHHS agencies on the issue of performance measurement and gives them an incentive to implement performance reporting for their grantees.

With its emphasis on the collection and analysis of data related to outcomes, performance measurement has close ties to the systematic assessment of health status and health needs that is recognized as a core function of public health (Institute of Medicine, 1988).
At the federal level, these assessment activities already encompass the compilation and publication of vital statistics and disease surveillance data collected by states; survey programs such as the National Health Interview Survey; and ongoing public health monitoring efforts, including those to track progress toward the health promotion and disease prevention objectives of Healthy People 2000 (soon to be updated by Healthy People 2010). Similar activities are conducted by state and local health agencies.

The health care field has responded to concerns about assessing and improving the quality of care with a variety of performance measurement activities. An early emphasis on quality assurance encouraged a focus on finding and responding to errors. There has been a gradual shift to a quality improvement approach that puts greater emphasis on using measurement to monitor processes and guide their improvement so better health outcomes can be achieved. The proliferation of new models of health care organization and delivery has led to further changes, including the development of sets of standard measures for the processes and, increasingly, the outcomes of care (e.g., the Health Plan Employer Data and Information Set [HEDIS]; see National Committee for Quality Assurance, 1997). These efforts have been conducted under the aegis of independent organizations such as the National Committee for Quality Assurance, the Foundation for Accountability (1998), and the Joint Commission on Accreditation of Healthcare Organizations (1998b).

Charge to the Panel

For this study, the panel was charged with the following tasks:

(1) identify measurable objectives that states and other interested parties might want to achieve through PPG agreements, and that can be monitored at the state and national levels either now or with small modifications to existing data systems;

(2) identify measures relevant to PPG agreements that cannot be assessed, but are important to states and the federal government and therefore require further development; and

(3) recommend improvements to state and federal surveys and data systems to facilitate the future collection of data for both existing and developmental measures.

The panel's first report (National Research Council, 1997) focused primarily on task 1. The panel addressed broad analytic and infrastructure issues involved in developing and using performance measures. In addition, the panel assessed more than 3,200 candidate PPG measures, proposed by more than 1,500 participants at four regional meetings and by professional associations. Some 60 health outcome and risk status measures were selected as representative of those that might be used in conjunction with federal-state PPGs in the program areas covered by the original PPG proposal. (See Appendix A for a list of the health outcome and risk status measures proposed in the panel's first report.) Related process and capacity measures were also suggested. (The various types of measures are defined in the next section.)
In the areas of mental health and substance abuse, a lack of consensus on outcome measures and the limited availability of comparable data collection across states led the panel to frame that portion of its discussion in terms of measures that might be used but would require further development (i.e., an approach more consistent with task 2). The major findings presented in the panel's first report are reviewed briefly later in this chapter.

The present report takes up tasks 2 and 3. The panel has given somewhat less attention to the identification of additional measures (task 2) than to improvements in data collection and information systems (task 3) for several reasons. First, the broader issues related to the development of performance measures and data systems that are addressed by task 3 were identified in the first phase of the study as being of higher priority and requiring more immediate attention than the narrower concerns of task 2. The panel felt further that since federal-state performance partnership agreements are now expected to develop in program areas not included under the original proposal (e.g., maternal and child health), consideration of measures needing further development should not be restricted to the original programs. Likewise, the panel concluded that it is important to rethink the program-specific perspective of the PPG proposal as a basis for conceptualizing measures of health outcomes or enhancing data systems. The panel also recognized that the discussion could and should be broadened beyond the federal-state PPG framework to include the local level as well.

Essential Definitions

Performance Measurement and Related Concepts

The term "performance measurement" is used in various contexts. In this report, the term denotes the selection and use of quantitative measures of program capacities, processes, and outcomes (assumed to be health outcomes in this case) to inform the public or a designated public agency about critical aspects of a program, including its effects on the public. The related term "performance monitoring" is used here in the context of a continuing set of performance measurement activities. A "performance measure" is the specific quantitative representation of a capacity, process, or outcome deemed relevant to the assessment of program performance.

One of the principal purposes of performance measurement is to assess whether progress is being made toward desired goals and whether appropriate program activities are being undertaken to promote the achievement of those goals. Performance measurement can also serve to identify problem areas that may require additional attention or, more positively, successful efforts that might serve as models for others. In some fields, performance measurement is being used under certain circumstances as a tool for regulation and resource allocation.
The panel has advised against the use of performance measures for resource allocation for health programs until an adequate understanding is developed of the causal relationships between program activities and outcomes, of the measures and data needed to represent those relationships adequately, and of the appropriate adjustment methods for comparisons of dissimilar populations. Well-designed research and evaluation studies are needed to reveal more about the causal relationships that may exist between program activities and outcomes. Even with such studies, the panel cautions against using performance measures as the sole basis for causal inferences regarding program performance because of the diversity of factors beyond program activities that affect most health outcomes. See Chapter 2 for additional discussion of the characteristics and uses of performance measurement.

"Accountability" for performance—an obligation or willingness to be assessed on the basis of appropriate measures of actions and outcomes with regard to the achievement of program or policy purposes—is an essential element of the results-oriented management approach within which performance measurement is usually applied. Accountability can be required of government units through legislative or executive mandate. With GPRA, for example, the Congress has created a requirement that the executive branch agencies develop performance plans with appropriate performance measures. Under the kinds of performance partnership agreements represented by the PPG proposal, however, states incur an obligation to report on performance by accepting federal grant funding, but they are recognized as partners with whom some of the terms of an agreement are negotiated rather than dictated.

In some cases, causal relationships between program activities and outcomes may be clear enough to justify holding the program directly accountable for observed outcomes. More often, and especially for complex matters such as health and well-being, requirements for accountability cannot be translated into an assumption that accountable parties always bear sole responsibility for the outcomes they report (Wholey and Hatry, 1992). In either event, continued failure to make progress toward intended performance goals should trigger analysis and change in policy and programs.

Performance measurement is also a prominent aspect of efforts to assess the quality of health care (see, e.g., National Committee for Quality Assurance, 1997; Foundation for Accountability, 1998; Joint Commission on Accreditation of Healthcare Organizations, 1998a,b). However, the focus on quality of care differs in some respects from the panel's objective of measurement and reporting for the purpose of monitoring program performance. GPRA and the PPG efforts are specifically tied to government activities, but private-sector and provider-led organizations are playing a substantial role in clinical quality assessment.
Measurement primarily for internal quality assurance and quality improvement purposes has been supplemented by the development of measures and external reporting programs to help employers and other purchasers of health services, as well as regulators and policy makers, compare the performance of provider groups. Measures and reporting formats that can be useful to individual consumers are also being studied.

Categories of Performance Measures

In its first report, the panel emphasized the need for several types of measures to assess program performance: health outcome, risk status, process, and capacity (see Box 1-1 for the definitions used in that report).

Box 1-1 Categories of Performance Measures

Health Outcome: Change (or lack of change) in the health of a defined population related to an intervention, characterized in the following ways:
- health status outcome: change (or lack of) in physical or mental status
- social functioning: change (or lack of) in the ability of an individual to function in society
- consumer satisfaction: response of an individual to services received from health provider or program

Risk Status (intermediate outcome): Change (or lack of) in the risk demonstrated or assumed to be associated with health status.

Process: What is done to, for, with, or by defined individuals or groups as part of the delivery of services, such as performing a test or procedure or offering an educational service.

Capacity: The ability to provide specific services, such as clinical screening and disease surveillance, made possible by the maintenance of the basic infrastructure of the public health system, as well as by specific program resources.

SOURCE: National Research Council, 1997:9.

Some health outcomes of primary interest, such as reductions in mortality or morbidity, may be impractical to measure as indicators of program performance. The time lag between an intervention and changes in those outcomes is too great for the effects to be observable within the relatively short time frames (e.g., ranging from 3 to 5 years in the PPG proposal) used to monitor program performance. To provide a partial solution to the problem posed by long latency periods, the panel included measures of risk status as intermediate outcomes. For a risk status measure to be appropriate, of course, there should be consensus that the result being measured is directly related to the health outcome of interest, although it is rarely possible to account adequately for all of the many confounding factors that affect the ultimate health outcome. Similarly, process and capacity measures should have a recognized and generally accepted relationship to relevant health outcomes. For example, a state with a goal of reducing its mortality rate from breast cancer could seek to reduce the risk of death by increasing the number of mammograms provided to women aged 50 and over. The mammography rate could then be used as a risk status measure.
In addition, the state could track changes in processes (e.g., health education programs, requirements that private insurers include coverage of specific activities such as mammography or surgical treatment, and postoperative follow-up care) and elements of capacity (e.g., numbers of trained staff and facilities offering mammography screening) that are believed to be related to the level of mortality from breast cancer. A detailed set of such measures could provide some understanding of the particular services that are available and that may be contributing to or inhibiting desired changes.

Phase I: Focus on Selection of Performance Measures

In its first report, the panel identified various outcome, process, and capacity measures that it considered suitable for federal-state performance partnership agreements under the specific grant programs for which PPGs had been proposed. The panel emphasized that these particular measures were representative examples, not a definitive or exhaustive list. Because health needs and program priorities, as well as data resources, vary among states and will surely vary over time, not all of these measures will be appropriate for every state and every future need. This is especially true for the process and capacity measures. States can pursue many reasonable strategies to improve health outcomes, and each strategy may require a different set of process and capacity measures. To illustrate the range of potential strategies and the implications for process measures for a single program goal, Table 1-1 (reprinted from the panel's first report) lists examples of strategies for reducing the incidence of tobacco smoking and process measures associated with each strategy.

As part of phase I of the study, the panel also addressed broader issues of performance measurement by providing a general analytic framework for use by states and DHHS in assessing the appropriateness of outcome, process, and capacity measures for individual performance agreements.
Recognizing that data resources and measurement methods need improvement, the panel recommended in its first report that DHHS continue to work with states and local areas toward several infrastructure goals: developing common definitions and measurement methods; encouraging efficient development of data resources that would support multiple public health, mental health, and substance abuse needs; incorporating state and local data priorities in national infrastructure development efforts; and promoting state and local data collection and analytic capabilities. These issues are addressed more thoroughly in the present report. The principal conclusions and recommendations of the panel's first report are briefly reviewed below.

TABLE 1-1 Examples of Program Strategies and Related Process Measures for Reducing the Incidence of Tobacco Smoking

Limit illegal youth purchases of smoking tobacco
- Percentage of vendors who illegally sell smoking tobacco to minors
- Percentage of communities with ordinances and regulations restricting smoking tobacco sales
- Number of vending machines selling smoking tobacco in locations accessible by youth
- Presence or absence of state or local tobacco retailer licensing system

Increase the price of tobacco products
- Amount of excise tax (cents) per pack of cigarettes

Restrict smoking tobacco advertising
- Percentage of communities with ordinances or regulations restricting smoking tobacco advertising
- Number of billboards advertising smoking tobacco close to schools and playgrounds
- Number of sport or entertainment events sponsored by tobacco companies

Restrict indoor tobacco smoking
- Percentage of worksites (day care centers, schools, restaurants, public places) that are smoke free or have limited smoking to separately ventilated areas

Educate children about hazards of smoking tobacco
- Proportion of elementary, junior high, and high schools with age-appropriate smoking prevention activities and comprehensive curricula

Increase access to or availability of smoking cessation programs
- Proportion of current tobacco smokers visiting a health care provider during the past 12 months who received advice to quit
- Proportion of managed care organizations (or schools or obstetric and gynecological service providers) that have active smoking prevention and cessation plans

Market effective antismoking messages to the general public
- Percentage of adults who can recall seeing an antismoking message during the 12 months following a media campaign

SOURCE: National Research Council (1997:20).

Use of Measures of Process and Capacity as Well as Outcomes

Despite their widespread use and intuitive appeal, health outcome measures by themselves are insufficient for monitoring the effectiveness of a given program in achieving health goals. One reason is that outcomes are often influenced by factors other than activities associated with a particular program or agency. An example is mammography rates for women over age 50, which can be affected by factors such as state-sponsored consumer education, private advertising, technological changes that affect cost, and changes in insurance coverage. For substance abuse and mental disorders, knowledge regarding the factors that influence the longer-term outcomes of these chronic and recurring conditions is particularly limited.

A second important limitation on the sole use of outcome measures to monitor program effectiveness, noted earlier, is the impractical delay involved in observing certain outcomes of interest, such as the length of time required for many cancers to become detectable. A third limitation is the rarity of some important outcomes, such as major outbreaks of food- or water-borne illness. Relying only on the detection of an outbreak of cryptosporidiosis, for example, would not be an acceptable means of monitoring the effectiveness of water treatment services.

The panel therefore concluded that performance monitoring must also make use of measures of intermediate outcomes, process, and capacity for which scientific evidence or professional consensus has established a relationship to the desired health outcome. Even this "multimeasure" approach may not provide conclusive evidence of the effectiveness of particular interventions, but it will allow interested parties to examine actions taken by agencies to realize their objectives and consider whether changes in the magnitude or direction of their efforts are needed.

Guidelines for Selecting Performance Measures

The panel applied four guidelines in its review of the proposed PPG measures and urges others to use these same guidelines when selecting performance measures:

1. Measures should be aimed at a specific objective and be result oriented. Outcome measures must clearly specify a desired health result, including identifying the population affected and the time frame involved. For process and capacity measures, the link to a health outcome should be clearly specified.

2. Measures should be meaningful and understandable. Performance measures must be seen as important to both the general public and policy makers at all levels of government, and they should be stated in specific but nontechnical terms.

3. Data should be adequate to support the measure. Data must meet reasonable statistical standards for accuracy and completeness; be available in a timely fashion, at appropriate periodicity, and at reasonable cost; and be collected using similar methods and with a common definition throughout the population of interest. Comparisons across states or other population groups are valid only if definitions and collection methodologies are consistent across those populations.

4. Measures should be valid, reliable, and responsive. To be valid, a measure should capture the essence of what it purports to measure. To be reliable, a measure should have a high likelihood of yielding the same results in repeated trials and therefore low levels of random error in measurement. To be responsive, a performance measure should be able to detect a change.

It is also important to recognize that a measure meeting these requirements for one purpose may not meet them for another. For example, the infant mortality rate is usually considered a valid and reliable measure of the change in a state's rate of infant death from one period to another. It may not, however, be a valid measure of the performance of an individual public health agency that has only limited influence on factors affecting infant health. Moreover, it may not be a reliable measure of change at the local level because the small number of infant deaths at that level makes the measure subject to random variation from year to year. And it may not be a responsive measure for assessing the impact of a new prenatal counseling program serving a segment of a community that accounts for only a small share of the community's infant deaths.

Limitations of a Program-Specific Approach to Performance Measurement

For the first phase of this study, the panel was asked to consider performance measures that could be used in connection with federal grants to states for the specific program areas noted earlier (i.e., chronic diseases; STDs, HIV infection, and tuberculosis; mental health; immunization; substance abuse; and three areas of special interest to DHHS—sexual assault, disabilities, and emergency medical services). Clearly, the individual diseases and health conditions that the panel studied are only a subset of those that are of concern around the country. The panel believes, for at least three major reasons, that over the long term it would be preferable to monitor performance using a more comprehensive and less program-specific approach that integrates generic with program-specific measures.
First, the use of performance measures to assess the impact of a particular federal funding program is complicated by the fact that those federal funds are often only one of several sources of support for a state or local health program. For example, the federal mental health block grant represents only about 4 percent of state mental health agency budgets, with state general revenues, private insurance, Medicaid, and local sources making up the balance. Because those federal funds do not buy specific services, it appears unlikely that a change in any statewide measure of mental health outcomes could be attributed unequivocally to a mental health block grant.

Second, a program-specific approach to monitoring performance tends to overlook the synergies that can result from the coordination of efforts supported by separate funding programs. For example, both a maternal and child health program and an STD program might target HIV testing in pregnant women, or resources for chronic disease prevention and environmental health might target lead abatement interventions. Given current levels of knowledge, efforts to attribute outcomes to one or another partial funding source are expensive, often futile, and of no benefit. A related consideration is efficiently meeting various programs' overlapping data needs. A strictly program-specific approach might lead to duplication of data collection efforts or missed opportunities to adopt measures that can be used by more than one program. For example, measures related to tobacco use may be of interest not only to a tobacco control program but also to programs aimed at preventing cancer, preventing and controlling chronic respiratory illnesses such as asthma, and reducing the incidence of low-weight births.

Finally, and much more broadly and subtly, the program-specific approach has led to hierarchical concepts about the governance, competence, and focus of performance measures and appropriate data systems that support them. A national perspective and federal leadership remain important, but an effective performance measurement and accountability system also requires that state and local agencies play a greater role in defining program priorities and shaping performance measurement activities. Effective change will require true partnership in this endeavor. These new concepts and their implications are discussed further in the succeeding chapters.

Need to Strengthen State and Local Capacity for Data Collection and Analysis

The panel concluded in its first report that the data infrastructure required to support state- and local-level performance monitoring needs to be strengthened. Many federal data collection programs produce national but not state- or local-level rates. Many of the potential health outcome measures identified by the panel are heavily dependent on a small number of collaborative state-federal surveys, such as those of the Behavioral Risk Factor Surveillance System and the Youth Risk Behavior Surveillance System. Even these survey programs do not cover all states or apply consistent survey methods across states.
The panel therefore recommended viewing the use of performance measures to assess the effectiveness of public health, substance abuse, and mental health programs as an ongoing, long-term public administration effort that requires a strong commitment by the federal government to providing technical assistance and infrastructure support to its partners at the state and local levels.

Inadvisability of Using Performance Measures Alone for Resource Allocation Purposes

Although there is considerable value in using performance measurement to enhance the effectiveness and accountability of publicly funded programs, the development and use of performance measures, particularly for comparisons across states, is not yet—and may never be—a precise scientific process. Understanding of the relationships between health interventions and outcomes and between individuals' characteristics and health outcomes is still limited. Such knowledge is essential for making appropriate statistical adjustments for socio-demographic and other relevant factors. Moreover, the complexity of the relationships among health outcomes, program interventions, and other factors in the physical and socioeconomic environments may make it difficult to monitor performance in sufficient detail to ensure that resource allocation decisions are based on consideration of all the appropriate causal factors. In practical terms, timely and comparable data are often unavailable. Consequently, the panel warned that using cross-state comparisons of "performance" as the analytic basis for determining financial rewards or penalties for participating agencies is, at present, highly problematic.

Phase II: Data and Information System Development to Support Performance Measurement

For the second phase of the study, which addressed the needs for data and information system development to support performance measurement, the panel adopted a broader perspective than was suggested by the study's initial focus on state-level performance measurement for federally funded programs in specific areas of public health, substance abuse, and mental health. Rather than pursue a strictly technical assessment of program-specific measures, data collection methods, or analytic techniques, the panel judged it important to put performance measurement in a broader data context and to emphasize the commonalities across programs, while still taking note of some special concerns in specific program areas. The study's second phase continued to focus largely on the public sector, but the panel looked beyond the federal-state relationship that defined the PPG proposal to consider a more general notion of performance partnership agreements that can encompass state and local interests as well.
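The panel's caution above about statistical adjustment for socio-demographic factors can be made concrete with a small illustrative sketch. The states, strata, numbers, and function names below are hypothetical, not drawn from the report; the sketch simply shows how direct standardization against a common reference population can reverse a crude cross-state comparison when the states serve different socio-demographic mixes.

```python
# Illustrative sketch (hypothetical data): crude rates versus directly
# standardized rates for two states serving different population mixes.

def crude_rate(events, population):
    """Overall rate that ignores population composition."""
    return sum(events) / sum(population)

def standardized_rate(events, population, reference):
    """Directly standardized rate: stratum-specific rates weighted by a
    shared reference population distribution."""
    rates = [e / p for e, p in zip(events, population)]
    return sum(r * w for r, w in zip(rates, reference)) / sum(reference)

# Two socio-demographic strata: index 0 = higher-risk, index 1 = lower-risk.
# State A serves mostly the higher-risk stratum; State B mostly the lower-risk one.
state_a = {"events": [90, 20], "population": [3000, 2000]}
state_b = {"events": [40, 45], "population": [1000, 5000]}
reference = [2000, 2000]  # common reference distribution across strata

for name, s in [("A", state_a), ("B", state_b)]:
    print(name,
          round(crude_rate(s["events"], s["population"]), 4),
          round(standardized_rate(s["events"], s["population"], reference), 4))
```

On these hypothetical numbers, the crude rates rank State A worse than State B, while the standardized rates reverse that ranking, which is precisely why the panel warns against using unadjusted cross-state comparisons as the basis for rewards or penalties.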
The panel has three aims for the present report: (1) to highlight important technical and policy issues that must be considered in the further development and use of performance measurement for health-related programs; (2) to describe a health information network that would support performance measurement at the national, state, and local levels; and (3) to present a strategy for developing such a network.

A Vision for a National Health Information Network

Certain elements are fundamental to the panel's vision of the kind of information network that should be developed to support health-related performance measurement. A key factor is the development of a national network through an active collaboration among local, state, and federal agencies. A national approach should ensure that information resources, interests, and needs at each of these levels, as well as in the private sector, are taken into account, while still allowing the aggregation of data in ways useful for larger geographic and administrative units. DHHS and other federal agencies have an important leadership role to play, but they must work in partnership with others who have an interest in such a network.

Indeed, the panel envisions a national network of interacting systems, with data and transaction standards supporting the production of performance data that are comparable across sources. The aim is to find a means of supporting information needs for health-related performance measurement within a broader system that serves other operational, managerial, and analytic purposes. A specialized data system dedicated solely to performance measurement is generally not an efficient or cost-effective goal.

Because health needs, program priorities, and resources differ throughout the country and change over time, an information network useful for performance measurement must be adaptable to these differences and changes. Furthermore, because understanding of performance measures and performance measurement is still evolving, an information network must be able to respond as additional empirical evidence is obtained and better methods of data collection are implemented. Finally, any such information network must provide strong protections for the confidentiality and security of data. The panel's vision for a national health information network is discussed in detail in Chapter 5.

Critical Issues

This report addresses several issues the panel believes to be critical to further advances in performance measurement for health-related programs. In the development of plans for performance measurement, the assessment of data needs, and the enhancement (or redesign) of data systems to support performance measurement, a primary concern is the need for an integrated perspective and effective collaboration.
This collaboration involves multiple partners—federal, state, and local agencies, each with multiple stakeholders, as well as program managers, service providers, private nonprofit organizations, and consumers. Also requiring attention are ways to improve the use of existing data and to develop better performance measures. In addition, performance measurement systems will need to address the quality of the data that are collected and used. Information technologies are creating greater opportunities to apply performance measurement, but using those technologies effectively will require attention to data standards. Successful implementation of performance measurement systems will also depend on the availability of training and technical assistance to ensure that skilled staff can apply appropriate policy, programmatic, and technical expertise. Research to improve the science base for the development and use of performance measures and performance measurement is essential. Another fundamental concern is determining what resources are needed to support performance measurement activities and ensuring that those resources will be available.

Structure of the Report

This report presents the panel's findings and recommendations regarding data and information systems to support performance measurement for publicly funded health programs. Chapter 2 examines performance-based systems and the uses of performance measures and performance measurement. Chapter 3 considers the characteristics of various health program areas and the implications of those characteristics for performance goals and performance measurement. Chapter 4 explores factors in the current data and information system environment that must be addressed to advance the use of performance measurement. In Chapter 5, the panel outlines its vision of a national health information network that would effectively support performance measurement as well as other objectives, and makes recommendations to further the development and implementation of such a network.

References

Foundation for Accountability. 1998. About FACCT. http://www.facct.org/about.html (April 15, 1998).

Institute of Medicine. 1988. The Future of Public Health. Committee for the Study of the Future of Public Health. Washington, D.C.: National Academy Press.

Joint Commission on Accreditation of Healthcare Organizations. 1998a. Nation's Three Leading Health Care Quality Oversight Bodies to Coordinate Measurement Activities. Press release, May 19, 1998. http://www.jcaho.org/news/nb.htm (June 5, 1998).

Joint Commission on Accreditation of Healthcare Organizations. 1998b. Oryx Fact Sheet for Health Care Organizations. http://www.jcaho.org/perfmeas/oryx/sidebar1.htm (July 24, 1998).

Maternal and Child Health Bureau. 1997. Guidance and Forms for the Title V Application/Annual Report. Maternal and Child Health Services Title V Block Grant Program. Rockville, Md.: U.S. Department of Health and Human Services, Health Resources and Services Administration.

National Committee for Quality Assurance. 1997. HEDIS 3.0/1998. Washington, D.C.: National Committee for Quality Assurance.

National Research Council. 1997. Assessment of Performance Measures for Public Health, Substance Abuse, and Mental Health. E.B. Perrin and J.J. Koshel, eds. Panel on Performance Measures and Data for Public Health Performance Partnership Grants, Committee on National Statistics. Washington, D.C.: National Academy Press.

Wholey, J.S., and H.P. Hatry. 1992. The case for performance monitoring. Public Administration Review 52(6):604–610.