3
Narrowing the Research-Practice Divide—Systems Considerations

OVERVIEW

Bridging the inference gap, as described in this chapter, is the daily leap physicians must make to piece together existing evidence around individual patients in the clinical setting. Capturing and utilizing data generated in the course of care offers the opportunity to bring research and practice into closer alignment and propagate a cycle of learning that can enhance both the rigor and the relevance of evidence. Papers in this chapter illustrate process and analytic changes needed to narrow the research-practice divide and allow healthcare delivery to play a more fundamental role in the generation of evidence on clinical effectiveness.

In this chapter, Brent James outlines the system-wide reorientation that occurred at Intermountain Healthcare as it implemented a system to manage care at the care delivery level. Improved performance and patient care were fostered by a system designed to collect data to track inputs and outcomes and provide feedback on performance—elements that also created a useful research tool that has led to incremental improvements in quality, along with discovery and large advances in care at the practice level. The experience at Intermountain identifies some of the organizational and cultural changes needed, but a key was the utilization of electronic health records (EHRs) and support systems. Walter F. Stewart expands on the immense potential of the EHR as a tool to narrow the inference gap—the gap between what is known at the point of care and what evidence is needed to make a clinical decision. In his paper, he focuses on the potential for EHRs to increase real-time access to knowledge and facilitate the creation of evidence that is more directly relevant to everyday clinical decisions. Stewart views the EHR as a transforming technology and suggests several ways in which appropriate design and utilization of this tool and surrounding support systems can allow researchers to tap into and learn from the heterogeneity of patients, treatment effects, and the clinical environment to accelerate the generation and application of evidence in a learning healthcare system.

Perhaps one of the most substantial considerations will be how these quicker, practice-based opportunities to generate evidence might affect evidentiary standards. Steven Pearson's paper outlines how the current process of assessing bodies of evidence to inform coverage decisions might not be able to meet future needs, and the potential utility of a means to consider factors such as clinical circumstance in the process. He discusses possible unintended consequences of approaches such as Coverage with Evidence Development (CED) and suggests concepts and processes associated with coverage decisions that need development and better definition.

Finally, Robert Galvin discusses the employer's dilemma of how to get true innovations in healthcare technology to the populations they benefit as quickly as possible while guarding against the harms that could arise from inadequate evaluation. He suggests that a "cycle of unaccountability" has hampered efforts to balance fostering innovation with controlling costs, and discusses some of the issues facing technology developers in the current system, as well as a recent initiative by General Electric (GE), UnitedHealthcare, and InSightec to apply the CED approach to a promising treatment for uterine fibroids. Although this initiative has the potential to substantially expand the capacity for evidence generation while accelerating access and innovation, challenges to be overcome include those related to methodology, making the case for employers to participate, and confronting the culture of distrust between payers and innovators.

FEEDBACK LOOPS TO EXPEDITE STUDY TIMELINESS AND RELEVANCE

Brent James, M.D., M.Stat.
Intermountain Healthcare

Quality improvement was introduced to health care in the late 1980s. Intermountain Healthcare, one of the first groups to attempt clinical improvement using these new tools, had several early successes (Classen et al. 1992; James 1989). While those experiences showed that Deming's process management methods could work within healthcare delivery, they highlighted a major challenge: the results did not, on their own, spread. Success in one location did not lead to widespread adoption, even among Intermountain's own facilities.

Three core elements have been identified for a comprehensive quality-based strategy (Juran 1989): (1) quality control provides core data flow and management infrastructure, allowing ongoing process management, and creates a context for (2) quality improvement—the ability to systematically identify, then improve, prioritized targets; and (3) quality design encompasses a set of structured tools to identify, then iteratively create, new processes and products. Since the quality movement's inception, most care delivery organizations have focused exclusively on improvement. None have built a comprehensive quality control framework. Quality control provides the organizational infrastructure necessary to rapidly deploy new research findings across care delivery locations. The same infrastructure makes it possible to generate reliable new clinical knowledge from care delivery experience.

In 1996, Intermountain undertook to build clinical quality control across its 22 hospitals, 100-plus outpatient clinics, employed and affiliated physician groups (1,250 core physicians, among more than 3,000 total associated physicians), and a health insurance plan (which funds about 25 percent of Intermountain's total care delivery). Intermountain's quality control plan contained four major elements: (1) key process analysis; (2) an outcomes tracking system that measured and reported accurate, timely medical, cost, and patient satisfaction results; (3) an organizational structure to use outcomes data to hold practitioners accountable and to enable measured progress on shared clinical goals; and (4) aligned incentives, to return some portion of the resulting cost savings to the care delivery organization (while in many instances better quality can demonstrably reduce care delivery costs, current payment mechanisms direct most such savings to health payers).

The Intermountain strategy depended heavily upon a new "shared baselines" approach to care delivery, which evolved during early quality improvement projects as a mechanism to functionally implement evidence-based medicine (James 2002): All health professionals associated with a particular clinical work process come together on a team (physicians, nurses, pharmacists, therapists, technicians, administrators, etc.). They build an evidence-based best practice guideline, fully understanding that it will not perfectly fit any patient in a real care delivery setting. They blend the guideline into clinical workflow, using standing order sets, clinical worksheets, and other tools. Upon implementation, health professionals adapt their shared common approach to the needs of each individual patient. Across more than 30 implemented clinical shared baselines, Intermountain's physicians and nurses typically (95 percent confidence interval) modify about 5 to 15 percent of the shared baseline to meet the specific needs of a particular patient. That makes it "easy to do it right" (James 2001), while facilitating the role of clinical expertise. It also is much more efficient. Expert clinicians can focus on a subset of critical issues because the remainder of the care process is reliable. The organization can staff, train, supply, and organize physical space to a single defined process. Shared baselines also provide a structure for electronic data systems, greatly enhancing the effectiveness of automated clinical information. Arguably, shared baselines are the key to successful implementation of electronic medical record systems.

Key Process Analysis

The Institute of Medicine's prescription for reform of U.S. health care noted that an effective system should be organized around its most common elements (IOM 2001). Each year for four years, Intermountain attempted to identify high-priority clinical conditions for coordinated action, through expert consensus among senior clinical and administrative leaders generated through formal nominal group technique. In practice, consensus methods never overcame advocacy. Administrative and clinical leaders, despite a superficially successful consensus process, still focused primarily on their own departmental or personal priorities.

We therefore moved from expert consensus to objective measurement. That involved, first, identifying front-line work processes. This complex task was aided by conceptually subdividing Intermountain's operations into four large classes: (1) work processes centered around clinical conditions; (2) clinical work processes that are not condition specific (clinical support services, e.g., processes located within pharmacy, pathology, anesthesiology/procedure rooms, nursing units, intensive care units, patient safety); (3) processes associated with patient satisfaction; and (4) administrative support processes. Within each category, we attempted to identify all major work processes that produced value-added results.

These work processes were then prioritized. To illustrate, within clinical conditions we first measured the number of patients affected. Second, clinical risk to the patient was estimated. We used intensity of care as a surrogate for clinical risk, and assessed intensity of care by measuring true cost per case. This produced results that had high face validity with clinicians, while also working well with administrative leadership. Third, base-state variability within a particular clinical work process was measured by calculating the coefficient of variation, based on intensity of care (cost per case). Fourth, using Batalden and Nelson's concept of clinical microsystems, specialty groups that routinely worked together on the basis of shared patients were identified, along with the clinical processes through which they managed those patients (Batalden and Splaine 2002; Nelson et al. 2002). This was a key element for organizational structure. Finally, we applied two important criteria for which we could not find metrics: we used expert judgment to identify underserved subpopulations, and to balance our roll-out across all elements of the Intermountain care delivery system.

Among more than 1,000 inpatient and outpatient condition-based clinical work processes, 104 accounted for almost 95 percent of all of Intermountain's clinical care delivery. Rather than the traditional 80/20 rule (the Pareto principle), we saw a 90/10 rule: clinical care concentrated massively on a relative handful of high-priority clinical processes (the IOM's Quality Chasm report got it right!). Those processes were addressed in priority order, to achieve the most good for the most patients, while freeing resources to enable traditional, one-by-one care delivery plans for uncommon clinical conditions.

Outcomes Tracking

Prior to 1996, Intermountain had tried to start clinical management twice. The effort failed each time. Each failure lost $5 million to $10 million in sunk costs, and cashiered a senior vice president for medical affairs. When asked to make a third attempt, we first performed a careful autopsy on the first two attempts. Each time, Intermountain had found clinicians willing to step up and lead. Then, each time, Intermountain's planners uncritically assumed that the new clinical leaders could use the same administrative, cost-based data to manage clinical processes as had traditionally been used to manage hospital departments and generate insurance claims. On careful examination, the administrative data contained gaping holes relative to clinical care delivery. They were organized for facilities management, not patient management.

One of the National Quality Forum's (NQF) first activities, upon its creation, was to call together a group of experts (its Strategic Framework Board—SFB) to produce a formal, evidence-based method to identify valid measurement sets for clinical care (James 2003). The SFB found that outcomes tracking systems work best when designed around and integrated into front-line care delivery. Berwick et al. noted that such integrated data systems can "roll up" into accountability reports for practice groups, clinics, hospitals, regions, care delivery systems, states, and the nation. The opposite is not true. Data systems designed top down for national reporting usually cannot generate the information flow necessary for front-line process management and improvement (Berwick et al. 2003). Such top-down systems often compete for limited front-line resources, damaging care delivery at the patient interface (Lawrence and Mickalide 1987).

Intermountain adopted the NQF's data system design method. It starts with an evidence-based best practice guideline, laid out for care delivery—a shared baseline. It uses that template to identify and then test a comprehensive set of medical, cost, and satisfaction outcomes reports, optimized for clinical process management and improvement. The report set leads to a list of data elements and coding manuals, which generate data marts within an electronic data warehouse (patient registries), and a decision support structure for use within electronic medical record systems.
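The "roll up" property the SFB described can be illustrated in a few lines of code: when each outcome record is captured at the front line along with its organizational context, the same records aggregate cleanly at any accountability level, whereas a dataset designed only for top-level reporting cannot be disaggregated back to the team level. The record fields and unit names below are hypothetical.

```python
"""Sketch of rolling front-line outcome records up into accountability
reports (illustrative; fields and names are hypothetical)."""
from collections import defaultdict

# Patient-level outcome records captured at the point of care.
records = [
    {"region": "central", "clinic": "clinic_a", "team": "team_1", "in_control": True},
    {"region": "central", "clinic": "clinic_a", "team": "team_1", "in_control": False},
    {"region": "central", "clinic": "clinic_b", "team": "team_2", "in_control": True},
    {"region": "south", "clinic": "clinic_c", "team": "team_3", "in_control": True},
]

def roll_up(records, level):
    """Share of patients in control, aggregated at the chosen level."""
    totals = defaultdict(lambda: [0, 0])  # unit -> [in_control_count, n]
    for r in records:
        totals[r[level]][0] += int(r["in_control"])
        totals[r[level]][1] += 1
    return {unit: round(hits / n, 2) for unit, (hits, n) in totals.items()}

# The identical front-line records support team, clinic, and region reports.
for level in ("team", "clinic", "region"):
    print(level, roll_up(records, level))
```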

The production of new clinical outcomes tracking data represented a significant investment for Intermountain. Clinical work processes were attacked in priority order, as determined by key process analysis. Initial progress was very fast. For example, in 1997 outcomes tracking systems were completed for the two biggest clinical processes within the Intermountain system. Pregnancy, labor, and delivery represents 11 percent of Intermountain's total clinical volume. Ischemic heart disease adds another 10 percent. At the end of the year, Intermountain had a detailed clinical dashboard in place for 21 percent of its total care delivery. Those data were designed for front-line process management, then rolled up into region- and system-level accountability reports. Today, outcomes data cover almost 80 percent of Intermountain's inpatient and outpatient clinical care. They are immediately available through internal websites, with data lag times under one month in all cases, and a few days in most cases.

Organizational Structure

About two-thirds of Intermountain's core physician associates are community-based, independent practitioners. That required an organizational structure that heavily emphasized shared professional values, backed up by aligned financial incentives (in fact, early successes relied on shared professional values alone; financial incentives came quite late in the process, and were always modest in size). The microsystems (Batalden and Splaine 2002) subpart of the key process analysis provided the core organizational structure. Families of related processes, called Clinical Programs, identified care teams that routinely worked together, even though they often spanned traditional subspecialty boundaries.

Intermountain hired part-time physician leaders (one-quarter full-time equivalent) for each Clinical Program in each of its three major regions (networks of outpatient practices and small community hospitals, organized around large tertiary hospital centers). Physician leaders are required to be in active practice within their Clinical Program; to have the respect of their professional peers; and to complete formal training in clinical quality improvement methods through Intermountain's internal clinical QI training programs (the Advanced Training Program in Clinical Practice Improvement). Recognizing that the bulk of process management efforts rely upon clinical staff, Intermountain also hired full-time "clinical operations administrators." Most of these staff support leaders are experienced nurse administrators. The resulting leadership dyad—a physician leader with a nursing/support staff leader—meets each month with each of the local clinical teams that work within their Clinical Program. They present and review patient outcomes results for each team, compared to their peers and national benchmarks. They particularly focus on clinical improvement goals, to track progress, identify barriers, and discuss possible solutions.

Within each region, all of the Clinical Program dyads meet monthly with their administrative counterparts (regional hospital administration, finance, information technology, insurance partners, nursing, and quality management). They review current clinical results, track progress on goals, and assign resources to overcome implementation barriers at a local level. In addition to their regional activities, all leaders within a particular Clinical Program from across the entire Intermountain system meet together monthly as a central Guidance Council. One of the three regional physician leaders is funded for an additional part-time role (one-quarter time) to oversee and coordinate the system-wide effort. Each system-level Clinical Program also has a separate, full-time clinical operations administrator. Finally, each Guidance Council is assigned at least one full-time statistician and at least one full-time data manager, to help coordinate clinical outcomes data flow, produce outcomes tracking reports, and perform special analyses. Intermountain coordinates a large part of its existing staff support functions, such as medical informatics (electronic medical records), the electronic data warehouse, finance, and purchasing, to support the clinical management effort.

By definition, each Guidance Council oversees a set of condition-based clinical work processes, as identified and prioritized during the key process analysis step. Each key clinical process is managed by a Development Team, which reports to the Guidance Council. Development Teams meet each month. The majority of Development Team members are drawn from front-line physicians and clinical staff, geographically balanced across the Intermountain system, who have immediate hands-on experience with the clinical care under discussion (technically, "fundamental knowledge"). Development Team members carry the team's activities—analysis and management system results—back to their front-line colleagues, to seek their input and help with implementation and operations.

Each Development Team also has a designated physician leader, and Knowledge Experts drawn from each region. Knowledge Experts are usually specialists associated with the team's particular care process. For example, the Primary Care Clinical Program includes a Diabetes Mellitus Development Team (among others). Most team members are front-line primary care physicians and nurses who see diabetes patients in their practices every day. The Knowledge Experts are diabetologists, drawn from each region.

A new Development Team begins its work by generating a Care Process Model (CPM) for its assigned key clinical process. Intermountain's central Clinical Program staff provides a great deal of coordinated support for this effort. A Care Process Model contains five sequential elements:

1. The Knowledge Experts generate an evidence-based best practice guideline for the condition under study, with appropriate links to the published literature. They share their work with the body of the Development Team, who in turn share it with their front-line colleagues, asking "What would you change?"

2. As the "shared baseline" practice guideline stabilizes over time, the full Development Team converts the practice guideline into clinical workflow documents, suitable for use in direct patient care. This step is often the most difficult of the CPM development process. Good clinical flow can enhance clinical productivity, rather than adding burden to front-line practitioners. The aim is to make evidence-based best care the lowest-energy default option, with data collection integrated into clinical workflow. The core of most chronic disease CPMs is a treatment cascade. Treatment cascades start with disease detection and diagnosis. The first (and most important) "treatment" is intensive patient education, to make the patient the primary disease manager. The cascade then steps sequentially through increasing levels of treatment. A front-line clinical team moves down the cascade until they achieve adequate control of the patient's condition, while modifying the cascade's "shared baseline" based upon individual patient needs. The last step in most cascades is referral to a specialist (see the sketch below).

3. The team next applies the NQF SFB outcomes tracking system development tools, to produce a balanced dashboard of medical, cost, and satisfaction outcomes. This effort involves the electronic data warehouse team, to design clinical registries that bring together complementary data flows with appropriate pre-processing.

4. The Development Team works with Intermountain's medical informatics groups, to blend clinical workflow tools and data system needs into automated patient care data systems.

5. Central support staff help the Development Team build web-based educational materials for both care delivery professionals and the patients they serve.

A finished CPM is formally deployed into clinical practice by the governing Guidance Council, through its regional physician/nurse leader dyads. At that point, the Development Team's role changes. The team continues to meet monthly to review and update the CPM. The team's Knowledge Experts have funded time to track new research developments. The team also reviews care variations as clinicians adapt the shared baseline. It closely follows major clinical outcomes, and receives and clears improvement ideas that arise among Intermountain's front-line practitioners and leadership.
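The sketch referenced in the second element above shows the treatment-cascade idea as an escalating shared baseline. The step names and escalation rule are hypothetical placeholders, not Intermountain's actual protocol; the point is only that the shared baseline is an ordered default that clinicians adapt to the individual patient.

```python
"""Illustrative treatment cascade for a chronic-disease CPM.
Step names and the control check are assumptions, not Intermountain's
published protocol."""

# Shared-baseline cascade: ordered treatment levels, from detection
# through escalating therapy to specialist referral.
SHARED_BASELINE = [
    "disease detection and diagnosis",
    "intensive patient education (patient as primary disease manager)",
    "first-line therapy",
    "intensified combination therapy",
    "referral to a specialist",
]

def advance(cascade, step_index, condition_controlled):
    """Move down the cascade until adequate control is achieved.

    Returns the next step to apply, or None if the patient's condition
    is controlled at the current level.
    """
    if condition_controlled:
        return None
    # Escalate one level; the last step (specialist referral) is terminal.
    return cascade[min(step_index + 1, len(cascade) - 1)]

# A team may adapt the shared baseline to an individual patient, e.g.,
# dropping a contraindicated therapy -- the roughly 5 to 15 percent
# modification the paper describes.
patient_cascade = [s for s in SHARED_BASELINE if s != "first-line therapy"]

print(advance(patient_cascade, step_index=1, condition_controlled=False))
# -> 'intensified combination therapy'
```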

Drawing on this structure, Intermountain's CPMs tend to change quite frequently. Knowledge Experts have an additional responsibility of sharing new findings and changes with their front-line colleagues. They conduct regular continuing education sessions for their assigned CPM, targeted both at practicing physicians and their staffs. Education sessions cover the full spectrum of the coordinated CPM: they review current best practice (the core evidence-based guideline); relate it to clinical workflow; show delivery teams how to track patient results through the outcomes data system; tie the CPM to decision support tools built into the electronic medical record; and link it to a full set of educational materials, for patients and for care delivery professionals. Chronic disease Knowledge Experts also run the specialty clinics that support front-line care delivery teams. Continuing education sessions usually coordinate the logistics of that support. The Knowledge Experts also coordinate specialty-based nurse care managers and patient trainers.

An Illustrative CPM in Action: Diabetes Mellitus

Through its health plan and outpatient clinics, Intermountain supports almost 20,000 patients diagnosed with diabetes mellitus. Among the roughly 800 primary care physicians who manage diabetic patients, approximately one-third are employed within the Intermountain Medical Group, while the remainder are community-based independent physicians. All physicians and their care delivery teams—regardless of employment status—interact regularly with the Primary Care Clinical Program medical directors and clinical operations administrators. They have access to regular diabetes continuing education sessions. Three endocrinologists (one in each region) act as Knowledge Experts on the Diabetes Development Team. In addition to conducting diabetes training, the Knowledge Experts coordinate specialty nursing care management (diabetic educators), and supply most specialty services.

Each quarter, Intermountain sends a packet of reports to every clinical team managing diabetic patients. The reports are generated from the Diabetes Data Mart (a patient registry) within Intermountain's electronic data warehouse. The packet includes, first, a Diabetes Action List. It summarizes every diabetic patient in the team's practice, listing testing rates and level controls (standard NCQA HEDIS measures: HbA1c, LDL, blood pressure, urinary protein, dilated retinal exams, pedal sensory exams; Intermountain was an NCQA Applied Research Center that helped generate the HEDIS diabetes measures, using the front-line-focused NQF outcomes tracking design techniques outlined above). The report flags any care defect, as reflected either in test frequency or in level controls. Front-line teams review the lists, then either schedule flagged patients for office visits or assign them to general care management nurses located within the local clinic.
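The flagging logic behind an Action List of this kind can be sketched briefly. The testing intervals, control thresholds, and field names below are illustrative assumptions loosely patterned on the HEDIS-era targets named above, not Intermountain's actual registry specification.

```python
"""Sketch of Diabetes Action List flagging (illustrative only)."""
from datetime import date, timedelta

# Hypothetical testing intervals and control limits for two measures.
TEST_INTERVAL = {"hba1c": timedelta(days=180), "ldl": timedelta(days=365)}
CONTROL_LIMIT = {"hba1c": 9.0, "ldl": 130.0}  # flag values above these

def flag_defects(patient, today):
    """Return care defects: overdue tests or out-of-control last values."""
    defects = []
    for test, interval in TEST_INTERVAL.items():
        last = patient["tests"].get(test)  # (date_of_test, value) or None
        if last is None or today - last[0] > interval:
            defects.append(f"{test} overdue")
        elif last[1] > CONTROL_LIMIT[test]:
            defects.append(f"{test} out of control ({last[1]})")
    return defects

registry = [
    {"id": "patient_1", "tests": {"hba1c": (date(2005, 12, 1), 9.8),
                                  "ldl": (date(2005, 6, 15), 110.0)}},
    {"id": "patient_2", "tests": {"ldl": (date(2004, 1, 10), 150.0)}},
]

# The Action List carries only patients with at least one flagged defect.
today = date(2006, 3, 1)
for p in registry:
    defects = flag_defects(p, today)
    if defects:
        print(p["id"], defects)
```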

While Intermountain pushes Diabetes Action Lists out every quarter, front-line teams can generate them on demand. Most teams do so every month.

In addition to Action Lists, front-line teams can access patient-specific Patient Worksheets through Intermountain's web-based Results Review system. Most practices integrate Worksheets into their workflow during chart preparation. The Worksheet contains patient demographics, a list of all active medications, and a review of pertinent history and laboratory results focused around chronic conditions. For diabetic patients, it will include test dates and values for the last seven HbA1c, LDL, blood pressure, urinary protein, dilated retinal examination, and pedal sensory examination results. A final section of the Worksheet applies all pertinent treatment cascades, listing recommendations for currently due immunizations, disease screening, and appropriate testing. It will flag out-of-control levels, with next-step treatment recommendations (technically, this section of the Worksheet is a passive form of computerized physician order entry).

The standard quarterly report packet also contains sections comparing each clinical team's performance to that of their risk-adjusted peers. A third report tracks progress on quality improvement goals, and links them to financial incentives. Finally, a separate summary report goes to the team's Clinical Program medical director. In meeting with the front-line teams, the Clinical Program leadership dyad often shares methods used by other practices to improve patient outcome performance, with specific practice flow recommendations.

Intermountain managed more than 20,000 diabetic patients by March 2006; Figures 3-1 and 3-2 show system-level performance on representative diabetes outcomes measures, as pulled in real time from the Intermountain outcomes tracking system. Primary care physicians supply almost 90 percent of all diabetes care in the system. As the last step on a treatment cascade, Intermountain's Diabetes Knowledge Experts tend to concentrate the most difficult patients in their specialty practices. As a result, they typically have worse outcomes than their primary care colleagues.

Using Routine Care Delivery to Generate Reliable Clinical Knowledge

Evidence-based best practice faces a massive evidence gap. The healing professions currently have reliable evidence (Level I, II, or III—randomized trials, robust observational designs, or expert consensus opinion using formal methods (Lawrence and Mickalide 1987)) to identify best, patient-specific practice for less than 20 percent of care delivery choices (Ferguson 1991; Lappe et al. 2004; Williamson 1979). Bridging that gap will strain the capacity of any conceivable research system.

FIGURE 3-1 Blood sugar control with Clinical Program management over time, for all diabetic patients managed within the entire Intermountain system. National guidelines recommend that all diabetic patients be managed to hemoglobin A1c levels below 9 percent; ideally, patients should be managed to levels below 7 percent.

Intermountain designed its Clinical Programs to optimize care delivery performance. The resulting organizational and information structures make it possible to generate robust data regarding treatment effects, as a by-product of demonstrated best care. CPMs embed data systems that directly link outcome results to care delivery decisions. They deploy organized care delivery processes. Intermountain's Clinical Programs might be thought of as effectiveness research, built system-wide into front-line care delivery. At a minimum, CPMs routinely generate Level II-3 information (robust, prospective observational time series) for all key clinical care delivery processes. In such a setting, all care changes get tested. For example, any new treatment recently released in the published medical literature, any new drug, a new organizational structure for an ICU, or a new nurse staffing policy implemented within a hospital can generate robust information to assess its effectiveness in a real care delivery setting. At need, Development Teams move up the evidence chain as a part of routine care delivery operations. For example, the Intermountain Car-

[...]

is safe, but are not so sure that it is effective? Or is it effective, but we don't know whether the effectiveness is durable over the long term? Do these questions all reflect the same vision of "promising" evidence? Is CED a new hurdle? Does it lower the one behind it? Does it introduce the opportunity to bring in new hurdles, such as comparative effectiveness and cost-effectiveness, that we have not had before? Ultimately, CED may be used to support studies whose results will enhance the strength of evidence to meet existing standards, certainly part of the vision of CED—but might it also lead to a shift to a lower initial standard of evidence for coverage decisions? As we know, when CED policy became known to industry, many groups approached CMS with not "promising" but perhaps even "poor" evidence, asking for coverage in return for the establishment of a registry from which we will all "learn." Resolving these issues is an active area of policy discussion—with individuals at CMS and elsewhere still very early on the learning curve—and is vital to improving approaches as we develop and advance our vision of a learning healthcare system.

IMPLICATIONS FOR ACCELERATING INNOVATION

Robert Galvin, M.D.
General Electric

Technological innovations have substantially improved our nation's health, but they also account for the largest percentage of the cost increases that continue to strain the U.S. healthcare system (Newhouse 1993). The process to decide whether to approve and then provide insurance coverage for these innovations has represented a "push-pull" between healthcare managers—representing healthcare insurers and public payers, trying to control cost increases—and manufacturers, including pharmaceutical companies, biotech startups, and others, looking for return on their investment and a predictable way to allocate new research resources. Employers, providers, and consumers also figure into the process, and the sum of all these stakeholders and their self-interests has, unfortunately, led to a cycle of "unaccountability" and a system that everyone agrees doesn't work well (Figure 3-4).

Over the past several years, a single-minded concentration on the rising costs of health care has gradually been evolving into a focus on the "value" of care delivered. In the context of assessing new technologies, evaluation has begun to shift to determining what outcomes are produced from the additional expense for a new innovation. A notable example of this approach is Cutler's examination of cardiac innovations (Cutler et al. 2006). In weighing technology costs and outcomes, he concluded that several years of additional life were the payback for the additional expense of these new interventions, and at a cost that has been considered acceptable in our health system.

FIGURE 3-4 The cycle of unaccountability.

However, for employers and other payers, who look at value a little differently (i.e., what is the best quality achievable at the most controlled cost?), the situation is complex. While applauding innovations that add value—similar to those examined by Cutler—they remain acutely aware of the innovations that either didn't add much incremental value or offered some improvement for specific circumstances but ended up increasing costs at an unacceptable rate due to overuse. A good example of the latter is the case of COX-2 inhibitors for joint inflammation. This modification of nonsteroidal anti-inflammatory drugs (NSAIDs) represented a significant advance for the 3 to 5 percent of the population who have serious gastric side effects from first-generation NSAIDs; however, within two years of their release, more than 50 percent of GE's population on NSAIDs were using COX-2s, an overuse of technology that cost GE tens of millions of dollars unnecessarily. The employers' dilemma is how to get breakthrough innovations to populations as fast as possible, but used by just those who will truly benefit, and not overpay for innovations whose costs exceed their benefits.

How Coverage Decisions Work Today

Although a lot of recent attention has focused on Food and Drug Administration (FDA) approval, employers are impacted most directly by decisions about coverage and reimbursement.

Although approval and coverage are linked, it is often not appreciated that they are distinctly different processes. FDA approval does not necessarily equate to insurance coverage. Payers, most often CMS and health insurance companies, make coverage decisions. CMS makes national coverage decisions in a minority of cases and otherwise delegates decision making to its regional carriers, largely Blue Cross insurance plans. Final coverage decisions vary among these Blue Cross plans and other commercial health insurers, but in general, a common process is followed. The method developed by the Technology Evaluation Center (TEC), sponsored by the BlueCross BlueShield Association and composed of a committee of industry experts, is typical. The TEC decides what new services to review and then gathers all available literature and evaluates the evidence against five criteria (BlueCross BlueShield Association 2007). There is a clear bias toward large, randomized, controlled trials. If the evidence is deemed insufficient to meet the TEC criteria, a new product will be designated "experimental." This has significant implications for technology developers, as payers, following their policy of reimbursing only for "medically necessary" services, uniformly do not pay for something considered experimental.

This process has many positives, particularly the insistence on double blinding and randomization, which minimizes "false positives" (i.e., interventions that appear to work but turn out not to). Certain innovations have significant potential morbidity and/or very high cost (e.g., some pharmaceuticals, or autologous bone marrow transplants for breast cancer), and having a high bar for coverage protects patients and payers. However, the process also has several negatives. It is a slow process working in the fast-moving world of innovation, and new services that greatly help patients can be unavailable for years after the FDA has approved them. Large, randomized controlled trials are often not available or feasible, and they take a significant amount of time to complete. Also, RCTs are almost exclusively performed in academic medical centers, and results achieved in this setting frequently cannot be extrapolated to the world of community-based medical care, where the majority of patients receive their care. The process overall is better at not paying for unproven innovations than it is at providing access to and encouraging promising new breakthroughs.

The coverage history for digital mammography provides an example of these trade-offs. Digitizing images of the breast leads to improvements in sensitivity and specificity in the diagnosis of breast cancer, which most radiologists and oncologists believe translates into improved treatment of the disease. Although the FDA approved this innovation in 2000, it was deemed "experimental" in a 2002 TEC report due to insufficient evidence. Four years elapsed before a subsequent TEC review recommended coverage, and very soon after, all payers reimbursed the studies.

In an interesting twist, CMS approved reimbursement in 2001, but at the same rate as film-based mammography, a position that engendered controversy among radiologists and manufacturers. While the goal of not paying for an unproven service was met, the intervening four years between approval and coverage did not lead to improvement in this "promising" technology but rather marked the time needed to develop and execute additional clinical studies. The current process therefore falls short in addressing one part of the employer dilemma: speeding access of valuable new innovations to their populations.

What is particularly interesting in the context of today's discussion is that the process described is the only accepted process. Recognizing that innovation needed to occur in technology assessment and coverage determination, CMS developed a new process called Coverage with Evidence Development, or CED (Tunis and Pearson 2006). This process takes promising technologies that have not accumulated sufficient patient experience and, instead of calling them "experimental" and leaving it to the manufacturer to gather more evidence, combines payment for the service in a selected population with evidence development. Evidence development can proceed through submission of data to a registry or through practical clinical trials, and the end point is a definitive decision on coverage. This novel approach addresses three issues simultaneously: by covering the service, those patients most in need have access; by developing information on a large population, future tailoring of coverage to just those subpopulations who truly benefit can mitigate overuse; and by paying for the service, the manufacturer collects revenue immediately and gets a more definitive answer on coverage sooner—a potential mechanism for accelerating innovation.

To date, CMS has applied CED to several interventions. The guidelines were recently updated with added specification on process and selection for CED. However, given the pace of innovations, it is not reasonable to think that CMS can apply this approach in sufficient volume to meet current needs. Because one-half of healthcare expenditures come from the private sector in the form of employer-based health benefits, it makes sense for employers to play a role in finding a solution to this cycle of unaccountability. On this basis, GE, in its role as purchaser, has launched a pilot project to apply CED in the private sector.

Private Sector CED

General Electric is working with UnitedHealthcare, a health insurer, and InSightec, an Israel-based manufacturer of healthcare equipment, to apply the CED approach to a new, promising treatment for uterine fibroids. The treatment in question is magnetic resonance (MR)-based focused ultrasound (MRgFUS), in which ultrasound beams directed by magnetic resonance imaging are focused at and destroy the fibroids (Fennessy and Tempany 2005).

The condition is treated today by either surgery (hysterectomy or myomectomy) or uterine artery embolization. The promise of the new treatment is that it is completely noninvasive and greatly decreases the time away from work that accompanies surgery. The intervention has received FDA pre-market approval on the basis of treatment in approximately 500 women (FDA 2004), but both CMS and TEC deemed the studies not large enough to warrant coverage, and the service has been labeled "experimental" (TEC Assessment Program 2005). As a result, no major insurer currently pays for the treatment. Both CMS and TEC advised InSightec to expand its studies to include more subjects and to measure whether there was recurrence of fibroids. InSightec is a small company that has had trouble organizing further research, due both to the expense and to the fact that the doctors who generally treat fibroids, gynecologists, have been uninterested in referring treatment to radiologists. The company predicts that it will likely be three to five years before TEC will perform another review.

All stakeholders involved in this project are interested in finding a noninvasive treatment for these fibroids. Women would certainly benefit from a treatment with less morbidity and a shorter recovery time. There are also economic benefits for the three principals in the CED project: GE hopes to pay less for treatment and have employees out of work for a shorter time; UnitedHealthcare would pay less for treatment as well, plus it would have the opportunity to design a study that would help target future coverage to specific subpopulations where the benefit is greatest; and InSightec would have the opportunity to develop important evidence about treatment effectiveness while receiving a return on its initial investment in the product.

The parties agreed to move forward and patterned their project on the Medicare CED model, with clearly identified roles. General Electric is the project sponsor and facilitator, with responsibility for agenda setting, meeting planning, and driving toward issue resolution. As a self-insured purchaser, GE will pay for the procedure for its own employees. UnitedHealthcare has several tasks: (1) market the treatment option to its members; (2) establish codes and payment rates and contract with providers performing the service; (3) extend coverage to its insured members and its own employees, in addition to its self-funded members; and (4) co-develop the research protocol with InSightec, including data collection protocols and parameters around study end points and future coverage decisions. Finally, as the manufacturer, InSightec is co-developing the research protocol, paying for the data collection and analysis (including patient surveys), and soliciting providers to participate in the project.

Progress to Date and Major Challenges

The initiative has progressed more slowly than originally planned, but data collection is set to begin before the end of 2006. The number and intensity of the challenges have exceeded the expectations of the principals, and addressing them has frankly required more time and resources than anyone had predicted. However, the three companies recognize the importance of creating alternative models to the current state of coverage determination, and their commitment to a positive outcome is, if anything, stronger than it was at the outset of the project. There are challenges.

Study Design and Decision End Points

From a technical perspective, this area has presented some very tough challenges. There is little information or experience about how to use data collected from nonrandomized studies in coverage decisions. The RCT has so dominated decision making in the public and private sectors that little is known about the risks or benefits of using case controls or registry data. What level of certainty is required to approve coverage? If a treatment is covered and turns out to be less beneficial than thought, should this be viewed as a faulty coverage process that resulted in wasted money, or a "reasonable investment" that didn't pay off? Who is the "customer" in the coverage determination process: the payers, the innovators, or the patients? If it is patients, how should their voice be integrated into the process?

Another set of issues has to do with fitting the coverage decision approach to the new technology in question. It is likely that some innovations should be subject to the current TEC-like approach while others would benefit from a CED-type model. On what criteria should this decision be made, and who should be the decision maker?

Engaging Employers

Private sector expenditures, whether through fully insured or self-funded lines of business, ultimately derive from employers (and their workers). Although employers have talked about value rather than cost containment over the past five years, it remains to be seen how many of them will be willing to participate in CED. Traditionally, employers have watched "detailing" by pharmaceutical salespeople and direct-to-consumer advertising lead to costly overuse, and they may be reluctant to pay even more for technologies that would otherwise not be covered. The project is just reaching the stage in which employers are being approached to participate, so it is too early to tell how they will react. Their participation may, in part, be based on how CED is framed. If the benefit to employees is clearly described and there is a business case to offer them (e.g., that controlled accumulation of evidence could better tailor and limit future use of the innovation), then uptake may be satisfactory. However, employers' willingness to participate in the CED approach is critical.

Culture of Distrust

The third, and most surprising, challenge is addressing the degree of distrust between payers and innovators. Numerous difficult issues arise in developing a CED program (e.g., pricing, study end points, binding or nonbinding coverage decisions), and as in any negotiation, interpersonal relationships can be major factors in finding a compromise. Partly from simply not knowing each other, and partly from suspiciousness about each other's motives, the lack of trust has slowed the project. Manufacturers believe that payers want to delay coverage to enhance insurance margins, and payers believe that manufacturers want to speed coverage to have a positive impact on their own profit statements. Both sides have evidence to support their views, but both sides are far more committed to patient welfare than they realize. If CED or other innovations in coverage determination are going to expand, partnership and trust are key. The system would benefit from having more opportunities for these stakeholders to meet and develop positive personal and institutional relationships.

The current processes to determine coverage for innovations are more effective at avoiding paying for new services that may turn out not to be beneficial than they are at getting new treatments to patients quickly or helping develop needed evidence. These processes protect patients from new procedures that may lead to morbidity and are consistent with the "first, do no harm" approach of clinical medicine. However, this approach also slows access to new treatments that could reduce morbidity and improve survival, and it inadvertently makes investing in new innovations more difficult. With a rapidly growing pipeline of innovations from the device, biotechnology, and imaging industries, there is growing interest in developing additional models of evidence development and coverage determination. Three companies began an initiative in early 2006 to adapt a promising approach called Coverage with Evidence Development, pioneered by CMS, to the private sector. The initiative has made steady progress, but it has also faced significant challenges, primarily the lack of experience in pricing, study design, and negotiating study end points in a non-RCT context. A major and unexpected issue is the lack of trust between payers and manufacturers. With the stakes for patients, payers, and innovators growing rapidly, pursuing new approaches to evidence development and coverage determination and addressing the resulting challenges should be a high priority for healthcare leaders.

REFERENCES

Batalden, P, and M Splaine. 2002. What will it take to lead the continual improvement and innovation of health care in the twenty-first century? Quality Management in Health Care 11(1):45-54.
Berwick, D, B James, and M Coye. 2003. Connections between quality measurement and improvement. Medical Care 41(1 Suppl.):130-138.
Black, N. 1996. Why we need observational studies to evaluate the effectiveness of health care. British Medical Journal 312(7040):1215-1218.
BlueCross BlueShield Association. 2007. Technology Evaluation Center [accessed 2006]. Available from www.bcbs.com/tec/teccriteria.html.
Charlton, B, and P Andras. 2005. Medical research funding may have over-expanded and be due for collapse. QJM: An International Journal of Medicine 98(1):53-55.
Classen, D, R Evans, S Pestotnik, S Horn, R Menlove, and J Burke. 1992. The timing of prophylactic administration of antibiotics and the risk of surgical-wound infection. New England Journal of Medicine 326(5):281-286.
Cutler, D, A Rosen, and S Vijan. 2006. The value of medical spending in the United States, 1960-2000. New England Journal of Medicine 355(9):920-927.
Dean, N, M Silver, K Bateman, B James, C Hadlock, and D Hale. 2001. Decreased mortality after implementation of a treatment guideline for community-acquired pneumonia. American Journal of Medicine 110(6):451-457.
Dean, N, K Bateman, S Donnelly, M Silver, G Snow, and D Hale. 2006a. Improved clinical outcomes with utilization of a community-acquired pneumonia guideline. Chest 130(3):794-799.
Dean, N, P Sperry, M Wikler, M Suchyta, and C Hadlock. 2006b. Comparing gatifloxacin and clarithromycin in pneumonia symptom resolution and process of care. Antimicrobial Agents and Chemotherapy 50(4):1164-1169.
FDA (Food and Drug Administration). 2004 (October 22). Pre-Market Approval Letter: ExAblate 2000 System [accessed November 30, 2006]. Available from http://www.fda.gov/cdrh/pdf4/P040003.html.
Fennessy, F, and C Tempany. 2005. MRI-guided focused ultrasound surgery of uterine leiomyomas. Academic Radiology 12(9):1158-1166.
Ferguson, J. 1991. Forward: research on the delivery of medical care using hospital firms. Proceedings of a workshop, April 30 and May 1, 1990, Bethesda, MD. Medical Care 29(7 Suppl.):JS1-JS2.
Flexner, A. 2002. Medical education in the United States and Canada. From the Carnegie Foundation for the Advancement of Teaching, Bulletin Number Four, 1910. Bulletin of the World Health Organization 80(7):594-602.
Graham, H, and N Diamond. 2004. The Rise of American Research Universities: Elites and Challengers in the Postwar Era. Baltimore, MD: Johns Hopkins University Press.
Greenblatt, S. 1980. Limits of knowledge and knowledge of limits: an essay on clinical judgment. Journal of Medical Philosophy 5(1):22-29.
Haynes, R. 1998. Using informatics principles and tools to harness research evidence for patient care: evidence-based informatics. Medinfo 9(1 Suppl.):33-36.
Haynes, R, R Hayward, and J Lomas. 1995. Bridges between health care research evidence and clinical practice. Journal of the American Medical Informatics Association 2(6):342-350.
Horwitz, R, C Viscoli, J Clemens, and R Sadock. 1990. Developing improved observational methods for evaluating therapeutic effectiveness. American Journal of Medicine 89(5):630-638.
IOM (Institute of Medicine). 2001. Crossing the Quality Chasm: A New Health System for the 21st Century. Washington, DC: National Academy Press.
James, B. 1989 (republished as a "classics" article, 2005). Quality Management for Health Care Delivery (monograph). Chicago, IL: Hospital Research and Educational Trust (American Hospital Association).
James, B. 2001. Making it easy to do it right. New England Journal of Medicine 345(13):991-993.
James, B. 2002. Quality improvement opportunities in health care. Making it easy to do it right. Journal of Managed Care Pharmacy 8(5):394-399.
James, B. 2003. Information system concepts for quality measurement. Medical Care 41(1 Suppl.):171-179.
Juran, J. 1989. Juran on Leadership for Quality: An Executive Handbook. New York: The Free Press.
Lappe, J, J Muhlestein, D Lappe, R Badger, T Bair, R Brockman, T French, L Hofmann, B Horne, S Kralick-Goldberg, N Nicponski, J Orton, R Pearson, D Renlund, H Rimmasch, C Roberts, and J Anderson. 2004. Improvements in 1-year cardiovascular clinical outcomes associated with a hospital-based discharge medication program. Annals of Internal Medicine 141(6):446-453.
Lawrence, R, and A Mickalide. 1987. Preventive services in clinical practice: designing the periodic health examination. Journal of the American Medical Association 257(16):2205-2207.
Meldrum, M. 2000. A brief history of the randomized controlled trial. From oranges and lemons to the gold standard. Hematology/Oncology Clinics of North America 14(4):745-760, vii.
Nelson, E, P Batalden, T Huber, J Mohr, M Godfrey, L Headrick, and J Wasson. 2002. Microsystems in health care: Part 1. Learning from high-performing front-line clinical units. Joint Commission Journal on Quality Improvement 28(9):472-493.
Newhouse, J. 1993. An iconoclastic view of health cost containment. Health Affairs 12(Suppl.):152-171.
Porter, M, and E Teisberg. 2006. Redefining Health Care: Creating Value-Based Competition on Results. 1st ed. Cambridge, MA: Harvard Business School Press.
Reiss-Brennan, B. 2006. Can mental health integration in a primary care setting improve quality and lower costs? A case study. Journal of Managed Care Pharmacy 12(2 Suppl.):14-20.
Reiss-Brennan, B, P Briot, W Cannon, and B James. 2006a. Mental health integration: rethinking practitioner roles in the treatment of depression: the specialist, primary care physicians, and the practice nurse. Ethnicity and Disease 16(2 Suppl.):3, 37-43.
Reiss-Brennan, B, P Briot, G Daumit, and D Ford. 2006b. Evaluation of "depression in primary care" innovations. Administration and Policy in Mental Health 33(1):86-91.
Salas, M, A Hofman, and B Stricker. 1999. Confounding by indication: an example of variation in the use of epidemiologic terminology. American Journal of Epidemiology 149(11):981-983.
Seeger, J, P Williams, and A Walker. 2005. An application of propensity score matching using claims data. Pharmacoepidemiology and Drug Safety 14(7):465-476.
Stewart, W, N Shah, M Selna, R Paulus, and J Walker. 2007 (in press). Bridging the inferential gap: the electronic health record and clinical evidence. Health Affairs (Web Edition).
TEC Assessment Program. 2005 (October). Magnetic Resonance-Guided Focused Ultrasound Therapy for Symptomatic Uterine Fibroids. Vol. 20, No. 10.
Tunis, S, and S Pearson. 2006. Coverage options for promising technologies: Medicare's "coverage with evidence development." Health Affairs 25(5):1218-1230.
Vandenbroucke, J. 1987. A short note on the history of the randomized controlled trial. Journal of Chronic Diseases 40(10):985-987.
Williamson, J, P Goldschmidt, and I Jillson. 1979. Medical Practice Information Demonstration Project: Final Report. Office of the Assistant Secretary of Health, Department of Health, Education and Welfare, Contract #282-77-0068GS. Baltimore, MD: Policy Research, Inc.