Redesigning the Clinical Effectiveness Research Paradigm: Innovation and Practice-Based Approaches – Workshop Summary

SUMMARY

INTRODUCTION AND OVERVIEW¹

¹ The planning committee's role was limited to planning the workshop, and the workshop summary has been prepared by Roundtable staff as a factual summary of the issues and presentations discussed at the workshop.

Clinical effectiveness research (CER) serves as the bridge between the development of innovative treatments and therapies and their productive application to improve human health. Building on the efficacy and safety determinations necessary for regulatory approval, the results of these investigations guide the delivery of appropriate care to individual patients. As the complexity, number, and diversity of treatment options grow, the provision of clinical effectiveness information is increasingly essential to a safe and efficient healthcare system.

Currently, the rapid expansion in scientific knowledge is inefficiently translated from the scientific lab to clinical practice (Balas and Boren, 2000; McGlynn, 2003). Limited resources play a part in this problem. Of our nation's more than $2 trillion investment in health care, an estimated less than 0.1 percent is devoted to evaluating the relative effectiveness of the various diagnostics, procedures, devices, pharmaceuticals, and other interventions in clinical practice (AcademyHealth, 2005; Moses et al., 2005). The problem is not merely a question of resources but also of the way they are used. With the information and practice demands at hand, and new tools in the works, a more practical and reliable clinical effectiveness research paradigm is needed. Information relevant to guiding decision making in clinical practice requires the assessment of a broad range of research
questions (e.g., how, when, for whom, and in what settings are treatments best used?), yet the current research paradigm, based on a hierarchical arrangement of study designs, assigns greater weight or strength to evidence produced by methods higher in the hierarchy, without necessarily considering the appropriateness of the design for the particular question under investigation. For example, the advantages of strong internal validity, a key characteristic of the randomized controlled trial (RCT)—long considered the gold standard in clinical research—are often muted by constraints in time and cost and by the limited external validity, or applicability, of results. And although the scientific value of well-designed clinical trials has been demonstrated, for certain research questions this approach is not feasible, ethical, or practical and may not yield the answer needed. Similarly, issues of bias and confounding inherent to observational, simulation, and quasi-experimental approaches may limit their use and enhancement, even for situations and circumstances requiring a greater emphasis on external validity. Especially given the growing capacity of information technology to capture, store, and use vastly larger amounts of clinically rich data, and the importance of improved understanding of an intervention's effect in real-world practice, the advantages of identifying and advancing methods and strategies that draw research closer to practice become even clearer.

Against the backdrop of the growing scope and scale of evidence needs, the limits of current approaches, and the potential of emerging data resources, the Institute of Medicine (IOM) Roundtable on Evidence-Based Medicine, now the Roundtable on Value & Science-Driven Health Care, convened the Redesigning the Clinical Effectiveness Research Paradigm: Innovation and Practice-Based Approaches workshop.
The issues motivating the meeting’s discussions are noted in Box S-1, the first of which is the need for a deeper and broader evidence base for improved clinical decision making. But also important are the needs to improve the efficiency and applicability of the process. Underscoring the timeliness of the discussion is recognition of the challenges presented by the expense, time, and limited generalizability of current approaches, as well as of the opportunities presented by innovative research approaches and broader use of electronic health records that make clinical data more accessible. The overall goal of the meeting was to explore these issues, identify potential approaches, and discuss possible strategies for their engagement. Participants examined ways to expedite the development of clinical effectiveness information, highlighting the opportunities presented by innovative study designs and new methods of analysis and modeling; the size and expansion of potentially interoperable administrative and clinical datasets; and emerging research networks and data resources. The presentations and discussion emphasized approaches to research and learning that had the potential to supplement, complement, or supersede RCT findings and
suggested opportunities to engage these tools and methods as a new generation of studies that better address current challenges in clinical effectiveness research. Consideration also was given to the policies and infrastructure needed to take greater advantage of existing research capacity.

BOX S-1
Issues Motivating the Discussion

- Need for substantially improved understanding of the comparative clinical effectiveness of healthcare interventions.
- Strengths of the randomized controlled trial muted by constraints in time, cost, and limited applicability.
- Opportunities presented by the size and expansion of potentially interoperable administrative and clinical datasets.
- Opportunities presented by innovative study designs and statistical tools.
- Need for innovative approaches leading to a more practical and reliable clinical research paradigm.
- Need to build a system in which clinical effectiveness research is a more natural by-product of the care process.

Current Research Context

Starting points for the workshop's discussion reside in the presentation of what has come to be viewed as the traditional clinical research model, depicted as a pyramid in Figure S-1. In this model, the strongest level of evidence is displayed at the peak of the pyramid: the randomized controlled double-blind study. This is often referred to as the "gold standard" of clinical research and is followed, in descending sequence of strength or quality, by randomized controlled studies, cohort studies, case-control studies, case series, and case reports. The base of the pyramid, the weakest evidence, is reserved for undocumented experience, ideas, and opinions.

A brief overview of the range of clinical effectiveness research methods is presented in Table S-1. Approaches are categorized into two groups: experimental and nonexperimental.
Experimental studies are those in which the choice and assignment of the intervention are under the control of the investigator; the results of a test intervention are compared to the results of an alternative approach by actively monitoring the respective experiences of individuals or groups receiving or not receiving the intervention. Nonexperimental studies are those in which manipulation or randomization is generally absent, the choice of an intervention is made in the course of clinical care, and existing data collected in the course of the care process are used to draw conclusions about the relative impact of different circumstances or interventions that vary between and among identified groups, or to construct mathematical models that seek to predict the likelihood of future events based on variables identified in previous studies. Noted at the workshop was the fact that, as currently practiced, the randomized controlled and blinded trial is not the gold standard for every circumstance.

[FIGURE S-1 The classic evidence hierarchy. SOURCE: DeVoto, E., and B. S. Kramer. 2005. Evidence-Based Approach to Oncology. In Oncology: An Evidence-Based Approach, edited by A. Chang. New York: Springer. Modified and reprinted with permission of Springer SBM.]

While not an exhaustive catalog of methods, Table S-1 provides a sense of the range of clinical research approaches that can be used to improve understanding of clinical effectiveness. Each method has the potential to advance understanding of the various aspects of the spectrum of questions that emerge throughout a product's or intervention's lifecycle in clinical practice. The issue is therefore not whether internal or external validity should be the overarching priority for research, but rather which approach is most appropriate to the particular need. In each case, careful attention to study design and execution is vital. Recent methods development, the identification of problems in generalizing research results to populations broader than those enrolled in tightly controlled trials, and impressive advances in the potential availability of data through expanded use of electronic health records have all prompted reconsideration of research strategies and opportunities (Kravitz, 2004; Liang, 2005; Rush, 2008; Schneeweiss, 2004; Agency for Healthcare Research and Quality [AHRQ] CER methods and registry issues). This emerging understanding of limitations in the current approach, with respect to both current and future needs and opportunities, sets the stage for the workshop's discussions.

TABLE S-1 Selected Examples of Clinical Research Study Designs for Clinical Effectiveness Research

Randomized Controlled Trial (RCT)
Experimental design in which patients are randomly allocated to intervention groups (randomized) and analysis estimates the size of the difference in predefined outcomes, under ideal treatment conditions, between intervention groups. RCTs are characterized by a focus on efficacy, internal validity, maximal compliance with the assigned regimen, and, typically, complete follow-up. When feasible and appropriate, trials are "double blind"—i.e., patients and trialists are unaware of treatment assignment throughout the study. (Data types: primary, may include secondary. Randomization: required.)

Pragmatic Clinical Trial (PCT)
Experimental design that is a subset of RCTs in which certain criteria are relaxed with the goal of improving the applicability of results for clinical or coverage decision making by accounting for broader patient populations or the conditions of real-world clinical practice. For example, PCTs often have fewer patient inclusion/exclusion criteria and longer term, patient-centered outcome measures. (Data types: primary, may include secondary. Randomization: required.)

Delayed (or Single-Crossover) Design Trial
Experimental design in which a subset of study participants is randomized to receive the intervention at the start of the study and the remaining participants are randomized to receive the intervention after a prespecified amount of time. By the conclusion of the trial, all participants receive the intervention. This design can be applied to conventional RCTs and to cluster randomized and pragmatic designs. (Data types: primary, may include secondary. Randomization: required.)

Adaptive Design
Experimental design in which the treatment allocation ratio of an RCT is altered based on collected data. Bayesian or frequentist analyses based on the accumulated treatment responses of prior participants are used to inform adaptive designs by assessing the probability or frequency, respectively, with which an event of interest occurs (e.g., positive response to a particular treatment). (Data types: primary, some secondary. Randomization: required.)

Cluster Randomized Controlled Trial
Experimental design in which groups (e.g., individuals or patients from entire clinics, schools, or communities), instead of individuals, are randomized to a particular treatment or study arm. This design is useful for a wide array of effectiveness topics but may be required in situations in which individual randomization is not feasible. (Data types: often secondary. Randomization: required.)

N-of-1 Trial
Experimental design in which an individual is repeatedly switched between two regimens. The sequence of treatment periods is typically determined randomly, and there is formal assessment of treatment response. These trials are often conducted under double-blind conditions and are used to determine whether a particular regimen is superior for that individual. N-of-1 trials of different individuals can be combined to estimate the broader effectiveness of the intervention. (Data types: primary. Randomization: required.)

Interrupted Time Series
Study design used to determine how a specific event affects outcomes of interest in a study population. This design can be experimental or nonexperimental depending on whether the event was planned. Outcomes occurring during multiple periods before the event are compared to those occurring during multiple periods after it. (Data types: primary or secondary. Randomization: approach dependent.)

Cohort Registry Study
Nonexperimental approach in which data are prospectively collected on individuals and analyzed to identify trends within a population of interest. This approach is useful when randomization is infeasible (for example, if the disease is rare) or when researchers would like to observe the natural history of a disease or real-world practice patterns. (Data types: primary. Randomization: no.)

Ecological Study
Nonexperimental design in which the unit of observation is the population or community and which looks for associations between disease occurrence and exposure to known or suspected causes. Disease rates and exposures are measured in each of a series of populations and their relation is examined. (Data types: primary or secondary. Randomization: no.)

Natural Experiment
Nonexperimental design that examines a naturally occurring difference between two or more populations of interest—i.e., instances in which the research design does not affect how patients are treated. Analyses may be retrospective (retrospective data analysis) or conducted on prospectively collected data. This approach is useful when RCTs are infeasible because of ethical concerns or costs, or when the length of a trial would lead to results that are no longer informative. (Data types: primary or secondary. Randomization: no.)

Simulation and Modeling
Nonexperimental approach that uses existing data to predict the likelihood of outcome events in a specific group of individuals or over a longer time horizon than was observed in prior studies. (Data types: secondary. Randomization: no.)

Meta-Analysis
The combination of data collected in multiple, independent research studies (that meet certain criteria) to determine the overall intervention effect. Meta-analyses are useful for providing a quantitative estimate of overall effect size and for assessing the consistency of effect across the separate studies. Because this method relies on previous research, it is useful only if a broad set of studies is available. (Data types: secondary. Randomization: no.)

SOURCE: Adapted, with the assistance of Danielle Whicher of the Center for Medical Technology Policy and Richard Platt of Harvard Pilgrim Health Care, from a white paper developed by S. R. Tunis, Strategies to Improve Comparative Effectiveness Research Methods and Data Infrastructure, for the June 2009 Brookings workshop Implementing Comparative Effectiveness Research: Priorities, Methods, and Impact.

Clinical Effectiveness Research and the IOM Roundtable

Formed in 2006 as the Roundtable on Evidence-Based Medicine, the Roundtable on Value & Science-Driven Health Care brings together key stakeholders from multiple sectors—patients, health providers, payers, employers, health product developers, policy makers, and researchers—for cooperative consideration of the ways that evidence can be better developed and applied to drive improvements in the effectiveness and efficiency of U.S. medical care.
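To make the last entry in Table S-1 concrete, a fixed-effect, inverse-variance meta-analysis can be sketched in a few lines. The study estimates below are hypothetical, and a real meta-analysis would also involve systematic search, inclusion criteria, and heterogeneity assessment that this sketch omits.

```python
import math

# Hypothetical (log odds ratio, standard error) pairs from three studies
studies = [(-0.40, 0.20), (-0.25, 0.15), (-0.55, 0.30)]

# Fixed-effect model: weight each study by the inverse of its variance
weights = [1.0 / se ** 2 for _, se in studies]
pooled = sum(w * est for (est, _), w in zip(studies, weights)) / sum(weights)
pooled_se = math.sqrt(1.0 / sum(weights))

# 95 percent confidence interval for the pooled effect
low, high = pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se
print(f"pooled log OR = {pooled:.3f} (95% CI {low:.3f} to {high:.3f})")
# prints: pooled log OR = -0.338 (95% CI -0.556 to -0.120)
```

Note how the pooled estimate sits closest to the most precise (smallest standard error) study, which carries the largest weight.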
Roundtable participants have set the goal that “by the year 2020, 90 percent of clinical decisions will be supported by accurate, timely, and up-to-date clinical information, and will reflect the best available evidence.” To achieve this goal, Roundtable members and their colleagues identify issues and priorities for cooperative stakeholder engagements. Central to these efforts is the Learning Healthcare System series of workshops and publications. The series collectively characterizes the key elements of a healthcare system that is designed to generate and apply the best evidence for healthcare choices of patients and providers. A related purpose of these meetings is the identification and
engagement of barriers to the development of the learning healthcare system and the key opportunities for progress. Each meeting is summarized in a publication available through The National Academies Press. Workshops in this series include:

- The Learning Healthcare System (July 20–21, 2006)
- Judging the Evidence: Standards for Determining Clinical Effectiveness (February 5, 2007)
- Leadership Commitments to Improve Value in Healthcare: Toward Common Ground (July 23–24, 2007)
- Redesigning the Clinical Effectiveness Research Paradigm: Innovation and Practice-Based Approaches (December 12–13, 2007)
- Clinical Data as the Basic Staple of Health Learning: Creating and Protecting a Public Good (February 28–29, 2008)
- Engineering a Learning Healthcare System: A Look to the Future (April 28–29, 2008)
- Learning What Works: Infrastructure Required for Learning Which Care Is Best (July 30–31, 2008)
- Value in Health Care: Accounting for Cost, Quality, Safety, Outcomes and Innovation (November 17–18, 2008)

This publication summarizes the proceedings of the fourth workshop in the Learning Healthcare System series, focused on improving approaches to clinical effectiveness research. The Roundtable's work is predicated on the principle that "to the greatest extent possible, the decisions that shape the health and health care of Americans—by patients, providers, payers, and policy makers alike—will be grounded on a reliable evidence base, will account appropriately for individual variation in patient needs, and will support the generation of new insights on clinical effectiveness." Well-conducted clinical trials have contributed and will continue to contribute to this evidence base.
However, the need for research insights is pressing, and as data are increasingly captured at the point of care and larger stores of data are made available for research, exploration is urgently needed on how to best use these data to ensure care is tailored to circumstance and individual variation. The workshop’s intent was to provide an overview of some of the most promising innovations and approaches to clinical effectiveness research. Opportunities to streamline clinical trials, improve their practical application, and reduce costs were reviewed; however, particular emphasis was placed on reviewing methods that improve our capacity to draw upon data collected at the point of care. Rather than providing a comprehensive review of methods, the discussion in the chapters that follow uses examples
to highlight emerging opportunities for improving our capacity to determine what works best for whom. A synopsis of key points from each session is included in this chapter; more detailed information on session presentations and discussions can be found in the chapters that follow. Day one of the workshop identified key lessons learned from experience (Chapter 2) and important opportunities presented by new tools and techniques (Chapter 3) and emerging data resources (Chapter 4). Discussion and presentations during day two focused on strategies to better plan, develop, and sequence the studies needed (Chapter 5) and concluded with presentations on opportunities to better align policy with research opportunities and a panel discussion on organizing the research community for change (Chapter 6). Keynote presentations provided overviews of the evolution of, and opportunities for, clinical effectiveness research and provided important context for workshop discussions. These presentations and a synopsis of the workshop discussion are included in Chapter 1. The workshop agenda, biographical sketches of the speakers, and a list of workshop participants can be found in Appendixes A, B, and C, respectively.

COMMON THEMES

The Redesigning the Clinical Effectiveness Research Paradigm workshop featured speakers from a wide range of perspectives and sectors in health care. Although many points of view were represented, certain themes emerged from the 2 days of discussion, as summarized below and in Box S-2:²

Address current limitations in applicability of research results. Because clinical conditions and their interventions have complex and varying circumstances, there are different implications for the evidence needed, study designs, and the ways lessons are applied: the internal and external validity challenge.
In particular, given our aging population, people often have multiple conditions—co-morbidities—yet study designs generally focus on people with just one condition, limiting their applicability. In addition, although candidate interventions are assessed primarily through premarket studies, the opportunity for discovery extends throughout the lifecycle of an intervention: development, approval, coverage, and the full period of implementation.

² The material presented expresses the general views and discussion themes of the participants of the workshop, as summarized by staff, and should not be construed as reflective of conclusions or recommendations of the Roundtable or the Institute of Medicine.
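The applicability limits described above can be illustrated with a small simulation. The effect sizes and co-morbidity prevalence below are hypothetical: a trial that excludes co-morbid patients estimates its own (internally valid) effect precisely, yet overstates the average benefit in a practice population where co-morbid patients respond less.

```python
import random

random.seed(0)

def improvement(treated, comorbid):
    # Hypothetical outcome model: the treatment helps co-morbid
    # patients far less (effect 0.5 vs. 2.0), plus unit-variance noise.
    effect = 0.5 if comorbid else 2.0
    return (effect if treated else 0.0) + random.gauss(0, 1)

# 20,000 patients; roughly half carry a co-morbidity (hypothetical rate)
comorbid_flags = [random.random() < 0.5 for _ in range(20_000)]

# "Trial": strict inclusion criteria admit only non-co-morbid patients
eligible = [c for c in comorbid_flags if not c][:2_000]
treated = [improvement(True, c) for c in eligible[:1_000]]
control = [improvement(False, c) for c in eligible[1_000:]]
trial_effect = sum(treated) / len(treated) - sum(control) / len(control)

# "Practice": the whole population, co-morbid patients included
practice_effect = (
    sum(improvement(True, c) for c in comorbid_flags)
    - sum(improvement(False, c) for c in comorbid_flags)
) / len(comorbid_flags)

print(f"trial estimate:   {trial_effect:.2f}")     # close to 2.0
print(f"practice average: {practice_effect:.2f}")  # close to 1.25
```

The gap between the two numbers is not a flaw in the trial's conduct; it is the internal/external validity tension itself, arising purely from whom the trial enrolls.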
BOX S-2
Redesigning the Clinical Effectiveness Research Paradigm

- Address current limitations in applicability of research results
- Counter inefficiencies in timeliness, costs, and volume
- Define a more strategic use of the clinical experimental model
- Provide stimulus to new research designs, tools, and analytics
- Encourage innovation in clinical effectiveness research conduct
- Promote the notion of effectiveness research as a routine part of practice
- Improve access and use of clinical data as a knowledge resource
- Foster the transformational research potential of information technology
- Engage patients as full partners in the learning culture
- Build toward continuous learning in all aspects of care

Counter inefficiencies in timeliness, costs, and volume. Much of current clinical effectiveness research has inherent limits and inefficiencies related to time, cost, and volume. Small studies may have insufficient reliability or follow-up. Large experimental studies may be expensive and lengthy but have limited applicability to practice circumstances. Studies sponsored by product manufacturers have to overcome perceived conflicts and may not be fully used. Each incremental unit of research time and money may bring greater confidence but also carries greater opportunity costs. There is a strong need for more systematic approaches to better defining how, when, for whom, and in what setting an intervention is best used.

Define a more strategic use of the clinical experimental model. Just as there are limits and challenges to observational data, there are limits to the use of experimental data. Challenges related to the scope of possible inferences, to discrepancies in the ability to detect near-term versus long-term events, and to the timeliness of our insights and our ability to keep pace with changes in technology and procedures all must be managed.
Part of the strategy challenge is choosing the right tool at the right time. For the future of clinical effectiveness research, the important issues relate not to whether randomized experimental studies are better than observational studies, or vice versa, but to what’s right for the circumstances (clinical and economic) and how the capacity can be systematically improved.
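One sketch of using the experimental model more strategically is the adaptive design noted in Table S-1, which shifts allocation toward the better-performing arm as evidence accumulates. The response rates below are hypothetical, and this minimal Thompson-sampling sketch omits the interim-analysis and error-control machinery a real trial would require.

```python
import random

random.seed(1)

# Hypothetical true response rates for two arms (unknown to the trial)
true_rate = {"A": 0.30, "B": 0.50}
wins = {"A": 0, "B": 0}      # observed responses per arm
losses = {"A": 0, "B": 0}    # observed non-responses per arm
assigned = {"A": 0, "B": 0}

for _ in range(500):
    # Thompson sampling: draw each arm's rate from its Beta posterior
    # and give the next patient the arm with the higher draw.
    draws = {arm: random.betavariate(wins[arm] + 1, losses[arm] + 1)
             for arm in true_rate}
    arm = max(draws, key=draws.get)
    assigned[arm] += 1
    if random.random() < true_rate[arm]:
        wins[arm] += 1
    else:
        losses[arm] += 1

# Allocation drifts toward the better-performing arm as data accumulate
print(assigned)
```

Early on, allocation is near-even because both posteriors are flat; as responses accumulate, most patients are steered toward the stronger arm, which is precisely the efficiency argument for adaptive allocation.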
[…]

…effectiveness of various treatments in the clinical practice setting. Greg Pawlson from the National Committee for Quality Assurance noted that although these data offer the opportunity to bridge the chasm between clinical practice and clinical or health services investigations, much work is needed to make these data available and useful for research. Key barriers to progress include insufficient funding for the development of the health information technology (HIT) infrastructure needed for rapid learning; the influence of HIPAA protections and the proprietary nature of these data on the research questions and approaches pursued; and the structure of the electronic health record (EHR). Enabling the full potential of EHR and other course-of-care data resources will require greater focus on designing and developing EHR systems for research—beginning with the development of data standards and increased funding for the development of HIT infrastructure to facilitate access to and use of data. Suggested policy interventions include:

- Better coordinating how the private and public sectors fund research, clinical learning, and HIT development;
- Increasing the proportion of funding dedicated to improving the quality and quantity of secondary database analyses;
- Creating incentives for collecting and structuring data useful for research;
- Providing more open and affordable access to data held by health plans and others;
- Engaging both the private and public sectors in an effort to set standards for how and what data are entered into and retrieved from EHRs;
- Modifying HIPAA regulations to remove the major barriers imposed on research while balancing privacy concerns; and
- Improving medical education to better prepare health professionals to use individual and aggregated data for the care of patients.
Pharmaceutical Industry Data

As the healthcare system becomes more complex, the pharmaceutical industry is increasingly challenged to meet regulatory, payer, and patient demands for demonstration of the value of its products. Such assessments of risk–benefit, long-term safety, and comparative effectiveness often require postmarket clinical trial and database commitments. Peter Honig of Merck reflected on the difficult balance between the data transparency and data access needed to support the necessary epidemiologic, pharmacovigilance, and outcomes research and the increasingly commoditized and proprietary nature of data sources. Additional barriers to efficient use of
these data include the decentralized nature of most utilization and claims outcome data and needed improvements in study and trial methods. These issues are of particular interest to the industry in light of the high risk and high costs of the drug development process. Honig discussed several important initiatives underway to address these challenges: the FDA's Critical Path Initiative is advocating for public–private partnerships in the precompetitive space to address challenges in drug discovery and development; comparators other than placebo are increasingly being incorporated into postapproval (Phase IV) clinical trials; and structured methods are being developed to provide risk–benefit information that aids clinicians, payers, and regulators in making important decisions about safety, indicated use, and effectiveness. Industry acceptance of an ongoing, learning paradigm for evidence development has increased. Increasing rigor in pharmacoepidemiologic and risk–benefit standards is an asset to the field and the patient. The use of registries, both sentinel and population based, may provide a better method for pharmacovigilance. Increasing use of Bayesian statistical approaches, use of spontaneous and population-based data mining for postmarketing surveillance, and development of sophisticated data analysis tools to improve database output are all advances toward a smarter data collection system. To ensure these data add value to how care is delivered, educational efforts are also needed to improve the translation of generated knowledge into behavior.

Regulatory Requirements and Data Generation

Data developed and collected to satisfy regulatory requirements offer a rich resource and a driving force for improvements in our capacity for clinical effectiveness research. Mark B.
McClellan of the Brookings Institution reported on two examples of how we might begin to take better advantage of these resources to enhance the healthcare system's capacity to routinely generate and use data to learn what works in practice. The recently passed Food and Drug Administration Amendments Act of 2007 envisions the development of a postmarket surveillance system that actively monitors for suspected safety problems with medical products. However, it also has the potential to lead to the development of infrastructure built into the healthcare system that can be used to address questions of effectiveness and use of products in different types of patients and populations. To take advantage of the opportunity presented, attention is needed to developing standards and consistent methods for defining adverse events and pooling relevant summary data from large-scale analyses, as well as to contending with issues that impede data sharing. Medicare's coverage with evidence development (CED) policy has also helped to develop the data needed to better inform coverage and clinical decisions by supporting the conduct
of trials or the development of registries that collect and house sophisticated sets of clinical data about the use and impact of several medical technologies on patient outcomes. McClellan pointed out that clarification by Congress of CMS's authority to use these methods is needed to support efforts currently underway and to encourage similar efforts taking place in the private sector. These efforts seek to build into the healthcare system the capacity to develop better evidence, an approach that will become increasingly important in contending with the vast scope and scale of current and future knowledge gaps. The majority of these gaps are in areas for which it has been particularly challenging to develop evidence—such as assessing the effects of the many subtle and built-in differences in medical practice for patients with chronic disease. Because such information is critically important to improving outcomes, and will not be derived from traditional RCTs, priority efforts are needed to enhance the healthcare system's capacity to generate data as a routine part of care and to use these data to learn what works in practice. Additional support is needed for infrastructure, data aggregation, and analysis, and for improving the relevant statistical methods.

Ensuring Optimal Use of Data Generated by Public Investment

Though large amounts of data exist and have the potential to inform clinical and comparative effectiveness assessment, substantial barriers prevent their optimal use. Many innovative opportunities are possible from these publicly supported and generated data, such as the ability to inform clinical practice and policy.
However, the restrictive interpretation of HIPAA and related privacy concerns, the growth of Medicare HMOs, and the fragmentation and commercialization of private-sector clinical databases all limit effectiveness research and weaken its findings. J. Sanford Schwartz referred to this as the paradox of available but inaccessible data and called for sustained attention to reducing these barriers until they no longer impede effectiveness research. Enhanced coordination in the development of publicly generated data, both within and across agencies, can mitigate overlap and redundancy, and the government should expand the RCT registry to include all comparative effectiveness research, broadening the range of issues addressed and information available. Access to data generated by public investment, including data generated by publicly funded investigators, should be expanded through the development of effective technical and support mechanisms. To move past the barriers presented by HIPAA, Medicare HMOs, and private-sector databases, Schwartz urged establishing practical, less burdensome policies for secondary data use that protect patient confidentiality, expanding Medicare claims files to incorporate new types and sources of data, and developing more cost-effective access to private-sector secondary clinical data for publicly funded studies.

Building the Research Infrastructure

Given that evidence-based medicine requires integration of clinical expertise and research and depends on an infrastructure that includes human capital and organizational platforms, the NIH’s Alan M. Krensky said the NIH is committed to supporting a stable, sustainable scientific workforce. Gaps in the continuity of the pipeline and the increasing age at which new investigators obtain independent funding are the major threats to that stability. To address these concerns, the NIH is developing new initiatives that target first-time R01-equivalent awardees, such as the Pathway to Independence and NIH Director’s New Innovator Awards; more than 1,600 new R01 investigators were funded in 2007. NIH-based organizational platforms are both intra- and interinstitutional. Clinical and Translational Science Awards (CTSAs) fund academic health centers to create homes for clinical and translational science, from informatics to trial design, regulatory support, education, and community involvement. The NIH is in the midst of building a national consortium of CTSAs that will serve as a platform for transforming how clinical and translational research is conducted. The Immune Tolerance Network (ITN), funded by the National Institute of Allergy and Infectious Diseases, the National Institute of Diabetes and Digestive and Kidney Diseases, and the Juvenile Diabetes Research Foundation, is an international collaboration focused on critical path research from translation to clinical development. The ITN conducts scientific review, clinical trials planning and implementation, tolerance assays, data analysis, and identification of biomarkers, while also providing scientific support in informatics, trial management, and communications.
Centralization, standardization, and the development of industry partnerships allow extensive data mining and specimen collection. Most recently, the Immune Tolerance Institute, a nonprofit, was created at the intersection of academia and industry to transform scientific discoveries quickly into marketable therapeutics. Policies aimed at building a sustainable research infrastructure are central to supporting evidence-based medicine.

Engaging Consumers

Conducting meaningful clinical effectiveness research requires collecting, sharing, and analyzing large quantities of health information from many individuals, potentially over long periods of time. To be successful, this research will need the support and active participation of patients. The relationship between researcher and research participant, as defined by current practice, is ill suited to leveraging such active participation. As reported by Kathy Hudson of Johns Hopkins University, however, public engagement efforts in biomedical research, while still in their infancy, suggest some key challenges and opportunities for cultivating active public participation in clinical effectiveness research. The biomedical community, and the science and technology community more generally, traditionally has viewed the linear progression from public education to public understanding to public support as an accurate model through which to cultivate a public enthusiastically supportive of and involved in research. As the flaws in this philosophy have become more apparent, research-performing institutions increasingly are turning to public engagement and public consultation approaches to enlist public support. Unlike the unidirectional and hierarchical communications that characterized past efforts, public engagement involves a symmetric flow of information using transparent processes and often results in demonstrable shifts in attitudes among participants (though not always in the direction one might expect or prefer). The outcome is different as well: rather than aspiring for or insisting on the public’s deeper understanding of science, a primary goal of public engagement is the scientists’ deeper understanding of the public’s preferences and values.

ORGANIZING THE RESEARCH COMMUNITY FOR CHANGE

Most of the issues raised here require the attention of the research community in order to drive change, with some of the most pressing concerns in areas such as methods improvement, data quality and accessibility, incentive alignment, and infrastructure.
Much work is already underway to enhance and accelerate clinical effectiveness research, but efforts are needed to ensure stronger coordination, efficiencies, and economies of scale within the research community. Participants in the final panel, composed of sector thought leaders, were asked to consider how the research community might best be organized to develop and promote the needed change and to offer suggestions on immediate opportunities for progress not contingent on expanded funding or legislative action. Panelists characterized the current research paradigm, infrastructure, funding approaches, and policies, some more than 50 years old, as in need of overhaul and revision. Discussion highlighted the need for principles to guide reform, including a clarification of the mission of research as centered on patient outcomes, identification of priority areas for collective focus, a research paradigm that emphasizes best practices in methodologies, and a greater emphasis on supporting innovation. Apart from a need for stronger coordination, collaboration, and the setting of priorities for the questions to address and the studies to be undertaken, there is a need to develop systems that inherently integrate the needs and interests of patients and healthcare providers. The lifecycle approach to evidence development, in which trials and studies are staged or sequenced to better monitor the effectiveness of an intervention as it moves into the postmarket environment, was also suggested as an approach that could support the development of up-to-date best evidence. Finally, participants suggested immediate opportunities to build on existing infrastructure that could support the continual assessment approach to evidence development, including broader support of clinical registries and networked resources such as CTSAs, as well as the FDA’s efforts to develop a sustainable system for safety surveillance.

ISSUES FOR POSSIBLE ROUNDTABLE FOLLOW-UP

Among the range of issues engaged in the workshop’s discussion were a number that could serve as candidates for the sort of multistakeholder consideration and engagement represented by the Roundtable on Value & Science-Driven Health Care, its members, and their colleagues.

Clinical Effectiveness Research Methodologies. How do various research approaches best align with different study circumstances, e.g., the nature of the condition, the type of intervention, the existing body of evidence? Should Roundtable participants develop a taxonomy to help identify the priority research advances needed to strengthen and streamline current methodologies, and to consider approaches for their advancement and adoption?

Priorities. What are the most compelling priorities for comparative effectiveness studies, and how might providers and patients be engaged in helping to identify them and to set the stage for research strategies and funding partnerships?

Coordination. Given the oft-stated need for stronger coordination in the identification, priority setting, design, and implementation of clinical effectiveness research, what might Roundtable members do to facilitate the evolution of this capacity?

Clustering. The NCI is exploring the clustering of clinical studies to make the process of study consideration and launch quicker and more efficient. Should this be explored as a model for others?

Registry collaboration. Since registries offer the most immediate prospects for broader “real-time” learning, can Roundtable participants work with interested organizations on periodic convening
of those involved in maintaining clinical registries, exploring additional opportunities for combined efforts and shared learning?

Phased intervention with evaluation. How can progress be accelerated in the adoption by public and private payers of approaches that allow phased implementation and reimbursement for promising interventions whose effectiveness and relative advantage have not been firmly established? What sort of neutral venue would work best for a multistakeholder effort through existing research networks (e.g., CTSAs, the HMO Research Network [HMORN])?

Patient preferences and perspectives. What approaches might help to refine practical instruments for determining patient preferences, such as the NIH’s PROMIS (Patient-Reported Outcomes Measurement Information System), and to apply them as central elements of outcome measurement?

Public–private collaboration. What administrative vehicles might enhance opportunities for academic medicine, industry, and government to engage cooperatively in clinical effectiveness research? Would development of common contract language be helpful in facilitating public–private partnerships?

Clinician engagement. Should a venue be established for periodic convening of primary care and specialty physician groups to explore clinical effectiveness research priorities, progress in practice-based research, opportunities to engage in registry-related research, and improved approaches to clinical guideline development and application?

Academic health center engagement. With academic institutions setting the pattern for the predominant approach to clinical research, drawing prevailing patterns closer to broader practice bases will require increasing engagement with community-based facilities and private practices for practice-based research. How might Roundtable stakeholders partner with the Association of American Medical Colleges and the Association of Academic Health Centers to foster the necessary changes?

Incentives for practice-based research. Might an employer–payer working group from the Roundtable be useful in exploring economic incentives to accelerate progress in using clinical data for new insights by rewarding providers and related groups that are working to improve knowledge generation and application throughout the care process?

Condition-specific high-priority effectiveness research targets. Might the Roundtable develop a working group to characterize the gap between current results and what should be expected, based on
current treatment knowledge, strategies for closing the gap, and collaborative approaches (e.g., registries) for the following conditions:

Adult oncology

Orthopedic procedures

Management of co-occurring chronic diseases

Clinical Data

Secondary use of clinical data. Successful use of clinical data as a reliable resource for clinical effectiveness evidence development requires the development of standards and approaches that assure the quality of the work. How might Roundtable members encourage or foster work of this sort?

Privacy and security. What can be done within existing structures and institutions to clarify definitions and to reduce the tendency toward unnecessarily restrictive interpretations of clinical data access, in particular those related to secondary use of data?

Collaborative data mining. Are there ways that Roundtable member initiatives might facilitate the progress of EHR data-mining networks working on strategies, statistical expertise, and training needs to improve and accelerate postmarket surveillance and clinical research?

Research-related EHR standards. How might EHR standard-setting groups best be engaged to ensure that the standards developed are research-friendly, developed with research utility in mind, and flexible enough to adapt as research tools expand?

Transparency and access. What vehicles, approaches, and stewardship structures might best improve the receptivity of the clinical data marketplace to enhanced data sharing, including making federally sponsored clinical data (data from federally supported research, as well as Medicare-related data) more widely available for secondary analysis?

Communication

Research results. Since part of the public’s misunderstanding of research results is a product of “hyping” by the research community, how might the Roundtable productively explore options for “self-regulatory guidelines” on announcing and working with the media on research results?

Patient involvement in the evidence process. If progress in patient outcomes depends on deeper citizen understanding and engagement as full participants in the learning healthcare system, both
as partners with caregivers in their own care, and as supporters of the use of protected clinical data to enhance learning, what steps can accelerate and enhance patient involvement?

As interested parties consider these issues, it is important to remember that the focus of the research discussed at the workshop is, ultimately, for and about the patient. The goals of the work are fundamentally oriented to bringing the right care to the right person at the right time at the right price. The fundamental questions to answer for any healthcare intervention are straightforward: Can it work? Will it work for this patient, in this setting? Is it worth it? Do the benefits outweigh any harms? Do the benefits justify the costs? Do the possible changes offer important advantages over existing alternatives?

Finally, despite the custom of referring to “our healthcare system,” the research community in practice functions as a diverse set of elements that often seem to connect productively only by happenstance. Because shortfalls in coordination and communication impinge on the funding, effectiveness, and efficiency of the clinical research process, not to mention its progress as a key element of the learning healthcare system, the notion of working productively together is vital for both patients and the healthcare community. Better coordination, collaboration, public–private partnerships, and priority setting are compelling priorities, and the attention and awareness generated in the course of this meeting are important to the Roundtable’s focus on redesigning the clinical effectiveness research paradigm.

REFERENCES

AcademyHealth. 2005. Placement, coordination, and funding of health services research within the federal government. In AcademyHealth Report. Washington, DC.

———. 2008. Health Services Research (HSR) Methods. http://www.hsrmethods.org/ (accessed June 21, 2010).

DeVoto, E., and B. S. Kramer. 2006. Evidence-based approach to oncology. In Oncology: An Evidence-Based Approach, edited by A. E. Chang, P. A. Ganz, D. F. Hayes, T. Kinsella, H. I. Pass, J. H. Schiller, R. M. Stone, and V. Strecher. New York: Springer.

Hartung, D. M., and D. Touchette. 2009. Overview of clinical research design. American Journal of Health-System Pharmacy 66:398-408.

Kravitz, R. L., N. Duan, and J. Braslow. 2004. Evidence-based medicine, heterogeneity of treatment effects, and the trouble with averages. Milbank Quarterly 82(4):661-687.

Liang, L. 2007 (March/April). The gap between evidence and practice. Health Affairs 26(2):w119-w121.

Lohr, K. N. 2007. Emerging methods in comparative effectiveness and safety: Symposium overview and summary. Medical Care 45(10):55-58.

Moses, H., 3rd, E. R. Dorsey, D. H. Matheson, and S. O. Thier. 2005. Financial anatomy of biomedical research. Journal of the American Medical Association 294(11):1333-1342.

Rush, A. J. 2008. Developing the evidence for evidence-based practice. Canadian Medical Association Journal 178:1313-1315.
Sackett, D. L., R. B. Haynes, G. H. Guyatt, and P. Tugwell. 2006. Dealing with the media. Journal of Clinical Epidemiology 59(9):907-913.

Schneeweiss, S. 2007. Developments in post-marketing comparative effectiveness research. Clinical Pharmacology and Therapeutics 82(2):143-156.