
5
Moving to the Next Generation of Studies

INTRODUCTION

Scientific information today is expanding much faster than our ability to effectively translate and process knowledge in ways that improve patient care. To expedite the development of information—and to address both existing gaps in the evidence base and newly emerging research challenges—innovation is needed in how we use existing research tools, strategies, and study design methodologies to produce reliable knowledge. Furthermore, new approaches are needed, with special attention to using new tools, techniques, and data resources. Workshop participants discuss the potential of a next generation of studies that complement and possibly supplant those already employed in clinical effectiveness research. In that regard, decisive efforts are needed to support the development of new approaches and to nurture their inclusion in research. Papers included in this chapter examine opportunities to take better advantage of emerging resources to plan, develop, and sequence studies that are more timely, relevant, efficient, and generalizable. Also considered are approaches that better account for lifecycle variation of the conditions and interventions in play. Current opportunities and needed advancements also are discussed.

A variety of innovations are presented as important components of a redesigned research paradigm as well as immediate opportunities to build toward a next generation of studies. These innovations include new approaches to observational and hybrid studies; tools for collecting and using information captured at the point of care, including those relevant to genetic variation; cooperative research networks; and possible incentives.


Presenting a vision for new inferential and statistical tools, Sharon-Lise T. Normand from Harvard Medical School discusses opportunities to increase the efficiency with which information is produced through improved use of large data streams from a variety of sources, including clinical registries, billing databases, electronic health records, preclinical research, and trials. New methods are needed to develop and implement data pooling algorithms and inferential tools. In addition, study designs not used to their full potential—including hybrid designs, preference-based designs, and quasi-experimental designs—are well suited to exploit features of the new information sources.

Findings of observational studies are intrinsically more prone to uncertainty than those from randomized trials; however, Wayne A. Ray from Vanderbilt University contends that this methodology has great value in its capacity to address the dilemma presented by the logistical difficulties and slow pace of randomized controlled trials (RCTs). Perhaps more importantly, observational studies also enable research on many important clinical questions that RCTs are not appropriate to answer. To exploit the wealth of data becoming available, researchers will need to become more familiar with and adhere to the fundamental clinical and epidemiological principles that define state-of-the-art use of observational data.

Giving clinicians information on how, for whom, and in what settings specific treatments are best used is essential to improving clinical care. John Rush from the University of Texas Southwestern Medical Center proposes that researchers widen the breadth of study designs that they employ. Rush illustrates how certain clinically important questions can be addressed with observational data obtained when systematic practices are employed, or with new study designs (e.g., hybrid studies and equipoise stratified randomized designs) or post hoc analyses. Additional challenges will be to identify key questions and develop the infrastructure to conduct the needed studies.

Echoing Rush’s call for a reengineered practice system to better facilitate research, Isaac Kohane from Harvard Medical School discusses opportunities to instrument the health delivery system for research. While speaking specifically to the potential of high-throughput genotyping, phenotyping, and sample acquisition to accelerate genomic research, Kohane emphasizes the additional benefit to quality and performance improvement efforts. Needed for progress are increased investments in information technology (IT), increased transparency in regulation and patient autonomy, continued development of an informatics-savvy healthcare research workforce, and creation of a safe harbor for methodological experimentation.

Citing the experience of the Center for Medical Technology Policy (CMTP) in attempting to facilitate private-sector coverage with evidence development, the CMTP’s Wade M. Aubry argues that “coverage with evidence development” should complement, not compete with, traditional research enterprises. Aubry proposes that in order to draw from and expand on the experience of existing models, researchers must formalize ground rules for workgroups and separate evidence gap identification, prioritization, and selection for study design and funding. He discusses coverage with evidence development and outlines concepts for phased introduction and payment for interventions under protocol. Eric B. Larson from the Group Health Cooperative concludes the chapter by suggesting that emerging research networks, such as the development of programs funded by the National Institutes of Health (NIH) under the Clinical and Translational Science Awards, offer opportunities to contribute to a learning healthcare system in ways that produce relevant results that can be generalized.

LARGE DATA STREAMS AND THE POWER OF NUMBERS

Sharon-Lise T. Normand, Ph.D.

Harvard Medical School & Harvard School of Public Health

Abstract

This paper describes the rationale for integrating information from multiple and diverse data sources in order to efficiently produce information. Key statistical challenges involved in integrating and interpreting information are described. The fundamental issue underpinning the use of large data streams is the poolability of the data sources. New statistical tools are required to integrate the multiple and diverse data streams in order to produce valid scientific findings.

Introduction and Background

We are witnessing rapid growth in the quantity, type, and quality of health data being collected. These data derive from many different information sources: preclinical data obtained from the bench, clinical trial data, registries maintained by professional societies such as the American College of Cardiology, electronic health record data, administrative billing data such as those maintained by the Centers for Medicare & Medicaid Services, hospital discharge billing data maintained by state departments of public health, and population-based survey data such as the Medical Expenditure Panel Survey maintained by the Agency for Healthcare Research and Quality (AHRQ).

We also are collecting more information than ever before about outcomes in both the clinical trial and observational settings. This increasingly frequent strategy has been adopted for several reasons: A single outcome may not adequately characterize a complex disease; there may be a lack of consensus on the most important outcome; or there may be a desire to demonstrate clinical effectiveness on multiple outcomes. The consequence of the proliferation of these databases is an unprecedented demand to combine and use diverse data streams.

What circumstances have led to the proliferation of databases? First, technology and innovation are evolving rapidly, producing a plethora of new medical devices, biologics, drugs, and combination products. Scientists have made medical devices smaller, smarter, and more convenient for patients. Miniaturization techniques have produced pacemakers that weigh less than one ounce and are the size of a quarter; biological medical devices, such as microarray-based diagnostic tests that detect genetic variation to guide the selection and dosing of medications, are promoting personalized medicine; and combination products, such as antimicrobial catheters and drug-eluting stents, have changed the way diseases are diagnosed and treated. Moreover, in the fast-paced device environment, technologies quickly become outdated as designs are rapidly improved. Consequently, at market introduction, the next-generation devices are already under development and under study.

Second, information technology has revolutionized medicine. The design, development, and implementation of computer-based information systems have permitted major advances in our understanding of the consequences of medical treatments through access to large data streams. Similarly, the excitement in bioinformatics over the discovery of new biological insights has driven the development of tools for accessing, using, and managing these computer-based information systems. New initiatives to develop technologies and resources that advance the handling of large and diverse datasets and assist interpretation have been established in the fields of proteomics, genomics, and glycomics.

Third, rising healthcare costs have prompted stakeholders to assess the value of health care through measurement. Using administrative billing data, early research funded by AHRQ documented substantial variations in the use of medical therapies across geographic units such as states as well as across patient subgroups such as race/ethnicity and sex. The corresponding lack of geographic variation in patient outcomes prompted research using administrative data enhanced with clinical data to assess the quality of medical care. The number and type of quality measures reported on healthcare providers, such as hospitals, nursing homes, physicians, and health plans, have grown substantially over the past decade (Byar, 1980). A second and related line of research prompted by rising healthcare costs is the comparative effectiveness of therapeutic options. Information obtained from comparative randomized trials, systematic reviews of randomized trials, decision analyses, or large registries is used to quantitatively assess the effectiveness of competing technologies.


The availability of many large and diverse data sources presents an opportunity and a challenge to the scientific community. Under the current paradigm of assessing evidence, we continue to waste information by adhering to historical analytical and inferential procedures. Data sources relating to the same topic are treated as silos of information rather than as well-integrated information when assessing new technologies; information contained in multiple outcomes and multiple patient subgroups is ignored; and treatment heterogeneity in randomized trials is overlooked. The scientific community is not producing information efficiently. New tools, beyond those that expedite the mechanics of searching and accessing information, are required.

Using Diverse Data Streams

A fundamental problem in using diverse data sources is that of poolability. Combining data from multiple data sources is not new. At a practical level, for example, zip code–level sociodemographic information from census data is often merged with patient-level information in administrative claims data to supplement covariate information. Estimates of treatment effects from diverse studies are commonly combined in the context of meta-analyses in order to learn about adverse events. The next generation of studies needs to combine data sources for other reasons, however: to enhance results when the data source on which the information is based differs from the population of interest; to bridge results when transitioning from one definition to another (e.g., changing from single to multiple race and ethnicity reporting); and to enhance small area estimation (see Schenker and Raghunathan, 2007, for a summary of methods for combining survey data). Meta-analysis methods have also recently emerged for assessing the relative effectiveness of two treatments that have not been directly compared in randomized trials but have each been compared with other treatments (Lumley, 2002).

When is it sensible to combine data sources? While this is not a new statistical problem, it is an increasingly frequent and complex one. A familiar setting for combining data sources is that of meta-analysis, in which the data sources are estimates obtained from multiple studies. In the typical meta-analysis setting, researchers consider whether the study populations are adequately similar, whether the treatments are defined similarly, and whether the clinical outcomes are similar. These decisions are subjective.

Once the decision is made to combine data, how should the information be pooled? Even if the patient-level data from each study were available, it would not be sensible to treat the observations from each patient across all of the studies as completely exchangeable. Exchangeability implies that we have no systematic reason to differentiate between the outcomes of patients participating in different studies. There are numerous methodological issues to consider, such as whether data are missing and why, the quality of the data, the completeness of follow-up, and the type of measurement error; these are beyond the scope of this paper. Looking forward, however, increased data pooling should provide more information.
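
To make exchangeability, and its relaxation, concrete, the sketch below works through a standard random-effects (partial-pooling) calculation using the DerSimonian-Laird moment estimator. It is an illustration only: the treatment-effect estimates and standard errors are hypothetical and are not drawn from any study discussed here.

```python
import numpy as np

# Hypothetical treatment-effect estimates (log odds ratios) and standard
# errors from four studies; none of these numbers come from the chapter.
theta_hat = np.array([-0.30, -0.10, -0.45, -0.05])
se = np.array([0.15, 0.20, 0.10, 0.25])

w_fixed = 1.0 / se**2  # inverse-variance weights

# Complete pooling: treats patient outcomes as fully exchangeable across
# studies (a single common effect).
pooled_fixed = np.sum(w_fixed * theta_hat) / np.sum(w_fixed)

# DerSimonian-Laird estimate of the between-study variance tau^2: relaxes
# exchangeability by letting each study have its own underlying effect.
Q = np.sum(w_fixed * (theta_hat - pooled_fixed) ** 2)
df = len(theta_hat) - 1
c = np.sum(w_fixed) - np.sum(w_fixed**2) / np.sum(w_fixed)
tau2 = max(0.0, (Q - df) / c)

# Partial pooling: weights shrink toward equality as tau^2 grows, so the
# data sources are borrowed from rather than merged wholesale.
w_random = 1.0 / (se**2 + tau2)
pooled_random = np.sum(w_random * theta_hat) / np.sum(w_random)
se_random = np.sqrt(1.0 / np.sum(w_random))

print(f"complete pooling: {pooled_fixed:+.3f}")
print(f"partial pooling:  {pooled_random:+.3f} (SE {se_random:.3f}, tau^2 {tau2:.3f})")
```

As the estimated between-study variance grows, the pooled answer moves away from the fully exchangeable (complete-pooling) result and each data source retains more of its own identity.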

Using Observational Data to Enhance Clinical Trial Data

The use of observational data to supplement a randomized trial is not a new idea, and a large literature describes its advantages and disadvantages. There has been much discussion, for example, of the use of historical controls in clinical trials (Byar, 1980). Viewing data sources as a continuum, at one extreme we could ignore concurrent observational data, but that would clearly be wasteful and inefficient (Figure 5-1). When collecting data from participants in a clinical trial, obtaining parallel information from non-trial participants at study sites will enhance inferences. At the other end of the continuum, we could use all available data and treat information obtained from the observational subjects on an equal footing (that is, as exchangeable) with the information obtained from the clinical trial participants. This strategy involves a heroic assumption that will typically be unmet in practice. Between these extremes, there are many options available but rarely utilized. Neaton and colleagues summarize strategies for pooling information in the context of designs for circulatory system devices (Neaton et al., 2007).

The Mass COMM trial1 is a randomized trial comparing percutaneous coronary intervention (PCI) between Massachusetts hospitals with cardiac surgery-on-site (SOS) and community hospitals without cardiac surgery-on-site. The primary objective of the trial is to compare the acute safety and long-term outcomes between sites with and without cardiac SOS for patients with ischemic heart disease treated by elective PCI. The trial involves a 3:1 (sites without SOS: sites with SOS) randomization scheme that permits community hospitals to keep their volume given the substantial infrastructure investment they have made and the knowledge that volume is important. The recruitment strategy for the randomized study involves only patients presenting to community hospitals2 (it would be very difficult to randomize patients arriving at tertiary hospitals to community hospitals).

1 A randomized trial to compare percutaneous coronary intervention between Massachusetts hospitals with cardiac surgery-on-site and community hospitals without cardiac surgery-on-site (see http://www.mass.gov/Eeohhs2/docs/dph/quality/hcq_circular_letters/hospital_mdph_protocol.pdf).

2 Massachusetts law permits elective angioplasty only at hospitals with cardiac surgery-on-site.

FIGURE 5-1 Options for pooling data in the context of a randomized trial.

SOURCE: Spiegelhalter, D. J., K. R. Abrams, J. P. Myles. 2004. Bayesian approaches to clinical trials and health-care evaluation. West Sussex, England: John Wiley & Sons, Ltd. Reproduced with permission of John Wiley & Sons, Ltd.

To bolster inferences and increase efficiency, the Mass COMM investigators adopted a hybrid design that borrows information from patients presenting at tertiary hospitals (concurrent observational controls). Figure 5-2 diagrams the hybrid design of this study, a randomized controlled trial using observational data.

How will the data sources (the randomized subjects and the observational subjects) be pooled? From a practical standpoint, it is not sensible to assume the observational patients arriving at tertiary hospitals and the patients randomized from community to tertiary hospitals are completely exchangeable. One strategy is to assume some differences in the outcomes of the observational controls (“additive bias”) compared to the patients randomized to the tertiary hospitals. The Mass COMM investigators assumed that the observational controls either over- or under-estimate the trial end-point by a factor of two. This decision was made prior to the enrollment of any patients.
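
One simple way to operationalize this kind of assumption is to treat the observational control estimate as subject to an additional bias variance on the log scale, so that it is down-weighted rather than pooled at face value with the randomized arm. The sketch below is a minimal normal-approximation illustration with invented event counts; it is not the Mass COMM analysis plan, only a schematic of the "additive bias" idea, with the factor-of-two assumption translated into a prior standard deviation for the bias.

```python
import numpy as np

def log_rate_and_se(events, n):
    """Log event rate and its approximate standard error (delta method)."""
    rate = events / n
    return np.log(rate), np.sqrt((1 - rate) / events)

# Hypothetical safety end-point data (not from the trial).
ev_rand_tertiary, n_rand_tertiary = 12, 600    # randomized to SOS sites
ev_obs_tertiary, n_obs_tertiary = 45, 2400     # concurrent observational controls

y_rand, se_rand = log_rate_and_se(ev_rand_tertiary, n_rand_tertiary)
y_obs, se_obs = log_rate_and_se(ev_obs_tertiary, n_obs_tertiary)

# Additive-bias assumption: the observational controls may over- or
# under-estimate the end-point by up to a factor of two, i.e., the bias
# on the log scale lies roughly within +/- log(2).  Treating log(2) as
# about 2 prior standard deviations gives a bias variance of (log(2)/2)^2.
bias_sd = np.log(2.0) / 2.0

w_rand = 1.0 / se_rand**2
w_obs = 1.0 / (se_obs**2 + bias_sd**2)   # discounted weight for the obs. arm

pooled = (w_rand * y_rand + w_obs * y_obs) / (w_rand + w_obs)
pooled_se = np.sqrt(1.0 / (w_rand + w_obs))

print(f"randomized-only event rate : {np.exp(y_rand):.4f}")
print(f"bias-adjusted pooled rate  : {np.exp(pooled):.4f} "
      f"(effective obs. weight {w_obs / (w_rand + w_obs):.2f})")
```

Setting the bias standard deviation to zero recovers full pooling, while letting it grow without bound recovers the randomized-only analysis; the assumed factor-of-two bound sits between these extremes.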

FIGURE 5-2 Schematic of Mass COMM Trial: One-way randomization with observational arm.

Using Multiple Data Sources to Enhance Inference

Drug-eluting stents (DES) are combination products that have largely prevented the problem of restenosis. The critical path for approval of DES, like all first-in-class therapies, included several phases, each of which involved a pass or fail score: basic research, prototype design, preclinical development including bench and animal testing, clinical development, and Food and Drug Administration (FDA) filing. Sharing of knowledge in each of these domains, rather than a pass or fail grade, should enhance the estimates of effectiveness and safety. A selection of the types of data streams for DES includes device, procedural, and patient characteristics and outcomes, as displayed in Figure 5-3. It seems sensible to assume that the device characteristics would impact the device, procedural, and patient outcomes and that the procedural characteristics would impact the procedural and patient outcomes, and so on. By linking together all of these data streams through pooling, we will make more efficient use of information.

FIGURE 5-3 Integrating information: New ontologies (variations to consider in designing processes that link data in the case of drug-eluting stents).

SOURCE: Image appears courtesy of the Food and Drug Administration.

How should we pool these data sources? It is clear that there should be some probability model that links together the various silos of information. Statistical models for networks of information like that for DES exist but their practical applications have been limited.
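
As a toy illustration of what such a linking probability model might look like, the sketch below simulates three streams with made-up variables (device, procedural, and patient outcome) and fits a chain of regressions in which device characteristics inform the procedural outcome and both inform the patient outcome. It is a stand-in for the far richer hierarchical or network models alluded to here, not an established methodology for DES data.

```python
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Simulated data streams (all variable names and effects are hypothetical).
# Device stream: stent strut thickness and polymer dose.
strut = rng.normal(0.0, 1.0, n)
dose = rng.normal(0.0, 1.0, n)

# Procedure stream: a delivery success score that depends on device characteristics.
proc_score = 0.5 - 0.3 * strut + 0.2 * dose + rng.normal(0.0, 1.0, n)

# Patient stream: restenosis risk depends on both device and procedural streams.
logit = -2.0 + 0.4 * strut - 0.5 * proc_score
restenosis = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit)))

# Stage 1: device characteristics -> procedural outcome.
stage1 = LinearRegression().fit(np.column_stack([strut, dose]), proc_score)

# Stage 2: device + procedural characteristics -> patient outcome.
X2 = np.column_stack([strut, dose, proc_score])
stage2 = LogisticRegression().fit(X2, restenosis)

print("device -> procedure coefficients:", np.round(stage1.coef_, 2))
print("device + procedure -> patient coefficients:", np.round(stage2.coef_[0], 2))
```

The point of the chained structure is that information flows across the silos: the procedural model borrows from the device stream, and the patient-outcome model borrows from both, rather than each stream being analyzed in isolation.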

Concluding Remarks

A key issue in the next generation of studies involves the development and implementation of pooling algorithms. The appropriateness of any pooling algorithm depends on the structure of the data, the data collection tools, and the completeness, maintenance, and documentation of data elements. Expanding our experience with pooling different data sources is the next step. New study designs are needed that exploit features of diverse information sources. There is some experience in pooling observational data with clinical trial data. These designs, such as hybrid designs, preference-based designs, and quasi-experimental designs, while available, have not been exploited to their full potential. Little experience exists for pooling data beyond the historical or concurrent observational control setting. Diverse data streams, such as those illustrated by the DES problem, are increasingly common. More focus is needed on the development of inferential tools that enable combining data appropriately and assessing the relationships among the streams in large databases.

With the increasing number of registries, approaches for building the infrastructure to enable data sharing must be developed. Very little attention and money have been allocated for sufficient data documentation and quality control. An additional consideration is how best to validate findings. What is the correct strategy for combining preclinical, clinical, and bench data? How do we minimize false discovery rates and determine which hypotheses are true and which are false?
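
On the false discovery question, one widely used starting point is the Benjamini-Hochberg step-up procedure, sketched below on a hypothetical vector of p-values from screening many associations across pooled data streams. It illustrates the mechanics of false discovery rate control only and is not a prescription for any particular pooled analysis.

```python
import numpy as np

def benjamini_hochberg(p_values, alpha=0.05):
    """Return a boolean mask of hypotheses rejected at FDR level alpha."""
    p = np.asarray(p_values, dtype=float)
    m = p.size
    order = np.argsort(p)
    ranked = p[order]
    # Find the largest k with p_(k) <= (k/m) * alpha; reject the k smallest p-values.
    thresholds = (np.arange(1, m + 1) / m) * alpha
    below = ranked <= thresholds
    k = np.max(np.nonzero(below)[0]) + 1 if below.any() else 0
    reject = np.zeros(m, dtype=bool)
    reject[order[:k]] = True
    return reject

# Hypothetical p-values from screening many device/procedure/outcome associations.
p_vals = [0.001, 0.008, 0.039, 0.041, 0.042, 0.06, 0.074, 0.205, 0.212, 0.9]
print(benjamini_hochberg(p_vals, alpha=0.05))
```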

Finally, we need to educate researchers, regulators, and policy makers in the interpretation of results from more diverse study designs and in the assumptions and limitations of these designs. The availability of large data streams does not guarantee valid results—thoughtful use of data sources and innovative analytical strategies will help produce valid information.


OBSERVATIONAL STUDIES

Wayne A. Ray, M.D., M.P.H.

Vanderbilt University


Observational studies of therapeutic interventions are critical for protecting the public health. However, high-profile, misleading observational studies, such as those of hormone replacement therapy (HRT), have materially undermined confidence in this methodology. While findings of observational studies are intrinsically more prone to uncertainty than those from randomized trials, at present many of these investigations have suboptimal methodology, which can be corrected. Common problems include elementary design errors; failure to identify a clinically meaningful t0, or start of follow-up; exposure and disease misclassification; use of overly broad end-points for safety studies; confounding by the healthy drug user effect; and marginal sample size. If observational studies are to play their needed role in clinical effectiveness studies, better training of epidemiologists to recognize and address these key issues is essential.

New technologies and expanding innovations in therapeutic interventions have led to an urgent need for expanded safety and efficacy studies. The logistical difficulties and slow pace of randomized controlled trials limit their use in many cases, and the RCT is also not appropriate for all research questions. The value of observational studies in addressing this dilemma and enabling research on many important clinical questions is illustrated by a number of findings regarding safety and efficacy that have been made in the past through observational designs. Prominent examples include the high risk of endometrial cancer associated with unopposed estrogen therapy and the mortality benefit of colonoscopy in colorectal cancer.

However, observational studies have been criticized as inadequate for this purpose, having yielded several controversial and misleading findings, such as the findings that HRT and vitamin E protect against cardiovascular disease and dementia, which were later shown to be inaccurate by randomized controlled trials. The HRT findings led to millions more women using these therapies without the expected benefits. The same pitfalls are present in efficacy and safety studies based on observational data, as illustrated by findings that demonstrated a protective effect of non-steroidal anti-inflammatory drugs (NSAIDs) on dementia.3 The effect of these well-publicized inaccurate findings is to lead researchers to discount the value of observational studies without exploring the source of error or analyzing the methodology.

3 Thal, L. J., S. H. Ferris, L. Kirby, G. A. Block, C. R. Lines, E. Yuen, C. Assaid, M. L. Nessly, B. A. Norman, C. C. Baranak, and S. A. Reines, for the Rofecoxib Protocol 078 Study Group. 2005. A randomized, double-blind study of rofecoxib in patients with mild cognitive impairment. Neuropsychopharmacology 30(6):1204-1215.

FIGURE 5-4 Notation used for observational studies in this paper.

A closer look reveals that these errors are really the predictable result of ignoring some basic pharmacoepidemiologic principles.

Figure 5-4 lays out the notation that will be followed throughout this paper. Consider a medication under study. Exposure (E) to a medication is either present (E+) or absent (E–) for various patients. In a clinical trial, individuals are randomized and, starting at t0, are followed forward in time, with occurrences of the end-points of the disease under study recorded for both E+ and E– groups. Observational studies also have E+ and E– groups, follow-up commences at a certain t0, and individuals are followed forward in time to determine end-points; however, there are some important differences. First, the exposure group (E) is determined not by randomization but by measurement; second, the choices made by providers and patients in an observational study lead to differences based on self-selection, some of which may confound real associations. Other potential problems that frequently surface in pharmacoepidemiology studies include a suboptimal t0; immortal person-time with respect to follow-up; misclassification of exposure (both at baseline and time-dependent); and misclassification of disease end-points, including overly broad or narrow designations. Potential confounders include the healthy user effect and variables that are time dependent, unavailable, or misclassified. Finally, the study may be inadequately powered, particularly in situations with infrequent end-points or chronic exposure.

The issue of a suboptimal t0, or beginning of follow-up, is best illustrated by first considering evaluation of a surgical intervention such as coronary artery bypass graft (CABG). An evaluation that started following patients 90 days after surgery—perhaps to wait for patients to stabilize post-op—would conveniently exclude perioperative mortality. With these data excluded, CABG would appear much better than it actually performs. Although this type of t0 error is obvious for surgical interventions, studies of medications often make it, with disastrous results. For example, consider a woman who starts HRT. Studies suggest that, as shown in Figure 5-5, there is an initial period of high risk for occurrence of coronary heart disease (CHD) and that this period of high risk abates with time (Ray, 2003). However, most of the epidemiologic studies of HRT began follow-up after this initial period, distorting their results. Simply ensuring that follow-up begins immediately after the start of therapy would greatly improve confidence in study findings.

FIGURE 5-5 Risk of developing serious CHD in women using HRT therapy.

SOURCE: Derived from Hulley, S., D. Grady, T. Bush, et al. 1998. Randomized trial of estrogen plus progestin for secondary prevention of coronary heart disease in postmenopausal women. Heart and Estrogen Replacement Study (HERS) Research Group. Journal of the American Medical Association 280:605-613; Ray, W. A. 2003. Evaluating medication effects outside of clinical trials: New-user designs. American Journal of Epidemiology 158(9):915-920.

Another common problem is the failure to consider drug exposure that changes over time, which commonly leads to underestimation of drug risk because attrition and dosing changes can obscure true effects. In an examination of benzodiazepine use, for example, after just 1 month fewer than 60 percent of patients on the drug at baseline were still using it, and by 1 year the proportion was less than 40 percent. Figure 5-6 illustrates the point: if a single-point-in-time measurement of drug exposure is used to determine the relative risk of falls, no effect is observed (1.02), whereas if time-dependent changes are taken into account, a 44 percent increased risk of falls is observed (Ray et al., 2002). Although some advocate intention-to-treat analysis as in clinical trials, in observational studies there is not necessarily an intention to treat, maintain treatment, or promote adherence, so adherence rates may be low and discontinuation rates very high.

FIGURE 5-6 Relative risk of benzodiazepine exposure determined through a single-point-in-time measurement.

SOURCE: Ray, W. A., P. B. Thapa, and P. Gideon. 2002. Misclassification of current benzodiazepine exposure by use of a single baseline measurement and its effects upon studies of injuries. Pharmacoepidemiology and Drug Safety 11(8):663-669. Reproduced with permission of John Wiley & Sons, Ltd.
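
The attenuation produced by a single baseline exposure measurement is easy to reproduce in a small simulation: if patients who discontinue the drug continue to be counted as exposed, the estimated relative risk drifts toward the null. All numbers below (monthly risk, true relative risk, discontinuation rate) are invented for illustration and are not taken from the benzodiazepine study.

```python
import numpy as np

rng = np.random.default_rng(1)
n, months = 20000, 12
base_rate = 0.01            # monthly fall risk while unexposed
true_rr = 1.5               # elevated risk only while actually taking the drug
stop_prob = 0.10            # chance of discontinuing in any given month

exposed_baseline = rng.random(n) < 0.5     # drug users at baseline
on_drug = exposed_baseline.copy()

events_td = np.zeros(2)      # falls by *current* exposure (time-dependent)
persontime_td = np.zeros(2)
events_bl = np.zeros(2)      # falls by *baseline* exposure only
persontime_bl = np.zeros(2)

for _ in range(months):
    risk = np.where(on_drug, base_rate * true_rr, base_rate)
    falls = rng.random(n) < risk
    for grp in (0, 1):
        # Time-dependent classification follows actual current use.
        cur = on_drug == bool(grp)
        events_td[grp] += falls[cur].sum()
        persontime_td[grp] += cur.sum()
        # Baseline-only classification ignores later discontinuation.
        bl = exposed_baseline == bool(grp)
        events_bl[grp] += falls[bl].sum()
        persontime_bl[grp] += bl.sum()
    # Some users discontinue each month (exposure changes over time).
    on_drug &= rng.random(n) >= stop_prob

rr_td = (events_td[1] / persontime_td[1]) / (events_td[0] / persontime_td[0])
rr_bl = (events_bl[1] / persontime_bl[1]) / (events_bl[0] / persontime_bl[0])
print(f"RR with time-dependent exposure: {rr_td:.2f}")
print(f"RR with baseline-only exposure : {rr_bl:.2f}")
```

In the benzodiazepine example described above, the attenuation was even more severe, driving the baseline-classified estimate essentially to the null.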

A third common issue is the use of overly broad end-points. The choice of end-points should differ between the two designs: although RCTs often use broad end-points appropriately to assess safety and efficacy, this approach may not be as useful in larger observational studies. A pitfall of the broad approach in analyzing safety end-points is obscuring more serious events by including them under less serious categories, such as classifying torsades de pointes as an “arrhythmia.” In addition, all-cause mortality, which is an important indicator in the closely defined, homogeneous populations of RCTs, is much more difficult to assess in the more heterogeneous and less controlled setting of observational studies. This can make certain therapies appear to confer benefits on mortality or general functional health when in fact none exist. For example, NSAID use has been shown statistically to confer a mortality benefit in observational studies that cannot be reproduced in randomized trials.

A final but important source of bias in observational studies is the healthy drug user effect. People who seek preventive interventions and take medications regularly are different from those who do not. This effect biases results in favor of medications because those who consistently take medication tend to be healthier. For example, a study of angiotensin receptor blockers (ARBs) in heart failure demonstrated a 30–50 percent reduction in cardiovascular mortality for persons who were “good compliers,” but with placebo. These data showed that outcomes were better predicted by adherence than by the therapy itself.

Given these potential problems, a recipe for a “false-negative safety study” would include the following: a marginally adequate sample size and the use of an exposure measure that is not time dependent or that includes substantial nonuser person-time. The end-points should be broad and perhaps detected from unvalidated computerized data. Similarly, one can create a “false-positive efficacy study” by focusing on an exposure that people seek out, whether it is one used for prevention, sought out by informed consumers, or dependent on patient reporting of symptoms. Second, the cohort is a large group of prevalent users who are survivors of the period of prior drug therapy, compared to a group of nonusers of drugs. Finally, we use an end-point—such as cardiovascular disease or mortality—that is strongly influenced by behavior.

The design of observational studies is a complex subject, but the previous discussion has outlined some starting points for the way forward. A first step is to separate observational analyses of safety from those of efficacy. For safety, the limitations that lead to false results are fairly easy to identify and counteract. A more difficult challenge is the need for infrastructure changes to reduce conflicts of interest among those who conduct safety studies. For efficacy, RCTs should generally be a required first step to ensure that the expected benefits of therapy exist for the population as a whole if not for the individual. Third, it is necessary to challenge the assumption that because observational data often already exist in a database, study design and analysis will be fast or easy. There is an enormous amount of work involved in thinking through the particular question at hand, how various biases might apply, and how study design might effectively avoid these pitfalls. Finally, it is time to train a generation of epidemiologists to be more familiar with the clinical and pharmacological principles that affect the use of observational data. This expertise will allow clinicians to better exploit the wealth of available observational data and will lead to improved study designs. These efforts also will improve the reviews of grants and manuscripts, two additional forces critically important to improving the quality of studies of healthcare interventions.


ENHANCING THE EVIDENCE TO IMPROVE PRACTICE: EXPERIMENTAL AND HYBRID STUDIES

A. John Rush, M.D.

Departments of Clinical Sciences and Psychiatry

The University of Texas Southwestern Medical Center at Dallas

Abstract

Efficacy studies establish treatments as safe, effective, and tolerable. Clinicians, however, need to know how, for whom, when (in the course of illness or in the course of multiple treatment steps), and in what settings specific treatments are best used. Variations in treatment tactics (e.g., dose, duration) are often required for patients of different ages or with co-morbid conditions, for example. Alternatively, treatments are sometimes combined to enhance outcomes, but for which patients is a particular combination better? At what treatment step(s) is/are particular treatment(s) best? When should a treatment be switched if patients are not responding? Is there a preferred sequence of treatments for specific patient groups?

This report illustrates how some of these clinically important questions can be addressed with observational data obtained when systematic practices are employed, or with new study designs (e.g., hybrid studies, equipoise stratified randomized designs) or post hoc (e.g., moderator) analyses. Suggestions for advancing this type of T2 translational research are provided.

Introduction

In the pursuit of new treatments, basic science focuses on elaborating our understanding of how the human organism works—often relying on nonhuman experiments to elucidate biological processes and functions. As this understanding grows, one attempts to determine what diseases might be better understood with this basic knowledge. For example, new “drugable” targets may be identified. Then, new molecules are developed and tested preclinically to define their effects on the targets, their effects in animal models of disease, and their safety.

Once these hurdles are passed, these potential treatments are tested in man. If successful, one has established efficacy and safety of the new drug in one or another condition. FDA approval ensues, and the new treatment is announced.

The primary outcome of this process—sometimes called T1 translational research or “bench to bedside” research—is the development of a new treatment (Woolf, 2008). This process entails the “effective translation of the new knowledge, mechanisms, and techniques generated by advances in basic science research into new approaches for prevention, diagnosis, and treatment of disease” (Fontanarosa and DeAngelis, 2002).

Alternatively, an established treatment for one disease may be found in clinical practice (or by additional basic laboratory testing) to be of potential utility in another condition (e.g., the use of selected antiepileptic medications in the treatment of bipolar disorder) (for example, Emrich, 1990).

Once a new treatment is defined as safe enough and effective, many issues remain. Specifically, how to apply the treatment in practice—sometimes called T2 translational research (Sung et al., 2003)—must be addressed. T2 translational research has several components: (1) at the patient/clinician level, how, when, for whom, and in what settings or contexts should the new treatment be provided; (2) how can the new treatment be implemented widely (disseminated); and (3) if widely implemented, what are the costs, cost efficiency, and cost consequences of properly using the treatment?

I suggest that T1 translational research should be called Translational Research and that T2 research be renamed Applications Research and divided into Clinical Implementation, Dissemination, and Systems/Policy research to further specify these different research enterprises, as implied by Woolf (2008).

This paper focuses on Clinical Implementation Research at the clinician/patient level. The following discussion attempts to identify the knowledge gaps that exist when a new treatment becomes available (i.e., it has established efficacy, safety, and regulatory approval). Major depressive disorder (MDD) (American Psychiatric Association, 2000) is used to illustrate the principles discussed and the issues that need to be addressed in this type of research.

Depression as a Case Example

Clinical depression is prevalent, typically chronic or recurring, disabling, and amenable to treatment with a wide range of interventions (American Psychiatric Association, 2000; U.S. Department of Health and Human Services et al., 1993). Similar to other medical syndromes, it is heterogeneous in terms of pathobiology, course of illness, genetic loading, and response to various treatments. It typically requires longer-term, not simply brief, acute management. These properties are common to other major medical disorders (e.g., congestive heart failure, cancer, hypertension, migraine headaches, epilepsy). Therefore, the following uses depression as an example to illustrate the principles proposed.


Conceptualizing Clinical Applications Research

When a new antidepressant is released, it is known to be: (1) more effective than placebo; (2) as effective as other available medications; (3) safe and well-enough tolerated to be a sensible option; and, sometimes, (4) of established longer-term efficacy based on randomized, placebo-controlled discontinuation trials.

What remain unknown (Figure 5-7) are the answers to a plethora of clinically relevant questions, in addition to how well the treatment works overall in practice. Specifically, how, when, for whom, and in what settings is the new treatment best used? Historically, answers to these questions have been relegated to the “art of medicine”—meaning that they are never empirically answered. These evidence deficits in turn lead to high variance in how treatments are used and in the outcomes obtained.

Why are these questions unanswered? Perhaps it is assumed that clinicians will learn on their own how best to dispense the treatment. Alternatively, this type of research may be viewed as too simple or of too little public health significance to merit funding. Or perhaps systems of care will decide these issues based on bottom-line, short-term costs. In fact, without answers to these questions, far-from-optimal outcomes are likely with the treatment, and its cost efficiency is reduced.

Figure 5-7 suggests a conceptual map of the key factors that affect outcomes of any treatment. The treatments are sometimes called treatment strategies. The remaining factors (how, whom, when, where) inform the treatment tactics (Crismon et al., 1999; Rush and Kupfer, 1995).

FIGURE 5-7 T2 Translational research.

Treatment guidelines often indicate which strategies are reasonable options at various steps in treatment (e.g., which medications are best used in the first, second, or subsequent steps). Guidelines also may recommend tactics for delivering the treatment (Rush, 2005; Rush and Prien, 1995). These recommendations more often than not rest largely or entirely on clinical consensus rather than on definitive evidence.


The “How” Factors. How a treatment is delivered clearly affects the outcome. If the dose is too low, efficacy is low. If it is too high, efficacy may again be reduced and/or side effects may ensue, resulting in poor outcomes. Other “how” factors include visit frequency, rate of dose escalation, and the diligence with which the dose and duration are managed so that an optimal chance of benefit can be achieved. These “how” factors, like the other factors, affect outcomes and patient retention or attrition.

To illustrate the importance of how a treatment is delivered, consider Figure 5-8. A treatment algorithm (which provided both strategic and tactical recommendations and included the routine measurement of symptoms and tolerability to inform dose adjustments) achieved greater depressive symptom reduction than treatment as usual (Trivedi et al., 2004a), despite the availability of the same antidepressant medications to both groups. Thus, a systematic approach that enhanced the quality of care resulted in better outcomes than more widely varying routine practice.

As further evidence of the importance of how a treatment is delivered, consider recent results from the National Institute of Mental Health (NIMH) multisite Sequenced Treatment Alternatives to Relieve Depression (STAR*D) trial (Fava et al., 2003; Rush et al., 2004). Typical practice entails a 2–4 week trial of an antidepressant, after which, when little effect is seen, the treatment is switched. The STAR*D trial revealed that one-third of those who ultimately responded after up to 14 weeks of treatment did so after 6 weeks of medication (Trivedi et al., 2006a). These new data argue for longer trials that are likely to improve response rates.


FIGURE 5-8 Adjusted mean depressive symptom scores on the IDS-C30.

NOTE: IDS-C30 = 30-item Inventory of Depressive Symptomatology–Clinician-rated.

SOURCE: Trivedi, M. H., A. J. Rush, M. L. Crismon, T. M. Kashner, M. G. Toprac, T. J. Carmody, T. Key, M. M. Biggs, K. Shores-Wilson, B. Witte, T. Suppes, A. L. Miller, K. Z. Altshuler, and S. P. Shon. 2004 (July). Clinical results for patients with major depressive disorder in the Texas Medication Algorithm Project. Archives of General Psychiatry 61(7):669-680. Copyright © 2004 American Medical Association. All rights reserved.

The “When” Factors. When to use a new treatment is also unclear. “When” refers to when in the course of an illness (e.g., earlier or later), when in the course of multiple treatment steps, or when in the context of treatment that has produced only a partial response. To illustrate the importance of these “when” factors, Figure 5-9 shows that remission is least likely and slowest in depressed patients with a recurrent course and a chronic (≥2 year) index episode, while it is most rapid and most likely in nonchronic, nonrecurrent patients (Rush et al., 2008). This previously unavailable information tells clinicians that a longer treatment trial (e.g., 9–12 weeks) is especially needed for more chronic and recurrent depressive illnesses. Furthermore, relapse is most likely for those with chronic and recurrent depressions (Figure 5-10).

In addition, where a treatment falls in the course of multiple treatment steps can affect outcomes. Often new treatments are used only after several prior standard treatments. Is this preferred? In STAR*D, remission rates were lower when any treatment was used later in the step sequence: only 33 percent of patients remitted after the first treatment step, 30 percent after the second, and 14 percent after the third and fourth steps (Rush et al., 2006).


FIGURE 5-9 Time to remission by prior course of illness.

NOTE: CNR = chronic and nonrecurrent; CR = chronic and recurrent course; NCNR = neither chronic nor recurrent; NCR = nonchronic but recurrent.

The “For Whom” Factors. The third domain that affects outcome involves the patient groups for whom the treatment is best. Since depression is heterogeneous with regard to response, no one treatment works for all. While some evidence suggests that medication responses run in families (Stern et al., 1980), the “for whom” question is never addressed in efficacy trials, perhaps in part because efficacy trials enroll symptomatic volunteers with little or no co-morbid psychiatric or general medical pathology and with minimal chronicity and treatment resistance (i.e., prior failed treatment trials) (Table 5-1).

Narrowly defined efficacy samples arguably enhance internal validity by excluding subjects with concurrent co-morbid disorders that could affect efficacy or tolerability. Such samples, however, cannot address the “for whom” question. To illustrate this point, consider the results of our recent finding (Wisniewski et al., 2007) that only 635 of 2,855 depressed STAR*D participants (22 percent) would have qualified for typical efficacy trials conducted for registration purposes. Remission rates were 35 percent for efficacy trial–eligible patients and 25 percent for efficacy trial–ineligible patients. Similarly, for depressed outpatients with three to four concurrent general medical conditions (GMCs), the odds ratio of remission was 0.47 (for three) and 0.52 (for four or more), as compared to 1.0 for no co-morbid GMCs and 0.83 for one to two co-morbid GMCs (Trivedi et al., 2006b). Similar results were found if depression was accompanied by several anxiety disorders (Fava et al., 2008; Trivedi et al., 2006b). These findings question the value of antidepressant medication in those with more GMCs—a fact that could not be learned from efficacy trials.

TABLE 5-1 Population Gaps

Parameter              Symptomatic Volunteers    Typical Patients
Chronically ill                                  +++
Concurrent Axis I      +                         +++
Concurrent Axis III    +                         +++
Treatment-resistant    ±                         +++
Suicidal                                         ++
Substance abusing                                ++
Will accept placebo    +                         ±

FIGURE 5-10 Time to relapse by prior course of illness.

NOTE: CNR = chronic and nonrecurrent; CR = chronic and recurrent course; NCNR = neither chronic nor recurrent; NCR = nonchronic but recurrent.


The “In What Settings” Factor. Finally, the practice setting or context may well affect outcome. Different settings are associated with different kinds of patients, with different degrees of treatment resistance, different co-morbidities, different levels of social support or stress, different treatment procedures (e.g., visit frequency, dose escalation profiles), and different prior histories. Thus, setting itself is likely a highly relevant parameter, as it encompasses several factors that can affect outcomes.

The above conceptual model (illustrated by case examples from depression) is applicable to the treatment of most diseases. The answers to the “how?,” “when?,” “for whom?,” and “in what setting?” questions will better define the best (safest, most effective) treatment for particular patients treated under specific conditions. To develop Clinical Implementation Research, two key issues must be resolved: (1) designing cost-efficient, rapidly executed studies to obtain the answers and (2) developing a consensus by which to prioritize the questions to be answered.

Trial Design Options

Efficacy

Efficacy trials carefully control the parameters of how, whom, when, and in what settings so that, when treatments are randomly assigned, should a treatment difference (e.g., drug versus placebo or drug versus drug) be found, one can ascribe the differences in outcomes to the treatments with high certainty. A classic effectiveness trial, by contrast, can be seen as allowing all four parameters to vary. In fact, variance is sought, as are large samples, so that post hoc moderator analyses might be conducted to generate hypotheses about the patient groups for which one treatment is clearly better than another (Kraemer et al., 2002). Alternatively, one can ask whether between-treatment differences are greater in one setting (e.g., primary care) versus another (e.g., psychiatric care). Such effectiveness studies require large samples and simple outcomes—so-called Practical Clinical Trials (March et al., 2005; Tunis et al., 2003). They usually entail randomization and are most easily conducted across sites with common electronic records and clinicians who routinely use the same primary outcome measures (e.g., blood pressure, or a common depression rating scale like the 16-item Quick Inventory of Depressive Symptomatology–Self-Report [QIDS-SR16]) (Rush et al., 2003; Trivedi et al., 2004b).

Effectiveness

These effectiveness trials have the advantage of “generalizability” and the potential for identifying target populations, settings, preferred treatment procedures, or optimal timing (the “when” issue) for the use of a treatment. Once these moderators are identified, they must be prospectively tested to be valid (Kraemer et al., 2002). If differences between treatments A and B are found in an effectiveness trial, the cause of the difference could be patient subgroups, the different treatment procedures in use, the timing of the use of the treatment, etc., or some combination. If the sample is large, however, the randomization should usually guard against these parameters being causally related to outcomes.
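
At its simplest, the post hoc moderator analysis described here is a treatment-by-covariate interaction test fitted to a large effectiveness sample. The sketch below fits such a model with statsmodels on simulated data; the variable names (treatment, chronic, remission) are placeholders and the effect sizes are invented, so this is a schematic of the approach rather than any STAR*D analysis.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 4000

# Simulated effectiveness-trial data (all values hypothetical).
df = pd.DataFrame({
    "treatment": rng.integers(0, 2, n),   # randomized arm A (0) vs. B (1)
    "chronic": rng.integers(0, 2, n),     # candidate moderator
})
# In this simulation, treatment B helps only among non-chronic patients.
logit = -0.5 + 0.6 * df.treatment * (1 - df.chronic) - 0.3 * df.chronic
df["remission"] = rng.binomial(1, 1 / (1 + np.exp(-logit)))

# The treatment-by-moderator interaction term is the moderator test.
model = smf.logit("remission ~ treatment * chronic", data=df).fit(disp=False)
print(model.summary().tables[1])
```

A significant interaction term flags a candidate moderator; as emphasized above, such findings are hypothesis generating and require prospective confirmation.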

Hybrid

An alternative to a full effectiveness trial is a “hybrid trial” (Rush, 1999). Hybrid trials allow variance in one or more of the above four Treatment Outcome Relevant Parameters (TORPs) while controlling some or all of the remaining parameters. One can randomize to treatments, to different treatment procedures, or to different populations, etc.

The STAR*D trial (Fava et al., 2003; Rush et al., 2004) was a hybrid trial. The primary question in STAR*D was “what is the next best treatment if the initial, second, or third treatment steps have failed?” In essence, what are the best treatments for treatment-resistant depressions (i.e., depressions that have not benefited from one or several prior treatments)? Results had to be applicable to primary and psychiatric care settings, and generalizable to typical patients in practice (i.e., with common co-morbidities and a level of depression for which medications would typically be used as a first step). Thus, a full range of variance in patients (“for whom”) was allowed, but settings were restricted to primary and psychiatric care (public and private).

We wanted to know what the next best treatment is for depressions that did not benefit after one or several prior adequate treatment trials—not poorly delivered treatment. We therefore controlled for both the “how” and “when” parameters. In terms of the “how” parameter, we had to ensure that treatment was well delivered (i.e., that sufficient doses and durations were used in each treatment step) so that a failure to benefit from a treatment was likely due to the failure of the treatment itself and not to a failure to deliver it.

We had to control for the “when” parameter to define the number of prior failed treatments and to enroll only nonresistant patients (i.e., those with no prior treatment failures) in the first step. Thus, at enrollment eligible patients were defined as not treatment resistant. The first step was a single selective serotonin reuptake inhibitor (SSRI). Then, by using randomized treatment assignment in the second, third, and fourth treatment steps, we could isolate which of several different treatment options would be best for patients for whom one, two, or three prior treatments (each provided in the study itself) had failed.


Since both primary and psychiatric care settings were involved in the study, there was a risk that setting could affect outcome. We found, however, that both the types of patients and the fidelity to protocol-recommended, guideline-based treatment were similar across the two types of settings. Consequently, we found outcomes to be comparable across settings throughout all four treatment steps in the study.

This sort of hybrid design allows rather clear causal attribution to be made when between-treatment differences occur. For example, some advantage of bupropion-SR over buspirone as augmentation of citalopram was found in the second step. This difference was not due to setting, differences in treatment procedures, or when in the course of treatment these two treatments were used. In addition, hybrid trials of sufficient size can be subjected to moderator analyses (Rush et al., 2008).

Registries

Registries also can provide important information about the how, whom, when, and what setting parameters noted above. Since STAR*D patients received a single, well-delivered SSRI (citalopram) in the first step, those who did not need the second step formed a large population that was followed for up to a year after this first step. These sorts of registry-like data help to define the long-term course of treated depression, and such registry cohorts also provide safety and tolerability data. For example, Figure 5-10 shows that among depressed patients who do well enough to enter long-term treatment, those with a more chronic or recurrent prior course have the worst prognosis, even in treatment. Registry cohorts also may suggest genetic features relevant to side effects (Laje et al., 2007) or longer term outcomes. As with any observational study, however, replication is essential.

Other Designs

Finally, a comment about other study designs is in order—in particular adaptive designs (Murphy et al., 2007; Pineau et al., 2007) and equipoise stratified randomized designs (Lavori et al., 2001). Both designs attempt to mimic practice and allow prospective evaluation of common practice procedures about which there is controversy. For example, adaptive designs can determine whether continuing the same treatment longer, switching to a different treatment, or adding a second treatment to the first is preferred overall or for certain patient subgroups. As a further illustration, when depressed patients worsen after months of a good response on treatment, does one raise the dose, hold and wait, or add an augmenting agent? At present, we do not know. While a registry may identify the common practices for these cases, without randomization we cannot be sure of the next best step.

Another attempt to mimic practice while retaining randomization entails the equipoise stratified randomized design (ESRD) (Lavori et al., 2001), which was used in STAR*D. It allowed patients with an inadequate benefit from the first treatment step (citalopram) to eliminate certain treatment strategies in the second step, while accepting the remaining treatment strategies, all of which entailed randomization to various treatment options. To illustrate, the second step provided both (1) a switch strategy (randomization to one of four new treatments after the first step was discontinued) (i.e., citalopram was stopped; the new treatment was begun) and (2) an augmentation strategy (to one of three new treatments to be added to continuing citalopram). Patients could decline one of these strategies (e.g., eliminate augmentation) while accepting the switch strategy and the subsequent randomization to one of four treatments. This design was based on clinical experience, which suggested that patients who were substantially better with the first treatment—but not entirely well so that additional treatment would be needed—would decline switching (to avoid losing the benefit from the step 1 treatment). On the other hand, we expected that those with little benefit and/or high side effects from the first treatment would prefer to switch and decline augmentation. This is, in fact, what we found (Wisniewski et al., 2007). This ESRD allowed participants to be randomized to the specific second step treatments that they were more likely to receive in practice, so that results are generalizable to practice.
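The core logic of equipoise-stratified randomization, randomizing each patient only among the options within the strategies he or she accepts, can be sketched as follows. The option labels and the simple unblocked draw are hypothetical simplifications; this is not the actual STAR*D randomization procedure.

# A hypothetical sketch of equipoise-stratified randomization for a second
# treatment step: the patient declares acceptable strategies, and randomization
# occurs only among options within those strategies (illustrative labels only).
import random

SWITCH_OPTIONS = ["switch_option_1", "switch_option_2", "switch_option_3", "switch_option_4"]
AUGMENT_OPTIONS = ["augment_option_1", "augment_option_2", "augment_option_3"]

def randomize_second_step(accepts_switch: bool, accepts_augment: bool,
                          rng: random.Random) -> str:
    """Randomize a patient only among options within the strategies they accept."""
    eligible = []
    if accepts_switch:
        eligible.extend(SWITCH_OPTIONS)
    if accepts_augment:
        eligible.extend(AUGMENT_OPTIONS)
    if not eligible:
        raise ValueError("A patient must accept at least one strategy to be randomized.")
    return rng.choice(eligible)  # real trials would use balanced allocation within each stratum

rng = random.Random(42)
# A patient with little benefit from step 1 might decline augmentation but accept switching:
print(randomize_second_step(accepts_switch=True, accepts_augment=False, rng=rng))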

Our conclusion is that effectiveness, hybrid, registry, adaptive treatment, and other designs all can inform practice. The key issue is to identify the most important questions to be addressed in Clinical Implementation Research.

Defining the Key Clinical Implementation Questions

The discussion above illustrates that a host of clinically critical questions remain once a new treatment becomes available. These questions can be grouped into four conceptual domains (how, for whom, when, what setting). A range of study designs (registry/cohort studies, effectiveness, hybrid, adaptive, and ESRD) is available. The central issue, however, is how to identify the most important questions to be addressed by Clinical Implementation Research.

Ideally, all stakeholders would have the same question in mind at the outset, but this is often not the case. In fact, the key questions likely vary based on the disease and the available knowledge about treatment of the disease. For example, for STAR*D we wanted to know the next best treatment if the first (or a subsequent one) failed. For Parkinson’s disease, it could be how to manage the depression or prevent the dementia. For HIV, it could be how to manage lipodystrophy or when to use specific combination treatments.

In addition, the perspectives of various stakeholders differ. Patients may be more concerned with side effects, adherence, or quality of life. For clinicians, it may be symptom control. For payers, it may be cost recovery or defining the best way to implement procedures. For family members, it could be how to reduce care burden.

Other parameters that affect selection of the key questions for study include (1) Will the answer change practice?, (2) Will the answer change our understanding of the disorder?, (3) Will the answer have an enduring shelf life?, and (4) Will the answer reduce wide practice variations or resolve common controversies about how to manage the disease? If a procedure is commonly used but supported by little evidence, the importance of evaluating it may be particularly high—especially if it is substantially more or less costly than its alternatives (e.g., a diuretic versus an ACE inhibitor for hypertension).

One way to define the key questions is to use disease focus groups, which could be accomplished by Web meetings or in-person meetings or perhaps by convening task forces that report to Councils of specific NIH Institutes. Based on registries or large healthcare use databases or literature reviews—perhaps commissioned by AHRQ—one could identify common practices for which there is wide practice variation (or uncommon practices for which there is great promise), with little evidence for which of these alternatives is most effective, safe, or cost-efficient.

Other information sources that could help to define these key questions include secondary analyses of available large trials, data from current registries, development of registries to identify common practices or potential changes in practice, and mining of large databases (e.g., from HMOs).

I would suggest that each NIH Institute select one to three disease targets based on the public health impact of the diseases and the potential for better prevention or treatment, given current practice, practice variation, cost impact, and knowledge about and availability of the interventions. Each Institute could then convene a consensus conference, with diverse stakeholder participation, to identify the one or two questions of highest priority. From this consensus, requests for applications (RFAs) could be released and contracts let to address these questions in a timely and focused fashion. A significant annual financial commitment from the relevant Institutes should be made to Clinical Implementation Research (T2) as well as System Implementation Research (T2).


Reengineering Practice for T2 Research

Not only must the key questions for specific diseases be identified, but the practice “system” also needs to be reengineered to facilitate such efforts. With such reengineering, the cost of this research should go down, the system can learn as it goes, and answers can be provided much more rapidly. Obvious suggestions include (1) registries for difficult-to-treat diseases to raise hypotheses about treatment for whom and when, or to identify safety/tolerability signals in EMRs, (2) agreement on common outcome measures that have both research and clinical relevance, (3) payment to providers to obtain these measures if not part of current care (e.g., function at work, absenteeism, role function as parent, student, etc.), (4) training in the basics of clinical research for clinicians who could participate, so that collaboration is facilitated, and (5) payment to clinics in the system for research time and effort if needed.

Conclusion

While major treatment advances have been realized from basic research, it is clear that simply making a new treatment available to clinicians is not sufficient to ensure its optimal, appropriate, and safe use. How, for whom, when, and in what context a treatment is best used, and the cost implications of these decisions, deserve higher emphasis in funding and prioritization than they have previously received. A variety of design options are available. With systems of care now using electronic medical records, large practical clinical trials are feasible. One major hurdle remains: how to select the most important questions for prospective study to ensure results will change practice, enhance outcomes, improve cost efficiency, and/or make treatments safer.

To define these key questions, one must engage key stakeholders, focus on particular diseases, and engage care systems or develop specialized networks in which the research can be conducted. Finally, once the questions are defined, designs must be identified or developed to obtain the answers.

Institute leadership from across the NIH with critical input and collaboration from clinicians, patients, investigators, and payers is a prerequisite. Finally, either additional funding targeted at these questions or a shift in already very restricted resources is called for. Without these commitments, how, when, for whom, and in what setting a treatment is best will remain the “art of medicine,” rather than the science it could be.


ACCOMMODATING GENETIC VARIATION AS A STANDARD FEATURE OF CLINICAL RESEARCH

Isaac Kohane, M.D., Ph.D.

Harvard Medical School


Large numbers of subjects are needed to obtain reproducible results relating disease characteristics to rare events or weak effects such as those measured for common genetic variants. These numbers appear to be much higher than the 3,000–5,000 that was characteristic of such studies only 5 years ago. The costs of assembling, phenotyping, and studying these large populations are substantial, estimated at $3 billion for 500,000 individuals. Fortunately, the informational by-products of routine clinical care can be used to bring phenotyping and sample acquisition to the same high-throughput, commodity price-point as is currently true of genotyping costs. The National Center for Biomedical Computing, Informatics for Integrating Biology to the Bedside (i2b2), provides a concrete and freely available demonstration of how such efficiencies in discovery research can be delivered today without creating an entirely parallel biomedical research infrastructure and at an order of magnitude lower cost.

Although genomics is poised to have a significant impact on clinical care, the medical system is relatively ignorant about genetics. A classic example is the surprising result of a recent survey showing that although 30–40 percent of primary care practitioners had ordered a genetic test for cancer screening in the prior year, this was not driven by expected predictors such as a patient’s family history or the education of the practitioner, but rather by patient requests for the test (Wideroff et al., 2003). The genomic era poses the questions we are already asking about secondary use of data in sharper form, and as such it is a useful lens for examining those problems; even beyond genetics and genomics, the same issues come back again and again. Nonetheless, this brief overview will address how we might instrument the healthcare system for discovery research in the genomic era.

Determining true genetic associations is difficult, as illustrated by a meta-analysis done by Hirschhorn of 13 studies of a single nucleotide polymorphism (SNP) that results in an amino acid substitution in the protein PPAR-gamma. This substitution has long been suspected of being implicated in Type 2 diabetes susceptibility. The odds ratios reported by the individual studies are all over the map, and only when these data are considered in total is it clear that the polymorphism is actually slightly protective for Type 2 diabetes (Figure 5-11) (Altshuler et al., 2000). This finding illustrates two key issues: (1) research on common variants will need to include the appropriate sample size. Rather than sample sizes of 100 or even 1,000 patients, as in these 13 underpowered studies, research will require populations on the order of 10,000 patients; and (2) the large number of SNPs tested (current tests incorporate 500,000 SNPs), coupled with relatively inexpensive and fast analyses, will likely result in an overwhelming number of misleading findings and associations. The commercialization of genomic sequencing and screening will likely compound these issues.

FIGURE 5-11 Comparison of studies for PPARγ Pro12Ala and Type 2 diabetes susceptibility.
SOURCE: Adapted by permission from Macmillan Publishers, Ltd. Nature Genetics 26(1):76-80. Copyright © 2000.

A significant threat to genomic medicine, therefore, is the phenomenon of the “incidentalome,” in which the dangers of a large N and a small p(D) (prior probability of disease) contribute to the discovery of multiple abnormal genomic findings. If all of these findings are pursued without thought, the ramifications for clinicians, patients, and the health system will bring into question the overall societal benefit of genomic medicine (Kohane et al., 2006). For example, testing 100,000 individuals with a genetic test that is 100 percent sensitive and 99.99 percent specific will lead to 10 false positives. If a commercially available DNA chip runs 10,000 independent gene tests, roughly 60 percent of the population would have at least one false-positive result. Current genetic tests have lower specificity and sensitivity (perhaps 80 percent and 90 percent, respectively), but because their utilization in practice is limited and clinicians tend to order them when there is a clinical indication (thereby increasing the prior likelihood that the patient has the disease), we have fortunately not yet been assaulted by a tsunami of false-positive results. However, the emerging commercial approach of enabling broad population screens and conducting many tests in parallel, without first enriching for risk, threatens to greatly increase the number of spurious findings. And because it is no more difficult to get a 100,000-SNP chip approved by the FDA than a 100-SNP chip, pressures toward the incidentalome (the universe of all possible false-positive findings) are very substantial and will increasingly present significant challenges to providers, patients, and insurers.
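The arithmetic behind these figures can be reproduced directly. The short calculation below assumes independent tests and a screened population in which essentially no one has the conditions in question; the roughly 63 percent it yields for a 10,000-test chip is consistent with the 60 percent figure cited above.

# Worked false-positive arithmetic for the incidentalome example (assumes
# independent tests and a population essentially free of the conditions screened).
population = 100_000
specificity = 0.9999          # 99.99 percent specific
tests_per_chip = 10_000       # independent gene tests on one chip

# Single test: expected false positives among unaffected individuals.
false_positives_single = population * (1 - specificity)
print(f"Single test: ~{false_positives_single:.0f} false positives")        # ~10

# Whole chip: probability that one person gets at least one false-positive result.
p_at_least_one = 1 - specificity ** tests_per_chip
print(f"Chip of {tests_per_chip} tests: {p_at_least_one:.0%} of people "
      "have at least one false positive")                                   # ~63%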

To overcome these problems, the field should focus on approaches and opportunities to garner a large number of appropriate patients (N). The three prongs of instrumentation that are needed to efficiently reach a large N include high-throughput genotyping, high-throughput phenotyping, and high-throughput sample acquisition. The emphasis on efficiency is paramount given resource constraints. In the U.S. Department of Health and Human Services, for example, the Secretary’s Advisory Committee on Genetics, Health and Society (SACGHS) recognizes the significant health value of a 500,000 to 1 million subject study to understand the interaction between genes and environment; however, such a study is estimated to cost about $3 billion. Likewise, a pediatric study launched by the NIH involving merely 100,000 individuals will cost an estimated $1–$2 billion over the next two decades. Given the number of similar large-scale genomic studies that could be initiated in the coming decades, developing efficient and inexpensive approaches to obtain data of the needed quality and quantity is of utmost importance.

With respect to the three prongs of instrumentation, only high-throughput genotyping is in place and with commoditization the price is rapidly dropping—currently $250–$500 for 500,000 SNPs. The remainder of my discussion will focus on efforts to bring greater efficiency and affordability to the processes of phenotyping and sample acquisition and, in particular, on several new open source tools that aim to help the healthcare enterprise better capture the information and bioproducts produced during the course of clinical care such that they can be used effectively for discovery research.

An important component of any analysis is being able to obtain the “right” populations through phenotyping. To develop an appropriate approach, we have collaborated with computer scientists and software engineers and are working to assess a wide range of phenotypes and diseases—from asthma to major depression, rheumatoid arthritis, essential hypertension, and other common diseases. The following example focuses on efforts to translate genetic findings to improve clinical outcomes in the treatment of asthma. Several colleagues had identified a collection of SNPs in populations in Costa Rica and China that moderately distinguished asthmatics who responded to glucocorticoid therapy from those who did not. Determining the relevance of these findings to clinical practice in Boston required the identification of the appropriate set of patients to study. How could we identify these patients through our computerized health records? Because billing codes are too coarse-grained and biased for this type of analysis, we used automated natural language processing to evaluate the text of doctors’ notes in online health records. Improving this technique to the point that it was useful was quite challenging, but ultimately we were able to quickly, reproducibly, and accurately stratify 96,000 out of 2.5 million patients for disease severity, pharmacoresponsiveness, and exposures. With cases and controls (drawn from the extremes) re-consented and biomaterials obtained, we were able to identify responders and nonresponders to glucocorticoid therapy. If this type of system can be implemented and successfully used across many systems, high-throughput phenotyping may be achievable at the national level. Indeed, over 15 large academic centers have adopted the i2b2 software (freely available under an open source license), so there is some reason for optimism in this regard.
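As a purely illustrative sketch of what phenotyping from note text involves, the fragment below flags asthma mentions and corticosteroid exposure with a few regular expressions. Real systems, including the i2b2 NLP pipeline, handle negation, context, and concept extraction far more thoroughly; the patterns and example note here are hypothetical.

# Deliberately minimal, hypothetical rule-based phenotyping from clinical note
# text (nothing like the full i2b2 NLP pipeline).
import re

STEROID_PATTERN = re.compile(r"\b(prednisone|glucocorticoid|inhaled corticosteroid)\b", re.I)
ASTHMA_PATTERN = re.compile(r"\basthma\b", re.I)
NEGATION_PATTERN = re.compile(r"\bno (history of )?asthma\b", re.I)

def phenotype_note(note_text: str) -> dict:
    """Flag a single note for an asthma mention and for corticosteroid exposure."""
    has_asthma = bool(ASTHMA_PATTERN.search(note_text)) and not NEGATION_PATTERN.search(note_text)
    on_steroids = bool(STEROID_PATTERN.search(note_text))
    return {"asthma": has_asthma, "steroid_exposure": on_steroids}

print(phenotype_note("Pt with moderate persistent asthma, started inhaled corticosteroid."))
# {'asthma': True, 'steroid_exposure': True}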

Another significant barrier is obtaining the biosamples for any phenotyped population. That is, how do we find the samples to match the phenotyped patients just identified through natural language processing? Initial efforts to obtain samples and consent entailed outreach through primary care practitioners to patients, a process that was resource- and time-intensive. The newly developed Crimson system, pioneered by Dr. Bry at the Brigham and Women’s Hospital in Boston, is being tested as an alternative and more efficient way to unite patient phenotype with genotype data. This system takes advantage of the many biosamples collected by laboratories in the routine course of care but ultimately discarded after use. Crimson is able to identify when these samples match up with phenotyped populations (such as the 96,000 asthmatics identified in our previous example). The end result is efficient acquisition of real biological samples—which can be used for a number of genomic tests and biological assays—matched with a rich set of known phenotypes. We have obtained 8,000 samples to date, with over 5,000 released for analysis. The opportunities presented by these richly annotated biospecimens are substantial, whether through DNA analysis by gene array, genomewide association studies, or SNP analyses; the identification of new serum/plasma markers; auto-antibody studies; testing of new antibiotics or antiviral compounds; or metabolism studies of clinical isolates.
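Conceptually, the matching step amounts to joining a phenotyped cohort against a laboratory's inventory of residual samples. The toy example below uses hypothetical identifiers and a simple pandas merge to illustrate the idea; it is not the Crimson system's implementation.

# Toy illustration of matching a phenotyped cohort to leftover clinical samples
# (hypothetical IDs and columns; not the Crimson system itself).
import pandas as pd

phenotyped_cohort = pd.DataFrame({
    "patient_id": [101, 102, 103, 104],
    "phenotype": ["steroid_responder", "nonresponder", "steroid_responder", "nonresponder"],
})
residual_samples = pd.DataFrame({
    "patient_id": [102, 103, 105],
    "sample_type": ["serum", "whole_blood", "serum"],
    "collected": ["2008-01-12", "2008-02-03", "2008-02-20"],
})

# Inner join keeps only patients who have both a phenotype and a leftover sample.
matched = phenotyped_cohort.merge(residual_samples, on="patient_id", how="inner")
print(matched)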

If these advances lead to high-throughput phenotyping and sample acquisition, we can, within the decade, decrease the costs of large-scale genomic studies significantly. In contrast to the estimated $3 billion needed for the SACGHS study of 1 million patients, in 3 years we might expect such a study to cost less than $150 million and take less than one-tenth of the time to execute. These order-of-magnitude changes will significantly change which studies can and cannot be done.


Instrumenting the health enterprise has important implications outside of genomic research as well. Existing databases such as the data mart maintained by Partners HealthCare can be used for analyses aimed at detecting safety or risk signals, such as the increased cardiovascular risk in patients taking Vioxx (Brownstein et al., 2007). As we move forward, it is therefore important to consider how the healthcare enterprise can be used both for discovery research and for surveillance. Finally, it is worth noting that health is not limited to the provision of healthcare, and personally controlled health records (Kohane and Altman, 2005; Riva et al., 2001; Simons et al., 2005) may provide the tools to instrument the rest of the health care that occurs outside the provider-based healthcare system.

In summary, several specific actions will help to accelerate progress. The first is increased investment in healthcare IT; these tools will not only improve the quality of delivered health care but also increase the quality of secondary uses of electronically captured data. Second, increased transparency in both regulation and patient autonomy is needed to resolve the many worries (often unjustified) about HIPAA that prevent the broader implementation of these systems and approaches. With appropriate education, HIPAA should not present an obstacle to research. Third, we need the continued development of an informatics-savvy healthcare research workforce that understands the relationships between health information, genomics, and biology. And finally, the most important step is to create a safe harbor for methodological “bake-offs” that challenge researchers to experiment with the analysis of large datasets. For example, the protein-folding community has for nearly a decade sponsored contests that pit various methodologies against one another to see which can best predict, computationally, how a given protein sequence will fold. This type of safe harbor has led to innovation in computational methodologies. Yet these types of challenges and safe harbors do not exist for equally complex areas in clinical medicine—such as predicting the risk of recurrent breast cancer (e.g., the Oncotype or MammaPrint gene expression tests) or improving natural language processing approaches to phenotyping of patients. To have an open and transparent discussion about methodological strengths and weaknesses, data should be made available and these biomarkers and studies tested. However, there is no such test bed available for methodologists around the world seeking to improve the state of the art. For the safe and meaningful conduct of biomedical research, particularly in genomics, it is essential that we start testing our data, our methodologies, and our findings.


PHASED INTRODUCTION AND PAYMENT FOR INTERVENTIONS UNDER PROTOCOL

Wade M. Aubry, M.D.

Senior Advisor

Health Technology Center


Coverage of health interventions has historically been a binary decision by Medicare and commercial health plans. Over the last two decades, however, the concept of phased introduction and payment for emerging technologies under protocol, or “coverage with evidence development (CED)” has evolved as a flexible or conditional alternative to a complete denial of coverage. An important early example of this approach from the 1990s was the support of commercial payers such as Blue Cross Blue Shield Plans for patient care costs of high-priority National Cancer Institute (NCI)-sponsored randomized clinical trials evaluating high-dose chemotherapy with autologous bone marrow transplantation (HDC/ABMT) compared to conventional-dose chemotherapy for the treatment of metastatic breast cancer. Importantly, the financial support for this investigational treatment was contractually facilitated by the Blue Cross Blue Shield Association (BCBSA) as a “Demonstration Project” operating outside of the usual medical necessity provisions in the health plan “evidence of coverage (EOC)” documents. Accrual of patients to the RCTs was slow because of the widespread availability of HDC/ABMT outside of research protocols, delaying the trials which would eventually report no benefit from the more toxic high-dose chemotherapy.

Other examples of CED can be found in the Medicare program (IOM, 2000) and include the Health Care Financing Administration (HCFA) (now Centers for Medicare & Medicaid Services [CMS])/FDA interagency agreement from 1995 allowing for coverage of Category B investigational devices (incremental modifications of FDA-approved devices), coverage of lung volume reduction surgery (LVRS) for bullous emphysema under an NIH protocol (1996) (McKenna, 2001), and the Medicare Clinical Trials Policy (CMS, 2000), under which qualifying clinical trials receive Medicare coverage for patient care costs under an approved research protocol. Over the past 4 years, Medicare CED has been formalized by CMS with a guidance document, a CED policy for implantable cardioverter defibrillators (ICDs) for the prevention of sudden cardiac death, and a Positron Emission Tomography (PET) oncology registry for indications not previously covered by Medicare. In addition, the Medicare federal advisory committee established in 1999 for developing national coverage decisions (NCDs) was renamed the Medicare Evidence Development and Coverage Advisory Committee (MedCAC), emphasizing the importance to CMS of developing better evidence to inform Medicare coverage decisions.

The concept of applying CED to commercial health plans has grown in interest over the past 2 years due to the Medicare CED experience but also as part of the debate over whether a national comparative effectiveness (CER) institute should be established. Under this idea, which has been advanced by Wilensky, prospective comparative studies generating new evidence would be included as well as systematic reviews or technology assessments of existing research (Wilensky, 2006). However, as per the experience of the Center for Medical Technology Policy (CMTP) over the past 2 years and of others interested in creating better evidence for decision makers, significant barriers remain in regard to further development of CED in the private sector. These include (1) health plan contracts (EOCs) defining medical necessity as not experimental or investigational, (2) ethical issues such as whether CED is really research, whether it is coercive, and whether it is fair (Pearson et al., 2006), (3) the difficulty in achieving multi-stakeholder consensus when funding depends on vendors such as medical device companies for research costs and health plans for patient care costs, (4) lack of a clear definition of what constitutes “adequate” evidence compared to what constitutes “ideal evidence” when designing the study protocol and end-points, (5) timing of CED in regard to existing coverage without restrictions (lack of incentives for sponsors of new technologies if coverage is already widespread), and (6) limitations of the number of studies that can be implemented under CED.

The story of HDC/ABMT for the treatment of breast cancer provides a good starting point to understand how similar phased introduction and payment for interventions under protocol might be used to facilitate next-generation studies of clinical effectiveness. HDC/ABMT emerged in the 1980s as a therapy for breast cancer that combined high-dose chemotherapy with autologous (self-donated) bone marrow transplantation, based on the observation that higher doses of chemotherapy resulted in more complete and partial tumor response rates. Various factors led to a fateful branching of this procedure’s use. The two pathways or “systems” of use that emerged can be characterized as (1) a “rational system” of evaluation—emphasizing systematic evaluation of evidence by technology assessments, clinical practice guidelines, and randomized clinical trials—and (2) a “default system” of clinical use—one that reflects uncoordinated action driven by Phase II studies, patient demand, physicians seeking better treatments for seriously ill patients while building financially successful bone marrow transplant programs, lawyers, media, entrepreneurs, and state and federal governments (Figure 5-12). Approximately 1,000 patients were treated “on protocol” in the evaluation of HDC/ABMT for breast cancer, in high-priority NCI randomized controlled trials, whereas simultaneously an estimated 23,000–40,000 patients were treated off-protocol. After 10 years, the results of the on-protocol trials demonstrated that while HDC/ABMT positively affected surrogate outcomes such as response rates, these early markers did not translate into improved survival and ultimately conferred no benefit to patients (Jacobson et al., 2007).

FIGURE 5-12 The timeline of the branching of the HDC/ABMT experience in breast cancer.

Key issues in the HDC/ABMT story were access to new treatments—how innovations in medicine and promising treatments are made available to patients, what type of evaluations are necessary to demonstrate evidence of benefit for various circumstances, and what role the health insurers could play when patients demand access to treatments that do not meet evidence criteria. In this specific example, BCBSA created a mechanism or Demonstration Project outside of coverage that would allow its plans to participate in randomized trials evaluating the effects of HDC/ABMT on breast cancer. This was essentially a new organization housed at BCBSA in Chicago, which developed contracts with providers and health plans to cover patient care costs “outside of usual coverage and medical necessity provisions” for eligible BCBS Plan enrollees. Because of the broad utilization of HDC/ABMT off-protocol, however, it took a long time to recruit patients for the high priority RCTs that were funded by the Demonstration Project, despite the participation of 17 BCBS Plans and the Federal Employees Program (FEP) administered by BCBSA. From about 1991 to 1999 recruitment continued but dropped off significantly when, at the May 1999 annual meeting of the American Society of Clinical Oncology (ASCO), it was demonstrated that
in two of the four high-priority trials reported, HDC/ABMT treatment resulted in no benefit to patients.

Lessons from the HDC/ABMT experience, such as early collaboration among investigators to implement needed trials with the support of payers, and independent medical review of individual cases as an appeal mechanism, remain relevant today. State mandates for coverage of HDC/ABMT for breast cancer, which numbered 15 during the mid-1990s, proved to be ill advised, as they circumvented the results of technology assessments (which showed evidence gaps) and contributed to the delay in the completion of the NCI randomized clinical trials. State mandates for coverage of qualifying clinical trials, however, have arguably promoted better evidence by funding patient care costs for well-designed clinical trials and facilitating their completion and reporting.

There continues to be public debate surrounding the evaluation, dissemination, and payment of costly medical technologies, many of them for life-threatening conditions such as cancer. This subject was highlighted by The New York Times in a series of summer 2008 front page articles under the series title The Evidence Gap. In their book False Hope: Bone Marrow Transplantation for Breast Cancer, the authors propose a public–private partnership for evaluation of medical procedures (Rettig et al., 2007). Because the HDC/ABMT treatment was a procedure that also used FDA-approved drugs at higher doses than standard of care, regulatory oversight does not fall under any existing governmental agency. A public–private partnership with a relevant institute at the NIH was proposed to fill this gap. In cooperation with insurers and patients, researchers would be asked to describe Phase II results with respect to the promise of particular technologies, evaluate the rationale for Phase III trials, and if necessary limit access to new procedures to these controlled trials. To address any patient concerns, review of individual cases would also be required. In addition to accelerating the production of timely data to the public on clinical effectiveness, additional benefits of this process include a mutual understanding between participants, the development of a shared interest in clinical effectiveness research, insurer funding to finance RCTs, and some protection from litigation for health plans. This concept of shared evaluation of the gaps in existing research and the design and financing of appropriate new studies could also be applied to private-sector initiatives using a neutral party rather than an NIH Institute as research coordinator.

Several lessons for coverage with evidence development can also be learned from experiences of Medicare. In 1965, Section 1862 of the Social Security Act clearly outlined Medicare’s policy regarding untested treatments, with statutory language that “no payment may be made for items or services which are not reasonable and necessary for the diagnosis and treatment of illness or injury or to improve the functioning of a malformed
body member.” This is the “reasonable and necessary” clause of the Medicare statute, the operational definition of which has varied over the years because no coverage regulations for Medicare have ever been formally adopted. As outlined in the Federal Register on January 30, 1989, under a notice of proposed rulemaking (NPRM) that was never finalized, Medicare generally determined whether the service was safe and effective, not experimental or investigational, and appropriate. More recently, guidelines for evaluating effectiveness developed through the MedCAC and published on the CMS coverage website (http://www.cms.hhs.gov/coverage) include a determination that the evidence is adequate to demonstrate improved net health outcomes and that the evidence is generalizable to the Medicare population. Decision memoranda for finalized NCDs also have created a form of “case law” for how CMS evaluates different services as “reasonable and necessary.”

“Coverage with conditions” in Medicare emerged in the 1990s, beginning with Category A and B investigational devices, in which an HCFA (now CMS)/FDA interagency agreement allowed coverage for devices with minor incremental improvements (Category B) but not for novel devices (Category A). This permissive policy opened the door for Medicare coverage with conditions for LVRS under an NIH protocol (The National Emphysema Treatment Trial [NETT]) to evaluate its clinical effectiveness compared to intensive nonsurgical management. This was followed by the Medicare Clinical Trials Policy in 2000 that expanded Medicare coverage for qualifying clinical trials. Finally, coverage with evidence development was formalized in a CMS guidance document in 2006 (CMS coverage website). The two best examples of CED to date are ICDs for prevention of sudden cardiac death with data collection by a registry managed by the American College of Cardiology (ACC) and PET for oncology indications that were not previously approved for Medicare coverage with data collection under a registry managed by the American College of Radiology Imaging Network (ACRIN). The CMS guidance document on CED specifies that for selected national coverage decisions limited funding would be made available to support needed studies under protocol. In essence, therefore, CMS has determined that for these interventions, and for others that may follow, phased introduction and payment under protocol is “reasonable and necessary” and that national non-coverage, or unrestricted coverage, is not.

These approaches attempt to contend with the essential problem in the delivery of medical care, that for many clinical situations, evidence is insufficient to inform decision making at multiple levels: by patients, physicians, delivery systems, and policy makers. Many organizations perform systematic reviews of evidence and decision modeling, such as the AHRQ
Evidence-Based Practice Centers (EPCs), the BCBSA Technology Evaluation Center (TEC), the Drug Effectiveness Review Project (DERP) at Oregon Health and Science University (OHSU), Hayes, Inc., the ECRI Institute, the Cochrane Collaboration, and the Institute for Clinical and Economic Review (ICER). These assessments frequently identify evidence gaps on new and emerging technologies, concluding that there is not enough evidence or any relevant evidence available for decision making. Clearly there is a great need for new approaches to fill these evidence gaps, and many believe that approaches such as CED, particularly if expanded beyond CMS, offer a potential solution for at least some technologies. As a way to move this into the private sector and to ensure the opportunity for broad stakeholder input, the Center for Medical Technology Policy, a new nonprofit organization, is working with a broad group of interested parties to prioritize and facilitate research on promising emerging technologies and practices. The CMTP’s approach to creating new evidence has been termed “decision-based evidence-making” as a means of promoting evidence-based medicine (EBM) by increasing the supply of useful data. By designing practical clinical trials (Tunis et al., 2003) comparing new health interventions to relevant or existing alternatives in conjunction with CED, an attempt is being made to find an optimal balance between innovation, access, evidence, and efficiency in practice. The approach is collaborative with a strong commitment to including all stakeholders in decisions and seeks to promote rapid learning through pragmatic, prospective, simpler, faster, and cheaper research studies.

A vision for a new generation of studies and approaches to assess clinical effectiveness is illustrated in Figure 5-13. Phased introduction and payment for interventions under protocol can contribute to each of these steps and, as indicated in Figure 5-13, the CMTP is focused on specific activities within this vision. The process could be generally described as beginning with the systematic review of existing evidence to identify critical gaps in evidence, then moving into prioritization, design, funding, study implementation, and oversight. Post-analysis, evidence would be disseminated, resulting in evidence-based clinical guidelines, coverage and payment decisions, and, in some cases, cost-effectiveness analyses by other organizations (e.g., TEC, ICER, AHRQ or its EPCs) or by other individuals.

FIGURE 5-13 Where the Center for Medical Technology Policy (CMTP) fits into assessing a new generation of clinical effectiveness studies.
SOURCE: Adapted from Steve Pearson’s building blocks model.

The “priorities for evidence development (PED)” initiatives at the CMTP (and elsewhere) aim to identify evidence gaps for important technologies for conditions having a significant burden of illness or cost, using systematic reviews, technology assessments, and other types of research. One such AHRQ-funded project at the CMTP started with a Stanford EPC comparative effectiveness review of existing research comparing percutaneous coronary intervention (PCI) with coronary artery bypass grafting (CABG) and identified research priorities by supplementing the review with expert and stakeholder opinion. In developing effectiveness guidance documents (EGDs) for selected topics, the CMTP is attempting to model what the FDA and CMS have done in trying to define clear standards for different classes of technologies. The resulting guidance documents will be similar in concept to FDA guidance on evidence needed for regulatory approval but will recognize that different technologies require different types of evidence development and that decision makers often need a higher level of evidence than the FDA requires, such as for medical devices. These guidance documents will expand on information that comes out of priorities for evidence development projects and will move these projects into facilitated research. They will address several key issues, such as (1) What will it take to adequately answer these questions? and (2) What has been learned about specific technologies that might apply to all technologies in that class? Effectiveness guidance documents in development include Gene Expression Profiling in the Management of Breast Cancer and Negative Pressure Wound Therapy for Chronic Non-Healing Wounds. Finally, the CMTP has two facilitative research workgroups focused on selected questions developed through PED. Selection criteria included decision-maker interest, pragmatic design, reasonable size, and reasonable duration. Current workgroups are assessing cardiac computed tomography angiography (CTA), which CMS recently considered for CED but did not finalize an NCD, and other forms of cardiac imaging in the management of coronary artery disease, as well as different modalities of radiation therapy (proton beam therapy and intensity modulated radiation therapy [IMRT]) in the treatment of early-stage prostate cancer. Comprised of researchers, NIH leaders, academic physicians, professional society representatives, health plans, clinical practitioners, and patient advocates, these groups are responsible for outlining and funding clinical research; they develop study designs, contract out research, oversee actual research, and disseminate results. The radiation therapy study comparing side effect profiles of proton beam versus IMRT for prostate cancer has generated the most interest and momentum to date, with the multistakeholder group developing a draft protocol, operational plan, and budget to move forward. Figure 5-14 depicts the CED model with sponsors, a coordinating entity, and privacy protections.

FIGURE 5-14 Schematic model of clinical study coordination under CED.

Another key element in promoting CED is benefit language or alternative legal mechanisms allowing health plans to participate in CED without undermining basic medical necessity provisions of their evidence of coverage documents. Historically, the Demonstration Project mechanism at BCBSA for BCBS Plan support of patient care costs for HDC/ABMT breast cancer patients outside of routine coverage addressed this issue successfully. Recently, a conceptual framework for CED with model benefit language has been developed by the CMTP as an applied policy project through a grant from the California HealthCare Foundation (Center for Medical Technology Policy, 2009) and may help to accelerate health plan interest and willingness to participate in CED. Issues addressed using a multi-stakeholder process include technology selection criteria, CED research design criteria, plan participation criteria, possible pathways to incorporate CED within a plan of benefits, and plan language. A basic issue for plans is whether to participate in CED outside of coverage as a special project (e.g., the BCBSA Demonstration Project) or whether to define the CED program within the benefit plan (e.g., as part of a clinical trials policy). The CED model, in conjunction with appropriate plan benefit language, completes
the conceptual framework necessary for implementation of CED in commercial health plans.

In summary, although many challenges have limited progress to date on phased introduction and payment for interventions under protocol, there is optimism that the concept will continue to evolve because of Medicare experience with CED and private-sector interest. There continue to be significant barriers, however. It is difficult to reach multistakeholder consensus on study design and funding, and the most important evidence gaps may be ones that can’t be filled. While ideal evidence is well understood, adequate evidence remains undefined, timing is critical (as CED is ineffective if widespread coverage exists), and CED may not be enough to encourage the conduct and completion of important clinical studies. Several strategies might help to accelerate progress, including the public–private partnership recommended for medical procedures not governed by FDA regulation, and private–sector coordination of clinical studies under CED by neutral organizations such as the nonprofit Center for Medical Technology Policy. In addition, model benefit language allowing health plans to participate in CED without undermining basic medical necessity rules is critical to facilitating their participation. Operational strategies going forward include explicit ground rules for workgroups, and separate processes for evidence gap identification, prioritization, and selection for study design and funding. CED should complement—rather than compete with—the traditional research enterprise (researchers and funding mechanisms). Finally, it is critical to look to the future and to work earlier in the product development cycle to generate evidence before widespread dissemination of the intervention in question.

RESEARCH NETWORKS

Eric B. Larson, M.D., M.P.H.

Sarah Greene, M.P.H.

Group Health Center for Health Studies

Fulfilling the Potential of the Learning Healthcare System Through Emerging Research Networks

Recent publications have acknowledged and described the limitations of our current health research enterprise (Emanuel et al., 2004; Gawande, 2002, 2007; Lenfant, 2003; Tunis et al., 2003; Zerhouni, 2005b). These limitations include structural deficiencies, insufficient generalizability, and delays in initiation and implementation from the research review system. Multi-site and network-based studies can help to make research more generalizable; but they are particularly vulnerable to slow processes. By
further developing and supporting research networks that are embedded in healthcare systems, we believe we can accelerate progress toward optimal clinical care.

We need to redesign the paradigm for clinical effectiveness research to anchor it in emerging research networks that can serve as a “learning healthcare system.” The notion of a learning healthcare system has gained conceptual and operational traction as a way to meet the challenges of 21st-century medical care. This care could be increasingly tailored based on rapid advances in the “omics” (genomics, proteomics, and metabolomics) and enhanced understanding of gene–environment interactions and the complex mechanisms underlying treatment responses in both infectious and chronic diseases. Taken together, the learning healthcare system and a redesigned paradigm for clinical effectiveness research hold high promise to help to meet these challenges.

A proposal to redesign the clinical effectiveness research paradigm for a learning healthcare system could draw inspiration from several existing models. These include successful initiatives such as the Cooperative Oncology Groups, large cohort studies such as the well-known Nurses’ Health Study and Framingham Heart Study, as well as products of contract research organizations (CROs). They also include the large and growing work accomplished by emerging research networks in functioning delivery systems, such as the HMO Research Network and its several consortium projects already underway: the Cancer Research Network (Wagner et al., 2005), the Center for Education and Research in Therapeutics (CERT) (Platt et al., 2001), and the newly funded Cardiovascular Research Network. In this paper, we assert that emerging (and mature) networks in functioning delivery systems represent a unique opportunity, if contributing to a learning healthcare system is among the research goals. These networks have already made substantial contributions, and we believe they and their individual sites could become the mainstay of clinically relevant research that is ready to be applied to benefit both individual patients and the public’s health.

Why Is the Potential of Such Networks So Great?

Being embedded in functioning delivery systems optimizes a research network’s value in producing relevant and generalizable results. Population-based research is ideal for producing results whose generalizability and relevance are known. This contrasts with research from “convenience samples” (the predominant U.S. mode) or from highly specialized, typically referral-filtered populations. Examples abound. Even confining the examples to the single arena of diagnostic markers and management of Alzheimer’s disease, history is littered with instances of findings that offered great hope and that both the scientific and lay communities greeted with enthusiasm, only to be found useless because they could not be confirmed and thus were not generalizable. One example is platelet membrane fluidity (Zubenko et al., 1987), one of the first tests widely touted as diagnostic of Alzheimer’s disease—now remembered only by investigators from that time and by chagrined staffers from the National Institute on Aging and other NIH Institutes who drew attention to the result as “news” of a “major breakthrough.”

Looking back on over two decades of working in community-based populations, we see that much of our early work consisted not of finding new and valid markers but rather of simply demonstrating that markers from convenience samples lacked generalizability to the true population of interest. Similarly, in many instances, drug safety concerns were not evident even in very large trials drawn from convenience samples, typically carefully screened individuals who are not representative of everyday community-based patients. These concerns then became evident in community-based populations, providing “poster children” for our system’s failure to detect toxic effects of widely used drugs such as COX-2 inhibitors (e.g., rofecoxib) (Psaty and Furberg, 2005, 2007), thiazolidinediones, and epoetin alfa. Some treatments (e.g., tissue plasminogen activator for stroke) appeared effective at reducing morbidity from acute and chronic diseases in carefully conducted clinical trials, but were then reported to have dramatic adverse consequences when translated into practice (Katzan et al., 2000). Now, “pharmacovigilance” and “pharmaco-surveillance” are gaining considerable traction—and attraction—as a means of examining and ultimately preempting similar adverse drug events. A research network based in delivery systems can serve as a ready-made apparatus for this important postmarketing medication surveillance activity.

What Makes a Well-Constructed Integrated Care-Delivery System So Favorable for Producing Generalizable and Relevant Research?

First, and perhaps most critical, is the ability to conduct research—including randomized controlled trials—based in a population leading their lives as usual. Even in instances when sampling cannot be random, a population base lets researchers determine whether any characteristics differ between the population studied and the base population. This enables them to ask, even in a selected population, whether findings are generalizable and, if not, in what ways they are not. One recent example involved an autopsy study set at Group Health. Subjects who end up receiving autopsy are known to be highly selected, so epidemiologists are traditionally instructed not to use autopsy data to develop inferences for more general populations. However, Sebastien Haneuse, a biostatistician working with Sonnen et al. (2007), has developed a method to determine susceptibility to selection bias in autopsy studies using a weighting scheme that compares the characteristics of living participants with those of participants who undergo autopsy, thereby allowing adjustment if bias is present (Haneuse, 2008).
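
To make the logic of such selection-bias weighting concrete, the following is a minimal, hypothetical sketch of the general inverse-probability-of-selection approach; it is not a reproduction of the specific method reported by Haneuse and colleagues, and the simulated cohort, variable names, and coefficients are illustrative assumptions only.

```python
# Hypothetical sketch of inverse-probability-of-selection weighting, in the
# spirit of the autopsy-study adjustment described above. All variables,
# coefficients, and data are simulated for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Simulated enrolled cohort: age and comorbidity affect both the chance of
# autopsy (selection) and the outcome of interest (a pathology finding).
age = rng.normal(80, 6, n)
comorbidity = rng.binomial(1, 0.4, n)
p_select = 1 / (1 + np.exp(-(-4.0 + 0.04 * age + 0.8 * comorbidity)))
autopsied = rng.binomial(1, p_select)
p_outcome = 1 / (1 + np.exp(-(-6.0 + 0.07 * age + 0.5 * comorbidity)))
outcome = rng.binomial(1, p_outcome)

# Step 1: model selection into the autopsy sample using covariates that the
# delivery system records for ALL enrollees, not just those autopsied.
X = np.column_stack([age, comorbidity])
selection_model = LogisticRegression().fit(X, autopsied)
pr_autopsy = selection_model.predict_proba(X)[:, 1]

# Step 2: weight each autopsied member by the inverse of the estimated
# selection probability, so the weighted sample resembles the full cohort.
weights = 1.0 / pr_autopsy[autopsied == 1]

naive = outcome[autopsied == 1].mean()                       # ignores selection
adjusted = np.average(outcome[autopsied == 1], weights=weights)
print(f"naive: {naive:.3f}  weighted: {adjusted:.3f}  "
      f"full-cohort truth: {outcome.mean():.3f}")
```

In essence, autopsied members whom the model judged unlikely to be autopsied stand in for the many similar members who were not, so the weighted autopsy sample approximates the full enrolled population.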

Second, modern integrated care-delivery systems are pioneering technological and structural advancements to improve care. Web-based patient portals, secure messaging between patients and providers, and the accompanying transformation of the doctor–patient interaction may lead to dramatic changes in market dynamics by lowering cost and ultimately improving care and health outcomes. Researchers are testing patient-directed behavior change interventions that could be integrated into the health plans’ patient-facing websites. Research networks embedded in health plans have conducted RCTs and quasi-experimental studies of computerized physician order entry (CPOE) systems, including studies of various types of alerts and of “academic detailing” (one-on-one education about use—and often overuse—of treatments such as medications) to reduce prescribing errors. Newer features include unfettered access to specialists, benefit redesign, and development of the “medical home” model to improve continuity of care. These innovative, testable features—if demonstrated to be successful—could become extensible platforms for U.S. healthcare reform. If not, they should be abandoned. Delivery system-based researchers are also contributing to the dialogue about the national health information infrastructure. It is crucial to develop structures, functions, and standards that both meet clinical needs and facilitate a robust research enterprise. Interoperability of healthcare data (allowing data sharing among disparate systems) goes hand-in-glove with the development of this national infrastructure, and the experience of these learning healthcare systems and their researchers—who typically collaborate on multi-site, cross-platform data exchange—can inform these conversations (U.S. Department of Health and Human Services, 2008a, 2008b).

A third notable characteristic of a well-constructed, integrated care-delivery system that favors generalizable and relevant research is that it is “ecological.” As a functioning system, it situates effectiveness research in a real, living, and breathing organization whose primary and overarching purpose is to deliver health care. Because of this purpose—which cannot be subjugated to the convenience that investigators often expect—research conducted there is much more likely to be pragmatic and to reflect the extant clinical conditions in which care is (and will be) delivered. Healthcare research in these real-time learning laboratories ensures that healthcare systems and national priorities interact with each other. The reciprocal knowledge exchange between these two spheres, especially when conducted in a way that promotes both organic and systematic implementation of new knowledge, can greatly accelerate advances in health care by creating opportunities for translation.

Research in learning healthcare systems also affords opportunities to study not only what care should be delivered but also how it is (and should be) delivered—that is, what characteristics of providers, policies, and systems affect the delivery of care and the implementation of research findings. For instance, Taplin and colleagues examined the occurrence of late-stage breast and cervical cancers in environments where women had access to screening. Their study showed the complex interplay of guidelines and of the contributors to effective follow-up of abnormal screens, and it found that, surprisingly, more women than expected refused care even after learning they had suspicious lesions (Taplin et al., 2004). In another example, Simon et al. conducted a pragmatic clinical trial of antidepressants that arguably produced the best information a clinician and patient might use when selecting an antidepressant for an individual patient or when developing practice guidelines. After randomly assigning more than 500 Group Health patients with depression to receive fluoxetine or tricyclic medications, the researchers found no difference in clinical or quality-of-life outcomes or in overall treatment costs. They concluded that patients’ and physicians’ preferences are an appropriate basis for selecting initial treatment (Simon et al., 1996).

One reason trials like that of Simon et al. might be more likely to be conducted in functioning delivery systems like Group Health is the shared desire of those in the delivery system and the research unit to improve research, health care, and, of course, health outcomes. The collective goal of a learning healthcare system is to establish a reliable apparatus of evidence-based critical decision making. Over the decades, Group Health has moved from a strong commitment to the “ideal of research” to a pragmatic realization that it needs good research on which to base clinical decisions. Neither the clinical nor research enterprise can afford any more of the high-profile disasters that have occurred when drugs with demonstrated success in RCTs (e.g., rofecoxib and rosiglitazone) have been revealed to be too dangerous for general use because of inadequacies in the original efficacy and effectiveness research. Both researchers and clinicians realize the across-the-board risk to the clinical system and research enterprise of not anticipating and addressing these quality problems. They jointly realize that research in functioning delivery systems is an important avenue for authentic testing of effectiveness, safety, outcomes, and interactions.

Another often-overlooked characteristic of functioning care-delivery systems is the ability to exploit what we call the “bidirectionality” of research. Traditionally, the research and policy communities have stressed the need for research to go from “bench to bedside”—or from lab to clinical practice. A distinct advantage of research sites and networks embedded in functioning delivery systems is that ideas can emanate from those at the bedside: Clinicians identify critical deficiencies of care that can be researched and improved; they are poised to test novel treatment ideas, while championing these ideas and forming ready partnerships with research teams. Similarly, researchers can refine and adapt strategies from the published literature, providing important confirmatory studies or rigorous evaluation of a natural experiment, often in larger and more representative populations. Patients—or, more typically, patient advocacy groups—may point out deficiencies or special needs that suggest research projects. Most importantly, research in a community-based delivery system can yield insights into real-world issues of highest priority to the target population.

The recently funded Clinical and Translational Science Award (CTSA) partnership between the University of Washington, Group Health, and the Northwest American Indian/Alaskan Native Network affords an unparalleled opportunity to surface the tribes’ preeminent research priorities and to apply tools and strategies that the Group Health and university researchers devise to address them. We at Group Health had assumed that this network would want us to study accidents, gun safety, and maternal health in their communities. These are all areas in which we have substantial prior experience, including in American Indian and Alaskan Native communities. However, we were astounded to hear, when we spoke with them in person, that the first priority of all of the tribes was methamphetamine abuse, which they told us is destroying the life of their communities. They said, “You can study what you want, as long as you start with meth.”

Ultimately, these research examples are not only bidirectional but also adaptive and iterative, as befits a more real-world and less-controlled setting. This does not detract from scientific rigor; rather, it means the protocol is more likely to be calibrated for real-world conditions. Results can be translated more effectively, since the research was conducted in the setting where the findings are applied.

What Is Our Vision to Guide the Next Generation of Studies and Exploit the Natural Advantages of Research Networks in Functioning Integrated Care-Delivery Systems?

Three general principles underlie our vision:

  1. Bi-directionality—with research flowing seamlessly across bench, bedside, and community—will become an accepted aspect of most, if not all, funded health research.

  2. The learning healthcare system can be seen as a catalyst, partner, and test bed for research.

  3. The infrastructure needed to rapidly ramp up new research studies will evolve to meet the demands of this more complex environment.

We emphasize that the RCT will remain the cornerstone of bi-directional research. Since Archie Cochrane’s seminal writing (Cochrane, 1971), we have benefited greatly from widespread acceptance that the RCT provides the most reliable evidence for judging treatment effects. Effectiveness RCTs should be pragmatic, efficient, and ideally population based for better generalizability. Researchers should do more than simply communicate their results through academic manuscripts. If their studies are set in a delivery system, they have a direct route, and indeed a responsibility, to communicate results to providers and usually to participants. Consider how quickly research is translated into practice when a pharmaceutical company promotes a new drug RCT. The goal is to match this speed when we translate into practice any research that involves improving care. Yet we have not consistently done this well. An example is shared decision making for prostate surgery. This was shown in 1995 to improve outcomes and reduce costs, an ideal result (Wagner et al., 1995). However, it was not adopted or used in the delivery system where the research was conducted: Group Health.

However, traditional RCTs, which randomly assign individual patients to a prespecified treatment or intervention, are expensive and time consuming. They also may be impractical for addressing many important questions. Thus, other types of studies can be valuable and informative if conducted in a well-constructed delivery system. Examples include cluster randomized trials and disease registries. Cohort studies linked to legacy medical records and electronic medical records (EMRs) in stable populations (e.g., Group Health [Smith et al., 2002] and the Mayo Clinic) are quite useful for time series analyses, correlations, and quasi-experimental research using observational data generated from clinical practice—especially so-called natural experiments that occur as practice changes are instituted or as changes in the external environment affect medical care and outcomes. We took advantage of one such natural experiment when a pilot project deploying the Advanced Medical Home was initiated in a single clinic, and we rapidly developed our Advanced Medical Home study in response. A revival of idealized primary care, the Advanced Medical Home involves a physician and healthcare team committing to serve as the home base for as much of their patients’ medical care as they can provide—and as the coordinators of other care as needed (American College of Physicians, 2006). The rationale behind this model is that such coordination promises to help control costs while improving health outcomes and patient and provider satisfaction. Very preliminary results from our study suggest that the Advanced Medical Home improves the satisfaction of patients and providers without increasing costs. Another important example is a study called Content of Care, in which we are using automated data to identify and address high-cost drivers of care across populations—and unwarranted variations in practice between physicians and medical centers.
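
As one hedged sketch of how such a natural experiment might be analyzed, the segmented (interrupted time series) regression below estimates the change in level and trend of a monthly utilization measure after a hypothetical practice change; the data, timing, and effect sizes are simulated for illustration and are not drawn from the studies described above.

```python
# Hypothetical interrupted-time-series (segmented regression) sketch for a
# natural experiment, such as a practice change introduced in one clinic.
# The monthly utilization series and effect sizes are simulated.
import numpy as np

rng = np.random.default_rng(2)
months = np.arange(36)                 # 36 months of observation
intervention_month = 18                # practice change takes effect here
post = (months >= intervention_month).astype(float)
time_since = np.where(post == 1, months - intervention_month, 0.0)

# Simulated outcome: a baseline upward trend, an immediate level drop of
# 8 units at the change, a steeper downward trend afterward, plus noise.
y = 100 + 0.5 * months - 8.0 * post - 0.7 * time_since + rng.normal(0, 2, 36)

# Design matrix: intercept, baseline trend, level change, trend change.
X = np.column_stack([np.ones_like(months, dtype=float), months, post, time_since])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
intercept, trend, level_change, trend_change = coef

print(f"baseline trend per month: {trend:.2f}")
print(f"immediate level change at intervention: {level_change:.2f}")
print(f"change in trend after intervention: {trend_change:.2f}")
```

In practice, such analyses would also need to account for seasonality, autocorrelation, and any concurrent changes in the delivery system.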

What Are Some Emerging and Uniquely Important Areas Where the Theoretical Advantages of Research Networks Set in Delivery Systems Might Be Especially Valuable?

Challenging areas include detecting drug side effects (Brown et al., 2007), monitoring vaccine safety (Hinrichsen et al., 2007), and tracking the emergence of antibiotic-resistant infectious agents. These challenges also represent an opportunity: Can research networks in functioning systems improve translation both by producing valid research findings while minimizing false starts and by detecting side effects or changes in treatment effects more quickly after deployment? One proven example is the Vaccine Safety Datalink (VSD) project (Centers for Disease Control and Prevention, 2008; Thompson et al., 2007), which is in the process of being emulated for infectious disease biosurveillance, e.g., using HMO Research Network sites to detect changes in antibiotic resistance among sexually transmitted diseases. Directly observing the dissemination of key clinical findings in practice also provides an effective window on translation. The up-to-the-minute, comprehensive data systems of these research networks lend themselves to examining changes in treatment, such as the use of aromatase inhibitors for adjuvant breast cancer therapy following reports of this successful therapeutic approach among referral populations in cancer trials (Aiello et al., 2008).
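
As a rough illustration of the kind of sequential monitoring such networks make possible, the sketch below applies a classical Wald sequential probability ratio test to a cumulating stream of adverse-event counts; this is not the specific method of Brown and colleagues (2007), and the rates, error levels, and simulated data are hypothetical.

```python
# Hypothetical Wald sequential probability ratio test (SPRT) for monitoring an
# adverse-event rate as exposure accumulates across network sites. The rates,
# error levels, and simulated data stream are illustrative assumptions only.
import math
import numpy as np

rate_null = 1.0   # assumed background events per 1,000 person-years
rate_alt = 2.0    # elevated rate the monitoring program wants to detect
alpha, beta = 0.05, 0.10

upper = math.log((1 - beta) / alpha)   # crossing above signals an elevated rate
lower = math.log(beta / (1 - alpha))   # crossing below accepts the background rate

rng = np.random.default_rng(1)
events, exposure = 0, 0.0              # cumulative event count and person-time

for month in range(1, 61):             # examine the data monthly for 5 years
    new_exposure = 0.5                 # 500 person-years accrued this month
    events += rng.poisson(rate_alt * new_exposure)   # simulate an elevated rate
    exposure += new_exposure

    # Poisson log-likelihood ratio: n*ln(r1/r0) - (r1 - r0)*t
    llr = events * math.log(rate_alt / rate_null) - (rate_alt - rate_null) * exposure

    if llr >= upper:
        print(f"month {month}: signal, evidence of an elevated event rate")
        break
    if llr <= lower:
        print(f"month {month}: stop, data consistent with the background rate")
        break
else:
    print("monitoring ended without crossing either boundary")
```

Methods designed for distributed health-plan data add refinements (for example, handling confounding and multiple outcomes), but the core idea of accumulating evidence against prespecified stopping boundaries is the same.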

Genomics represents a unique opportunity for research in integrated care-delivery systems to exploit the features that make such research relevant and generalizable. Personalized medicine is an increasingly popular term in the health sector, but realizing its true promise will require working through many operational issues around the data, along with significant transformation in how care is delivered. Privacy and ownership considerations abound as large quantities of genomic data are collected, analyzed, and stored. State-based regulations are likely to play a major role as data stewardship becomes a larger part of this conversation. Housing these data in delivery system-based research networks offers such clear-cut advantages as:

  • Known and diverse population base

  • Avoidance of referral filters

  • Established and typically trusting relationship between patients and their providers in the care-delivery system

  • Empiric study of consent

  • Well-developed EMR to obtain phenotype information

EMRs promise a much more efficient way to determine phenotypes for research and also will be uniquely helpful when and if we can “tailor” treatment and especially prevention in a personalized way (i.e., based on known genetic risk or therapeutic responsiveness). But clinical science is hard-pressed to keep up with the pace of marketing forces and natural curiosity driving consumers to seek this information and act on it (Harmon, 2008). Notably, the National Human Genome Research Institute has significant work remaining to develop genomewide array studies based in existing cohorts. Behavioral and sociocultural examinations are accompanying the basic and preclinical research, but much work remains to fully understand the ramifications of collecting and leveraging genomic data, much less tailoring treatment based on these unique characteristics.

Is a Culture Change Under Way?

The “omics” revolution portends a cultural shift. NIH Director Zerhouni describes medicine that is not only preventive but also preemptive, and he has enunciated a new vision for translational research in recent publications (Zerhouni, 2005a). One outcome is the NIH Roadmap for Medical Research, a paradigm for re-engineering clinical research, which begat the NIH-funded CTSAs. This program aims to “develop a national system of interconnected clinical research networks capable of more quickly and efficiently mounting large-scale studies.” One consequence of this effort is a nascent culture change and, in places, work in progress in institutions choosing to “re-engineer” their clinical and translational research programs. Some are realizing the potential of bringing together research networks in integrated healthcare systems with university-based scientists. Reviewers of CTSA grant proposals often highlight these interfaces as particularly strong features of applications.

Given the magnitude of the CTSA program and its lofty goals for a national system of interconnected clinical research networks, the outcomes of this Institute of Medicine (IOM) workshop should be positioned to inform the NIH’s CTSA program. Indeed, the IOM’s proposed redesign of the clinical effectiveness research paradigm would ideally address the challenges the NIH will face as it aims to re-engineer the massive biomedical research enterprise we currently enjoy in the United States. This reaffirms our second principle: that the learning healthcare system can be viewed as a catalyst, partner, and test bed for clinical research.

We believe our third principle is central to any discussion of a new vision or paradigm for research, whether in a learning healthcare system or any other setting. To meet the complex needs of researchers, care providers, and the patients we serve, the operational infrastructure needed to rapidly launch new research studies will have to evolve to meet the demands of this more complex environment. These infrastructure “renovations” should consider the full gamut of opportunities to render research more efficient, including:

  • Research review by Institutional Review Boards and similar ethics committees: Harmonizing regulations across federal agencies is a pivotal first step; developing stronger federal guidance about avoiding duplicative reviews of multi-institutional studies is another necessary action.

  • Creating repositories of measures, surveys, and other indices, with standardized information about how these measures are used, to avoid reinventing measures de novo.

  • Templates for common research processes such as gaining HIPAA authorization and developing data use agreements and similar data-sharing operations.

  • A knowledge bank of effective participant recruitment strategies, analogous to the Cancer Control PLANET (Plan, Link, Act, Network with Evidence-based Tools) that the National Cancer Institute developed (National Cancer Institute Cancer Control PLANET, 2008).

  • Harmonized manuscript submission procedures adopted by all publishers of medical journals.

  • Continued attention to the architecture of health information—how it is collected, stored, and exchanged.

Clinical developments are outpacing our ability to implement these needed innovations. Thoughtful reconsideration of the research process, while maintaining appropriate attention to patient privacy, confidentiality, security, and the doctor–patient compact, will help us close the gap between research advances and their deployment. If the nascent culture change leads to sustainable operational infrastructure, the next generation of research studies can successfully exploit the myriad advantages of emerging research networks in healthcare systems, as long as equal attention is given to the philosophical and practical tenets we have outlined here. Emerging research networks can form a reliable basis for learning healthcare systems, which have the potential not only to accelerate the translation of research but also to ensure that it confers true benefits to patients and the public’s health.

REFERENCES

Aiello, E. J., D. S. Buist, E. H. Wagner, L. Tuzzio, S. M. Greene, L. E. Lamerato, T. S. Field, L. J. Herrinton, R. Haque, G. Hart, K. J. Bischoff, and A. M. Geiger. 2008. Diffusion of aromatase inhibitors for breast cancer therapy between 1996 and 2003 in the cancer research network. Breast Cancer Research and Treatment 107(3):397-403.

Altshuler, D., J. N. Hirschhorn, et al. 2000. The common PPAR γ Pro12Ala polymorphism is associated with decreased risk of type 2 diabetes. Nature Genetics 26(1):76-80.

American College of Physicians. 2006. The Advanced Medical Home: A Patient-centered, Physician-guided Model of Health Care—Policy Monograph.

American Psychiatric Association. 2000a. Diagnostic and Statistical Manual of Mental Disorders, 4th ed. Washington, DC: American Psychiatric Press.

———. 2000b. Practice guideline for the treatment of patients with major depressive disorder (revision). American Journal of Psychiatry 157(4 Suppl):1-45.

Brown, J. S., M. Kulldorff, K. A. Chan, R. L. Davis, D. Graham, P. T. Pettus, S. E. Andrade, M. A. Raebel, L. Herrinton, D. Roblin, D. Boudreau, D. Smith, J. H. Gurwitz, M. J. Gunter, and R. Platt. 2007. Early detection of adverse drug events within population-based health networks: Application of sequential testing methods. Pharmacoepidemiology and Drug Safety 16(12):1275-1284.

Brownstein, J. S., M. Sordo, I. S. Kohane, and K. D. Mandl. 2007. The tell-tale heart: Population-based surveillance reveals an association of rofecoxib and celecoxib with myocardial infarction. PLoS ONE 2(9):e840.

Byar, D. P. 1980. Why data bases should not replace randomized clinical trials. Biometrics 36(2):337-342.

Center for Medical Technology Policy. 2009. Coverage for Evidence Development: A Conceptual Framework. Oakland: California HealthCare Foundation. http://cmtpnet.org/cmtp-research/applied-policy-and-methods/coverage-with-evidence-development/20090108%20-%20CMTP%20-%20CED%20Issue&20Brief.pdf/view (accessed June 21, 2010).

Centers for Disease Control and Prevention. 2008. CDC Vaccine Safety Datalink Project. http://www.cdc.gov/od/science/iso/vsd/ (accessed July 18, 2008).

Centers for Medicare & Medicaid Services. 2000. Medicare Coverage-Clinical Trials Program. http://www.cms.hhs.gov/ClinicalTrialPolicies/Downloads/programmemorandum.pdf (accessed June 2008).

Cochrane, A. 1971. Effectiveness and Efficiency: Random Reflections on Health Services. London: Royal Society of Medicine Press.

Crismon, M. L., M. Trivedi, T. A. Pigott, A. J. Rush, R. M. Hirschfeld, D. A. Kahn, C. DeBattista, J. C. Nelson, A. A. Nierenberg, H. A. Sackeim, and M. E. Thase. 1999. The Texas Medication Algorithm Project: Report of the Texas Consensus Conference panel on medication treatment of major depressive disorder. Journal of Clinical Psychiatry 60(3):142-156.

Emanuel, E. J., A. Wood, A. Fleischman, A. Bowen, K. A. Getz, C. Grady, C. Levine, D. E. Hammerschmidt, R. Faden, L. Eckenwiler, C. T. Muse, and J. Sugarman. 2004. Oversight of human participants research: Identifying problems to evaluate reform proposals. Annals of Internal Medicine 141(4):282-291.

Emrich, H. M. 1990. Studies with oxcarbazepine (trileptal) in acute mania. International Clinical Psychopharmacology (5):83-88.

Fava, M., A. J. Rush, M. H. Trivedi, A. A. Nierenberg, M. E. Thase, H. A. Sackeim, F. M. Quitkin, S. Wisniewski, P. W. Lavori, J. F. Rosenbaum, and D. J. Kupfer. 2003. Background and rationale for the sequenced treatment alternatives to relieve depression (STAR*D) study. The Psychiatric Clinics of North America 26(2):x, 457-494.

Fava, M., A. J. Rush, J. E. Alpert, G. K. Balasubramani, S. R. Wisniewski, C. N. Carmin, M. M. Biggs, S. Zisook, A. Leuchter, R. Howland, D. Warden, and M. H. Trivedi. 2008. Difference in treatment outcome in outpatients with anxious versus nonanxious depression: A STAR*D report. American Journal of Psychiatry 165(3):342-351.

Fontanarosa, P. B., and C. D. DeAngelis. 2002. Basic science and translational research in JAMA. Journal of the American Medical Association 287(13):1728.

Gawande, A. 2002. Complications: A Surgeon’s Notes on an Imperfect Science. New York: Metropolitan Books.

———. 2007. Better: A Surgeon’s Notes on Performance. New York: Metropolitan Books.

Haneuse, S. 2008. Adjustment for selection bias in neuropathological studies of dementia; accounting for selection bias in a community-based neuropathological study of dementia. Paper presented at the Alzheimer’s Association International Conference on Alzheimer’s Disease.

Harmon, A. 2008. The DNA age: Gene map becomes a luxury item. New York Times. March 4, 2008.

Hinrichsen, V. L., B. Kruskal, M. A. O’Brien, T. A. Lieu, and R. Platt. 2007. Using electronic medical records to enhance detection and reporting of vaccine adverse events. Journal of the American Medical Informatics Association 14(6):731-735.

IOM (Institute of Medicine). 2000. Extending Medicare Reimbursement in Clinical Trials. Washington, DC: National Academy Press.

Jacobson, P. D., R. A. Rettig, and W. M. Aubry. 2007. Litigating the science of breast cancer treatment. Journal of Health Politics, Policy, and Law 32(5):785-818.

Katzan, I. L., A. J. Furlan, L. E. Lloyd, J. I. Frank, D. L. Harper, J. A. Hinchey, J. P. Hammel, A. Qu, and C. A. Sila. 2000. Use of tissue-type plasminogen activator for acute ischemic stroke: The Cleveland area experience. Journal of the American Medical Association 283(9):1151-1158.

Kohane, I. S., and R. B. Altman. 2005. Health-information altruists—a potentially critical resource. New England Journal of Medicine 353(19):2074-2077.

Kraemer, H. C., G. T. Wilson, C. G. Fairburn, and W. S. Agras. 2002. Mediators and moderators of treatment effects in randomized clinical trials. Archives of General Psychiatry 59(10):877-883.

Laje, G., S. Paddock, H. Manji, A. J. Rush, A. F. Wilson, D. Charney, and F. J. McMahon. 2007. Genetic markers of suicidal ideation emerging during citalopram treatment of major depression. American Journal of Psychiatry 164(10):1530-1538.

Lavori, P. W., A. J. Rush, S. R. Wisniewski, J. Alpert, M. Fava, D. J. Kupfer, A. Nierenberg, F. M. Quitkin, H. A. Sackeim, M. E. Thase, and M. Trivedi. 2001. Strengthening clinical effectiveness trials: Equipoise-stratified randomization. Biological Psychiatry 50(10):792-801.

Lenfant, C. 2003. Shattuck lecture—clinical research to clinical practice—lost in translation? New England Journal of Medicine 349(9):868-874.

Lumley, T. 2002. Network meta-analysis for indirect treatment comparisons. Statistics in Medicine 21(16):2313-2324.

March, J. S., S. G. Silva, S. Compton, M. Shapiro, R. Califf, and R. Krishnan. 2005. The case for practical clinical trials in psychiatry. American Journal of Psychiatry 162(5):836-846.

McKenna, R. J., A. Gelb, and M. Brenner. 2001. Lung volume reduction surgery for chronic obstructive pulmonary disease: Where do we stand? World Journal of Surgery 25: 231-237.

Murphy, S. A., L. M. Collins, and A. J. Rush. 2007. Customizing treatment to the patient: Adaptive treatment strategies. Drug and Alcohol Dependence 88(Suppl 2):S1-S3.

National Cancer Institute Cancer Control PLANET. 2008. Plan, Link, Act, Network with Evidence-based Tools. http://cancercontrolplanet.cancer.gov/ (accessed March 4, 2008).

Neaton, J. D., S. L. Normand, A. Gelijns, R. C. Starling, D. L. Mann, and M. A. Konstam. 2007. Designs for mechanical circulatory support device studies. Journal of Cardiac Failure 13(1):63-74.

Pearson, S. D., F. G. Miller, and E. J. Emanuel. 2006. Medicare’s requirement for research participation as a condition of coverage: Is it ethical? Journal of the American Medical Association 296(8):988-991.

Pineau, J., M. G. Bellemare, A. J. Rush, A. Ghizaru, and S. A. Murphy. 2007. Constructing evidence-based treatment strategies using methods from computer science. Drug and Alcohol Dependence 88(Suppl 2):S52-S60.

Platt, R., R. Davis, J. Finkelstein, A. S. Go, J. H. Gurwitz, D. Roblin, S. Soumerai, D. Ross-Degnan, S. Andrade, M. J. Goodman, B. Martinson, M. A. Raebel, D. Smith, M. Ulcickas-Yood, and K. A. Chan. 2001. Multicenter epidemiologic and health services research on therapeutics in the HMO Research Network Center for Education and Research on Therapeutics. Pharmacoepidemiology and Drug Safety 10(5):373-377.

Psaty, B. M., and C. D. Furberg. 2005. Cox-2 inhibitors—lessons in drug safety. New England Journal of Medicine 352(11):1133-1135.

———. 2007. The record on rosiglitazone and the risk of myocardial infarction. New England Journal of Medicine 357(1):67-69.

Ray, W. A. 2003. Evaluating medication effects outside of clinical trials: New-user designs. American Journal of Epidemiology 158(9):915-920.

Ray, W. A., P. B. Thapa, and P. Gideon. 2002. Misclassification of current benzodiazepine exposure by use of a single baseline measurement and its effects upon studies of injuries. Pharmacoepidemiology and Drug Safety 11(8):663-669.

Rettig, R., P. Jacobson, C. Farquhar, and W. M. Aubry. 2007. False Hope: Bone Marrow Transplantation for Breast Cancer. New York: Oxford University Press.

Riva, A., K. D. Mandl, D. H. Oh, D. J. Nigrin, A. Butte, P. Szolovits, and I. S. Kohane. 2001. The personal internetworked notary and guardian. International Journal of Medical Informatics 62(1):27-40.

Rush, A. 1999. Linking efficacy and effectiveness research in the evaluation of psychotherapies. In Cost Effectiveness of Psychotherapy. A Guide for Practitioners, Researchers and Policymakers, edited by N. Miller and K. Magruder. New York: Oxford University Press. Pp. 26-32.

———. 2005. Algorithm-guided treatment in depression: TMAP and STAR*D. In Therapieresistente depressionen—aktueller wissensstand und leitlinien für die behandlung in klinik und praxis, edited by M. Bauer, A. Berghofer, and M. Adli. Berlin-Heidelberg-New York: Springer. Pp. 459-476.

Rush, A., and D. Kupfer. 1995. Strategies and tactics in the treatment of depression. In Treatments of Psychiatric Disorders. Vol. 1, edited by G. Gabbard and S. Atkinson. Washington, DC: American Psychiatric Press. Pp. 1349-1368.

Rush, A. J., and R. F. Prien. 1995. From scientific knowledge to the clinical practice of psychopharmacology: Can the gap be bridged? Psychopharmacology Bulletin 31(1):7-20.

Rush, A. J., C. M. Gullion, M. R. Basco, R. B. Jarrett, and M. H. Trivedi. 1996. The inventory of depressive symptomatology (IDS): Psychometric properties. Psychological Medicine 26(3):477-486.

Rush, A. J., M. H. Trivedi, H. M. Ibrahim, T. J. Carmody, B. Arnow, D. N. Klein, J. C. Markowitz, P. T. Ninan, S. Kornstein, R. Manber, M. E. Thase, J. H. Kocsis, and M. B. Keller. 2003. The 16-item quick inventory of depressive symptomatology (QIDS), clinician rating (QIDS-C), and self-report (QIDS-SR): A psychometric evaluation in patients with chronic major depression. Biological Psychiatry 54(5):573-583.

Rush, A. J., M. Fava, S. R. Wisniewski, P. W. Lavori, M. H. Trivedi, H. A. Sackeim, M. E. Thase, A. A. Nierenberg, F. M. Quitkin, T. M. Kashner, D. J. Kupfer, J. F. Rosenbaum, J. Alpert, J. W. Stewart, P. J. McGrath, M. M. Biggs, K. Shores-Wilson, B. D. Lebowitz, L. Ritz, and G. Niederehe. 2004. Sequenced treatment alternatives to relieve depression (STAR*D): Rationale and design. Controlled Clinical Trials 25(1):119-142.

Rush, A. J., M. H. Trivedi, S. R. Wisniewski, A. A. Nierenberg, J. W. Stewart, D. Warden, G. Niederehe, M. E. Thase, P. W. Lavori, B. D. Lebowitz, P. J. McGrath, J. F. Rosenbaum, H. A. Sackeim, D. J. Kupfer, J. Luther, and M. Fava. 2006. Acute and longer-term outcomes in depressed outpatients requiring one or several treatment steps: A STAR*D report. American Journal of Psychiatry 163(11):1905-1917.

Rush, A. J., S. R. Wisniewski, D. Warden, J. F. Luther, L. L. Davis, M. Fava, A. A. Nierenberg, and M. H. Trivedi. 2008. Selecting among second-step antidepressant medication monotherapies: Predictive value of clinical, demographic, or first-step treatment features. Archives of General Psychiatry 65(8):870-880.

Schenker, N., and T. E. Raghunathan. 2007. Combining information from multiple surveys to enhance estimation of measures of health. Statistics in Medicine 26(8):1802-1811.

Simon, G. E., M. VonKorff, J. H. Heiligenstein, D. A. Revicki, L. Grothaus, W. Katon, and E. H. Wagner. 1996. Initial antidepressant choice in primary care. Effectiveness and cost of fluoxetine vs tricyclic antidepressants. Journal of the American Medical Association 275(24):1897-1902.

Simons, W. W., K. D. Mandl, and I. S. Kohane. 2005. The ping personally controlled electronic medical record system: Technical architecture. Journal of the American Medical Informatics Association 12(1):47-54.

Smith, N. L., P. J. Savage, S. R. Heckbert, J. I. Barzilay, V. A. Bittner, L. H. Kuller, and B. M. Psaty. 2002. Glucose, blood pressure, and lipid control in older people with and without diabetes mellitus: The Cardiovascular Health Study. Journal of the American Geriatrics Society 50(3):416-423.

Sonnen, J. A., E. B. Larson, P. K. Crane, S. Haneuse, G. Li, G. D. Schellenberg, S. Craft, J. B. Leverenz, and T. J. Montine. 2007. Pathological correlates of dementia in a longitudinal, population-based sample of aging. Annals of Neurology 62(4):406-413.

Spiegelhalter, D. J., K. R. Abrams, and J. P. Myles. 2004. Bayesian Approaches to Clinical Trials and Health-care Evaluation. West Sussex, England: John Wiley & Sons, Ltd.

Stern, S. L., A. J. Rush, and J. Mendels. 1980. Toward a rational pharmacotherapy of depression. American Journal of Psychiatry 137(5):545-552.

Sung, N. S., W. F. Crowley, Jr., M. Genel, P. Salber, L. Sandy, L. M. Sherwood, S. B. Johnson, V. Catanese, H. Tilson, K. Getz, E. L. Larson, D. Scheinberg, E. A. Reece, H. Slavkin, A. Dobs, J. Grebb, R. A. Martinez, A. Korn, and D. Rimoin. 2003. Central challenges facing the national clinical research enterprise. Journal of the American Medical Association 289(10):1278-1287.

Taplin, S. H., L. Ichikawa, M. U. Yood, M. M. Manos, A. M. Geiger, S. Weinmann, J. Gilbert, J. Mouchawar, W. A. Leyden, R. Altaras, R. K. Beverly, D. Casso, E. O. Westbrook, K. Bischoff, J. G. Zapka, and W. E. Barlow. 2004. Reason for late-stage breast cancer: Absence of screening or detection, or breakdown in follow-up? Journal of the National Cancer Institute 96(20):1518-1527.

Thompson, W. W., C. Price, B. Goodson, D. K. Shay, P. Benson, V. L. Hinrichsen, E. Lewis, E. Eriksen, P. Ray, S. M. Marcy, J. Dunn, L. A. Jackson, T. A. Lieu, S. Black, G. Stewart, E. S. Weintraub, R. L. Davis, and F. DeStefano. 2007. Early thimerosal exposure and neuropsychological outcomes at 7 to 10 years. New England Journal of Medicine 357(13):1281-1292.

Trivedi, M. H., A. J. Rush, M. L. Crismon, T. M. Kashner, M. G. Toprac, T. J. Carmody, T. Key, M. M. Biggs, K. Shores-Wilson, B. Witte, T. Suppes, A. L. Miller, K. Z. Altshuler, and S. P. Shon. 2004a. Clinical results for patients with major depressive disorder in the Texas Medication Algorithm Project. Archives of General Psychiatry 61(7):669-680.

Trivedi, M. H., A. J. Rush, H. M. Ibrahim, T. J. Carmody, M. M. Biggs, T. Suppes, M. L. Crismon, K. Shores-Wilson, M. G. Toprac, E. B. Dennehy, B. Witte, and T. M. Kashner. 2004b. The inventory of depressive symptomatology, clinician rating (IDS-C) and self-report (IDS-SR), and the quick inventory of depressive symptomatology, clinician rating (QIDS-C) and self-report (QIDS-SR) in public sector patients with mood disorders: A psychometric evaluation. Psychological Medicine 34(1):73-82.

Trivedi, M. H., M. Fava, S. R. Wisniewski, M. E. Thase, F. Quitkin, D. Warden, L. Ritz, A. A. Nierenberg, B. D. Lebowitz, M. M. Biggs, J. F. Luther, K. Shores-Wilson, and A. J. Rush. 2006a. Medication augmentation after the failure of SSRIS for depression. New England Journal of Medicine 354(12):1243-1252.

Trivedi, M. H., A. J. Rush, S. R. Wisniewski, A. A. Nierenberg, D. Warden, L. Ritz, G. Norquist, R. H. Howland, B. Lebowitz, P. J. McGrath, K. Shores-Wilson, M. M. Biggs, G. K. Balasubramani, and M. Fava. 2006b. Evaluation of outcomes with citalopram for depression using measurement-based care in STAR*D: Implications for clinical practice. American Journal of Psychiatry 163(1):28-40.

Tunis, S. R., D. B. Stryer, and C. M. Clancy. 2003. Practical clinical trials: Increasing the value of clinical research for decision making in clinical and health policy. Journal of the American Medical Association 290(12):1624-1632.

U.S. Department of Health and Human Services. 2008a. American Health Information Community. http://www.hhs.gov/healthit/community/background/ (accessed March 4, 2008).

———. 2008b. Office of the National Coordinator for Health Information Technology: Mission. http://www.hhs.gov/healthit/onc/mission/ (accessed March 4, 2008).

U.S. Department of Health and Human Services, Public Health Service, and Agency for Health Care Policy and Research. 1993. Clinical Practice Guideline, Number 5: Depression in Primary Care: Volume 2. Treatment of Major Depression. Rockville, MD: U.S. Department of Health and Human Services, Public Health Service, and Agency for Healthcare Research and Quality.

Wagner, E. H., P. Barrett, M. J. Barry, W. Barlow, and F. J. Fowler, Jr. 1995. The effect of a shared decisionmaking program on rates of surgery for benign prostatic hyperplasia. Pilot results. Medical Care 33(8):765-770.

Wagner, E. H., S. M. Greene, G. Hart, T. S. Field, S. Fletcher, A. M. Geiger, L. J. Herrinton, M. C. Hornbrook, C. C. Johnson, J. Mouchawar, S. J. Rolnick, V. J. Stevens, S. H. Taplin, D. Tolsma, and T. M. Vogt. 2005. Building a research consortium of large health systems: The cancer research network. Journal of the National Cancer Institute Monographs (35):3-11.

Wideroff, L., A. N. Freedman, L. Olson, C. N. Klabunde, W. Davis, K. P. Srinath, R. T. Croyle, and R. Ballard-Barbash. 2003. Physician use of genetic testing for cancer susceptibility: Results of a national survey. Cancer Epidemiology, Biomarkers & Prevention 12(4):295-303.

Wilensky, G. 2006. Developing a center for comparative effectiveness information. Health Affairs 25(6):w572-w585.

Wisniewski, S. R., M. Fava, M. H. Trivedi, M. E. Thase, D. Warden, G. Niederehe, E. S. Friedman, M. M. Biggs, H. A. Sackeim, K. Shores-Wilson, P. J. McGrath, P. W. Lavori, S. Miyahara, and A. J. Rush. 2007. Acceptability of second-step treatments to depressed outpatients: A STAR*D report. American Journal of Psychiatry 164(5):753-760.

Woolf, S. H. 2008. The meaning of translational research and why it matters. Journal of the American Medical Association 299(2):211-213.

Zerhouni, E. A. 2005a. Translational and clinical science—time for a new vision. New England Journal of Medicine 353(15):1621-1623.

———. 2005b. U.S. biomedical research: Basic, translational, and clinical sciences. Journal of the American Medical Association 294(11):1352-1358.

Zubenko, G. S., M. Wusylko, B. M. Cohen, F. Boller, and I. Teply. 1987. Family study of platelet membrane fluidity in Alzheimer’s disease. Science 238(4826):539-542.

×
Page 317
Suggested Citation:"5 Moving to the Next Generation of Studies." Institute of Medicine. 2010. Redesigning the Clinical Effectiveness Research Paradigm: Innovation and Practice-Based Approaches: Workshop Summary. Washington, DC: The National Academies Press. doi: 10.17226/12197.
×
Page 318
Suggested Citation:"5 Moving to the Next Generation of Studies." Institute of Medicine. 2010. Redesigning the Clinical Effectiveness Research Paradigm: Innovation and Practice-Based Approaches: Workshop Summary. Washington, DC: The National Academies Press. doi: 10.17226/12197.
×
Page 319
Suggested Citation:"5 Moving to the Next Generation of Studies." Institute of Medicine. 2010. Redesigning the Clinical Effectiveness Research Paradigm: Innovation and Practice-Based Approaches: Workshop Summary. Washington, DC: The National Academies Press. doi: 10.17226/12197.
×
Page 320
Suggested Citation:"5 Moving to the Next Generation of Studies." Institute of Medicine. 2010. Redesigning the Clinical Effectiveness Research Paradigm: Innovation and Practice-Based Approaches: Workshop Summary. Washington, DC: The National Academies Press. doi: 10.17226/12197.
×
Page 321
Suggested Citation:"5 Moving to the Next Generation of Studies." Institute of Medicine. 2010. Redesigning the Clinical Effectiveness Research Paradigm: Innovation and Practice-Based Approaches: Workshop Summary. Washington, DC: The National Academies Press. doi: 10.17226/12197.
×
Page 322