
2 Generating Evidence for Decision Making

DOES THE TYPE OF DECISION BEING MADE INFLUENCE THE EVIDENCE NEEDED?

Steven Teutsch, M.D., M.P.H.
County of Los Angeles Department of Public Health

Decisions affecting health care must be acceptable and legitimate to the people they will affect, Teutsch began. The legitimization of health policy decisions requires prospective agreement about the evidentiary standards that will be used. This is a deliberative and inclusive process to develop an understanding of the different types of decisions to be made, and the nature and importance of the evidence that is appropriate for each. There is no simple formula or prescription for decision making. Each decision is based not only on the evidence, but also on the context in which it is being made. Transparency of the process is also important, so that it is clear what information was used in making the decision.

Evidentiary Threshold

The translational process can be viewed as moving from gene discovery to application in a health context, to health practice, and finally to understanding the health impact (Figure 2-1). The critical step in translation is the development of an evidence-based guideline that allows the technology to move from research into clinical or public health practice.

[FIGURE 2-1 The translational process: gene discovery (T1) to health application (T2) to health practice (T3) to health impact (T4), with an evidence-based guideline marking the transition from research to practice. SOURCE: Teutsch, 2009.]

A key question in developing guidelines, Teutsch said, is how high the evidence bar should be. By employing a lower threshold, technologies can move more rapidly from research into practice. The consequences are that less information is available on the clinical validity of the technology, and almost no information is available about clinical use. This lack of information can lead to negative insurance coverage decisions. There is the potential for increased harms because less is known about the technology, but also the potential for increased benefits by providing the technology sooner to those who may need it. Requiring a lower evidentiary bar means a greater dependence on models and expert opinion. Because technologies can enter practice more easily, a lower bar might stimulate innovation, thereby making more technologies available.

If the evidentiary bar is high, more will be known about the validity and utility of the technology, and payers can make better decisions about reimbursement. On the other hand, a higher threshold for evidence makes moving technologies into practice more difficult, which can potentially lower the incentive for innovation. More is known about the technology, resulting in a diminished potential for harms, but it will take longer to bring the product to those who can benefit from it.

When making an evidence-based decision, several questions must be answered:

• What decision must be made?
• How does the nature of that decision affect the evidentiary standards that should be applied?
• What are the relevant contextual issues?
• How will information (both scientific and contextual) be integrated and applied?
• What processes are needed to legitimize the decision process?

There is a dynamic relationship between evidence-based decision making and evidence review and synthesis (Figure 2-2). Decisions may pertain to regulation, coverage, guidelines, quality improvement metrics (e.g., pay-for-performance), or individual care decisions made by a clinician and/or patient. The decision maker should first frame the key questions to be answered and determine the level of rigor required. Then evidence reviewers should synthesize data from studies as well as desired economic information. With quantitative scientific evidence in hand, the decision makers should also consider budget constraints, values and preferences, equity issues, acceptability, and other contextual issues before making a decision.

[FIGURE 2-2 Dynamic relationship between evidence review and synthesis and evidence-based decision making. Framing of key questions and required rigor feeds an evidence review drawing on studies and economic information; the resulting synthesis informs decisions (coverage, regulations, guidelines, physician and patient decisions), alongside contextual inputs such as budget constraints, values/preferences, equity, and acceptability. SOURCE: Teutsch and Berger, 2005.]
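To make the sequence in Figure 2-2 concrete, here is a minimal Python sketch of how a decision-making workflow along these lines might be organized. All names (DecisionInputs, make_decision) and the placeholder logic are illustrative assumptions of this example, not anything specified at the workshop.

```python
from dataclasses import dataclass, field

@dataclass
class DecisionInputs:
    """Hypothetical container for the inputs flowing through Figure 2-2."""
    key_questions: list = field(default_factory=list)      # framed by the decision maker
    rigor_required: str = ""                                # e.g., "high" for prevention
    evidence_summary: str = ""                              # synthesized studies + economics
    contextual_issues: list = field(default_factory=list)   # budget, values, equity, ...

def make_decision(decision_type: str, inputs: DecisionInputs) -> str:
    """Walk the loop: framed questions and required rigor drive an evidence
    review, and contextual issues are weighed before the decision is made."""
    if not inputs.key_questions or not inputs.rigor_required:
        return f"{decision_type}: frame key questions and required rigor first"
    if not inputs.evidence_summary:
        return f"{decision_type}: defer -- evidence not yet synthesized"
    # Evidence alone is not sufficient; contextual issues are integrated here.
    context = ", ".join(inputs.contextual_issues) or "none recorded"
    return f"{decision_type}: decide on evidence plus context ({context})"

# Example: a coverage decision with budget and equity concerns in play.
coverage = DecisionInputs(
    key_questions=["Does the test improve health outcomes?"],
    rigor_required="high",
    evidence_summary="two RCTs and one cohort study",
    contextual_issues=["budget constraints", "equity"],
)
print(make_decision("coverage", coverage))
```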

Quantitative Information for Decision Making

Quantitative information needed for decision making includes data on effectiveness, such as the level of certainty there will be an impact, and the magnitude of the effect, or net benefit. Cost and cost-effectiveness data are also important, as are any data regarding how the new technology compares to existing alternatives. Clinical effectiveness and cost effectiveness are usually assessed in relationship to therapeutic or diagnostic alternatives.

A matrix, such as the one under development by America’s Health Insurance Plans, can be useful to help payers compare two technologies with regard to net benefit and certainty (Figure 2-3). Technologies that have large net benefit and high certainty would be good candidates for coverage. On the other hand, products with limited or low certainty and equal net benefit are not ready for broad use. Some technologies will offer incremental benefits with high certainty, and others will be unproven but have potential. Different insurance groups are likely to make different coverage decisions. Payers should be able to articulate what their criteria are, or how high the evidentiary bar is going to be, so a technology developer can decide whether to invest in developing the technology.

[FIGURE 2-3 Comparative clinical effectiveness matrix: certainty (high, limited, or low) plotted against net benefit (equal, small, or large). High-certainty cells are labeled comparable, incremental, and superior; limited certainty is labeled unproven/potential; low certainty is labeled uncertain. SOURCE: Developed by the America’s Health Insurance Plans (AHIP) Evidence Based Medicine Roadmap Group; personal communication, S. Pearson, Institute for Clinical and Economic Review (ICER), July 9, 2009.]
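As an illustration only, the matrix in Figure 2-3 can be encoded as a simple lookup. The cell labels come from the figure; the dictionary structure and the rate_technology helper are assumptions of this sketch.

```python
# Cells of the Figure 2-3 matrix, keyed by certainty and then net benefit.
# In the figure, the limited- and low-certainty rows span all three
# net-benefit columns, so each maps to a single label here.
EFFECTIVENESS_MATRIX = {
    "high": {"equal": "comparable", "small": "incremental", "large": "superior"},
    "limited": dict.fromkeys(["equal", "small", "large"], "unproven/potential"),
    "low": dict.fromkeys(["equal", "small", "large"], "uncertain"),
}

def rate_technology(certainty: str, net_benefit: str) -> str:
    """Look up the matrix cell for a (certainty, net benefit) pair."""
    return EFFECTIVENESS_MATRIX[certainty][net_benefit]

# A high-certainty, large-net-benefit technology is a strong coverage
# candidate; a low-certainty one is not ready for broad use regardless
# of apparent benefit.
assert rate_technology("high", "large") == "superior"
assert rate_technology("low", "large") == "uncertain"
```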

The key effectiveness questions relate to the following:

• Efficacy: Can the technology work in controlled conditions?
• Harms: What are the possible harms?
• Effectiveness: Does it work in practice?
• Trade-offs: What is the balance of harms and benefits?
• Comparative effectiveness: Does it work better than alternatives currently in use?
• Subpopulations: Are there specific groups for whom it is likely to be a technology of choice?

As one example of a framework to determine how high the evidentiary bar should be for clinical management decisions, Teutsch cited the work of Djulbegovic and colleagues (2005) on cancer. The framework lays out proposed evidentiary standards for clinical applications as a function of treatment goals and acceptable regret. Considering the various goals of treatment (including cancer prevention in healthy individuals, palliative therapies, procedures that offer incremental improvement in terms of survival, or curative measures), how much certainty is needed before a technology should be used? How much regret will there be if the technology used is ineffective or even harmful? In the prevention arena, Teutsch said, the evidentiary bar is very high because the interventions are being delivered to people who are otherwise healthy.

The Evaluation of Genomic Applications in Practice and Prevention (EGAPP) working group, established by the Centers for Disease Control and Prevention, recently published its methods for evidence-based evaluation of genetic tests (Teutsch et al., 2009). Genome-based products first were categorized by application: diagnostic, screening, risk assessment and susceptibility, prognostic, or predicting therapeutic response. EGAPP then established the criteria that would be used when assessing clinical validity and utility issues (Table 2-1).

One approach to answering the quantitative questions is the ACCE model for evaluating data on emerging genetic tests. The model breaks down the information needed into four main areas (from which the name is derived): Analytic validity, Clinical validity, Clinical utility, and Ethical, legal, and social implications (Haddow and Palomaki, 2004). At the center of the circle in Figure 2-4 is the disorder to which the genetic test will be applied, and the setting in which the testing will be done. From there, an analytic framework is constructed by answering more than 40 targeted questions across the 4 areas.

EGAPP has been working within the ACCE framework to articulate the evidentiary standards that could or should be applied to evaluation of genetic tests. Table 2-2 presents a hierarchy of data sources and study designs for the analytic validity, clinical validity, and clinical utility components of evaluation.

TABLE 2-1 Categories of Genetic Test Applications and Some Characteristics of How Clinical Validity and Utility Are Assessed

Diagnosis
  Clinical validity: Association with disorder
  Clinical utility: Improved clinical outcomes; usefulness for decision making; end of diagnostic odyssey

Disease screening
  Clinical validity: Association with disorder
  Clinical utility: Improved health outcomes; usefulness for decision making

Risk assessment/susceptibility
  Clinical validity: Association with future disorder
  Clinical utility: Improved health outcomes

Prognosis of diagnosed disease
  Clinical validity: Association with natural history
  Clinical utility: Improved health outcomes, or outcomes of value to patients, based on changes in patient management

Predicting treatment response
  Clinical validity: Association with a state that relates to drug efficacy or adverse drug experiences
  Clinical utility: Improved health outcomes or adherence based on drug selection or dosage

SOURCE: Adapted from Teutsch et al., 2009.

[FIGURE 2-4 The ACCE method for multidisciplinary evaluation of genetic tests. The disorder, health risks, and setting sit at the center; surrounding elements include analytic validity (assay robustness, analytic sensitivity and specificity, quality control), clinical validity (natural history, prevalence, penetrance, clinical sensitivity and specificity, PPV, NPV, pilot trials), clinical utility (effective intervention/benefit, quality assurance, economic evaluation, monitoring and evaluation, education, facilities), and ethical, legal, and social implications (safeguards and impediments). SOURCE: CDC, 2007.]
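Purely as a sketch, the four ACCE domains in Figure 2-4 could be organized as a checklist data structure. The sample questions below merely paraphrase elements shown in the figure; the names and the open_questions helper are hypothetical.

```python
# Hypothetical checklist keyed by the four ACCE domains; the full model
# poses many more targeted questions than the samples listed here.
ACCE_DOMAINS = {
    "analytic validity": [
        "What are the assay's analytic sensitivity and specificity?",
        "How robust is the assay, and what quality control is in place?",
    ],
    "clinical validity": [
        "How strong is the association between test result and disorder?",
        "What are the clinical sensitivity, specificity, PPV, and NPV?",
    ],
    "clinical utility": [
        "Is there an effective intervention (benefit) tied to the result?",
        "What economic evaluation, monitoring, and education exist?",
    ],
    "ethical, legal, and social implications": [
        "What safeguards and impediments apply?",
    ],
}

def open_questions(answered: dict) -> list:
    """Return ACCE questions not yet answered for a given test evaluation.

    `answered` maps a domain name to a set of answered question strings.
    """
    return [
        q
        for domain, questions in ACCE_DOMAINS.items()
        for q in questions
        if q not in answered.get(domain, set())
    ]

# Early in an evaluation, most of the framework is still open.
first = {ACCE_DOMAINS["analytic validity"][0]}
print(len(open_questions({"analytic validity": first})))  # -> 6
```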

TABLE 2-2 Hierarchies of Data Sources and Study Designs for the Components of Evaluation

Level 1
  Analytic validity: Collaborative study; summary data from well-designed external proficiency testing
  Clinical validity: Well-designed longitudinal cohort studies; validated clinical decision rule
  Clinical utility: Meta-analysis of RCTs

Level 2
  Analytic validity: Other proficiency testing; well-designed peer-reviewed studies; expert panel reviewed FDA summaries
  Clinical validity: Well-designed case-control studies
  Clinical utility: A single RCT

Level 3
  Analytic validity: Less well-designed peer-reviewed studies
  Clinical validity: Lower quality case-control and cross-sectional studies; unvalidated clinical decision rule
  Clinical utility: Controlled trial without randomization; cohort or case-control study

Level 4
  Analytic validity: Other research, clinical laboratory, or manufacturer data; studies on performance of the same basic methodology
  Clinical validity: Case series; other research, clinical laboratory, or manufacturer data; consensus guidelines; expert opinion
  Clinical utility: Case series; other studies, clinical laboratory, or manufacturer data; consensus guidelines; expert opinion

SOURCE: Teutsch, 2009.

Looking at clinical utility, for example, meta-analysis of randomized controlled trials (RCTs) would be the strongest form of evidence. A good single RCT may be adequate, but less strong. The list then covers other study designs that are progressively less desirable, such as controlled trials that are not randomized, or cohort studies, with case series or expert opinion being the least desirable form of evidence.
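To show how the clinical utility column of Table 2-2 might be applied mechanically, here is a minimal sketch. The level numbers come straight from the table; the dictionary and the strongest_evidence helper are assumptions of this example.

```python
# Clinical utility column of Table 2-2; level 1 is the strongest evidence.
CLINICAL_UTILITY_LEVEL = {
    "meta-analysis of RCTs": 1,
    "single RCT": 2,
    "controlled trial without randomization": 3,
    "cohort or case-control study": 3,
    "case series": 4,
    "consensus guidelines": 4,
    "expert opinion": 4,
}

def strongest_evidence(designs: list) -> str:
    """Return the highest-ranked (lowest-numbered) design in hand."""
    return min(designs, key=CLINICAL_UTILITY_LEVEL.__getitem__)

# With one RCT amid weaker designs, the RCT anchors the utility review.
print(strongest_evidence(["case series", "single RCT", "expert opinion"]))
```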

Contextual Information for Decision Making

Numerous contextual issues can inform the decision to introduce a test into practice. Clinical applications differ widely, and it is important to consider the severity of the condition, subgroup differences, the availability of alternatives, the severity and frequency of harms, and the risk of overuse or inappropriate use of the test. Economics is also considered from a contextual perspective. Many decision makers are interested not only in cost-effectiveness, but also in budget impact, budget constraints, and value. Legal and ethical considerations include federal and state regulatory constraints, as well as issues of precedent, and regret as a result of introducing or not introducing a test. Feasibility of the test in question refers to the current level of use, the infrastructure required to use the test properly, and the acceptability of the test to all partners and stakeholders, particularly patients. Decisions should be made in the context of the preferences and values of those who will be affected by them. Finally, there are administrative issues, such as options for targeting or limiting the use of the test to patients who would benefit most, and how to consider possible further evidence.

Decision-Factor Matrix

In the end, Teutsch said, a systematic process is needed to ensure fairness and reasonableness in decision making. This process includes: clear “rules of the road” for the technology developers, patient advocacy groups, and others; a deliberative process incorporating both quantitative and qualitative or contextual information; transparency; and an appeals process so that when other issues arise, they can be addressed and the decision changed where appropriate.

Teutsch presented a draft of a decision matrix, plotting different decisions that are likely to be made for any test or technology against a set of quantitative and qualitative information that might need to be generated. His example (Figure 2-5) suggests that a regulator may be primarily interested in efficacy, safety, and the legal and ethical constraints. These aspects, however, would be less likely to impact individual decisions. Rather, effectiveness, as well as cost, may be of great interest in practice. Each type of user will have important criteria, some secondary considerations, and other information that may not be directly relevant. The important point, Teutsch said, is that different decision makers require different kinds of information, and it is important to be able to generate that information for them.
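As an illustrative sketch only, such a decision-factor matrix could be recorded as a nested mapping from factor to decision type to weight. The cells filled in below follow the regulator and individual-decision examples in the text; everything else about the code is assumed.

```python
# Rows are evidence/context factors; columns are decision types; each cell
# records whether the factor is a "primary", "secondary", or "minor"
# consideration (mirroring the white / light grey / dark grey shading of
# Figure 2-5). Only cells stated in the text are filled in here.
DECISION_FACTOR_MATRIX = {
    "efficacy":      {"regulation": "primary", "individual decisions": "minor"},
    "safety":        {"regulation": "primary", "individual decisions": "minor"},
    "legal/ethical": {"regulation": "primary"},
    "effectiveness": {"individual decisions": "primary"},
    "cost/cost-effectiveness": {"individual decisions": "primary"},
}

def considerations(decision_type: str) -> dict:
    """Collect the recorded weight of each factor for one decision type."""
    return {
        factor: weights[decision_type]
        for factor, weights in DECISION_FACTOR_MATRIX.items()
        if decision_type in weights
    }

# A regulator's primary considerations, per Teutsch's example:
print(considerations("regulation"))
# -> {'efficacy': 'primary', 'safety': 'primary', 'legal/ethical': 'primary'}
```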

In refining the approach to standards of evidence, Teutsch said in conclusion, it will be important to rethink the hierarchy of evidence in terms of the many different applications and new types of evidence. When is it appropriate to use predictive modeling, for example? Another critical issue is how research efforts are aligned with application needs. The evolving role of observational data must be accommodated, and appropriate methods must be used to make better decisions when the evidence is insufficient.

[FIGURE 2-5 Example of a hypothetical decision-factor matrix. Rows list efficacy, safety, effectiveness, comparative effectiveness, cost/cost-effectiveness, clinical situation, legal/ethical issues, values/preferences, administrative feasibility (e.g., limiting coverage to people who meet specific criteria), and stakeholders; columns list regulation, coverage, guidelines, quality improvement, and individual decisions. Cells are shaded white (primary consideration), light grey (secondary consideration), or dark grey (minor or no consideration). SOURCE: Teutsch, 2009.]

DISCUSSION

Wylie Burke, M.D., Ph.D.
Moderator

A question was asked as to whether the appeals process mentioned by Teutsch would address passive challenges, such as a need for change identified as a result of horizon scanning, as well as active challenges. Teutsch responded that there may be information that was not taken into consideration in the original decision, and the appeals process can help address that issue. But in general, one should be proactive about the information generation process. In trial design, for example, it is important to ensure representation from the appropriate groups, and that may require participation of the affected groups in the development of the study.

A participant noted that the methodology outlined focuses on the test or the technology itself, and asked if the questions would change when the focus was on whether or not to screen for a condition. Teutsch responded that one needs to have a specific clinical scenario in mind, and that assessments should not be done in the abstract.

Another participant expressed concern about the decision matrices considering low efficacy and harm as if they were similar in impact, and suggested that a distinction be made. Teutsch said the vocabulary varies, but in his perspective, efficacy refers to benefits, and effectiveness refers to the balance of the benefits and potential harms. On some occasions, risk of substantial harm may be acceptable because of the potential for substantial benefits, while at other times the equation will be different. He agreed there is a need to be clear about whether one is talking about benefits or harms, and to whom they accrue.
