APPENDIX C
Approaches to Evaluation Design

The Institute of Medicine committee recommends that the Centers for Medicare and Medicaid Services (CMS) undertake formal evaluations to assess the Quality Improvement Organization (QIO) program as a whole, as well as the effectiveness of the individual interventions occurring within the work of the contract, as discussed in Recommendation 7 in Chapter 5. This recommendation is important for many reasons, including the need to evaluate whether goals have been reached, as well as to learn which types of interventions are most effective.

Many types of study designs exist, and each one has its own strengths and weaknesses. Challenges exist with all design types, including the identification of "cases" and "controls" (sampled from appropriate providers), refined definition of the "disease" (quality improvement), and confounding factors (such as the voluntary nature of the program). One overarching limitation is that the QIO program itself is voluntary and thus presents issues of selection bias.

The following sections present a brief definition of each design model, some strengths and weaknesses of the design model, and specific suggestions as to how the study design might be applied to the QIO program. Multiple studies will be needed because of the complexity of the QIO program and the multitude of provider settings and intervention targets. CMS will need to consider not only how to use these studies to evaluate individual interventions but also how to assess the program as a whole, which will be much more complicated and which will require multiple approaches.
CASE-CONTROL STUDY DESIGN

A case-control study is a retrospective study that attempts to link an effect to a cause. In the typical clinical study, one might look at the relationship between exposure to a drug and the risk of cancer. In such an example, one assembles "cases" (those who have the disease) and "controls" (those who do not have the disease). Cases and controls are then compared with respect to their "exposures" to relevant agents that might be causally related to the disease. One then obtains the relative odds of exposure to an agent among those who have the disease compared with those who do not (the odds ratio, which approximates the relative risk when the disease is uncommon).

Given this framework, the case-control design can be used to evaluate the impacts of QIO interventions in the following way: the selected cases would demonstrate improvements in quality (the "disease"), whereas the controls would not demonstrate any improvements. The cases and the controls are both providers. The exposure is QIO intervention activities. A population-based case-control study would examine cases with the disease from a specified geographic area, with the controls also sampled from the same area. A hospital-based case-control study would sample the cases and the controls from the same hospital.

One needs to control for any other provider or environmental characteristics that could confound the results, such as factors that might be independent predictors of improvements in quality, independent of the QIOs themselves. One would thus need to make sure that the cases and the controls match on variables that are related to improvements in quality, such as participation in other quality improvement efforts or willingness to participate. (This might mean that both cases and controls would have to be sampled from a population of providers who volunteered to work with QIOs.) The result of the case-control study is an estimate of the relative risk of demonstrating improvements in quality among providers exposed to the QIO program.
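The calculation described above can be illustrated with a hypothetical 2x2 table. In a case-control design the quantity actually computed is the odds ratio, which approximates the relative risk when the "disease" (improvement) is uncommon. All counts below are invented for illustration, not drawn from the report:

```python
# Hypothetical 2x2 table for a case-control evaluation of QIO exposure.
# "Cases" are providers that improved on a quality measure; "controls" did not.

def odds_ratio(exposed_cases, unexposed_cases, exposed_controls, unexposed_controls):
    """Odds ratio of exposure (QIO participation) among improvers vs. non-improvers."""
    return (exposed_cases * unexposed_controls) / (unexposed_cases * exposed_controls)

# Example: 40 of 100 improving providers worked with a QIO,
# versus 25 of 100 non-improving providers.
or_estimate = odds_ratio(40, 60, 25, 75)
print(round(or_estimate, 2))  # 2.0
```

An odds ratio above 1 would suggest that improving providers were more likely to have been exposed to QIO assistance, subject to the confounding caveats discussed above.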
One challenge is how the "disease" is defined. Several options exist and would best be tailored to the specific goals described in the scope of work (SOW). For example, if one of the goals of the SOW is to improve the rate of use of beta-blockers in hospitalized patients with myocardial infarction, that outcome would be the equivalent of the "disease" under study. For a program as complex as the QIO program, a number of key outcomes need to be evaluated, so more than one study is needed. Therefore, the SOW must clearly define these desired outcomes so that an evaluation can truly represent what the QIOs were charged with accomplishing.

Another challenge in designing the case-control study (as is also the case in many of the study designs described below) will be to identify the cases and the controls. More than one study is necessary to focus on the different settings and groups that are "exposed" to the QIOs. Cases and
controls must be sampled from the same provider setting at the state, national, or other level, as appropriate.

Because the QIO program is multidimensional, the "exposure" aspect of the study will have to be carefully developed. For example, an "exposure" might be whether the providers have been engaged in projects with QIOs. The other essential aspect of defining the exposure would be to achieve more granularity, which would include the quality, intensity, and characteristics of the QIO interventions. The more precise the definition of this exposure becomes, the lower the risk that confounding variables will bias the results.

Confounding factors have made it difficult to evaluate quality improvement interventions that are multifaceted and that take place in dynamic, complex systems environments. Cases and controls must be matched on variables that, independently of the QIO program, might lead to improvements in quality. Examples include public reporting and payment policies. Other variables on which cases and controls may be matched are those that may not necessarily be causally related to quality improvement, such as provider size and provider location.

RANDOMIZED CONTROLLED TRIAL

In a randomized controlled trial, researchers randomly assign patients to an experimental intervention or an alternative treatment (placebo or standard treatment). Random treatment allocation controls for potential measured and unmeasured confounding factors, making this experimental design the "gold standard" for the evaluation of treatments if it is properly powered and well performed (Cook et al., 1995).

In the QIO program, a randomized controlled trial could be done on a large scale. Again, the variable of "willingness to participate" must be considered.
If the entire population to be randomized includes only those providers who are willing to participate with the QIOs, then providers who want to work on quality improvement but who are assigned to the control group would have to be willing to not receive QIO assistance.

The use of a randomized controlled trial design in the evaluation of quality improvement interventions faces other challenges. First, the unit of intervention is often at the provider, clinic, or hospital level, so the level of randomization must also be at this higher level. Thus, the availability of a study sample that is large enough to adequately test the intervention can be an issue. Second, interventions cannot be blinded to the subjects receiving them or to those delivering the assistance. As discussed above, the "treatment" is assistance from the QIO, and the "control" equals no QIO assistance. Thus, care must be taken to control for cross-contamination of the control arm
(receipt of assistance from other sources) as well as to avoid bias in the evaluation of study end points. Other limitations related to confounding factors may also apply, as may the factor of "readiness for change."

The literature includes a growing number of examples of randomized controlled trials of quality improvement interventions. Kiefe and colleagues performed a successful randomized controlled trial of provider feedback among clinicians in Alabama (Kiefe et al., 2001). Similarly, Ferguson and colleagues performed a successful national randomized controlled trial of bypass surgery quality interventions to promote the adoption of process measures among 359 hospitals (Ferguson et al., 2003).

NONEQUIVALENT CONTROL GROUP STUDY DESIGN

In the nonequivalent control group design, subjects are not randomly assigned to a control or an experimental group. Instead, an intervention group is chosen, and a second group not receiving the intervention is chosen as a control group. The primary risk associated with this design is that the control group may be far from equivalent to the experimental group. The prototypical use of this design has been in the education field, in which one classroom is used as the experimental group and another is used as the control group. In those cases it is assumed that students are randomly assigned to the classrooms, and hence, there is good reason to believe in the strong similarity among groups.

This study design might be applicable to evaluations of intervention assistance in the QIO program, since participation in the "experiment" is voluntary and those not asking for assistance can be used as the control group. Randomization to the provision of provider assistance does not occur, and although the participating and nonparticipating provider groups are similar, they will not have the exact same characteristics.
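As a rough illustration of how a comparison group might be assembled when groups cannot be made equivalent by randomization, the sketch below pairs each intervention provider with its nearest neighbor on observed characteristics. All provider IDs, feature names, and values are hypothetical; in practice the features would be standardized (here raw bed counts dominate the distance) and matching would be done with dedicated statistical tooling:

```python
# Minimal sketch: build a comparison group by greedy nearest-neighbor matching
# on observed provider characteristics. Data and field names are hypothetical.

def match_controls(intervention, candidates, features):
    """Pair each intervention provider with the closest unmatched candidate,
    using squared Euclidean distance over the listed feature keys."""
    unmatched = list(candidates)
    pairs = []
    for p in intervention:
        best = min(unmatched, key=lambda c: sum((p[f] - c[f]) ** 2 for f in features))
        unmatched.remove(best)  # each candidate can be matched only once
        pairs.append((p["id"], best["id"]))
    return pairs

intervention = [{"id": "A", "beds": 120, "baseline": 0.60},
                {"id": "B", "beds": 40,  "baseline": 0.75}]
candidates = [{"id": "X", "beds": 45,  "baseline": 0.74},
              {"id": "Y", "beds": 115, "baseline": 0.62},
              {"id": "Z", "beds": 300, "baseline": 0.50}]
print(match_controls(intervention, candidates, ["beds", "baseline"]))
# [('A', 'Y'), ('B', 'X')]
```

Balance on the matching variables (and any others thought to predict improvement) would still need to be checked after matching, since unmeasured differences remain the central weakness of this design.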
One strategy might be to compare two regions that are very similar (e.g., in socioeconomic status), randomly select the region that would receive the intervention, and use the other region as a nonequivalent control group. Provider settings such as nursing homes are not randomly assigned to a geographic region. Hence, the dissimilarity risk is much higher, and it would be important to carefully choose the regions so that the nursing homes and the patients are as similar as possible.

This can be a powerful design when there is good reason to believe in the similarity of the regions and the likelihood exists that external forces would not affect one region differently from the other. However, this assumption may not hold for many QIOs. For example, a QIO might be asked to evaluate the effectiveness of an intervention to reduce pressure sores in nursing home patients. If a state is large enough, the intervention
could be initiated with nursing homes in one region of the state acting as the experimental group and nursing homes in another region acting as the control group by not participating in the intervention. This might also be applied to the comparison of the results for a region in one state with the results for a region in another state. The intervention must be carefully documented as to what was actually done and, to the extent possible, must be standardized during both time periods. Furthermore, many of the confounding variables (including readiness for change) and the difficulties with definitions discussed in the previous examples apply to this type of study design as well.

CROSSOVER STUDY DESIGN

In a crossover study, researchers randomly assign half of the intervention cases to receive a treatment initially, while the other half is used as a control. After some period of time, the control group begins to receive the treatment.

In the QIO program, the crossover study could be used for all the providers who request QIO assistance. Specifically, among those providers requesting assistance, QIOs could randomly assign half of the providers to receive the assistance intervention in the first year, and at the end of the year the evaluation could assess the impact of the intervention on those providers compared with outcomes for the providers that did not receive the assistance. Then, in the second year the two groups "cross over," with the second group receiving the intervention assistance, followed by an assessment of its impact.

This design is likely to be particularly useful because at least some, if not all, of the QIOs do not have the resources and staff needed to meet a large demand for the provision of technical assistance all at once.
By staggering assistance activities, it becomes possible not only to target the resources so that an intervention can be implemented well but also, at the same time, to create a control group for a more rigorous assessment of whether the intervention makes a difference.

In this case, as discussed for the other examples, the successful implementation of such a design requires a sufficient number of providers, half of whom are willing to wait for the intervention assistance. Checks need to be made to be sure that the randomly assigned providers are comparable on important characteristics (as discussed in previous examples). Also, the intervention must be carefully documented as to what was actually done and, to the extent possible, standardized during both time periods. To induce those providers who do not receive the initial intervention assistance to participate in the evaluation, CMS might consider providing some financial assistance to the groups agreeing to participate.
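The year-by-year assignment described above can be sketched as a simple randomized split of the volunteering providers. Provider IDs and the random seed below are arbitrary placeholders:

```python
import random

# Sketch of a two-period crossover assignment among providers that volunteered
# for QIO assistance: half receive assistance in year 1 while the rest serve
# as controls, then the groups cross over in year 2.

def crossover_schedule(providers, seed=0):
    """Randomly split providers into a year-1 intervention group and a
    year-1 control group that crosses over to the intervention in year 2."""
    rng = random.Random(seed)      # fixed seed makes the assignment auditable
    shuffled = providers[:]
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return {"year_1": sorted(shuffled[:half]), "year_2": sorted(shuffled[half:])}

schedule = crossover_schedule([f"NH-{i:02d}" for i in range(1, 9)])
print(schedule)
```

After randomization, the two groups would still be compared on important baseline characteristics, as noted above, before attributing year-1 differences to the intervention.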
QUALITATIVE RESEARCH AND ANALYSIS

Qualitative research is an approach to data collection and analysis that focuses on understanding the particularities of specific situations and streams of events by using open-ended data collection methods, such as interviews, focus groups, and observations; by generating highly detailed and contextualized descriptions; and by analyzing data, which are typically in the form of text and, sometimes, images, to identify patterns and themes. All of the previously described studies should include qualitative analysis as a part of their design.

Two significant uses of qualitative methods advance understanding of the effectiveness of QIOs in general and of the specific interventions used by QIOs. The first is to use qualitative methods, alone or in combination with other data collection methods, to document in detail the implementation of interventions. The second is to use qualitative methods to explore the institutional and community environments in which the QIOs work; the characteristics of these environments can be viewed as "covariates" of the QIOs' ability to make progress.

The purpose of documentation is twofold: first, to support the replication (or avoidance) of particular interventions, and second, to assess the "fidelity" of the intervention in comparison with its intent. For example, if a QIO uses a collaborative to promote quality improvement activities on a specific aspect of performance, it would be useful to document exactly how the collaborative operated, including how the institutions and the participants in the collaborative were recruited, the content of their interactions with the QIO and with each other, and the experiences that they report as a result of their participation. Designs of collaboratives vary widely; therefore, if a particular collaborative method is effective, the design should be assessed, disseminated, and replicated.
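The identification of "patterns and themes" described above is often supported by simple tooling once transcripts have been hand-coded. The sketch below tallies coded segments across sites; all site names and theme codes are hypothetical:

```python
from collections import Counter

# Sketch: after interview transcripts are hand-coded with themes, a tally
# across sites helps surface recurring patterns. Codes are hypothetical.

coded_segments = [
    {"site": "Hospital A", "theme": "leadership support"},
    {"site": "Hospital A", "theme": "data feedback"},
    {"site": "Hospital B", "theme": "leadership support"},
    {"site": "Hospital C", "theme": "staff turnover"},
    {"site": "Hospital C", "theme": "leadership support"},
]

theme_counts = Counter(seg["theme"] for seg in coded_segments)
print(theme_counts.most_common(1))  # [('leadership support', 3)]
```

Such counts only summarize the coding; the analytic weight in qualitative work lies in the coding scheme and the contextualized descriptions themselves.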
Variation in the circumstances of interventions is to be expected. Some variation will be inconsequential, but it is possible that other variations will significantly influence the outcomes. A sophisticated evaluation design involves qualitative documentation of the implementation, such that outcomes can be linked to implementation in a systematic (although not necessarily quantitative) way.

Qualitative work is often used in situations in which no clear hypotheses exist about the factors that influence both processes and outcomes, or when there is no valid or reliable method for the measurement of those factors. A good example of the use of qualitative methods is the work of Bradley and colleagues in their study of the institutional factors that influenced hospitals' successful quality improvement efforts to promote the use of beta-blockers (Bradley et al., 2001). Using these methods, this research team made a substantial contribution to early knowledge of these factors. It is noteworthy that work like this can in fact be replicated and, after a time, be used as a foundation for more quantitative measurement and analysis.

SUMMARY

As discussed here, CMS and the QIOs may design multiple types of studies, including those discussed above, to evaluate the effectiveness of interventions and the success of the QIOs. Considering the complexity of the QIO program and the environment in which it operates, no one study type is without challenges or weaknesses. In fact, combined approaches might compensate for some of those weaknesses. Unlike studies of clinical disease or environmental exposure, these studies are confounded by the voluntary nature of the program, the differences in provider settings, variations among interventions, and the ethical issues of having control groups that are denied assistance for quality improvement. Although studies of individual, specific interventions could be done with relative ease, designing a comprehensive evaluation of the program overall for the 9th SOW and beyond will be most challenging for the reasons mentioned here. Several different types of study design may need to be included to obtain an accurate picture of the overall success of the program. Each type of study design should be considered and should be employed with rigor. Although these evaluations are difficult to design, they provide important, ongoing feedback for the management of the program as well as contribute valuable information to the quality improvement community as a whole.

REFERENCES

Bradley EH, Holmboe ES, Mattera JA, Roumanis SA, Radford MJ, Krumholz HM. 2001. A qualitative study of increasing beta-blocker use after myocardial infarction: Why do some hospitals succeed? Journal of the American Medical Association 285(20):2604-2611.

Cook DJ, Sackett DL, Spitzer WO. 1995. Methodologic guidelines for systematic reviews of randomized control trials in health care from the Potsdam consultation on meta-analysis.
Journal of Clinical Epidemiology 48(1):167-171.

Ferguson TB Jr, Peterson ED, Coombs LC, Eiken MC, Carey ML, Grover FL, DeLong ER. 2003. Use of continuous quality improvement to increase use of process measures in patients undergoing coronary artery bypass graft surgery: A randomized controlled trial. Journal of the American Medical Association 290(1):49-56.

Kiefe CI, Allison JJ, Williams OD, Person SD, Weaver MT, Weissman NW. 2001. Improving quality improvement using achievable benchmarks for physician feedback: A randomized controlled trial. Journal of the American Medical Association 285(22):2871-2879.