Confronting Chronic Neglect: The Education and Training of Health Professionals on Family Violence

5 Evaluation of Training Efforts

As summarized in the earlier chapters, there has been some increased attention paid to training health care providers about child abuse and neglect, intimate partner violence, and, to a lesser extent, elder maltreatment. Descriptions of family violence curricula and training models for health professionals and experiences with their implementation have been published (e.g., Dienemann et al., 1999; Ireland and Powell, 1997; Spinola et al., 1998; Thompson et al., 1998; Wolf and Pillemer, 1994). Attempts have been made to document the extent to which clinicians actually receive instruction in how to identify and respond to patients involved in these situations. Surveys of practicing clinicians have found that considerable proportions of health professionals have had little or no training in this area. Some studies have found modest positive correlations between individuals' reported involvement in training and their family violence assessment and management practices (Currier et al., 1996; Flaherty et al., 2000; Lawrence and Brannen, 2000; Tilden et al., 1994). Although this observed relationship cannot be mistaken for evidence that these practices are a direct product of training, it does suggest the value of a more careful examination of what is known about the effectiveness of family violence curricula and other training strategies on clinician behaviors, and it indicates the need for more explicit examination of causation. At present, claims regarding what training is needed and how it should be carried out far outnumber the studies that provide empirical evidence to support them. As in many other areas of health professional training, several factors likely contribute to this shortage of information. For example, accreditation criteria and other pressures on health professional schools place constraints
on curricular content; limited funding interferes with evaluation; and legal, ethical, and patient barriers complicate evaluation efforts (e.g., Gagan, 1999; Sugg et al., 1999; Waalen et al., 2000). Although this lack of evaluation is not unique to family violence training, increasing the number and quality of training opportunities in family violence has consistently been cited as central to narrowing the gap between recommended practices and professional behavior. To understand what improvements should be made, a strong evidential base for deciding how best to educate providers in this area is needed.

This chapter examines the available research base concerning the outcomes and effectiveness of family violence training. First, we summarize the search strategy used to locate and include evaluations of training interventions and then describe the characteristics of the training strategies and models that have been assessed, along with the basic features of the evaluation measures and designs. Finally, we discuss the inferences we can confidently draw from these studies so as to guide future training efforts. Due to the dearth of published studies on elder abuse training, the focus is on outcomes and effectiveness of child abuse and intimate partner violence training.

SEARCH STRATEGY

Four bibliographic databases were systematically searched for studies that evaluated training efforts in family violence and were published prior to November 2000. These included MEDLINE, PsycInfo, ERIC, and Sociological Abstracts. Search terms included family violence, domestic violence, intimate partner violence, elder abuse/neglect, and child abuse/neglect coupled with training, assessment, evaluation, detection, and identification as both subject terms and text words. These searches were augmented by published bibliographies (i.e., Glazer et al., 1997).
The reference lists of all chosen articles also were screened for additional studies.1 This strategy identified 64 potential studies, the majority of which focused on intimate partner violence training (n = 38, or 59 percent). Another 31 percent (n = 20) addressed training efforts in child abuse and neglect, while only 9 percent (n = 6) focused on elder maltreatment training.2 Each study was then reviewed to determine whether it met three inclusion criteria:

Relevant training population. Training participants had to include students pursuing degrees or practitioners in one or more of the six health professions chosen by the committee, i.e., physicians, nurses, dentists, psychologists, social workers, and physician assistants.

Formal training effort. The training evaluated had to be a formal educational intervention. This includes degree-related and continuing education courses, modules, clinical rotations, seminars, workshops, and staff training sessions but excludes training that was explicitly focused on clinical audits, feedback, or detailing. Also included were studies that assessed the use of a formal screening protocol, given that these efforts often involved highly organized training about family violence and were grounded in explicit models of instruction and behavior change (e.g., Harwell et al., 1998; Short et al., 2000; Thompson et al., 2000).

Quantitative outcome measure(s). A key requirement was that data were collected and reported on one or more quantitative measures of relevant outcomes related to responding to family violence. Outcome domains included: (a) knowledge, attitudes, beliefs, and perceived skills concerning family violence; (b) behaviors and performance associated with screening for abuse and case finding; and (c) practices and competencies needed to provide abuse victims with appropriate care (e.g., information, referrals, or case management).3 Studies that focused on examining participant satisfaction were excluded, as were evaluations that employed only qualitative approaches.

Application of these criteria resulted in a pool of 44 articles.4 Because three reported additional follow-up data on interventions included in this group, a slightly smaller number of training efforts were actually evaluated (n = 41). Supporting the relative recency of interest in family violence training is the fact that only 7 percent (n = 4) appeared prior to 1990. The final set of 41 evaluations resulted in a pool that was even more heavily populated by studies of intimate partner violence training. This area has received the most attention, with 30 (73 percent) of the studies assessing programs in this area.5 With the exception of four studies that reported outcomes of an elder abuse training session, the remainder (n = 7, or 17 percent) examined child maltreatment training efforts.

The lack of evaluative information on elder abuse training may be partly a function of the relatively recent emphasis placed on the need for screening and the lack of available training opportunities. However, the reasons underlying the limited attention paid to evaluating child abuse training efforts are less clear. Descriptions of training strategies appeared in the literature more than 20 years ago (e.g., Hansen, 1977; Venters and ten Bensel, 1977), although published research on training did not surface until much later (1987). Despite surveys conducted in the late 1990s that continued to report noticeable numbers of health care professionals who felt ill-equipped to fully address child abuse cases and labeled their training in this area as insufficient (e.g., Barnard-Thompson and Leichner, 1999; Biehler et al., 1996; Wright et al., 1999), efforts to assess training remain few in number. For example, in our search, we found 20 studies that described some type of training effort in child maltreatment for health professionals, but only 7 met the committee's criteria for selection.

1 The unpublished literature was also examined for evaluation efforts, including formal committee requests to outside groups (e.g., relevant professional associations, government agencies, foundations, and advocacy groups). This uncovered the recent evaluation of the WomanKind program sponsored by the Centers for Disease Control and Prevention (Short et al., 2000), which was included in the set of studies reviewed. The evaluation of the Family Violence Prevention Fund training initiative has not yet been completed.

2 The study by Currier et al. (1996) evaluated trauma training, which included both intimate partner violence and child abuse, and the Thompson et al. (2000) evaluation of training for primary care providers assessed identification and management of violence for adults 18 or older, including elderly patients. Given that more attention was paid to intimate partner violence, both studies were assigned to this category.

3 Measures of identification and intervention were limited to those that did not rely on provider self-report surveys. Studies using diaries completed by providers on a daily basis, however, were included.

4 Two studies (Seamon et al., 1997; Weiss et al., 2000) dealt with the training of emergency medical technicians, a population that was not one of the professions targeted by the committee. Two studies of child abuse and neglect training programs were excluded based on their training interventions. One involved a statewide educational program of mailings, workshops, and other activities for dentists, but the analysis did not distinguish between those who actually reported receiving materials and participating in the workshops (Needleman et al., 1995). Socolar et al.'s (1998) randomized trial evaluated the impact of feedback and audit strategies on physicians participating in a statewide child abuse program, which fell outside the definition of formal training that was used. Another 14 studies either did not provide any evaluative data concerning the program, restricted their examination to qualitative observations, or collected information on such outcomes as participant satisfaction (Bullock, 1997; Delewski et al., 1986; Gallmeier and Bonner, 1992; Hansen, 1977; Ireland and Powell, 1997; Krell et al., 1983; Krenk, 1984; Nelms, 1999; Pagel and Pagel, 1993; Reiniger et al., 1995; Thurston and McLeod, 1997; Venters and ten Bensel, 1977; Wielichowski et al., 1999; Wolf and Pillemer, 1994). Finally, two studies had as their focus the development and assessment of new measures for assessing training rather than the observed outcomes of the training itself (Dorsey et al., 1996; Kost and Schwartz, 1989).

5 Summaries of these studies in terms of training characteristics, outcomes assessed, evaluation designs, measurement strategies, and major results are provided in Appendix F for intimate partner violence training and Appendix G for child abuse training evaluations. For each type of outcome, studies are ordered by training target population (e.g., medical students, residents and fellows, emergency room staff, and providers in other health care settings).
TYPES OF TRAINING EFFORTS EVALUATED

Selected characteristics of the training efforts evaluated in the 37 studies of intimate partner violence and child abuse training are summarized in Table 5.1. Overall, training programs on intimate partner violence that were subjected to some formal evaluation targeted a more diverse group of training populations. For example, no study examined outcomes of child abuse training efforts for medical students; in contrast, 13 percent of the intimate partner violence evaluations examined formal medical school courses, modules, and other intensive instructional strategies (Ernst et al., 1998, 2000; Haase et al., 1999; Jonassen et al., 1999; Short et al., 2000).6

TABLE 5.1 Overview of Training Interventions Assessed in the Evaluations of Intimate Partner Violence and Child Abuse Training

                                                 Intimate Partner    Child Abuse    Total
                                                 Violence (n = 30)   (n = 7)        (n = 37)
Characteristic                                   N     %             N     %        N     %
Training population:
  Medical students                               4     13.3          0     0.0      4     10.8
  Residents or fellows                           6     20.0          3     42.9     9     24.3
  Emergency department staff (e.g., nurses,
    physicians, and social workers)              13    43.3          0     0.0      13    35.1
  Staff in other health care settings (e.g.,
    primary care and maternal health clinics)    7     23.3          0     0.0      7     18.9
  Other (e.g., child protective services
    workers and participants from several
    disciplines)                                 0     0.0           4     57.1     4     10.8
Length of training:
  Less than 2 hours                              9     30.0          0     0.0      9     24.3
  2-4 hours                                      7     23.3          0     0.0      7     18.9
  5-8 hours                                      2     6.7           4     57.1     6     16.2
  More than 8 hours                              5     16.7          3     42.9     8     21.6
  Not specified                                  7     23.3          0     0.0      7     18.9
Training strategy:
  Didactic only                                  15    50.0          0     0.0      15    40.5
  Didactic and interactive                       11    36.7          7     100.0    18    48.6
  Not specified                                  4     13.3          0     0.0      4     10.8
Training included screening form                 13    43.3          0     0.0      13    35.1
Training included other enabling devices
  (e.g., local resources list, checklists,
  and anatomically correct dolls)                12    40.0          1     14.3     13    35.1

Note: Percentages are column percentages and may not total 100.0 percent due to rounding.

Providers in emergency departments and general health care settings are typically among the first points of contact for abuse victims, and professional organizations have stressed the need to improve the identification and management of intimate partner violence (e.g., American College of Emergency Physicians, 1995; American College of Nurse Midwives, 1997; American Medical Association, 1992). Consequently, a substantial portion of intimate partner violence training evaluations have examined programs for emergency department staff (43 percent), and nearly one-quarter have involved providers in other organized health care settings (23 percent). Five (71 percent) of the seven training evaluations in child abuse were directed at professionals who are most likely to encounter child maltreatment cases, namely, pediatric residents and child protective services workers (Cheung et al., 1991; Dubowitz and Black, 1991; Leung and Cheung, 1998; Palusci and McHugh, 1995; Sugarman et al., 1997). No assessments of intimate partner violence or child maltreatment training efforts designed for the dental or physician assistant professions have been conducted.

Previous research on continuing medical education (e.g., Davis et al., 1999) has shown that if training is to have any impact on behavior, strategies that involve interaction among trainers and participants are important (see Chapter 6). Such strategies have been a part of all child abuse training that has been subjected to any formal assessment (see Table 5.1).
In contrast, only about 37 percent of the intimate partner violence training programs incorporated interactive instructional strategies, ranging from practice interviewing to group development of appropriate protocols and strategies for their implementation (e.g., Campbell et al., 2001).

Providing participants with materials that they can use in their clinical practice (e.g., assessment forms and diagnostic aids) also has been shown to facilitate the translation of what was learned from training into specific behaviors in the health care setting (see Chapter 6). A noticeable portion of the intimate partner violence evaluations was targeted at assessing outcomes associated with the introduction of a screening protocol that also involved training staff in its use. Approximately two-fifths of evaluated intimate partner violence training efforts provided additional "enabling" materials for use in clinical practice. Examples include posters or pocket-sized cue cards with screening questions or other checklists that were part of the materials provided to residents (Knight and Remington, 2000) and emergency department or health clinic staff (Fanslow et al., 1998; Roberts et al., 1997; Thompson et al., 2000). The dissemination of assessment forms and other materials by child abuse training efforts was much less common. Of the seven child abuse training evaluation studies, one program provided participants with anatomically correct dolls for use in assessment (Hibbard et al., 1987).

6 A study by Palusci and McHugh (1995) did include medical students, but they accounted for a small proportion of the participants (2 individuals, or 13 percent of the 15 participants).

ASSESSING THE AVAILABLE EVIDENCE

Understanding the effectiveness of family violence training programs necessitates estimating the unbiased effects of training (i.e., the impact of training above and beyond the influence of other variables that may have contributed to the observed outcomes). It is well known that this is best achieved by randomized field experiments in which individuals are randomly assigned to groups. This design, if successfully executed, controls nearly all common threats to internal validity (e.g., selection, history, and maturation). However, randomization alone is not sufficient if these efforts are to be truly informative. Evaluation designs must also: (1) use outcome measures that are reliable, valid, and sensitive to change over time; (2) demonstrate that the training intervention was implemented as planned and that participants' experiences differed noticeably from those who did not receive such training; and (3) have sufficient sample sizes to allow statistical detection of group differences if they exist.7

Despite the strengths of randomized designs in determining program effectiveness, their execution in the field is not easy, and problems that are likely to introduce unexpected threats to internal validity can occur. For example, extended follow-up measurement waves—a desirable design component for examining how long training outcomes are sustained—also increase the chances that some study participants may not respond to later assessments. The resulting attrition may differ among study groups.
Depending on its nature and magnitude, this differential attrition can either exaggerate or diminish the observed group differences. Historical threats to internal validity can be introduced by unanticipated events, such as the introduction of new reporting requirements, the enactment of laws that mandate education, or increased media attention to family violence, all of which are beyond the control of the evaluator (see Campbell et al., 2001, for examples of these). Another problem occurs when settings permit interaction and contact among training group participants and their counterparts who did not receive such training (e.g., sharing of what was learned, or what is known as contamination). Members of the "no training" or "usual circumstances" comparison groups also may actually receive some relevant training through professional organizations or their own reading. Especially when the training is lengthy and involves multiple components, training participants themselves may not attend all sessions, complete homework assignments, and so forth (see Short et al., 2000). All of these circumstances narrow the difference that is likely to be found between groups and can lead to misleading conclusions when training is not monitored for both the intervention and comparison groups. Essentially, randomized designs then end up as quasi-experiments, and the ability to determine the net impact of training is reduced.

7 Statistical pooling of outcome results was not performed. Although such meta-analyses have provided valuable insight into the impact of problem-based learning and continuing education in medicine (e.g., Davis et al., 1999; Vernon and Blake, 1993), the small number of rigorous studies precluded this. In addition, data were not always reported for use in calculating effect sizes.

In some circumstances, randomization may not even be feasible, and quasi-experimental designs are the only alternative. These can involve the use of a comparison group that was not constructed by random assignment or the assessment of outcomes only for a training intervention group before training and at multiple points thereafter (time-series or cohort designs). Although these are unlikely to provide unbiased estimates of intervention effects, sophisticated statistical modeling procedures now exist for taking into account some pretreatment and post-treatment selection biases, provided that the necessary information is collected as part of the study (e.g., Lipsey and Cordray, 2000; Murray, 1998). Along with other design features, such nonexperimental studies, if well done, can add to the knowledge base about training (e.g., evidence for a relationship between training and the observed outcomes). For this reason, these were included in our review of evaluation studies.

The most common training evaluation has involved the assessment of changes for the training participants only, typically before and immediately after training.
Unfortunately, this is the weakest quasi-experimental design, as it yields little information on either the net effects of training or its relationship to outcomes. However, these studies can address the question "Did the expected improvements in knowledge, attitudes, and/or beliefs occur?" For example, did individuals who participated in the training show an increase in knowledge and self-confidence about treating family violence? This might be viewed as the first question of interest in any causal assessment. Results from such studies also may partly inform expectations about where improvements in performance may or may not be reasonable to expect and how long any observed gains might be sustained. Furthermore, if reliable change in outcomes is repeatedly not found, attention can be directed at understanding the reasons for these no-difference findings (e.g., poor engagement of participants, unreliable or insensitive measures, loss of organizational support for identification and management of family violence, or poorly designed training curricula) so as to improve the development of training strategies and the choice and measurement of outcomes in the future. It also is possible that some of these studies were less subject to competing rival explanations for the observed changes due to other design features (e.g., very short pretest/posttest intervals and multiple pretest observations). Thus, the committee reviewed studies of this type to identify whether any general conclusions about the expected outcomes of training could be drawn.8

8 Admittedly, this group would be skewed toward those studies that observed the expected changes. Even with this limitation, however, it would have been useful to derive average effect sizes for these observed changes and compare their magnitude with that obtained in more rigorous studies. If similar magnitudes for these two groups had been found, this would have been informative. However, such an analysis was precluded once again by the lack of necessary information (e.g., some studies reported means but no standard deviations, and others reported only overall statistical significance levels but no other statistics on group performance). Although this gap is not peculiar to this literature (e.g., Gotzsche, 2001; Orwin and Cordray, 1985), it prevents this type of quantitative comparison.

CHARACTERISTICS OF THE EVALUATION AND RESEARCH BASE

As previously noted, the outcomes of interest to the committee included those related to knowledge, attitudes, and beliefs; outcomes associated with screening and assessment of family violence (e.g., rates of asking about abuse, percentages of cases identified, and adequacy of documentation); and other patient outcome indicators (e.g., referrals made for individuals who were victims of violence). Table 5.2 summarizes the degree to which the 37 evaluations assessed each of these outcomes.

TABLE 5.2 Outcomes Examined in Evaluations of Intimate Partner Violence and Child Abuse Training

                                                 Intimate Partner    Child Abuse    Total
                                                 Violence (n = 30)   (n = 7)        (n = 37)
Characteristic                                   N     %             N     %        N     %
Outcome domain:a
  Knowledge, attitudes, or beliefs (KAB)         17    56.7          6     85.7     23    62.2
  Screening and identification of abuse          21    70.0          0     0.0      21    56.8
  Other clinical skills (e.g., appropriate
    documentation and referrals)                 8     26.7          2     28.6     10    27.0
Number of different outcome domains assessed:
  KAB only                                       9     30.0          5     71.4     14    37.8
  Screening and detection of abuse only          9     30.0          0     0.0      9     24.3
  Other clinical skills or outcomes only         0     0.0           1     14.3     1     2.7
  KAB and screening/detection only               4     13.3          0     0.0      4     10.8
  Screening/detection and other clinical only    5     16.7          0     0.0      5     13.5
  KAB and other clinical only                    0     0.0           1     14.3     1     2.7
  KAB, screening, and other clinical             3     10.0          0     0.0      3     8.1

Note: Percentages are column percentages.
aBecause a study can assess multiple outcomes, the percentages do not total 100.0 percent.

There was a clear difference in the attention paid to the three outcome domains, depending on the type of training. About 57 percent of intimate partner violence training evaluations measured improvements in knowledge, attitudes, and beliefs. Given that a frequent goal of training was to implement a standard assessment protocol successfully, these evaluations paid considerable attention to determining changes in the frequency of screening and case finding (70 percent). A much smaller proportion of studies (27 percent) attempted to assess other changes in clinical practices. For example, the extent to which patient charts included a safety assessment and body map completed by emergency department staff was examined by Harwell et al. (1998), and changes in information and referral practices were assessed by Shepard et al. (1999) for public health nurses, by Wiist and McFarlane (1999) for prenatal health clinic staff, and by Short et al. (2000) for emergency department, critical care, and perinatal staff. Using an index for rating quality of care by medical record review, Thompson et al. (2000) tracked changes in both training intervention and comparison sites. The evaluation conducted by Campbell et al. (2001) was unique in attempting to assess quality of care in terms of both medical record review and patient satisfaction ratings. Moreover, this was one of the few studies to measure the extent of organizational support (e.g., commitment) for detecting and treating victims of intimate partner violence.

In contrast, evaluations of child abuse training focused primarily on investigating whether knowledge, attitudes, and beliefs improved. Assessment of other outcomes was not only infrequent but also more indirect. Cheung et al. (1991) used vignettes to rate the competency of trained protective services workers in case planning, goal formulation, and family contract development. These same researchers also assessed overall competency as indicated by supervisor job ratings (Leung and Cheung, 1998).

Evaluators of training efforts on intimate partner violence were more likely to measure multiple outcomes in the same study: 30 percent of the evaluations in this area reported findings on two outcomes, and another 10 percent assessed outcomes in all three domains. In contrast, only one (14 percent) of the seven child abuse studies gathered data on outcomes in more than one domain (Cheung et al., 1991).

Measurement of Outcomes

How outcomes are measured can influence what can be learned from evaluations.
For example, unreliable measures can reduce the ability to detect intervention effects and therefore effectively decrease the power of a design (Lipsey, 1990). Even when gains among training participants and group differences are found, the measures used may have poor construct validity, serving as only pale surrogates of the relevant outcomes. These issues are especially relevant to research on family violence, given that study authors have frequently developed their own knowledge tests, attitude questionnaires, and chart review forms to assess practitioner attitudes and practices but either failed to assess their psychometric properties or reported marginal results, e.g., internal consistencies of less than 0.70 (e.g., Finn, 1986; Saunders et al., 1987).

Among the 16 evaluations that examined improvements in knowledge, attitudes, and beliefs about intimate partner violence, all but two developed their own measures. However, slightly less than half of these presented no data on the reliability (e.g., internal consistency) of the instruments, although total scores and subscale scores were derived. The remainder either referred readers to previously published data on the measures or provided their own assessments of internal consistency (the preferred strategy), which were generally at acceptable levels (Cronbach's α = 0.70 or higher). The most concerted efforts at instrument development have been carried out by Short et al. (2000), Maiuro et al. (2000), and Thompson et al. (2000). In Short et al.'s (2000) evaluation of the domestic violence module for medical students at the University of California, Los Angeles, not only were the internal consistency and test-retest reliability of the knowledge, attitudes, beliefs, and behaviors scale developed for the study examined, but attention was also paid to assessing the construct validity of the intervention itself (i.e., expert ratings of whether it contained the appropriate content and utilized a problem-based approach and varied training methods).
Maiuro and colleagues (2000) developed a 39-item instrument to assess practitioner knowledge, attitudes, beliefs, and self-reported practices regarding family violence identification and management. This instrument exhibited internal consistency (α = 0.88), content validity, and sensitivity to change and was later used by Thompson et al. (2000) to assess training outcomes for primary health clinic staff. When protocols for asking individuals about intimate partner violence were utilized, Campbell et al. (2001), Covington, Dalton, et al. (1997), and Covington, Diehl, et al. (1997) used items from the Abuse Assessment Screen, whose validity has been investigated (Soeken et al., 1998). Thompson et al. (2000) used items that had been validated by McFarlane and Parker. Clinical skills (e.g., asking about intimate partner violence or correctly diagnosing abuse) in medical students and residents were assessed with standardized patient visits and case vignettes, with two exceptions: Knight and Remington (2000) used a patient interview to determine whether trained residents had asked the woman about intimate partner violence, and Bolin and Elliott (1996) had residents report daily on the number of conversations about intimate partner violence they had with the patients seen. With regard to measuring screening prevalence, identification rates, documentation, and referrals, evaluations of intimate partner violence training relied on reviewing patient charts. The typical practice was to use standardized forms
vulnerable to competing hypotheses related to self-selection caused by attrition from measurement and to the occurrence of other events that affected the magnitude of group differences. Because individual studies may examine different outcomes with different designs (e.g., incorporate a comparison group for selected outcomes only), Table 5.3 describes the evaluations in terms of the design used and the outcome of interest. As noted earlier, one-group, pretest-posttest designs were the most common when knowledge, attitudes, and beliefs were of interest. Approximately 63 percent of the studies assessing knowledge in both intimate partner violence and child abuse training relied on this design, and of this group, two-fifths limited their study to examining only changes that occurred immediately after the training session had concluded. The remaining studies were more ambitious, incorporating a comparison group that received either no training or a different type of training, although assignment to these comparison groups was nonrandom. In addition, the most rigorous studies randomly assigned individuals or training sites (e.g., clinics or hospitals) to receive or not to receive the training of interest. For outcomes involving the identification of abused women, approximately one-third of the studies measured rates of screening, case finding, or both before and between 4 days and 12 months after staff training had occurred. Another 10 percent included lengthier follow-ups. Slightly more than two-fifths of the evaluations also collected similar screening and case-finding data from one or more comparison sites where staff did not receive such training; this group was nearly equally split between studies that managed to randomly assign sites to either a training or a no-training group and those that did not randomize.
A similar pattern pertained to evaluations that tracked other types of clinical outcomes.

TRAINING OUTCOMES AND EFFECTIVENESS

In general, the designs used by most evaluations have limited their contribution to the knowledge base regarding the impact of training on health professionals' responsiveness to family violence. The variation in sophistication and rigor previously described must be taken into account when summarizing what is known about the effectiveness of family violence training. Because the large majority of studies used weak quasi-experimental designs (i.e., one-group, pretest and posttest), they can at best address the much simpler question of whether the outcomes expected by training faculty actually occurred. The remaining paragraphs attempt to summarize the evidence provided by the evaluations conducted to date. The majority of attention is paid to evaluations of intimate partner violence training, given the greater amount of available information. Outcomes for knowledge, beliefs, and attitudes; screening and identification; and other clinical outcomes are summarized separately. Because of the small number of child abuse evaluation studies and the even smaller
TABLE 5.3 Designs Used by Evaluations to Assess Training Outcomes by Type of Training and Outcome Domain

Entries are N (column %). IPV = intimate partner violence training; CA = child abuse training; KAB = knowledge, attitudes, and beliefs. "Two-group, randomized" covers randomization of individual patients or practitioners or group randomization; "nonequivalent comparison group" covers two- or three-group designs.

| Type of Design | IPV: KAB (n = 16) | IPV: Screening and Detection (n = 21) | IPV: Other Clinical Outcomes (n = 8) | CA: KAB (n = 6) | CA: Other Clinical Outcomes (n = 2) |
|---|---|---|---|---|---|
| Two-group, randomized: posttest only | 0 (0.0) | 3 (14.3) | 1 (12.5) | 0 (0.0) | 0 (0.0) |
| Two-group, randomized: pretest and posttest | 2 (12.5) | 1 (4.8) | 1 (12.5) | 0 (0.0) | 0 (0.0) |
| Two-group, randomized: pretest, posttest, and follow-up | 1 (6.3) | 1 (4.8) | 1 (12.5) | 0 (0.0) | 0 (0.0) |
| Nonequivalent comparison group: posttest only | 1 (6.3) | 2 (9.5) | 0 (0.0) | 0 (0.0) | 0 (0.0) |
| Nonequivalent comparison group: pretest and posttest | 2 (12.5) | 1 (4.8) | 0 (0.0) | 1 (16.7) | 0 (0.0) |
| Nonequivalent comparison group: pretest, posttest, and follow-up | 1 (6.3) | 1 (4.8) | 2 (25.0) | 1 (16.7) | 1 (50.0) |
| Cohort: pretest and posttest (cohorts of patients) | 0 (0.0) | 7 (33.3) | 1 (12.5) | 0 (0.0) | 0 (0.0) |
| Cohort: pretest, posttest, and follow-up (cohorts of patients) | 0 (0.0) | 2 (9.5) | 1 (12.5) | 0 (0.0) | 0 (0.0) |
| One group: pretest and posttest | 4 (25.0) | 2 (9.5) | 0 (0.0) | 2 (33.3) | 1 (50.0) |
| One group: pretest, posttest, and follow-up | 6 (37.5) | 1 (4.8) | 0 (0.0) | 2 (33.3) | 0 (0.0) |

Note: Percentages are column percentages. Because a few studies employed different designs for different outcomes (e.g., a one-group pretest and posttest to measure knowledge in training participants and a two-group nonequivalent comparison group design to assess clinical skills), the design used to assess each outcome domain, rather than a single design for the study, is reported.
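The inferential gap between the one-group and two-group designs tallied above can be illustrated with a small simulation. All rates here are invented for illustration: when screening is drifting upward for reasons unrelated to training (e.g., new legislation or publicity), a one-group pretest-posttest comparison absorbs that drift into its estimate of the training effect, whereas a randomized comparison group lets it be subtracted out.

```python
import random

random.seed(0)

# Hypothetical parameters (fractions of patients asked about IPV).
baseline      = 0.10   # pre-training screening rate, both groups
secular_trend = 0.05   # rise that would occur even without training
training_gain = 0.15   # true effect of the training itself

def observed_rate(p, n=2000):
    """Simulate a chart review of n patients when the true screening rate is p."""
    return sum(random.random() < p for _ in range(n)) / n

trained_pre    = observed_rate(baseline)
trained_post   = observed_rate(baseline + secular_trend + training_gain)
untrained_pre  = observed_rate(baseline)
untrained_post = observed_rate(baseline + secular_trend)

# One-group pretest-posttest: attributes the secular trend to training.
one_group_estimate = trained_post - trained_pre
# Difference-in-differences: nets out the comparison group's change.
did_estimate = (trained_post - trained_pre) - (untrained_post - untrained_pre)

print(round(one_group_estimate, 3))  # near 0.20 = training effect + drift
print(round(did_estimate, 3))        # near 0.15 = training effect alone
```

The one-group estimate overstates the training effect by roughly the size of the secular trend, which is precisely the rival explanation the stronger designs in Table 5.3 are able to rule out.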
set of elder abuse training evaluations, brief summaries of what can be gleaned from published efforts in these two areas are presented below.

Child Abuse Training

Summarizing what is known about child abuse training efforts is difficult, given the small number of studies (n = 7) and their heterogeneity in terms of the professionals trained, the type of training delivered, and the designs themselves. The majority of evaluative data focuses on improvements in knowledge but restricts examination to training participants only (see Appendix G). In all cases, individuals who attended the training (whether residents in pediatrics, physicians, nurses, or caseworkers) exhibited increased knowledge levels, more appropriate attitudes, and greater perceived self-competency to manage child abuse cases. Such gains were typically measured immediately after training completion. For example, in two studies of child protective services workers, trainees' perceptions of their ability to identify abuse and risk, along with attitudes about the value of family preservation and cultural differences, improved after they enrolled in a 3-month training program (Leung and Cheung, 1998), and greater ability in case planning, goal formulation, and family contract development was observed for individuals who had attended a 6-hour seminar in these skills (Cheung et al., 1991). Relative to a comparison group, Dubowitz and Black (1991) found stronger improvement in knowledge, attitudes, and skills (including perceptions of competency to manage child abuse cases) among pediatric residents who had attended several 90-minute sessions on child abuse, measured immediately after training. However, with the exception of perceived self-competency, these differences were no longer evident at the 4-month follow-up.
Palusci and McHugh (1995) also found that medical students, residents, fellows, and attending physicians who participated in a clinical rotation on child abuse had higher knowledge scores, on average, than their counterparts in other rotations. Once again, however, assessment was limited to immediately after the rotation had ended. In both these cases, the degree to which pretest differences between the training and comparison groups may have contributed to these group differences was not well examined. For more direct indicators of clinical competency, Leung and Cheung (1998) found that child protective services workers who had received three months of focused training on child abuse improved between their six-month, nine-month, and first annual evaluations and between their first annual and second annual evaluations, as measured by supervisor job performance ratings (which covered such behaviors as case interviewing and documentation). At the same time, no significant differences between their performance and that of more seasoned workers without such formal training were found. The above set of findings provides neither a broad nor a strong evidence base
on which to understand the outcomes and effects of child abuse training. Although training in this area appears to instill greater knowledge, appropriate attitudes, and perhaps self-efficacy for dealing with child abuse cases, these within-group changes have mainly been observed immediately after the conclusion of training. The extent to which they are sustained or can confidently be attributed to the training interventions themselves is unclear.

Training on Intimate Partner Violence

The degree to which health professionals involved in training on intimate partner violence actually change their knowledge, attitudes, and beliefs was addressed by 13 of the 15 training evaluations.10 Typically, these evaluations did not go beyond examining changes before and after training, and posttests were usually administered immediately upon training completion or shortly thereafter (within one month).11 In all but one evaluation (Knight and Remington, 2000),12 statistically reliable differences between the pretest and posttest were found. Such gains were observed across a wide range of training interventions (ranging from a one-hour lecture to one or more days), questionnaires, and populations (medical students, residents, hospital staff, and community providers). Apparently, participants take something away from even a relatively brief exposure to material on family violence, but what that something is, how it varies with the content, nature, and length of training, how long it remains with them, and whether it was a direct result of training are not clear. More informative are the seven evaluations that paid some attention to measurement issues (e.g., multi-item scales with acceptable levels of internal consistency) and had complete assessment data on the majority of training participants (70 percent or more).

10 One study (Varvaro and Gesmond, 1997) involving emergency department house staff did not perform statistical analyses due to small sample sizes. Another study (Ernst et al., 1998) did report pre-post differences on 2 of 14 knowledge items but did not consider that this may simply reflect the number of comparisons performed.

11 Appendix F lists the evaluation studies and their characteristics regarding knowledge, attitude, and belief outcomes for training on intimate partner violence.

12 This "no-difference" finding in attitudes is most likely attributable to significant problems with respondent carelessness and a desire to complete the surveys quickly.
Many of these studies also had multiple or extended post-baseline assessments, and three collected outcome and other relevant data on comparison groups, two of which were designed as randomized field experiments. On the whole, pretest-posttest gains similar to those previously described were observed. In the Jonassen et al. (1999) study, medical students who completed an intensive interclerkship module (2 or 3.5 days) showed increases in knowledge, attitudes, and perceived skills at the time of completing the module, and these gains had not significantly eroded six months later. With the exception 10 One study (Varvaro and Gesmond, 1997) involving emergency department house staff did not perform statistical analyses due to small sample sizes. Another study (Ernst et al., 1998) did report pre-post differences on 2 of 14 knowledge items but did not consider that this may be associated with the number of comparisons that were performed. 11 Appendix F lists the evaluation studies and their characteristics regarding knowledge, attitudes, and belief outcomes for training on intimate partner violence. 12 This “no-difference” finding in attitudes is most likely attributable to significant problems with respondent carelessness and a desire to complete the surveys quickly.
of perceived skills, similar results were found by Kripke et al. (1998) for a 4-hour workshop attended by internal medicine residents. Nearly 2 years after a focused, 2-day training workshop, emergency department staff evinced less blaming attitudes toward victims and were more knowledgeable about intimate partner violence and their role in addressing this problem than prior to training (Campbell et al., 2001). Furthermore, this group, which worked in hospitals that were randomly assigned to the training intervention, outperformed their counterparts at other hospitals who had not received the training. Evaluations conducted by Short et al. (2000) and Thompson et al. (2000) provide a more differentiated picture of which attitudes and beliefs undergo the most modification. Medical students at the University of California, Los Angeles, who enrolled in a 4-week domestic violence module showed statistically reliable gains in knowledge, attitudes, and beliefs at the completion of the module and also improved more than medical students enrolled at a nearby school who did not have any organized opportunities for training on intimate partner violence. Further analyses highlighted that this improvement was primarily a function of increases in perceived self-efficacy—namely, the ability to identify a woman who had been abused and intentions to screen regularly upon becoming practicing clinicians. No such change was observed in other knowledge and attitude domains (e.g., how appropriate it is for physicians to intervene in these situations). Similarly, primary care team members also experienced increased feelings of self-efficacy with regard to treating intimate partner violence victims both 9 and 21 months after an intensive training session (Thompson et al., 2000).
This was in sharp contrast to staff in other clinics who had been randomly assigned not to receive the workshop and whose self-confidence in handling this problem decreased between the baseline and the nine-month follow-up. Training participants also changed markedly and outperformed their comparison group counterparts in two other attitude domains: less fear of offending victims or compromising provider or patient safety in their interactions, and stronger feelings that the necessary organizational supports were in place.

Improvements in Screening and Identification Rates

Increased knowledge and more appropriate attitudes are important, but the ultimate goal is for professionals to translate these into their daily practice. Of the 18 evaluations that examined one or more of these behaviors, 7 collected data on variables related to asking or talking about intimate partner violence with patients, and 11 monitored changes in case finding (e.g., the percentage of patients seen who were positively identified as victims of intimate partner violence).13

13 Appendix F summarizes the studies on intimate partner violence screening and identification rates that included some type of training intervention.
In terms of explicitly inquiring about intimate partner violence, three of the five studies with data on this outcome found significantly higher percentages of patients asked about intimate partner violence after staff had participated in workshops or other staff training. For example, Knight and Remington (2000) observed that four days after hearing a lecture, internal medicine residents more frequently asked patients about intimate partner violence, based on reports of patients seen in their practice. Such changes do not seem limited to the short term but were also found 6 to 9 months later for community health center staff (Harwell et al., 1998) and primary care team members (Thompson et al., 2000). Moreover, this latter study demonstrated that such improvement did not occur among staff in teams that were randomly assigned not to receive such training. The training in all three studies provided either formal assessment protocols or a laminated cue card with screening questions. The other randomized field experiment (Campbell et al., 2000) found promising gains 24 months after a training intervention among emergency department staff and in contrast to their comparison group counterparts. The one study that showed no differences at a 6-month posttest involved a 4-hour training strategy aimed at internal medicine residents, but it involved no protocol or other enabling materials (Kripke et al., 1998). The evidence on whether more frequent screening by practitioners is accompanied by increased case finding, however, is somewhat more mixed. A total of 13 evaluations monitored changes in relevant variables.
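Low statistical power is a recurring explanation for null case-finding results in these evaluations. A rough, standard-library sketch of the normal-approximation power calculation for comparing identification rates between a trained and an untrained site makes the problem concrete; all rates and sample sizes here are hypothetical, chosen only to mirror the scale of effects discussed in this section:

```python
from math import sqrt
from statistics import NormalDist

def two_proportion_power(p1, p2, n_per_group, alpha=0.05):
    """Approximate power of a two-sided two-proportion z-test
    (normal approximation, equal group sizes)."""
    z = NormalDist()
    p_bar = (p1 + p2) / 2
    se0 = sqrt(2 * p_bar * (1 - p_bar) / n_per_group)                # SE under H0
    se1 = sqrt(p1 * (1 - p1) / n_per_group + p2 * (1 - p2) / n_per_group)  # SE under H1
    z_crit = z.inv_cdf(1 - alpha / 2)
    diff = abs(p2 - p1)
    # Probability the test statistic lands in either rejection region.
    return z.cdf((diff - z_crit * se0) / se1) + z.cdf((-diff - z_crit * se0) / se1)

# Detecting a rise in case finding from 10% to 13% of charts
# (a 30 percent relative improvement):
print(round(two_proportion_power(0.10, 0.13, 200), 2))   # badly underpowered
print(round(two_proportion_power(0.10, 0.13, 2000), 2))  # adequately powered
```

With a few hundred charts per site, the chance of declaring a real 30 percent relative improvement statistically significant is well under one in three, which is consistent with the interpretation offered for several of the null findings below.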
Based on follow-ups conducted anywhere between 1 and 12 months after training, 7 (or 54 percent) of the evaluations found that the percentage of women who were positively identified as abused increased significantly in those emergency departments or clinics in which staff had received intimate partner violence training. In all these efforts, a protocol again was included as part of the training. Four evaluations found no reliable change, and both programmatic and methodological factors most likely contributed to these results. In the evaluation of training for internal medicine residents, identification rates did not change, and no protocol or screening materials were provided as part of the training (Kripke et al., 1998). The other three evaluations did involve such forms. Among community health center staff, Harwell et al. (1998) found no change in the proportion of cases that were confirmed as intimate partner violence, but they did find that a greater percentage was suspected of it. Thompson et al. (2000) also found a 30 percent improvement in case finding, but this was not statistically reliable, most likely because of low statistical power and problems in medical record review. Finally, Campbell et al. (2001) found no statistically significant gains in the proportion of patients who self-reported intimate partner violence and had it documented in their charts; at the same time, this also may have been because of small sample size, events that may have increased relevant practices in the comparison sites (e.g., legislation on mandatory reporting and education), and the time required for modification of chart forms to facilitate reporting. Furthermore, among the five studies that employed comparison hospitals or
clinics, only two evaluations reported greater case finding in the intervention groups, and these increases may have been due to selection bias; no differences surfaced in the studies with randomized controls. Overall, the above results suggest that if training is to result in increased screening for intimate partner violence, it must include instruction in and use of screening protocols and other types of standardized assessment materials. Clearly attesting to this are the findings from McLeer et al. (1989), who reported that the sizable increase in screening that followed training and protocol use essentially disappeared eight years later, when administrative policy changed and the necessary infrastructure to support screening no longer existed. In addition, Larkin and his colleagues (2000) found dramatic improvements in screening rates by nursing staff, but only after disciplinary action for not screening was instituted as emergency department policy; neither training nor the availability of a protocol had previously enhanced screening in this site. Finally, Olson et al.'s (1996) work, while revealing a rise in domestic violence screening after a stamped query was placed on each patient's chart, also found that the addition of formal training following chart stamping produced no further improvement. In short, the net contribution made by training itself to screening and identification remains unclear.

Improvements in Clinical Outcomes

Other clinical outcomes associated with identification include such behaviors as assistance in planning a course of action, providing referrals, and providing appropriate, high-quality care. Seven evaluations included measures relevant to these outcomes.14 In general, there is some suggestion that training may be associated with staff more frequently providing referrals for abused women. Harwell et al. (1998) found that trained community health center staff more often completed safety assessments (which had been provided as part of the training) and referred individuals to outside agencies. Wiist and McFarlane (1999) found similar results with regard to referrals for pregnant women who had been identified as intimate partner violence cases, as did Fanslow et al. (1998, 1999) with emergency department staff. Although Shepard et al. (1999) did not find such gains with regard to trained public health nurses, the percentage of intimate partner violence cases that were provided information did significantly increase. In their randomized group trial, Campbell et al. (2001) found that patients were more satisfied with the care they had received from trained emergency department staff than were patients at sites where staff had not received the training. This study was also unique in its measurement of institutional change: an index assessing departmental commitment to detecting intimate partner violence victims was stronger in the departments that participated in the training. Thompson et al. (2000), however, found no differences between intervention and comparison primary care teams with regard to ratings of the quality of management as determined by record review. Saunders and Kindy (1993) also found no improvement among internal medicine and family practice residents in terms of history taking and planning. In general, it may be that the materials provided do assist, particularly in terms of referrals. The reasons underlying the lack of differences in other variables may be several: site variation in implementing the necessary supports for system change, other events that may have contributed to increases in appropriate practices and weakened the difference between the training and comparison groups (Campbell et al., 2001), and problems in accurately measuring certain outcomes such as quality of care (Thompson et al., 2000).

14 Appendix F describes the evaluations that examined these outcomes.

Training on Elder Abuse

As previously noted, the training of health professionals to identify elder abuse and neglect and intervene appropriately has received little attention in the literature. Descriptions of formal curricula and training models are few in number. Thus, it is not surprising that formal published evaluations of training efforts are also lacking. The committee's literature search uncovered only four studies that explicitly provided any evaluative information on the outcomes of such training. These efforts were quite heterogeneous in terms of the recipients of training, the training provided, and the way in which outcomes were examined. Both Jogerst and Ely (1997) and Uva and Guttman (1996) reported data on the outcomes of resident training in elder abuse screening and management. Each study focused on a different specialty and training strategy.
Whereas a home visit program to improve the skills of geriatric residents for carrying out elder abuse evaluations was the focus of Jogerst and Ely’s work, Uva and Guttman provided data associated with a 50-minute didactic session for emergency medicine residents. Training for diverse groups of professionals was described and assessed by Vinton (1993) in her study of half-day training sessions of caseworkers, and Anetzberger et al. (2000) reported on the use of a 2.5-day training program that involved a formal curriculum—A Model Intervention for Elder Abuse and Dementia—that was delivered to adult protective services workers and Alzheimer’s Association staff and volunteers. Although all authors interpreted their findings as highlighting the benefits of training in terms of improved knowledge, level of comfort in handling elder abuse and neglect, and other outcomes (e.g., self-perceived competence), none of the four studies provided clear evidence regarding training effectiveness. For example, Vinton (1993) and Anetzberger et al. (2000) restricted their assessment to only pretest and posttest measurement of training participants immediately
after training. Jogerst and Ely (1997) did employ a comparison group consisting of an earlier cohort who had not participated in the home visit program. With the exception of age, these groups were similar in terms of gender, type of practice, age of patients, and number of patients seen per week. Residents who had participated in the home visit rotation rated their abilities to diagnose elder abuse and to evaluate other important aspects (e.g., the home environment) higher than did the earlier cohort without these training experiences. However, the latter group was more likely to have made home visits and to have provided statements regarding guardianships for their patients. Whether this was due simply to added time in practice or to differences in patient mix or clinician skills cannot be determined from this design and its execution, and thus the effects of training (or lack thereof) remain ambiguous. Uva and Guttman (1996) randomly assigned emergency medicine residents to one of two groups: (a) to take a 10-item survey addressing their confidence in accurately recognizing elder abuse, their level of comfort, and their knowledge of how to report suspected cases and then attend a 50-minute educational session, or (b) to participate in the session and then complete the survey. The two groups noticeably differed in terms of their confidence about detection and knowledge of reporting. Whereas less than one-quarter of the residents who were administered the pretest trusted their skills in identification and knew to whom reports should be made, all residents who completed the questionnaire after training did so. Twelve months later, residents in both groups who responded to a follow-up survey all believed that they could identify and report elder abuse.
Although a randomized design was used, this study is not very informative due to the lack of a comparison or control group and quite limited outcome measurement (i.e., assessment of knowledge and perceived self-confidence were each limited to one item). Consequently, the knowledge base about the outcomes and effects of elder abuse training is sparse. Although these four studies conclude that training is beneficial, more comprehensive and rigorous assessments are needed in order to determine the types of training that are effective. Moreover, efforts to examine training for other populations, including medical students, nurses, and others, remain to be carried out.

QUALITY OF THE EVIDENCE BASE

A previous National Research Council and Institute of Medicine report (1998) concluded that the quality of the existing research base on family violence training interventions is "insufficient to provide confident inferences to guide policy and practice, except in a few areas. Nevertheless, this pool of studies and reviews represents a foundation of research knowledge that will guide the next generation of evaluation efforts and allows broad lessons to be derived" (p. 68). Unfortunately, the situation with regard to our evidence base on the
associated outcomes and effectiveness of family violence training interventions is no different. The research and evaluation base on family violence training interventions is mixed in terms of potentially contributing to understanding training effectiveness and the relationship between training and outcomes. This is especially true with regard to elder abuse training (for which there were too few studies to review systematically) and child abuse training. In terms of the latter, although descriptions of training strategies are available, there have been only a handful of attempts to provide corresponding evaluative information. When assessments have occurred, they have nearly all focused on gains in knowledge, and the majority have employed designs that cannot speak even to how training and outcomes may be related. Furthermore, no study was conducted in such a way that confident inferences could be made about the training intervention's effectiveness on patient outcomes. The picture is somewhat more promising with regard to training on intimate partner violence. More than two dozen evaluation studies were located, although their methodological quality varied enormously. Again, assessing changes in knowledge, attitudes, and beliefs received the most attention, but concerted attempts have also been made to document changes in screening, identification, and other relevant clinical outcomes that are associated with training, particularly that which accompanies or includes the use of a screening protocol and other forms.
Moreover, a small number of randomized field experiments have been conducted that can be used to address questions surrounding the effectiveness of training, and when such designs were not logistically possible (e.g., randomizing medical students to courses), there are notable instances of quasi-experimental designs that employ strong measurement strategies, measure differences in training participation, and attempt to rule out rival explanations. As previously noted, several factors work against launching a concerted effort to increase the number of evaluations conducted and to enhance how they are done. However, it is important to continue evaluating family violence training in ways that can contribute to the knowledge base about the outcomes of these efforts (even if in small increments). Clearly, these must include efforts to document the outcomes and effectiveness of training in child and elder maltreatment. The topic of child abuse and neglect offers an instructive example of evaluation needs. Training efforts for child abuse began to be described in the late 1970s, mandatory reporting requirements now exist, a handful of states require mandatory education in these reporting requirements and in child abuse, and there is a national center devoted to addressing child abuse and neglect. Yet only seven formal assessments, all of which suffered from methodological weaknesses, could be found. Training efforts in intimate partner violence also can benefit from more serious scrutiny. The available evidence appears reasonably consistent in suggesting that training is positively associated with greater knowledge about family violence, stronger feelings of comfort and self-efficacy about interacting with battered women, and greater intentions to screen for intimate partner violence. When training is grounded in models of behavior change and of how individuals learn, the data allow more confident determination of a link between training and increases in knowledge, attitudes, and behavioral intentions. Furthermore, for those training efforts aimed at practitioners, participants typically outperform their counterparts who did not receive such training in terms of increased rates of screening and identification—at least in the short term and up to two years after training. The same can be said for outcomes associated with safety planning, referrals for necessary services, and other clinical variables (e.g., patient satisfaction). The available evidence also strongly indicates, however, that training by itself is not sufficient to produce the desired outcomes. Unless clinical settings display commitment to having their staff address the problem of family violence and provide the resources to do it, the effects of training will be short-lived and likely to erode over time. This suggests that training cannot be seen as a one-shot endeavor (e.g., a course in medical or social work school) and must include those who are responsible for creating the necessary infrastructure to support and reward practitioners for identifying and intervening with family violence victims. Although the evidence for this conclusion derives mostly from evaluations of intimate partner violence training efforts, it is likely that the same could be said about child and elder maltreatment training activities.
CONCLUSIONS

Evaluation of the impact of training in family violence on health professional practice and its effects on victims has received insufficient attention. Few evaluative studies indicate whether existing curricula are having the desired impact. When evaluations are done, they often do not utilize the experimental designs (randomized controlled trials and group randomized trials) necessary to determine training effectiveness. Also lacking are the high-quality quasi-experimental designs necessary to provide a more complete understanding of the relationship of training to outcomes. In addition to effective training on family violence, a supportive environment appears to be critically important to producing desirable outcomes.