benefit from attention to service design, program implementation, and assessment experiences in related fields (such as substance abuse and teenage pregnancy prevention). These experiences could reveal innovative methods, common lessons, and reliable measures in the design and development of comprehensive community interventions, especially in areas characterized by individualized services, principles of self-determination, and community-wide participation.

Improving study design and methodology is important, since technical improvements are necessary to strengthen the science base. But the dynamics of the relationship between researchers and service providers are also important; a creative and mutually beneficial partnership can enhance both research and program design.

Two additional points warrant mention in a broad discussion of the status of evaluations of family violence interventions. First, learning more about the effectiveness of programs, interventions, and service strategies requires the development of controlled studies and reliable measures, preceded by detailed process evaluations and case studies that can examine the nature and clientele of a particular intervention as well as aspects of the institutional or community settings that facilitate or impede implementation. Second, the range of interactions between treatments and clients requires closer attention to variations in the individual histories and social settings of the clients involved. These interactions can be examined in longitudinal studies or evaluations that pair clients with treatment regimens and allow researchers to follow cohorts over time within the general study group.

Assessing the Limitations of Current Evaluations

The limitations of the empirical evidence for family violence interventions are neither new nor unique. For violence interventions of all kinds, few examples provide sufficient evidence to recommend the implementation of a particular program (National Research Council, 1993b). And numerous reviews indicate that evaluation studies of many social policies fall short of accepted standards of technical quality (Cordray, 1993; Lipsey et al., 1985). A recent National Research Council study offered two explanations for the poor quality of evaluations of violence interventions: (1) most evaluations were not planned as part of the introduction of a program, and (2) evaluation designs were too weak to support conclusions about a program's effects (National Research Council, 1993b).

The field cannot be improved simply by urging researchers and service providers to strengthen the standards of evidence used in evaluation studies. Nor can it be improved simply by urging that evaluation studies be introduced in the early stages of the planning and design of interventions. Specific attention is needed to the hierarchy of study designs, the developmental stages of evaluation research and interventions, the marginal role of research in service settings, and


