
Surveying Victims: Options for Conducting the National Crime Victimization Survey (2008)

Chapter 3: Current Demands and Constraints on the National Crime Victimization Survey


–3–
Current Demands and Constraints on the National Crime Victimization Survey

THIS CHAPTER BUILDS ON THE DISCUSSION of historical goals in Chapter 2 by examining some contemporary issues and challenges facing the measurement of victimization, in particular the demands and constraints placed on the National Crime Victimization Survey (NCVS). We begin in Section 3–A by discussing survey nonresponse, an emerging challenge facing modern surveys of all types, including federal surveys like the NCVS. Section 3–B discusses basic challenges of self-response in measuring victimization, including discussion of crimes that are not well measured in police reports and that are inherently hard to measure: the capability of the NCVS to provide information on these is at once a great strength of the survey and a major, ongoing technical challenge. Section 3–C expands the discussion of analytical flexibility from the previous chapter to include issues of flexibility in measuring new types of victimization as well as changes in basic methodology, including subnational estimation, to meet user needs. We then turn to a basic underlying question—What is the value of measuring victimization?—and consider how the cost of the NCVS compares with various benchmarks in Section 3–E. Section 3–F turns to basic issues related to the coexistence of two related measures of crime in the NCVS and the Uniform Crime Reports (UCR): the general correspondence of the two series over time and resulting questions about the need for two independent measures.

We conclude in Section 3–G, considering both the historical goals of the NCVS (Chapter 2) and the challenges described in this chapter, to assess the basic utility of the NCVS.

3–A CHALLENGES TO SURVEYS OF THE AMERICAN PUBLIC

3–A.1 The Decline in Response Rates

With a response rate of 91 percent among eligible households (84 percent of eligible persons) as of 2005, the NCVS enjoys response and participation rates that are highly desirable relative to other victimization and social surveys. However, the NCVS response rates have declined over the past decade; in 1996, the NCVS household- and person-level response rates were 93 and 91 percent, respectively (Bureau of Justice Statistics, 2006a). Figure 3-1 illustrates the recent growth in the noninterview rate in the NCVS and one component of that rate in particular: refusals by anyone in the contacted household to participate in the survey. The figure presents these noninterview and refusal rates for both initial contacts (interview 1, conducted by personal visit) and for all data collection in the year (including telephone and personal interviews for contacts 2–7 with sample addresses); the initial and aggregate rates generally track each other closely.

The decline in response rates is a situation faced by almost all household surveys in the United States (Groves et al., 2002). For instance, the General Social Survey, a cross-sectional household survey conducted by the National Opinion Research Center at the University of Chicago, has experienced a declining response rate in recent years, from rates in the high 70-percent range and a peak of 82 percent in 1993 to 70–71 percent in 2000–2006.[1] Remedies to address declines in response rate continue to be developed. The Substance Abuse and Mental Health Services Administration, contracting with the Research Triangle Institute, implemented a $30 respondent incentive in 2002 to counter declining response rates in the National Survey on Drug Use and Health. The highly detailed National Health and Nutrition Examination Survey, conducted by Westat, has experienced a similar reduction in response.

There is little evidence that the loss of response rate over time is primarily a function of what organization conducts the survey: many of the federal surveys collected by the U.S. Census Bureau (including the NCVS) for various sponsors have shown declines as well. Atrostic et al. (2001) describe measures of nonresponse for six federal surveys (including the NCVS) between 1990 and 1999, documenting consistent declines in response; Bates (2006) updates the series through 2005. An example cited in those works is the Consumer Expenditure Diary (CED) survey, data from which are an input used to derive the consumer price index.

[1] See http://www.norc.org/Projects+2/GSS+Facts.htm [8/20/07].

[Figure 3-1 Noninterview and refusal rates, National Crime Victimization Survey, 1992–2005.
NOTE: The refusal rate is defined as the number of eligible interviewing units not interviewed because occupants refused to participate, divided by the total number of eligible interviewing units. Refusals are a component of the noninterview rate, which also includes interviews not completed for other reasons (e.g., language difficulty or no one at home). Noninterviews are termed "Type A" results. Rates are based on unweighted data.
SOURCES: Data from Bates (2006); definitions from Atrostic et al. (2001).]

Between 1991 and 2003, the CED's initial nonresponse rate (failure to respond to the first interview, which, like the NCVS, must be done by personal visit) steadily increased from about 15 percent to about 30 percent; in 2005, the CED had an overall nonresponse rate of 31.1 percent (Bureau of Labor Statistics, 2007:87).

A notable exception to the pattern of declining response rates in federal surveys is the American Community Survey (ACS), the replacement for the traditional decennial census long-form sample that asked census respondents for additional social and demographic information. However, the ACS also holds a distinct advantage over other Census Bureau surveys because—inheriting from the decennial census—responses to the ACS are required by law (and respondents are so advised). A test conducted by fielding the ACS with wording on the mailing materials suggesting that response is voluntary (e.g., "Your Response Is Important to Your Community") rather than mandatory (e.g., "Your Response Is Required by Law") demonstrated a radically reduced mail response rate: an overall drop of 20.7 percentage points (Griffin et al., 2003, 2004).

The fact that declines in response rates are not confined to private surveys or to federal surveys suggests that large-scale changes in the relationship between survey data collectors (generally) and the U.S. public have occurred in recent years. While there is no convincing empirical evidence to test alternative theories of the causes of the decline, the most popular hypotheses include:

• a lack of trust in the institutions requesting survey participation;
• confusion in potential respondents' minds between marketing approaches and survey participation requests;
• loss in discretionary time at home due to increased commute time to work and other out-of-home activities;
• an increase in the sheer volume of survey participation requests, making participation in any particular survey less novel; and
• increased investment in electronic and other devices to prevent strangers from contacting the public.

There are two principal forms of nonresponse, each of which appears to have separate causes. The first is unit nonresponse: for a household survey like the NCVS, this is nonresponse that arises because the household at a particular address could not be contacted or declined to participate at all. The inability to contact U.S. households is driven both by apparent increases in the out-of-home activities of the public and by changes in how the public views approaches from strangers. There are now more walled subdivisions, locked multiunit structures, and intercom systems that permit residents to control the access of strangers to their housing units. For telephone contact, answering machines and "caller ID" features permit residents to limit telephone contact to those persons they wish to talk to. Hence, populations that invest in these housing features and appliances are disproportionately not contacted. These tend to be urban dwellers, younger, more transient persons, and those who live alone. This broader, structural form of nonresponse is inherent to all surveys. As we discuss in Section 5–B, it is an open question whether the administration of the NCVS by the U.S. Census Bureau is a net positive or negative (or neither) in affecting unit response.

The second type of nonresponse is within-unit nonresponse: given that contact is successfully made at an address, do all the survey-eligible persons at that address cooperate and answer the survey questions? There is evidence that persons who are inherently interested in the announced topic of the survey tend to respond (Groves et al., 2004). There is also evidence that women cooperate at higher rates than men (DeMaio, 1980); that urban dwellers cooperate less frequently than those in rural areas (Groves and Couper, 1998); and that those who live alone and middle-aged persons are less cooperative.

To the extent that person-level nonresponse depends on interest in the survey's topic area, the purview of the NCVS presents complications on both ends of a continuum. For people who have been victimized—particularly by highly sensitive crimes like sexual assault—interviewers may face a difficult task in building rapport so that respondents are willing to talk about their experiences. Likewise, interviewers have to be trained to handle the opposite situation: people who have not experienced recent victimization and hence attempt to bow out of the survey because they think it irrelevant.

Survey design features may have some role in affecting response rate, or at least in curbing the loss of response rate. There is evidence that longitudinal surveys of persons—in which multiple contacts are made with the same households and people, forging longer term "relationships" between interviewers and subjects—have experienced smaller declines in participation. The NCVS is a longitudinal survey of addresses, not persons, and thus may be affected by turnover of individual persons or families at sampled addresses. However, nonmovers—people who remain at the same address over time—can experience up to seven NCVS requests, and thus—conceptually—the NCVS response rate should enjoy some benefit from those repeated contact efforts. As Lepkowski and Couper (2002) note, however, the propensity to respond in later waves of a longitudinal survey depends on the respondent's enjoyment of the prior wave. If NCVS respondents in one wave find the survey less than pleasant, there may be lower propensity to respond in the next wave.

Telephone surveys appear to suffer more dramatic nonresponse rate increases than face-to-face surveys. This finding comes primarily from evidence from random-digit-dialed surveys (Curtin et al., 2000). The NCVS does use the telephone for waves 2–7 of interviewing, but, given that this is common to other longitudinal surveys, it is unlikely that the use of the telephone in NCVS interviewing is, in itself, a principal cause of lower response rates.

It is important to note that nonresponse rates are only proxy indicators of one aspect of the quality of NCVS estimates. The key issue is whether the propensity to be successfully measured among NCVS sample members is correlated with the likelihood of victimization. Tests conducted alongside the British Crime Survey (BCS) and the Scottish Crime Survey (SCS) provide useful evidence along these lines. Lynn (1997) describes a BCS experiment that urged nonrespondents to provide some limited information; people who said that they did not want to be interviewed were pressed to give very short answers about the extent of recent victimizations against them. These capsule assessments were found to be consistent with victimization estimates among people who completed the survey. Similar findings were registered in an SCS test documented by Hope (2005); that test also compared responses gained by face-to-face interviewing with those obtained by telephone, since a change to telephone collection was being considered for the 2004 administration of the SCS.

In the test, face-to-face and telephone interviews were conducted in parallel; the face-to-face interviews recorded a 67 percent response rate compared with 49 percent by telephone, with the difference attributed to refusals to be interviewed. When victimization estimates from the two modes of administration were compared, the telephone rates were found to be consistently higher, raising the possibility that the telephone administration had the effect of oversampling persons with incidents to report. A follow-up test recontacted some respondents from the first survey and compared victimization estimates for the group of people who responded on the first contact with those who had their refusals "converted" to responses in the second pass. Victimization rates were found to be lower for the "converted" group than for the initial respondents, corroborating the hypothesis that refusals are more likely to come from nonvictims (people with no incidents to report) than from victims.

3–A.2 The Rise in Survey Costs

From the fiscal and operational standpoint, the major consequence of increased nonresponse is increased survey costs. These are incurred when the data collection effort seeks to maximize response rates, devoting field resources to repeated attempts to contact households for interviews. Cost inflation would be more modest if only one call were made to each household. In the NCVS and other surveys seeking high-quality estimates, repeated callbacks and efforts at persuasion are introduced for sample cases that have not yet been interviewed. Repeated calls in face-to-face surveys require the interviewer to drive to the sample unit and attempt contact. If no one in the household is at home, another call—often on another trip—is required. If a contact is achieved but the householder is reluctant to participate at the time, another call is made. What results from such a recruitment protocol is that noninterviews require more effort than interviews; the cost of a failure is larger than the cost of a successful interview. As the difficulty of making contact and gaining cooperation increases over time, the costs of the total effort increase if response rates are to be maintained.

In short, attempting to achieve high response rates in a survey of a population presenting growing difficulty in making contact and gaining cooperation will lead to cost inflation.

3–A.3 The Linkage Between Response Rates and Nonresponse Error

It is traditional to attempt to maximize response rates in an effort to reduce nonresponse error. This flows from a simple deterministic view of nonresponse error in a sample mean (like the number of victimizations reported divided by the number of persons) as a function of the nonresponse rate and the difference between respondent and nonrespondent means.
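In symbols (a standard survey-sampling identity rather than an expression given in the report), the deterministic view writes the bias of the respondent mean as the nonresponse rate times the gap between respondent and nonrespondent means, while the stochastic view sketched in the next paragraph replaces that product with a covariance between response propensity and the survey variable:

\[
\operatorname{bias}(\bar{y}_r) = \bar{y}_r - \bar{y} = \frac{n_{nr}}{n}\,\bigl(\bar{y}_r - \bar{y}_{nr}\bigr)
\qquad \text{versus} \qquad
\operatorname{bias}(\bar{y}_r) \approx \frac{\operatorname{Cov}(\rho_i, y_i)}{\bar{\rho}},
\]

where \(\bar{y}_r\) and \(\bar{y}_{nr}\) are the respondent and nonrespondent means, \(n_{nr}/n\) is the nonresponse rate, \(\rho_i\) is person \(i\)'s propensity to respond, and \(\bar{\rho}\) is the average propensity. Either form makes the same practical point: a high nonresponse rate produces little bias when respondents and nonrespondents (or propensities and victimization) differ little, and substantial bias when they differ sharply.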

Increasingly, empirical studies have shown that a stochastic view of nonresponse is more appropriate, viewing each decision to be a respondent as subject to uncertainty. In this view, high correlation between the likelihood of participating and the survey measures produces nonresponse bias in such descriptive statistics. Which NCVS estimates might exhibit such links to response propensities is at this point an open question. Some NCVS estimates might be biased by nonresponse and others might not. New studies are appropriate to gauge what value BJS should place on high NCVS response rates, under both the current design and future alternative designs.

Given the ubiquity of the nonresponse problem across federal surveys, recent U.S. Office of Management and Budget (2006b) guidelines call for analyses of nonresponse bias when either unit or item nonresponse reaches certain levels. The NCVS unit response rate is such that this threshold has not been crossed; still, we know of no effort by BJS or the Census Bureau to mount a full nonresponse bias study for the NCVS.

3–B CHALLENGES OF SELF-RESPONSE IN MEASURING VICTIMIZATION

3–B.1 Cognitive Challenges: Telescoping and Forgetting

As noted above, the NCVS emerged from the National Crime Survey only after several years of conceptual development and methodological research. The research was path-breaking in that it helped launch what is sometimes called the cognitive aspects of survey measurement (CASM) movement (aided by a Committee on National Statistics workshop; National Research Council, 1984). Much of the labor of that redesign effort (Biderman et al., 1986) targeted improved reporting among respondents to the NCS. Importing key notions from cognitive processing models, it was noted that autobiographical reports were fraught with weaknesses. Memories were viewed as being formed at an "encoding" step, in which sense-based observations were retained, often in a manner that was heavily dependent on the situation during the experience of the events. Not all encoded memories could be easily retrieved on demand. The studies found that "forgetting" events that did occur was a challenge to the survey. Consistent with long-standing results from cognitive psychology, it was found that events that did not induce emotional reactions ("nonsalient" events), those that happened frequently, and those that occurred far back in time tended to be underreported. Thus, "forgetting" was a problem for the NCS.

Research on context-dependent recall suggested that individual words and mentions of types of related events were effective "cues" to memory recall. Much of the research on the screener questions, therefore, was aimed at improving the rate of reporting of incidents as a way to attack the problem of "forgetting."

The "short cue" version of the instrument that resulted from the research attempted to provide a rich set of cues for each victimization type. In this regard, a marriage between the incident report and the screening questions was key. The screener questions were designed to maximize recall, even at the risk of overreporting incidents through duplicate reports about the same event or misdating of an event that occurred outside the reference period. The role of the incident reports was to weed out such duplicate or out-of-scope reports.

The finding that forgetting was a function of the salience of the event to the person and the length of time since the event implied that minor victimizations occurring further back in time were most fraught with reporting errors. The length of the reference period (the time from the start of the eligible period for events to be in scope to the end of that period) and the length of the recall period (the time between the start of the in-scope period and the day of the interview) were issues that could affect the quality of reports. Longer periods yielded poorer reports (Miller and Groves, 1985; Czaja et al., 1994), generally through a mix of forgetting and misdating events. The redesign recommended a 6-month reference period, a recommendation based on the findings of increased measurement error due to forgetting and telescoping with 12-month reference periods.

There was another antidote to misdating or telescoping errors, which was already in place in the NCS—the use of a bounding interview. A bounding interview in the context of the NCS was the first-wave interview with each respondent, in which events in the 6-month reference period before the interview were reported. No data from the bounding interview were used in estimation (another recommendation stemming from findings of forward telescoping errors). Instead, the events reported in the bounding interview were made known to the second-wave interviewer to verify that an incident reported in that interview was not a duplicate of a report in the first, bounding interview. This was thought to reduce forward telescoping errors in the NCS estimates. Some research in the redesign focused on whether the data from the bounding interview might be integrated through statistical models into the estimates, but that never led to such a recommendation.

The panel notes that the design features of the reference period, the cuing mechanisms of the screener questions, the nature of the incident reports, and the use of the bounding interview technique are mutually connected. It is difficult to evaluate one of these features without simultaneously considering the others.
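The bounding logic can be illustrated with a brief sketch. This is a hypothetical illustration: the data structures, field names, and matching rule below are our own assumptions rather than the actual NCVS or Census Bureau procedure, but it conveys how reports from the bounding interview might be used to flag apparent duplicates (possible forward-telescoped events) for interviewer follow-up rather than letting them be counted twice.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Incident:
    crime_type: str      # e.g., "burglary", "simple assault"
    incident_date: date  # respondent's best report of when it happened
    description: str     # brief narrative, kept only for interviewer review

def flag_possible_duplicates(prior_wave, current_wave, window_days=60):
    """Flag current-wave incidents that resemble reports from the bounding wave.

    An incident is flagged when a prior-wave report of the same crime type
    falls within `window_days` of its stated date.  Flagged pairs would be
    probed by the interviewer, not silently dropped.
    """
    flagged = []
    for cur in current_wave:
        for prev in prior_wave:
            same_type = cur.crime_type == prev.crime_type
            close_in_time = abs((cur.incident_date - prev.incident_date).days) <= window_days
            if same_type and close_in_time:
                flagged.append((cur, prev))
                break
    return flagged

# A burglary reported in the bounding interview resurfaces, misdated forward
# into the new reference period (forward telescoping); the bicycle theft is new.
wave1 = [Incident("burglary", date(2005, 3, 10), "back door forced, TV taken")]
wave2 = [Incident("burglary", date(2005, 7, 2), "TV taken after door forced"),
         Incident("theft", date(2005, 8, 15), "bicycle taken from yard")]
print(flag_possible_duplicates(wave1, wave2, window_days=150))
```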

3–B.2 Measuring Hard-to-Measure Crimes

A key conceptual strength of the NCVS is its ability to elicit information about victimization incidents that are not reported to police. This is particularly the case for such personally sensitive crimes as rape and domestic violence, as well as simple assault and other incidents that are acts of violence but that victims may decline to report (or judge not sufficiently severe to report) to authorities. The 1992 redesign concentrated on improving the screening questionnaire to more effectively and accurately probe respondents to recall and report such incidents, doing so by increasing the density of cues and using multiple frames of reference. Both the pre- and postredesign questionnaires emphasized the need to broach questions in language that is accessible and understandable (and not steeped in legal jargon) in order to boost cooperation and accurate recall.

However, improvements to the screening and cuing procedures leave open the question of whether reporting of hard-to-measure crimes is full and complete and whether other approaches may be preferable. Conceptually, the measurement of sensitive crimes through personal interviewing is a sounder approach than reliance on police reports, but it is important to consider that some crimes that are not reported to police may not be reported to interviewers, either. From the technical perspective, some hard-to-measure crimes—notably domestic violence—present continuing measurement challenges due to their high frequency; determining an accurate count is formidably difficult, and obtaining detailed information on specific incidents even more so. In this section, we briefly discuss the challenges in getting accurate survey reports in two areas: measurement of rape and domestic violence and description of repeated (series) victimizations.

Rape, Domestic Violence, and Simple Assault

Several researchers have reported that the NCVS yielded lower estimates of the incidence of rape and domestic violence than other surveys. For example, before the 1992 switch to a redesigned instrument, the National Crime Survey produced estimates of domestic violence that were an order of magnitude smaller than those produced by other surveys.[2] Similar results obtain for rape: the pre-1992 NCS instrument did not define rape for the respondent and did not directly ask respondents whether they had been victims of attempted or completed rape (Bachman and Taylor, 1994:506).

[2] Bachman and Taylor (1994) compared the then-available estimates of family violence against women from the NCVS to results from the National Family Violence Survey (NFVS). The cross-sectional NFVS was conducted twice, in 1975 and 1985, by the Family Research Laboratory of the University of New Hampshire and reached a sample of 2,143 and 6,002 households in the two administrations, respectively. The survey suggested that around 160 per 1,000 married couples experienced at least one "violent incident" in 1975 and 1985. By comparison, the NCVS—which did not ask specifically about violence by family members before the redesign—yields an estimate of the annual rate of family violence against women of just 3.2 per 1,000. However, the two survey estimates are not directly comparable because they frame incidents differently: the NFVS instrument asked specifically about "conflict" among family members rather than posing more detailed probes about violent incidents.

The redesigned NCVS asked respondents more directly about family violence and rape. The new instrument asks specifically about violence or threats perpetrated by "a relative or family member." The redesigned instrument also asks more directly about "unwanted sexual activity," including by those who are well known to respondents. A range of probes also distinguishes between verbal and other threats, attempts, and completed rapes.

Post-redesign research comparing results from the new NCVS instrument with previous versions has largely focused on broader crime categories (Kindermann et al., 1997) or differences by analytic groups (Cantor and Lynch, 2005) and not on specific crimes like rape or domestic violence. Still, several studies compared NCVS rates with those generated by other surveys on these categories. Rand and Rennison (2005) contrasted rape and assault rates in the 1995 NCVS with those from the National Violence Against Women Survey (NVAWS), a telephone survey of U.S. adults. They obtained the estimates of annual incidence shown in the NCVS and NVAWS columns of Table 3-1.

Table 3-1 Rape and Assault Rates, National Crime Victimization Survey and National Violence Against Women Survey, 1995

                             NVAWS    NCVS    Adjusted NCVS
  Rape                         8.7     1.9          2.6
  Intimate Partner Assault    44.2     6.6         26.7
  Assault                     58.9    25.8         80.4

NOTES: Rates per 1,000 population. "Adjusted NCVS" estimates are calculated by including the reported count of incidents in series victimizations.
SOURCE: Rand and Rennison (2005).

Rand and Rennison (2005) suggest several explanations for the discrepancy between the data sources: the NVAWS may elicit more victimizations by asking about rapes and assaults more explicitly; the NVAWS may be more vulnerable to telescoping, in which incidents outside the reference period are included; or the two data sources may diverge because of their measurement of recurring victimization. As a one-time, single-interview survey, the NVAWS had no capacity for bounding responses, "suggesting that [NVAWS] estimates are likely inflated to some unknown degree" (Rand and Rennison, 2005:274).

(Response rates in the NVAWS were also much lower than in the NCVS, although it is difficult to know what biases might result.) The NCVS records as a single series victimization a group of six or more victimizations that were similar in nature but difficult for the respondent to recall individually. Rand and Rennison (2005:275) estimate that series victimizations account for about 10 percent of violent incidents against women. BJS publications exclude series victimizations from annual estimates. After adjusting for age and crime types and counting the number of incidents among series victimizations, Rand and Rennison (2005) obtained the estimated rates of annual incidence reported in the "Adjusted NCVS" column of Table 3-1. Despite the adjustment for series victimization, rates of rape and intimate partner assault are lower in the NCVS than in the NVAWS. However, Rand and Rennison (2005:279) found that the difference is statistically significant only in the case of intimate partner assault. The discrepancy between the data sources is largest for intimate partner violence, suggesting that at least part of the divergence may be due to the classification of intimate partners rather than to the measurement of victimization.

Research on the NCVS redesign also suggests that measurement of simple assaults (attacks without a weapon that result in minor injury) depends closely on the survey instrument. With a broader screening interview that cued respondents to consider events they might not define as crimes, the redesigned survey recorded roughly twice the number of simple assaults as the old instrument (Lynch, 2002). While it is difficult to gauge whether there is still underreporting of less serious personal crime in the NCVS, research on the redesign underlines the sensitivity of estimates to the survey instrument.

The possibility that such crime types as sexual victimization and domestic violence may still be underreported in the standard personal interview context—despite improvements in cuing and screening—highlights the importance of researching means for incorporating self-response options into the NCVS. These include such approaches as web administration and turning the laptop computer around for parts of an interview so that respondents read and answer some questions without interaction with the interviewer. We discuss these further in Chapter 4.

Repeated Victimizations

Repeated victimizations may be underestimated in the NCVS because of the way in which series victimizations are handled. As described in Section C–3.d, NCVS interviewers collect specific information (using an Incident Report form) for each victimization incident reported by a respondent, except in instances when six or more very similar incidents occurred within the 6-month reference period. In those cases, a single incident form is completed based on the details of the most recent incident.

BJS excludes these series victimizations from its standard NCVS estimates, although basic counts of series and nonseries victimizations are tabulated (see, e.g., Bureau of Justice Statistics, 2006a:Table 110). Prior to the NCVS redesign in 1992, the threshold for defining a series victimization was three or more similar incidents. The change in threshold provides for a somewhat fuller accounting of crime types in which repeated victimization may occur; as in the previous section, domestic violence and intimate partner violence are examples to which this may apply. We know of no research that has estimated the effect of the redesign on the reporting of series victimizations—that is, over and above the emphasis on more effective screening and elicitation of incidents, whether the NCVS instrument is more likely to generate reports of crimes for which series victimization rules would apply. Still, the manner in which series victimizations are collected and counted is an important methodological issue, one that raises concern about whether some crimes are underestimated as a result.

The Rand and Rennison (2005) results indicate that individually counting series victimizations can help bring the NCVS more into line with other surveys. The scope and effect of series victimizations are also analyzed by Lynch et al. (1998) and Planty (2007); Planty and Strom (2007) compare the effects of different counting rules with the resulting instability in the estimates.

Some have used the panel design of the NCVS to estimate repeat victimization. This is a difficult analysis because residential mobility contributes to attrition from the panel, and victimization contributes to residential mobility. Naive panel estimates may underestimate repeat victimization because they undercount victimization of those who have moved and been lost from the survey. Ybarra and Lohr (2002) impute victimization rates to respondents who are lost to residential mobility. They obtain very high repeat victimization rates for violent crime and domestic violence, but these estimates are highly sensitive to the missing-data model.
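To make the counting-rule issue concrete, the sketch below computes a victimization rate per 1,000 persons under three rules: excluding series reports (the standard BJS practice described above), counting each series report as one incident, and counting the reported number of incidents, optionally capped. The data, function name, and cap value are hypothetical, and survey weighting is ignored for simplicity; the point is only how strongly the choice of rule can move the estimate, as the Rand and Rennison (2005) adjustment illustrates.

```python
def rate_per_1000(reports, population, series_rule="exclude", cap=None):
    """Victimization rate per 1,000 persons under a given series-counting rule.

    reports: list of (is_series, n_incidents) pairs, one per incident report.
    series_rule: "exclude" drops series reports entirely, "count_one" counts
    each series report as a single incident, and "count_all" uses the reported
    number of incidents, optionally capped at `cap`.
    """
    total = 0
    for is_series, n_incidents in reports:
        if not is_series or series_rule == "count_one":
            total += 1
        elif series_rule == "count_all":
            total += min(n_incidents, cap) if cap is not None else n_incidents
        # under "exclude", series reports contribute nothing
    return 1000 * total / population

# Hypothetical data: 40 ordinary reports plus 5 series reports of 6-20 similar
# incidents each, from 10,000 interviewed persons (weighting ignored).
reports = [(False, 1)] * 40 + [(True, 6), (True, 8), (True, 10), (True, 15), (True, 20)]
for rule in ("exclude", "count_one", "count_all"):
    print(rule, rate_per_1000(reports, 10_000, series_rule=rule, cap=10))
```

With these made-up inputs the rate moves from 4.0 per 1,000 (series excluded) to 4.5 (series counted once) to 8.4 (reported counts, capped at 10), which is why the choice of rule, and the stability of the resulting estimates, is a live methodological question.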

3–C FLEXIBILITY IN CONTENT AND METHODOLOGY

In Section 2–D we discussed the long-standing goal of analytic flexibility for the NCVS, that is, being able to accommodate different types of products. In this section, we expand the discussion of flexibility to include emerging issues in topic areas covered by the NCVS (principally through the use of supplements) and in general methodology.

3–C.1 Is the NCVS Flexible Regarding Changes in Victimization?

As a survey-based method of data collection, the NCVS has the capacity to be a relatively timely and flexible instrument for gathering information about "new" types of crime that are of concern to the public. Since its inception, the NCVS survey instrument has added new measures of criminal victimization and improved existing measures; this was particularly the case with the 1992 redesign, which was intended to improve the survey's measures of rape and sexual assault, nonstranger violence, and other "gray area" victimizations. However, the most common way of providing flexibility in the topical coverage of the NCVS has been the addition of supplemental questionnaires, most often at the behest of other government agencies. School violence is one example of a type of victimization for which periodic supplements to the NCVS have been developed and administered, in this case with the cooperation and sponsorship of the National Center for Education Statistics. Conducted in 1999, 2002, and 2005, the School Crime Supplement provides estimates of crime independent of the statistics gathered by police or by the schools. Over time, some of these supplemental questions have migrated into the main NCVS content, as with questions related to hate crimes.

In theory, a survey is a relatively nimble data collection vehicle—certainly compared with official-records methodology, in which changes in data collection depend on the cooperation of the myriad local agencies that assemble raw data—and so the NCVS instrument (or individual modules) should be able to be moved rapidly from concept to data collection. In practice, however, this process has often taken considerable time. For instance, the measurement of hate crimes using the NCVS began in response to a White House announcement in 1997 that directly offered the NCVS as the instrument of choice for estimating this crime. Research and development of questions using multiple rounds of focus groups and cognitive testing began soon thereafter. Nonetheless, the final set of questions was not administered to the full sample until 2000 (Lee et al., 1999; Lewis, 2002; Lauritsen, 2005). Some of the delay resulted from the complexity of the issue: for example, some focus group participants had trouble deciphering the hate crime terminology, others were unclear about the kinds of evidence necessary for such a designation, and some felt that queries about sexual orientation should not be asked. Other delays stemmed from the fact that the survey had not yet been fully computer-automated because of persistent budget difficulties.

To some extent, the perceived slowness in implementing new measures and rigidity in approach have been attributed to the Census Bureau as the data collector for the NCVS and other federal surveys. Certainly, major change does not occur easily or quickly in the bureau's flagship product, the decennial census—for instance, the switch to the mail (rather than personal visit) as the principal collection mode for the 1970 census was preceded by major tests dating back to 1948. More recently, the 2006 full-scale implementation of the bureau's American Community Survey followed a decade of pilot testing and a midscale implementation as an experiment in the 2000 census (National Research Council, 2006).

With specific regard to the NCVS and other demographic surveys, some delay in fielding changed questions is almost certainly due to what is typically considered a good thing: the Census Bureau's keen attention to cognitive testing in order to ensure that questionnaires are clear to respondents. For a survey like the NCVS, which asks many questions about hard-to-define (and hard-to-discuss) concepts without seeming legalistic in tone, cognitive tests and other pretesting can be particularly valuable. In addition, some time is required for the U.S. Office of Management and Budget to review, process, and clear proposed survey forms, as it is required to do by law.

In comments to our panel, officials in charge of the British Crime Survey (BCS) noted that they can typically add and change items on the BCS questionnaire within months of the time a decision is made in the United Kingdom's Home Office. By comparison, even though the survey is now fully computerized, Census Bureau representatives noted that a two-year lead time should be considered typical. In practical terms, the slowness of the process at the Census Bureau has made the NCVS less flexible than victimization surveys in other countries and, in turn, less responsive to short-term needs for information about victimization and its outcomes. However, the trade-off between rapid turnaround and end data quality is admittedly complex.

Methodological Issues

With the 1992 implementation of the redesign—and a subsequent 14-year transition to all-electronic survey instruments—the NCVS became an important adopter of computer-assisted telephone interviewing (CATI) and computer-assisted personal interviewing (CAPI) methodologies. Although the use of CATI interviewing from centralized sites and the switch from paper questionnaires to CAPI were commonly billed as a major potential source of cost reductions, many survey organizations have found that these steps toward survey automation have fallen short of low-cost promises; see National Research Council (2003b) for a fuller discussion. Indeed, as part of its planned set of cost containment measures for 2007, BJS and the Census Bureau dropped the use of Census Bureau CATI call centers for the NCVS.

However—consistent with the NCVS objective of accurate data collection—the strong benefits of computer-based survey techniques must be emphasized. Properly implemented, the question-to-question skip patterns of an electronic survey instrument can make interviewers' tasks easier and quicker and ensure that respondents are guided through portions of the questionnaire (e.g., the screening questions) in a more uniform manner. Electronic administration also permits the use of basic editing routines during the course of the interview, allowing for the correction of contradictory answers and data entry errors.

Although centralized CATI implementation has not reduced survey costs as much as hoped—an outcome that is certainly not unique to the NCVS—it is important that the NCVS continue to explore methodological advances that may produce greater accuracy. That said, altering the mix of computer assistance is a change that requires careful pretesting.

By its nature, the NCVS requires respondents to recall and describe events in their past that are unpleasant or uncomfortable at best and intensely traumatic at worst. Hence, a possible methodological improvement suggested by other survey research is the incorporation of self-response modes into the survey. Computer-assisted self-interviewing (CASI) techniques effectively turn around the CAPI dynamic: rather than interviewers reading questions from a laptop screen, the laptop is handed over so that only the respondent sees the questions (and his or her answers). A further variant of the basic technique, audio CASI or ACASI, has respondents listen to questions through headphones while going through a questionnaire on the computer screen. The basic motivation of CASI is that respondents may be more likely to divulge socially sensitive information if they can do so with privacy and without verbally reporting to an interviewer. ACASI research suggests that the methodology is effective in eliciting more reports of sensitive information than standard interviewer-administered approaches (see, e.g., Tourangeau and Smith, 1996). Turner et al. (1998) provide a fuller review of CASI methods. ACASI has been implemented for some modules on the British Crime Survey, but it has not been used in other victimization surveys, nor has it been used in Census Bureau demographic surveys. However, it is notable that many federal government surveys contracted to the private sector that measure sensitive attributes use ACASI; these include the National Medical Care Expenditure Survey, the National Survey on Drug Use and Health, and the National Survey of Family Growth.

3–D CONSTITUENCIES AND USES: STATE STATISTICAL ANALYSIS CENTERS

Constituencies and consumers of NCVS data are varied and have diversified since the program's inception. Criminologists and federal justice policy researchers have historically relied on the NCVS to understand fundamental trends and the dynamics of victimization. However, in recent years the advent of a victim services infrastructure and increased public attention have widened the scope of interest in the NCVS specifically and victimization data more broadly. Contemporary users of the NCVS include:

• State justice statistics and services agencies;
• Victim services providers;
• Legislatures;
• State and local agencies, such as departments of health, mental health, and planning;
• Advocacy groups (e.g., domestic violence, child abuse, elder abuse, racial disparity); and
• The public.

Contemporary users are interested in victimization data that inform issues or problems of direct concern to them or their mission. This often means detailed findings on the incidence and nature of victimization in subpopulations, measures of change in the incidence and nature of victimization (trends), information on victimization in a specific geographic area (e.g., state, region, locality, neighborhood), and data on characteristics relevant to specific victims.

In this section—and in most of this report—we do not make as exhaustive a listing of constituencies and uses for the NCVS as our overall charge suggests. This is due to the initial interest of BJS in an examination of NCVS design options. In our remaining meetings and final report, the panel intends to canvass a fuller set of constituencies for BJS products. For this NCVS methodological report, we focus principally on the use of victimization data by state-level statistical analysis centers and their related support organization, the Justice Research and Statistics Association.

3–D.1 SAC Network

Since its creation, BJS has supported state efforts to collect, analyze, and report criminal justice statistics through what was initially known as the Statistical Analysis Center (SAC) program. The SAC program was designed to foster criminal justice statistical infrastructure development in the states; its goal is to serve as a resource for policy formation and resource allocation by acting as a conduit for justice information between the federal and state governments and by providing additional information on the nature and dynamics of crime at the national and state levels. The SAC program did not provide resources to completely build state criminal justice statistical systems and clearinghouses but instead supported state efforts in this regard.

The program was redesigned in 1996 so that support to states would be for specific research or system development projects of mutual interest to the states and the U.S. Department of Justice. The State Justice Statistics (SJS) program emerged to "maintain and enhance each state's capacity to address criminal justice issues through collection and analysis of data" (Bureau of Justice Statistics, 2006c:2).

A network of state SACs exists today in each state and two territories, created either by state statute or executive order. Although the size, location in government, and authorizing features of each SAC vary, the SACs essentially serve similar functions and contribute to justice policy formation through research, support of legislative activity, and executive policy development, and by serving as a resource for state justice and related agencies. The SACs have also been important resources for BJS by providing assistance, data, and research at the state level on problems or issues of national interest. Individual state SACs also benefit from membership in the Justice Research and Statistics Association, the professional organization of state centers, located in Washington, DC.

State SACs work closely with the criminal justice community and researchers in their state and are typically familiar with data systems, data quality, and information needs in their jurisdiction. The existing SAC network is familiar with the NCVS and the relevance of victimization research to the justice policy process. In some instances, SACs have conducted victimization surveys using varied methodological approaches in their own jurisdictions. A bibliography of recent reports related to victimization and victimization surveys solicited from the SACs is included in Appendix D, illustrating the ongoing interest in victimization at the state and local levels.

Our panel was informed in its work through a survey of SAC directors regarding the prevalence of victimization surveys conducted at the state or local level and the utility of the NCVS and victimization research for law and policy in their jurisdictions. In addition, three experienced SAC directors appeared before the panel to discuss the NCVS and victimization research needs.[3] Findings from the SAC survey indicate that victimization surveys are a valuable tool for policy makers and other users at the state and local levels. However, although the NCVS fulfills some of this need, it increasingly is unable to address issues of contemporary importance to victim services agencies, legislatures, advocacy groups, researchers, and governmental policy makers. Key findings from the survey are referred to below.

[3] Those appearing were Kim English, Colorado Division of Criminal Justice; Douglas Hoffman, Pennsylvania Crime Commission; and Phillip Stevenson, Arizona Criminal Justice Commission.

3–D.2 State Role for National-Level Data

NCVS data are used in a variety of ways by SACs and the agencies and organizations that work with them. Among the uses reported by SACs and evident in other published SAC reports are:

58 SURVEYING VICTIMS • State legislative support and testimony (the most common reported use); • Benchmarking; • Forecasting; • Program and policy evaluation; • Resource allocation; • Victim services policy, planning and operations (particularly for initia- tives funded under the federal Violence Against Children and Violence Against Women Acts); • Victim experiences and satisfaction with the criminal justice system; • Context and benchmarking in a broader study of recidivism patterns; • Protocol development (e.g., the dynamics and relationship of victim- offender has been used to inform child abuse protocols in some states); • Community oriented policing support and evaluation; • Work with advocacy groups and other nongovernmental organizations (NGO); and • Public and criminal justice system stakeholder education. Although national-level estimates from the NCVS do not speak directly to rates and occurrences in local geographic areas, the state SACs still cite the utility of having some kind of national benchmark. The NCVS is used often for the policy and planning efforts of SACs and other constituents in the absence of state, regional, or local victimization data. Findings on trends and characteristics from the NCVS are often found in reports, briefs, and other documents for purposes of illustrating points important for policy and law formation and resource allocation. Overall victimization rates remain of interest, but topical issues related to special victims have emerged as important for policy, planning, and service delivery in most jurisdictions. Of particular interest are domestic and sex- ual violence, factors related to the reporting of crime to police, and victim experiences with the criminal justice system. 3–D.3 Need for Finer Level Estimates Although the survey of state SACs suggested continued interest in na- tional estimates from the NCVS, it also clearly suggested a need and desire for victimization data and related information at the state and in some cases city or regional level. The national victimization measures contained in the current NCVS are generally useful as a triangulation tool, but they do not address the need for data at the state or local level given lack of the ability

to disaggregate the findings. Users would find victimization data at the state, regional, and in some instances city or local levels more directly relevant to the policy and program uses encountered today.

State Victimization Surveys

In the absence of state-level estimates produced directly from the NCVS, about half of the state SACs have conducted a victimization survey of their own in recent years. Although the methods of these surveys vary somewhat, most replicate basic questions in the NCVS, primarily because those questions have been tested and validated over time and provide a basis for comparison with the NCVS. These subnational victimization surveys include efforts in Alaska (Giblin, 2003), Idaho (Stohr and Vazques, 2001), Illinois (Rennison, 2003; Hiselman et al., 2005), Kentucky (May et al., 2004), Maine (Rubin, 2007), Minnesota (Minnesota Justice Statistics Center, 2003), Pennsylvania (Young et al., 1997), South Carolina (McManus, 2002), Utah (Haddon and Christenson, 2005), and Vermont (Clements and Bellas, 2003). We have drawn our observations from the experiences reported in these states; additional information on some of these state efforts is given in Appendix D.

Basic observations from the state victimization surveys conducted to date include:

• Methods of data collection vary, but the surveys used either mail questionnaires (Idaho, Illinois, Minnesota, and Utah) or telephone interviews (Alaska, Kentucky, Maine, Oregon, Pennsylvania, and Vermont). Because of their one-time or semiregular frequency, none has attempted to replicate the NCVS panel structure of repeated interviews at the same addresses (or phone numbers), relying instead on cross-sectional samples.

• The number of respondents ranged from about 800 to 3,100. Response rates varied between 12 and 65 percent, with no consistent pattern based on method of delivery.

• The state surveys generally do not attempt to estimate statewide or subpopulation rates from survey data. Most surveys report findings from within the sample.

• Surveys typically focused on general victimization experience to calculate the extent of victimization in the sample, often similar to the NCVS screener questions.

• All of the surveys were conducted on the adult population, primarily because of the legal and methodological difficulties associated with surveying younger groups.

60 SURVEYING VICTIMS • Most surveys collected data on special populations or issues of con- cern in the state (e.g. stalking, domestic violence, hate crimes, school victimization, disability, geographic area, gangs). • Most surveys also examine perceptions of crime and public safety, fear of crime, and reporting of crime to police. • Several surveys measured knowledge and use of victim services. • Two states report using the BJS-developed Crime Victimization Sur- vey software (described further below), effectively replicating the basic NCVS content. Victimization surveys conducted at the state and local levels generally have not produced the level of statistical precision required for estimation used by the NCVS or similarly constructed surveys. Most have relied on sample sizes consistent with measuring public opinion, experienced mixed response rates, and generally measured self-reported victimizations without collecting detailed incident data. In most instances the studies have been conducted only once or for intervals in excess of one year, primarily for cost and administration reasons. Our survey of SAC directors suggested that many would be greatly in- terested in conducting their own state or local victimization studies if it were practical to do so and if resources were available. However, mount- ing a survey is a costly proposition and is most often impractical for state agencies; it is especially impractical for local agencies and nongovernmental organizations. Most agencies, even at the state level, do not have the exper- tise required to design, implement, and analyze data from a statistically and methodologically valid survey. The ability to do so is further constrained by a lack of experience in conducting call center activities (for phone interviews), sampling design, and the availability of skilled analysts. As a consequence, replications of the NCVS at the state and local levels are not widely con- ducted. When studies have been carried out, it is often with the assistance of university-based researchers. BJS has attempted to bridge this gap by developing desktop computer- based Crime Victimization Survey software. The software replicates many of the features of the NCVS survey and allows screening and detailed incident reports. Although it is a useful development, only a few states and localities have conducted their victimization surveys using the tool. Although the soft- ware product automates some parts of the process, mounting a state-level representative survey still requires personnel and resources that individual states have found difficult to obtain. The Alaska SAC report on the use of the BJS-developed tool for a victimization survey in Anchorage (Alaska Jus- tice Statistical Analysis Center, 2002) reviews the basic features of the BJS-

CURRENT DEMANDS AND CONSTRAINTS ON THE NCVS 61 provided software and points out implementation problems raised during its early development. Due to the resource demands, state victimization surveys tend to be one- shot or episodic events. However, a few states have conducted their own surveys on a more-or-less regular cycle (e.g., Minnesota’s mail-based survey was conducted in 1992, 1996, 1999, and 2002). Consequently, the state surveys tend to be directed toward comparison with national NCVS trends; without a fuller time series of state estimates, they are limited in their ability to evaluate program and policy impacts at the state, regional, or local level. It is important to note that the need for finer level data does not necessar- ily mean a strict disaggregation by state or other level of geography. Rather, state and local agencies like the state SACs would benefit from estimates based on samples that are “more like us”—demographically representative— in other respects than sheer geography. For instance, having more measures that can be disaggregated by level of urbanicity (urban, rural, suburban) would be useful and more relevant to individual jurisdictions than omnibus national totals. As an example, estimates from the Vermont Victimization Survey trended well with measures based on the NCVS sample from rural areas; hence, use of a “rural NCVS” analysis would be sufficient and more cost-effective than conducting an original study in Vermont on a regular basis. Have Local Needs for Victimization Data Changed? The demand for victimization data and research has significantly ex- panded since the NCVS was implemented in 1972. Perhaps the most impor- tant growth driver has been increased demand for more sophisticated and geographically disaggregated measures among state and local constituents. The contemporary rediscovery of crime victims and ensuing victimization movement parallels and is interwoven with the need for increasingly com- plex and textured victimization data (see Karmen, 2007). Aggregate national estimates of victimization rates and crime victim characteristics that were in- novative at the time the NCVS was developed remain important for trend purposes, but they do not fully address needs that have emerged at the state and local levels. In the decades following the development and implementation of the NCVS, the field of victimology and a victim services infrastructure have emerged, significantly fueled by federal, state, and private support. Karmen (2007:27–41), Walker (1998), and others have documented varied factors that in concert have contributed to expansion of victimology and victim services in recent years. Social visibility of vulnerable and politically under- represented populations has propelled the need to understand victimization rates and patterns for various subpopulations as well as social stratification

by race, class, and gender. Such forces as escalating crime rates in the 1960s and 1970s; the women's, civil rights, and children's rights movements; the elevation of domestic and sexual violence as public issues; and subsequent policy at the federal and state levels have pushed the demand for more data and research on crime victims. Crimes such as hate crimes and stalking, which were not part of the criminological lexicon when the NCVS was developed, illustrate how the environment and conceptualization of victimization have changed.

Understanding the general victimization rate for purposes of correlation with police-reported crime rates is still important at the state and local levels, primarily for assessing crime trends and patterns. However, more detailed and segmented information about victimization patterns is often needed to craft policy, services, and resource allocation. Contemporary victimization issues include understanding victimization across different population segments, some of which are vulnerable and of significant public concern; such segments have been addressed in the NCVS through topic supplement surveys, like the School Crime Supplement and the Police-Public Contact Survey supplement (see Demographic Surveys Division, U.S. Census Bureau, 2007a).

The NCVS topic supplements have provided significant new data and information and as such have been important innovations to the NCVS. However, they are also—as currently implemented—adjuncts or add-ons to the main NCVS and hence may not necessarily reflect the type of sample that would ideally be drawn to study the subject. The need for supplemental surveys reflects contemporary demand for enhanced victimization knowledge and should be reexamined relative to the continued role and form of the NCVS.

Significant state and federal resources have helped shape the victims movement over the past three decades and consequently have indirectly fueled the need for richer and more geographically focused victimization data. Federal resources have been provided to states directly through landmark legislation such as the Victims of Crime Act of 1984 (VOCA; P.L. 98-473, §1401 et seq.) and the Violence Against Women Act of 1994 (VAWA, reauthorized in 2000; P.L. 103-322 and 106-386). The VOCA legislation created the Office for Victims of Crime in the Office of Justice Programs, U.S. Department of Justice, and has assisted states in various ways to construct victim services and compensation fund infrastructures. The VAWA continued efforts in this area by providing resources to improve the investigation, prosecution, processing, and enforcement of restitution for victims of crime. The National Center for Victims of Crime has also emerged as a central nongovernmental resource in this movement since 1985 (National Center for Victims of Crime, 2003).

The Office for Victims of Crime has grown into a significant resource and facilitated development of a victim services and enforcement infrastructure at the state level. The demand for victimization data, information, and

CURRENT DEMANDS AND CONSTRAINTS ON THE NCVS 63 research has grown exponentially as programs develop, service resources are allocated, and programs and policies are evaluated. Most federal justice grants have required evaluation components for at least a decade, another source of increased demand for victimization research at the state and local levels. One side note related to efforts to understand victim characteristics and victimization is in order. Some states and jurisdictions have implemented incident-based reporting systems and, specifically, systems compliant with the National Incident-Based Reporting System (see McManus, 2002). Con- temporary records management and incident-based systems hold promise for capturing victim data linked to offenders and crime characteristics, but this promise is yet to be realized on any scale. While NIBRS and simi- lar data may comment on victim-offender relationships and characteristics, these data remain constrained, since they represent reported offenses. In many jurisdictions, however, incident-based or NIBRS data are the only small area victim data available and have been used for policy, planning, and evaluation in the absence of comprehensive victimization data. Legislative and Executive Support Crime policy bills are quite prevalent in legislatures around the country, with few actually making it into law. How- ever, extensive debate occurs and requires reasoned analysis. Policy is often driven by celebrated cases, and crime data are needed to debunk myths or unusual circumstances. The dangers of the lack of information are less ef- fective policies and poor allocation of state and federal resources. Having data over time is also extremely important, although the desir- able time intervals of measures may vary. In some rural jurisdictions, victim- ization patterns may not change quickly enough to warrant annual surveys. Victimization data may not be able to comment specifically on the efficacy of particular programs, but over the long term these data are critical to un- derstanding larger impacts of policies on crime and victimization patterns. 3–E VALUING VICTIMIZATION INFORMATION: COMPARING THE COST OF VICTIMIZATION MEASUREMENT WITH BENCHMARKS Arguably, the most significant challenge faced by the NCVS—and largest constraint on its survival—is the availability of funding resources. As de- scribed in Chapter 1, BJS has been subject to essentially flat funding for a number of years, constraining options on the NCVS as the cost of con- ducting the survey has grown. Accordingly, it is important to consider the question of the value of the information that the NCVS provides. This can be done formally, as suggested by the framework outlined in Box 3-1. In

this section, we take a practical approach to assessing the value of NCVS information by comparing the cost of the NCVS with several relevant benchmarks: estimates of the fiscal cost of crime, the costs of other federal survey data collections, and the expenditures of other countries in measuring victimization.

Box 3-1 Value of Information from a Decision-Making Point of View

A useful perspective in designing or redesigning a public statistical system or survey is to consider the value of information. The perspective is valuable because the purpose of the system is to improve the efficiency of actions by the government and other users of the data. Empirical valuation of information is extremely difficult (National Research Council, 1976a; Savage, 1985) for a variety of reasons: uses may not be identified, uses may be identified but the role of data may be imperfectly understood, and valuation of alternative choices under different states of nature may be infeasible. Some examples in which the valuation of information may be feasible are discussed by Spencer (1982).

The basic idea for the value of a survey such as the NCVS may be illustrated by the following stylized example. Suppose that in the absence of NCVS data, alternative actions A1, A2, ..., Am would be taken with respective probabilities p1, p2, ..., pm. With the NCVS, the same actions are taken with respective probabilities q1, q2, ..., qm. The expected value of information for this use alone may be represented as

(q1 − p1)U(A1) + (q2 − p2)U(A2) + ... + (qm − pm)U(Am),

where U(Ai) is the expected value if action Ai is taken. The information is valuable if it leads to higher probabilities of more valuable actions being taken. Differences in values of alternative actions may reflect the differences in value of passing one law (or one version of a law) rather than another. Even if dollar valuation is not feasible, a sense of the impact of alternative laws may lead to a sense that the difference in value is on the order of tens or hundreds of thousands of dollars, or perhaps more.
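A minimal numerical sketch of this stylized calculation follows. The actions, dollar utilities, and probability shifts are hypothetical values invented for illustration; they are not estimates drawn from any actual use of NCVS data.

```python
# Stylized value-of-information calculation in the spirit of Box 3-1.
# All numbers below are hypothetical and chosen only for illustration.
actions = ["fund program A", "fund program B", "take no action"]
utility = [900_000, 400_000, 0]   # U(A_i), expected value of each action, in dollars
p = [0.2, 0.5, 0.3]               # probabilities of each action without NCVS data
q = [0.5, 0.4, 0.1]               # probabilities of each action with NCVS data

# Expected value of information: the shift in expected utility that the
# data induce by moving probability toward more valuable actions.
voi = sum((q_i - p_i) * u_i for p_i, q_i, u_i in zip(p, q, utility))
print(f"Expected value of information: ${voi:,.0f}")  # $230,000 in this example
```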

3–E.1 The Cost of Crime

The total cost of crime in the United States—including both tangible economic costs and intangible costs and covering such components as damages to victims and expenditures on the justice and correctional systems—is an elusive quantity to estimate. A large research literature has tried to estimate the economic costs of crime, and we briefly summarize some points from this work in this section. There is certainly a large speculative element to these figures, and the calculation of intangible costs is especially uncertain.

In raising the cost of crime as a comparison benchmark for the NCVS, we do not suggest that the costs of crime and the costs of victimization measurement should be directly linked (e.g., that spending on the NCVS should be some set fraction of the cost of crime). Instead, we offer the comparison for two purposes. The first is to reinforce the idea that crime is a sufficiently important and complex phenomenon facing the United States as to warrant multiple, complementary, and detailed statistical indicators (i.e., both the NCVS and the UCR, as discussed further in Section 3–F). The second is to highlight a unique and important substantive function of the NCVS: the survey is the only direct, systematic source of information on victims' economic losses due to crime.

Studies of the economic cost of crime often arise in the context of a benefit-cost analysis in which criminal justice spending is weighed against the economic losses associated with victimization. Gray (1979) provides a historical review of research on the costs of crime and traces the earliest studies to the early twentieth century. Cohen (2000, 2005) provides a comprehensive literature review and analysis.4 This section draws heavily from the discussion by Cohen (2000).

Research on the costs of crime distinguishes at least nine different types of costs: (1) direct property losses; (2) medical and mental health care; (3) victim services; (4) lost workdays, school days, or days of domestic work; (5) pain and suffering; (6) loss of affection and family enjoyment; (7) death; (8) legal costs associated with tort claims; and (9) long-term costs of victimization. Some of these costs accrue directly to crime victims and their families. For example, the cost of lost property that is unreimbursed by insurance is borne by the victim. Other costs are socially distributed. For example, losses reimbursed by insurance are passed on to society in the form of higher premiums.

These costs can be categorized broadly as either tangible or intangible. Tangible costs involve monetary payments, such as medical costs, stolen or damaged property, or wage losses. Intangible costs are nonmonetary and include things that are generally not priced in the marketplace, like pain and suffering or quality of life. In principle, tangible costs are relatively straightforward to estimate, but great uncertainty accompanies the estimation of intangible costs.

Although the calculation of tangible costs is conceptually straightforward, Cohen (2000:282) reports that the NCVS provides "the only direct source of crime victim costs." The NCVS obtains from crime victims dollar estimates of the costs of medical care, lost wages, and property loss (Klaus, 1994). These figures are likely to understate the total tangible cost because the recency of the victimization reference period excludes longer term medical costs. In addition, the survey does not count mental health costs or other less proximate costs, like moving from the neighborhood or buying home security systems.

4 Anderson (1999) also reviews previous studies of the cost of crime. Attempting to estimate indirect and opportunity costs associated with crimes, Anderson suggests that the annual net cost of crime in the United States is about $1.1 trillion.

Some estimates indicate that the tangible costs of victimization are higher than those recorded by the NCVS by a factor of 4 for robbery, a factor of 10 for assault, and a factor of 20 for rape (Miller et al., 1996). Other tangible costs of crime are missed entirely by the NCVS. White-collar crimes like fraud or theft of services are difficult to quantify because victims may not be aware of the crime. Potential victims also suffer (unmeasured) tangible costs in the form of crime prevention expenditures.

Intangible costs of pain and suffering and lost quality of life are even more difficult to estimate. Some studies have tried to capture the intangible costs of crime by studying the relationship between index crime rates and housing prices. These studies see the risk of victimization as capitalized in housing prices (Thaler, 1978). In another approach, Cohen (1988) used jury awards in tort cases to estimate the monetary value of pain and suffering and lost quality of life. Zimring and Hawkins (1995) criticized this and related work for its arbitrary measurement of intangible costs. Alternatives to jury awards, such as workers' compensation payments, might have been used, yielding alternative estimates of the intangible costs of crime. In short, intangible costs are highly uncertain and difficult to quantify.

Table 3-2 reports a range of estimates of the dollar cost of crime. The table compares the economic losses reflected in the NCVS, as reported by Klaus (1994), with those calculated by Miller et al. (1996), which include a more expansive inventory of costs. Klaus (1994) uses just those tangible losses reported in the 1992 NCVS. Miller et al. (1996) partly base their estimates of tangible costs on the NCVS, although they add estimates of mental health care and lifetime medical costs, as well as long-term productivity losses. Intangible costs are based on adjusted jury awards for pain and suffering. Clearly, estimates based on a broader consideration of costs yield far higher estimates than the NCVS alone. For violent crimes, intangible costs dominate estimates of the total economic loss.

Table 3-2 Estimates of the Average Economic Loss Associated with Criminal Victimization

                                 Miller et al. (1996)
Crime         Klaus (1994)   Tangible   Intangible      Total
Rape                  $234     $4,962      $79,202    $84,164
Robbery                555      2,238        5,546      7,784
Assault                124      1,508        7,589      9,097
Theft                  221        360            0        360
Burglary               834      1,070          292      1,362
Auto theft           3,990      3,406          292      3,698

NOTE: Intangible costs are estimates of lost quality of life. All figures are in 1992 dollars.
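The per-incident averages in Table 3-2 translate into aggregate losses only when combined with counts of victimizations, as the sketch below illustrates. The dollar figures are the Klaus (1994) averages from the table; the victimization counts are hypothetical round numbers of roughly the right order of magnitude, not the counts Klaus actually used, so the resulting total is illustrative only.

```python
# Illustrative aggregation of per-victimization losses into a national total.
average_loss = {      # Klaus (1994) average tangible loss per victimization, 1992 dollars
    "rape": 234,
    "robbery": 555,
    "assault": 124,
    "theft": 221,
    "burglary": 834,
    "auto theft": 3_990,
}
hypothetical_counts = {   # annual victimizations; round numbers for illustration only
    "rape": 600_000,
    "robbery": 1_200_000,
    "assault": 9_000_000,
    "theft": 20_000_000,
    "burglary": 5_000_000,
    "auto theft": 1_800_000,
}
aggregate = sum(average_loss[c] * hypothetical_counts[c] for c in average_loss)
print(f"Illustrative aggregate tangible loss: ${aggregate:,}")  # about $17.7 billion here
```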

The large average costs of tangible and intangible losses sum to large losses in the aggregate. Klaus estimates that crime victims lost a total of $17.6 billion in direct costs. Miller et al. (1996) report that the economic cost of index crimes in 1990 summed to $450 billion, in 1992 dollars. Of this total, $345 billion was due to lost quality of life, and $105 billion was due to tangible economic losses. Fatal crimes, including drunk driving incidents and rape, together account for $220 billion.

Using almost any of the above metrics, criminal victimization is one of the key attributes affecting the progress and status of a modern society. It is fitting, therefore, that the authorizing legislation of BJS gives it the mandate to measure victimization, as a key social indicator of the country's progress.

3–E.2 Comparison with Other Federal Surveys

The National Crime Victimization Survey is conducted largely from the 12 regional offices of the U.S. Census Bureau. The Census Bureau also conducts the labor force survey, the Current Population Survey, for the Bureau of Labor Statistics; the National Health Interview Survey for the National Center for Health Statistics; and the American Housing Survey for the Department of Housing and Urban Development. It also conducts periodic surveys, for example, the National Household Travel Survey for the Bureau of Transportation Statistics and the American Time Use Survey for the Bureau of Labor Statistics. These often use the Current Population Survey as a convenient sample for reinterviewing on special topics.

The data collection costs for these surveys are sometimes difficult to discern, although rough estimates can be constructed from the presentation made by Census Bureau staff to the panel at our April 2007 meeting. The cost per interview for the NCVS in fiscal year 2006 was estimated at $146;5 at the 2005 rate of 38,600 households interviewed, this would imply total costs of $5,635,600.

In the judgment of the panel, the appropriate criterion for assessing how much the country should spend on victimization measurement is the fitness of NCVS estimates for their uses. Fitness-for-use criteria would entail BJS articulating all uses and placing them in the context of their importance for the country. These are inherently value-laden judgments, and BJS needs benchmarks for making them. Such benchmarks might be had by comparisons with other federal statistical agencies' data series.

5 By comparison, the cost per case for the National Health Interview Survey in fiscal 2006 was estimated as $212 by Census Bureau staff; the cost of a Current Population Survey interview as $64.
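A brief sketch of the arithmetic behind these figures follows. The per-interview costs and the 38,600-household count are the estimates cited in the text and in footnote 5; the calculation covers interviewing costs at the quoted rate only, not the full NCVS budget.

```python
# Rough per-interview cost comparison across Census Bureau-conducted surveys (FY2006 estimates).
per_interview_cost = {
    "NCVS": 146,   # National Crime Victimization Survey
    "NHIS": 212,   # National Health Interview Survey
    "CPS": 64,     # Current Population Survey
}
ncvs_households = 38_600   # households interviewed in 2005

# Implied annual interviewing cost for the NCVS at the 2005 interview volume.
ncvs_total = per_interview_cost["NCVS"] * ncvs_households
print(f"Implied NCVS interviewing cost: ${ncvs_total:,}")   # $5,635,600

# Each survey's per-interview cost relative to the NCVS.
for survey, cost in per_interview_cost.items():
    print(f"{survey}: ${cost} per interview ({cost / per_interview_cost['NCVS']:.2f}x NCVS)")
```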

3–E.3 International Expenditures

Comparisons of what the United States spends on crime statistics and what other similar nations spend provide an alternative standard for assessing the sufficiency of U.S. expenditures in this area. That said, making cross-national comparisons is not simple. One must find nations with similar resources and infrastructure and with similar expectations about public safety and governmental accountability. Even when these larger institutional structures are similar, arcane budgeting procedures can complicate comparing expenditures. Nonetheless, if one can negotiate these rapids, cross-national comparisons can be very illuminating.

In terms of identifying nations with basic social and political institutions similar to those of the United States, most western, industrialized democracies would be fitting comparison points; nations in Western Europe, Australia, and Canada form a good set.

In addition to simply comparing budgets for collecting crime and justice statistics across these nations, it may be useful to standardize these expenditures by features of these nations that could reasonably be assumed to affect the cost of collecting and reporting these data. Population size, for example, may increase the cost of collecting and reporting crime statistics. Larger nations have more correctional facilities, so any census of these facilities would include more facilities and more funds. There are ways to reduce these costs, but, in general, it is not unreasonable to assume that the larger the population, the greater the cost of crime statistics. Similarly, the land mass of a nation can affect the cost of collecting crime statistics. To the extent that in-person visits are required, data collection in far-flung places will entail more travel costs or the maintenance of a standing field staff that would not be required in smaller places. The volume of crime will also influence the collection of data on crime. A nation with 10 crimes should have fewer transactions to document than a nation with 100,000 crimes. So standardizing crime statistics budgets by residential population, land mass, and the volume of crime will make for more comparable data across nations.

There are a number of other differences between nations that are clearly relevant for cross-national comparison, such as the degree of administrative centralization in a country or the nature and extent of federalism. These differences are perhaps more consequential than the ones noted above, but we are not yet in a position to standardize comparisons for these effects. At this time we make comparisons only between the United States and England and Wales; these expenditures are shown in Table 3-3. Moreover, we have restricted comparisons to the costs of collecting victimization survey data because the collection of court and corrections data in England and Wales now resides with the Ministry of Justice and not the Home Office.
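The standardization just described amounts to simple division, as the sketch below shows using the expenditure and land-area figures reported with Table 3-3 below. The population denominators (roughly 281 million U.S. residents and 59 million residents of England and Wales) are assumptions introduced for illustration, so the results only approximately reproduce the table's entries.

```python
# Standardizing victimization-survey expenditures by population and land area,
# in the spirit of Table 3-3. Spending and land areas are as given in the text;
# populations are approximate assumptions, not figures taken from the report.
nations = {
    #                    (spending in dollars, population,  land area in sq km)
    "United States":     (20_731_800,          281_000_000, 9_161_000),
    "England and Wales": (12_500_000,           58_800_000,   151_000),
}

rates = {}
for nation, (spending, population, area) in nations.items():
    per_1000_pop = spending / (population / 1_000)
    per_sq_km = spending / area
    rates[nation] = (per_1000_pop, per_sq_km)
    print(f"{nation}: ${per_1000_pop:.2f} per 1,000 population, ${per_sq_km:.2f} per sq km")

# Ratios of England and Wales to the United States (compare Table 3-3).
ew, us = rates["England and Wales"], rates["United States"]
print(f"Ratio per 1,000 population: {ew[0] / us[0]:.2f}")
print(f"Ratio per square kilometer: {ew[1] / us[1]:.2f}")
```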

Table 3-3 Comparative Expenditures on Victimization Surveys, United States and England and Wales

                      Total Expenditures on      By Population    By Square     By Number of
Nation                Victimization Surveys      per 1,000        Kilometer     Serious Crimes
England and Wales          12,500,000.00            212.62           82.70        13,897.41
United States              20,731,800.00             73.67            2.26         4,336.91
Ratio of E&W to US                  0.60              2.89           36.55             3.20

SOURCE: Land area and population data derived from http://www.nationsencyclopedia.com/economies/Europe/United-Kingdom.html and https://www.cia.gov/library/publications/the-world-factbook/print/us.html.

In fiscal year 2006, BJS spent $20.7 million collecting, processing, and reporting NCVS data. The Home Office spends approximately $12.5 million doing the same for the BCS. The United States has roughly four times the population of England and Wales; on a per capita basis, the former spends $73.67 per 1,000 population on victimization data while the latter spends $212.62, almost three times as much. When viewed in terms of land mass, the differences are even greater. The mainland United States covers 9,161,000 square kilometers and England and Wales cover 151,000 square kilometers; on a per square kilometer basis, England and Wales spend almost 36 times as much on victimization statistics as the United States. If we examine these expenditures by police-recorded serious crime volume, England and Wales spend more than three times what the United States spends on victimization statistics, a difference about 10 percent greater than what we observed by population alone. These comparisons suggest that—at least compared with one international benchmark—the collection of victimization statistics in the United States has been given relatively less funding than in England and Wales.

In making a comparison with the experience of England and Wales, it is worth noting that a particular role has been defined by statute for the BCS; this formalizes a use and a constituency for it—and adds justification for expenditure on the survey—in a way that does not exist for the NCVS. The Local Government Act 1999 created a set of indicators that are used to measure the performance of government departments and local authorities; the indicators are periodically revised. These indicators are formally known as "best value performance indicators"; in the area of policing, they are

70 SURVEYING VICTIMS commonly described as “statutory performance indicators” or SPIs.6 The SPI data are collected and audited annually by the Audit Commission and are also published on government websites. BCS data are formally required for several of these indicators (Home Office, 2007): for example, in the set of SPIs defined for 2006–2007, “the percentage of people who think their local police do a good job” (SPI 2a), “perceptions of anti-social behaviour” (SPI 10b), and the violent crime rate (SPI 5b). Meeting these statutory guidelines requires that the BCS be regularly funded and capable of providing estimates at the local government level. 3–F ISSUES RELATED TO THE COEXISTENCE OF THE NCVS AND THE UCR For more than three decades, the nation has had two national indicators of crime: the Uniform Crime Reporting program and the National Crime Victimization Survey. As described in Chapter 2, the two programs over- lap in the crimes they cover (and both are used to generate national-level estimates of violent crime) but also differ in some important definitional ways. Despite the definitional differences between the two measures and their complementary nature, a fundamental question still arises in public discussions of crime statistics: Is it necessary to have two data systems for the purpose of estimating and evaluating trends in crime? One part of that broader underlying question concerns the trends shown by the two series and the degree to which they agree or converge over time: In other words, do police records generally reflect victimization trends, and vice versa? A second part of the bigger question is more philosophical, con- cerning the necessity of two series: Does there remain the need for a second indicator completely independent of the official police reports? 3–F.1 Do Police Record Reports Reflect Victimization Trends? The question about the concurrence of the NCVS and UCR trends is made more salient by the fact that there appears to have been a convergence in recent years of UCR- and NCVS-based national estimates of serious vi- olent crime (i.e., rape, robbery, and aggravated assault). In other words, estimates from the NCVS of the number of crimes victims say they have re- ported to the police and the number that are recorded in the UCR program have grown closer in recent years (see Figure 3-2). On its face, the evidence of the most recent years of the series might suggest a redundancy—that na- tional crime trends may be adequately described by UCR and that the NCVS role as a crime trend monitor may have diminished. 6 Additional information on best value performance indicators is available at http://www. bfpi.gov.uk/pages/faq.asp.

CURRENT DEMANDS AND CONSTRAINTS ON THE NCVS 71 The correspondence or divergence of the UCR and the NCVS takes on extra importance when there appears to be a shift in long-term crime trends. Between 1992 and 2004 the United States experienced a long period of declining crime, by both measures. More recently there have been signs of an upturn in crime in a number of cities, and the national crime rate as measured by the UCR has increased in two consecutive years. It is precisely at turning points such as the current one that rich and timely measures of a variety of aspects of crime are required, to better understand the trajectory that the nation seems to be taking. Although the UCR and NCVS estimates of the number of serious violent crimes reported to the police have generally become more similar over the past decade, this convergence in levels is not yet fully understood, nor is it clear whether the similarities in estimates will continue in the future (see Lynch and Addington, 2007). This is because the convergence in the esti- mates does not necessarily reflect a reduction in error by one or both series. Rather, the two series can produce different estimates and varying long- and short-term trends because they measure different aspects of the crime prob- lem using dissimilar procedures. Furthermore, even if the convergence does reflect some reduction in error associated with one or both series, there is little evidence to suggest that this pattern will remain constant in the future. As evident in Figure 3-2, annual estimates of the total number of serious violent crimes derived from NCVS data have often been higher than the annual counts in the UCR. There are several reasons why this may occur. Most importantly, the NCVS data include crimes that are not reported to the police. Approximately 49 percent of violent victimizations and 36 percent of property victimizations are reported to the police (Hart and Rennison, 2003). In addition, NCVS counts may be higher if police departments do not record all of the incidents that come to their attention or do not forward the reports to the national UCR program. For some types of crimes in the NCVS and the UCR, it is possible to reconcile apparent discrepancies in annual estimates by adjusting the NCVS counts to include only those incidents said to have been reported to the police. When such adjustments are made, levels and trends in burglary, rob- bery, and motor vehicle theft appear generally similar in the NCVS and UCR. However, UCR and NCVS levels and trends in serious violent crime, such as aggravated assault and rape, exhibit many discrepancies after these kinds of adjustments are made. These differences in both levels and trends in ag- gravated assault and rape may result from changes concerning the public’s willingness to report crime to the police, changes in the way police depart- ments record crime, or some other factor. It is clear that the differences in the methodologies of the UCR and NCVS programs must be considered when assessing both levels and trends of crime in the nation. However, the fact that the extent of agreement in current levels of crimes depends on the

72 SURVEYING VICTIMS Figure 3-2 National Crime Victimization Survey and Uniform Crime Reports estimates of serious violent crimes, 1973–2005 NOTES: Serious violent crimes include rape, robbery, aggravated assault, and homicide; homicide estimates from the UCR are added to the NCVS series. “NCVS Actual” includes crimes not reported to the police as well as those that are (“NCVS Reported”). NCVS estimates before 1993 are based on data year; for 1993 and later years collection year is used (see Table C-2). SOURCE: National Crime Victimization Survey, Bureau of Justice Statistics and Uniform Crime Reports, Federal Bureau of Investigation. Data from http://www.ojp.gov/bjs/glance/tables/4meastab.htm [11/1/07]. nature of the offense makes it difficult to claim that UCR data alone could be a sufficient indicator for estimating current levels of “crime.” If the goal is to assess long-term trends in crime, then a high correla- tion between UCR and NCVS trends would suggest that either data series would serve as a reasonable proxy for some analytical purposes. McDowall and Loftin (2007) assessed the correlations between UCR and NCVS na- tional trends for index crimes for the period 1973–2003. Using a correla- tion standard of 0.80 or higher to indicate sufficient agreement in trends, they found that only two crimes came close to or exceeded this standard: robbery (r = 0.76) and burglary (r = 0.93). The next highest correlation was found for motor vehicle theft (r = 0.67). However, the remaining crime types exhibited much lower or even negative correlations. For larceny theft, the correlation was weak (r = 0.20), and for rape and assault, the correla- tions were negative (r = −0.16 and r = −0.21, respectively) (McDowall and Loftin, 2007:101). Like current level estimates, the trend correlations varied according to crime type. Analysts studying robbery and burglary can expect

CURRENT DEMANDS AND CONSTRAINTS ON THE NCVS 73 generally similar results using either UCR or NCVS trend data for this time period. For other types of crime, however, this will not be the case. One of the main hypotheses about why assault trends might differ in the UCR and NCVS series is heightened police productivity resulting in a growth of police estimates of assaults (O’Brien, 1996). Rosenfeld (2007) hypothesized that if the divergence in the two series was the result of changes in the way police were handling less serious assaults, then one should expect to see similar trends in the UCR and NCVS gun assault rates, but divergent trends in the nongun assault rates because the perceived seriousness of gun assaults and the ways in which such crimes are handled by the police are much less susceptible to change over time. In addition, if the hypothesis is correct, the ratio of nongun to gun assaults should have increased more in the UCR than in the NCVS. Using UCR and NCVS aggravated assault data for the period 1980–2001, Rosenfeld found that the correlation between the UCR and NCVS estimates of gun assaults was 0.74, while the correlation for nongun assaults was 0.16 (not significantly different from 0). In addition, the ratio of nongun to gun assault rates in the UCR grew, while the same ratio using the NCVS data did not. Thus, the two data series provided similar information about trends in gun-related aggravated assault, but they differed in their patterns for aggravated assaults without guns. The form of the UCR and NCVS nongun assault trends also suggested that changes in police recording and categorization of such incidents stabilized during the 1990s as the two series began to exhibit more similar trends. In their comprehensive examination of UCR and NCVS trends, Mc- Dowall and Loftin (2007) found that the two series for each of the index crimes began tracking each other more closely in the 1990s. McDowall and Loftin argue that this would suggest a structural break in the UCR data series, which might indicate that the estimates from the two series will continue to follow each other more closely in the future. The reason that modifications in the UCR are thought to be responsible for the increased correspondence is that there have been changes in the recordkeeping systems of police de- partments, while the NCVS methodology remained relatively more stable over the same period (McDowall and Loftin, 2007:111). The recordkeep- ing capacities of police departments improved as a result of technological innovations, as well as increased numbers of personnel involved in this task. However, the authors note that the agreement in the series is fairly recent and based on a limited number of data points: thus it is premature to con- clude that it will continue in the future (McDowall and Loftin, 2007:114). Whether it is reasonable for state and local governments to believe that their local police data accurately capture trends in crime is more contentious. When state and local governments are interested in assessing trends in crime in their own areas, they typically must rely solely on police data because vic- timization survey data are rarely available for small geographical areas. The

74 SURVEYING VICTIMS collection of reliable crime survey data is costly, and most state and local governments have not had the resources to conduct their own victimization surveys, especially on an annual basis. As a result, many wonder whether conclusions about the recent convergence in national police and victim sur- vey data apply to their local areas. The limited amount of research that has addressed the comparability of UCR and NCVS trends in local areas has used data from special tabulations of NCVS data. One such special subset allows researchers to produce vic- timization estimates for the 40 largest metropolitan core-county areas in the country (Bureau of Justice Statistics, 2007b). NCVS estimates can be gen- erated from this newly available file for comparisons to UCR data for those same places and years (Lauritsen and Schaum, 2005). Existing research us- ing these data has found that the correlations in the trends for robbery and aggravated assault vary considerably across metropolitan areas, although much less so for robbery than for aggravated assault (Lauritsen, 2006a). For robbery, the average UCR-NCVS trend correlation across the 40 largest metropolitan areas for 1979–2004 is 0.59 (range = 0.02 to 0.95), while the average correlation for aggravated assault is 0.16 (range = −0.75 to 0.76). In addition, the trend correlations tend to be higher in the more populated metropolitan areas, which may reflect earlier adoption of crime records man- agement technology by the larger police departments. Lauritsen concluded that there is a good deal of variation at the level of the metropolitan statis- tical area in the correlations between the two sets of trends that is masked at the national level, and, as a result, it would be unwise for local areas to assume that their local UCR data provide good indicators of nonlethal vi- olence trends. In addition, Wiersema (1999) developed an area-identified NCVS data set from public-use data files, coded with geographic identifiers down to the census tract level, that was briefly available through Census Bureau research data centers. These area-identified data have been used to support some subnational analyses; see, e.g., Baumer (2002); Baumer et al. (2003); Lauritsen (2001). However, the data have been taken out of circu- lation; we discuss this further in Section 5–A. In sum, although much less is known about how UCR data trends com- pare with NCVS trends for state and local areas, it appears that at the na- tional level, index crime trends have become more similar in recent years. However, just as structural breaks in UCR data collection appear to be re- sponsible for the increasingly similar trends in the 1990s, changes may occur in the future. These alterations can result from a variety of factors, ranging from resource shortages in police departments to local political pressures regarding crime rates. Changes in the NCVS estimates can occur as well, as a result of declines in participation rates, sample coverage problems, lim- ited resources, or other factors. Without both sources of crime information,

it is extremely difficult to fully understand the meaning of future changes in crime rates from either data series.

McDowall and Loftin (2007) and others (e.g., Lynch and Addington, 2007) argue against placing too much emphasis on the general issue of convergence in national crime trends. Rather, the complementarities of the two data systems should be emphasized, and the selection of one series over another should depend on the research question at hand. For some types of analyses, researchers can use both data sources to assess and understand the strengths and limitations of findings. However, the NCVS is currently the only available source of national data for describing and understanding trends in certain types of crime. These crime types include those that are defined according to specific conditions of the incident (such as intimate partner violence); victim characteristics (such as violence against women or crimes against the elderly); and crimes that are believed to be severely underrepresented in police data for assorted reasons (such as hate crimes, sexual violence, and identity theft).

3–F.2 Independence of the NCVS and the UCR

The origin of the NCVS as an estimate independent of the UCR was a product of the social and political climate of the 1960s. Cantor and Lynch (2000:97) note that "the confluence of several forces"—including a general mistrust of institutions—"made the 1960s an auspicious time for the development of victim surveys." Specifically, "reforms of several of the Nation's metropolitan police departments were accompanied by exposés of the previous practice of killing crime on the books"—that is, suppressing levels of reported crime. Against this backdrop, "victim surveys brought the 'patina of science' " and an air of accuracy and impartiality to crime statistics; "there was greater trust that the resulting [NCVS] crime estimates were not purposely manipulated" because "the Census Bureau and survey research agencies were not interested parties with respect to the crime problem" (Cantor and Lynch, 2000:88–89).

Beyond the question of whether the UCR and the NCVS respond to the same underlying phenomena, the question can be raised about whether the need for an independent, national-level, and victimization-based measure of the traditional index crimes persists. If it were concluded that there was no need for such an independent national-level measure, then a different class of NCVS design options becomes feasible, if not preferable: for instance, crime-type coverage between the NCVS and the UCR could be reallocated so that the UCR becomes the sole source of national indicators of some crimes, while the NCVS is focused more on hard-to-measure or newly emerging crime types.

We think that the NCVS has strong policy relevance as a national-level

measure of crime independent of the UCR and should continue to function as such, for several key reasons. These include:

• The NCVS as an objective measure: Given that police are not a disinterested party in crime rates, there is always an inherent possibility of minimizing or reducing reported crime. Put more colloquially, having the police as the reporter of crime suggests the possibility of "cooking the books," lowering reported counts or possibly declining to report altogether. The UCR can do some imputation (and does), but an independent measure as a counter to this possibility still has merit. (Note, though, that this argument is weakened in the absence of local-area NCVS-based estimates, which would be the best check on individual department reporting.)

• Voluntary UCR reporting leads to coverage gaps: The UCR program relies on the voluntary cooperation of more than 17,000 separate law enforcement agencies. Complete nonresponse to the UCR program, for individual years or for long stretches of time, occurs and is sometimes pervasive for some states and large localities (see, e.g., discussions of UCR coverage in Maltz, 1999, 2007). Again, imputation helps to bridge gaps in national-level UCR estimates, but for representativeness in coverage, the conceptual advantage still goes to a nationally representative sample like that employed in the NCVS.

• Independent measure as "calibration" device: It is useful to have two related-but-not-identical measures in simultaneous operation simply because they may not always agree. The United States has two independent measures of jobs and employment (in the Current Employment Statistics and the Current Population Survey); it has multiple measures of health insurance prevalence and of disability. The differences among the indicators enhance general understanding of the dynamics of the phenomena under study. Divergent or discrepant findings from the two series may signal some structural problem with either of the individual measures and draw attention to potential problems in methodology. An original implicit notion in the creation of the NCVS—grounded in distrust of police reporting—may have been the use of the NCVS as a check on the UCR; however, it is equally valid to say that the two series can serve as an operational and conceptual check on each other.

At the same time, in speaking of "calibration," it is important to bear in mind that one data source is not always unequivocally right and the other wrong; both the NCVS and the UCR are subject to measurement flaws. UCR measurement can suffer from lack of reporting by law enforcement agencies (discussed below) and underreporting due to victims' hesitance to come

forward to the police. But, likewise, crimes may not be reported to interviewers either, as discussed in Section 3–B.2; other examples of crimes missed in the NCVS exist, including the finding by Cook (1985) that National Crime Survey estimates captured only about one-third of the gun assaults resulting in gunshot injury that were apparent in emergency room data. Such underestimates might arise from disproportionate gunshot prevalence among those not part of the household population or not listed as household members (and thus not sampled), those who were never contacted or who refused to participate, as well as those respondents who failed to report incidents to the NCVS interviewer.

Having an independent measure is important as long as there remains reason to believe that not all crime is reported to police and that not all crimes known to the police are completely tallied in the UCR. That said, the utility of a UCR-independent measure of crime should not prevent consideration of design options that reduce lockstep similarity between the UCR and the NCVS (e.g., measuring exactly the same set of "index crimes" except for homicide).

For several years, the National Incident-Based Reporting System (NIBRS) has been developed as a next-generation version of (and replacement for) the UCR. The presence of a strong and complete NIBRS program might further blur the line between the UCR and the NCVS as separate indicators of crime. However, NIBRS development has been slow, and its coverage (i.e., cooperation by agencies in providing more detailed incident reporting) is still quite small. As of September 2007—about 15 years after development of initial NIBRS protocols—only about 26 percent of law enforcement agencies that contribute data to the UCR were submitting NIBRS-compliant information; see Section D–2 for additional detail.

3–G ASSESSMENT

As is true of many multipurpose social indicators, the basic utility of the NCVS to the American public is difficult to characterize in tangible terms. Because it does not currently provide estimates for small areas of geography, its role in allocating federal or state funds for criminal justice improvements is limited, and it does not readily lend itself to focusing specific police interventions in specific neighborhoods. However, through its focus on providing detail on all types of crime and violence—reported to the police or not—and its rigorous design based on a representative sample with uniform national coverage, the NCVS has undeniable importance as a critical statistical indicator. For an informed assessment of the state of public welfare, federal statistical agencies like BJS have a core mandate "to be a credible source of relevant, accurate, and timely statistics" (National Research

Council, 2004:3). The NCVS provides information on the extent, consequences, and causes of violent behavior that are not available at the same level of comprehensiveness and quality from any other source. Accordingly, direct reports from BJS on NCVS trends are frequently sought for information and for assessment of new policy, and NCVS data play an important role in national appraisals of child welfare (Federal Interagency Forum on Child and Family Statistics, 2007) and public health (U.S. Department of Health and Human Services, 2000).

In our assessment, the need for a victimization-based measure of crime—another indicator, separate from the official police reports of the UCR—is as significant today as it was when the NCVS was first conceptualized. This is the case not out of any inherent distrust of official reports to police or demonstrated inaccuracy therein, but rather for the reason suggested most concisely by the President's Commission on Law Enforcement and Administration of Justice (1967:18): "No one way of describing crime describes it well enough." The importance of the crime problem in the United States demands ongoing monitoring from multiple perspectives. In this monitoring, the NCVS is a vital complement to the police reports of the UCR, providing valuable information on the context and causes of victimization in ways that summary counts can never do by themselves (and in which even a fully implemented NIBRS would still be lacking).

However, in its size and available resources, the current NCVS is not capable of matching the original vision of the survey. As the costs of collecting information from the U.S. public have risen, the NCVS budget has not kept pace. Budget reductions have led to cutbacks in NCVS activities, most often through cuts in the total sample size. As it is currently configured, the NCVS does not meet the goal of being able to accurately measure year-to-year change in crime trends. That is, the standard errors of change estimates are too large to detect changes of importance to the country; BJS has had to use averages from 2-year groups of data in order to make statements about change (see Catalano, 2006), even though inferences from these rolling averages are not as intuitive to users and members of the public as direct estimates of change.7

7 For additional information on the use and interpretation of rolling-average estimates from federal survey data, see National Research Council (2007); the American Community Survey will use 3-year and 5-year averages in order to produce estimates for small areas and populations.

To state this as a finding:

Clearly, given the panel's charge to consider options for the conduct of the NCVS, one possibility is not to conduct the NCVS at all; we reject that option. Taking that option would violate the legislative responsibilities of the Bureau of Justice Statistics. Furthermore, the panel thinks that BJS is the appropriate locus of responsibility for victimization measurement. As a federal statistical agency, it alone has the mandate for independent, objective, statistical measurement, with the transparency that can establish public trust in the information (see National Research Council, 2004). Thus, although there should be no need for the panel to do so, we feel obliged to state a recommendation that is already explicit in the mission of BJS:

Recommendation 3.1: BJS must ensure that the nation has quality annual estimates of levels and changes in criminal victimization.

However, the natural corollary is that the resources necessary to adequately achieve this mission must be forthcoming:

Recommendation 3.2: Congress and the administration should ensure that BJS has a budget that is adequate to field a survey that satisfies the goal in Recommendation 3.1.

The NCVS's unique substantive niche is providing information on crimes that are particularly likely to go unreported to the police, for whatever reason—whether fear or stigma, individual distrust of authority, or the perception that a violent act or threat is not a significant enough "crime" to report.

Recommendation 3.3: BJS should continue to use the NCVS to assess crimes that are difficult to measure and poorly reported to police. Special studies should be conducted periodically in the context of the NCVS program to provide more accurate measurement of such events.
