
Ensuring the Quality, Credibility, and Relevance of U.S. Justice Statistics (2009)

Chapter: 3 Overview of Bureau of Justice Statistics Data Series

Suggested Citation:"3 Overview of Bureau of Justice Statistics Data Series." National Research Council. 2009. Ensuring the Quality, Credibility, and Relevance of U.S. Justice Statistics. Washington, DC: The National Academies Press. doi: 10.17226/12671.
×
Page 75
Suggested Citation:"3 Overview of Bureau of Justice Statistics Data Series." National Research Council. 2009. Ensuring the Quality, Credibility, and Relevance of U.S. Justice Statistics. Washington, DC: The National Academies Press. doi: 10.17226/12671.
×
Page 76
Suggested Citation:"3 Overview of Bureau of Justice Statistics Data Series." National Research Council. 2009. Ensuring the Quality, Credibility, and Relevance of U.S. Justice Statistics. Washington, DC: The National Academies Press. doi: 10.17226/12671.
×
Page 77
Suggested Citation:"3 Overview of Bureau of Justice Statistics Data Series." National Research Council. 2009. Ensuring the Quality, Credibility, and Relevance of U.S. Justice Statistics. Washington, DC: The National Academies Press. doi: 10.17226/12671.
×
Page 78
Suggested Citation:"3 Overview of Bureau of Justice Statistics Data Series." National Research Council. 2009. Ensuring the Quality, Credibility, and Relevance of U.S. Justice Statistics. Washington, DC: The National Academies Press. doi: 10.17226/12671.
×
Page 79
Suggested Citation:"3 Overview of Bureau of Justice Statistics Data Series." National Research Council. 2009. Ensuring the Quality, Credibility, and Relevance of U.S. Justice Statistics. Washington, DC: The National Academies Press. doi: 10.17226/12671.
×
Page 80
Suggested Citation:"3 Overview of Bureau of Justice Statistics Data Series." National Research Council. 2009. Ensuring the Quality, Credibility, and Relevance of U.S. Justice Statistics. Washington, DC: The National Academies Press. doi: 10.17226/12671.
×
Page 81
Suggested Citation:"3 Overview of Bureau of Justice Statistics Data Series." National Research Council. 2009. Ensuring the Quality, Credibility, and Relevance of U.S. Justice Statistics. Washington, DC: The National Academies Press. doi: 10.17226/12671.
×
Page 82
Suggested Citation:"3 Overview of Bureau of Justice Statistics Data Series." National Research Council. 2009. Ensuring the Quality, Credibility, and Relevance of U.S. Justice Statistics. Washington, DC: The National Academies Press. doi: 10.17226/12671.
×
Page 83
Suggested Citation:"3 Overview of Bureau of Justice Statistics Data Series." National Research Council. 2009. Ensuring the Quality, Credibility, and Relevance of U.S. Justice Statistics. Washington, DC: The National Academies Press. doi: 10.17226/12671.
×
Page 84
Suggested Citation:"3 Overview of Bureau of Justice Statistics Data Series." National Research Council. 2009. Ensuring the Quality, Credibility, and Relevance of U.S. Justice Statistics. Washington, DC: The National Academies Press. doi: 10.17226/12671.
×
Page 85
Suggested Citation:"3 Overview of Bureau of Justice Statistics Data Series." National Research Council. 2009. Ensuring the Quality, Credibility, and Relevance of U.S. Justice Statistics. Washington, DC: The National Academies Press. doi: 10.17226/12671.
×
Page 86
Suggested Citation:"3 Overview of Bureau of Justice Statistics Data Series." National Research Council. 2009. Ensuring the Quality, Credibility, and Relevance of U.S. Justice Statistics. Washington, DC: The National Academies Press. doi: 10.17226/12671.
×
Page 87
Suggested Citation:"3 Overview of Bureau of Justice Statistics Data Series." National Research Council. 2009. Ensuring the Quality, Credibility, and Relevance of U.S. Justice Statistics. Washington, DC: The National Academies Press. doi: 10.17226/12671.
×
Page 88
Suggested Citation:"3 Overview of Bureau of Justice Statistics Data Series." National Research Council. 2009. Ensuring the Quality, Credibility, and Relevance of U.S. Justice Statistics. Washington, DC: The National Academies Press. doi: 10.17226/12671.
×
Page 89
Suggested Citation:"3 Overview of Bureau of Justice Statistics Data Series." National Research Council. 2009. Ensuring the Quality, Credibility, and Relevance of U.S. Justice Statistics. Washington, DC: The National Academies Press. doi: 10.17226/12671.
×
Page 90
Suggested Citation:"3 Overview of Bureau of Justice Statistics Data Series." National Research Council. 2009. Ensuring the Quality, Credibility, and Relevance of U.S. Justice Statistics. Washington, DC: The National Academies Press. doi: 10.17226/12671.
×
Page 91
Suggested Citation:"3 Overview of Bureau of Justice Statistics Data Series." National Research Council. 2009. Ensuring the Quality, Credibility, and Relevance of U.S. Justice Statistics. Washington, DC: The National Academies Press. doi: 10.17226/12671.
×
Page 92
Suggested Citation:"3 Overview of Bureau of Justice Statistics Data Series." National Research Council. 2009. Ensuring the Quality, Credibility, and Relevance of U.S. Justice Statistics. Washington, DC: The National Academies Press. doi: 10.17226/12671.
×
Page 93
Suggested Citation:"3 Overview of Bureau of Justice Statistics Data Series." National Research Council. 2009. Ensuring the Quality, Credibility, and Relevance of U.S. Justice Statistics. Washington, DC: The National Academies Press. doi: 10.17226/12671.
×
Page 94
Suggested Citation:"3 Overview of Bureau of Justice Statistics Data Series." National Research Council. 2009. Ensuring the Quality, Credibility, and Relevance of U.S. Justice Statistics. Washington, DC: The National Academies Press. doi: 10.17226/12671.
×
Page 95
Suggested Citation:"3 Overview of Bureau of Justice Statistics Data Series." National Research Council. 2009. Ensuring the Quality, Credibility, and Relevance of U.S. Justice Statistics. Washington, DC: The National Academies Press. doi: 10.17226/12671.
×
Page 96
Suggested Citation:"3 Overview of Bureau of Justice Statistics Data Series." National Research Council. 2009. Ensuring the Quality, Credibility, and Relevance of U.S. Justice Statistics. Washington, DC: The National Academies Press. doi: 10.17226/12671.
×
Page 97
Suggested Citation:"3 Overview of Bureau of Justice Statistics Data Series." National Research Council. 2009. Ensuring the Quality, Credibility, and Relevance of U.S. Justice Statistics. Washington, DC: The National Academies Press. doi: 10.17226/12671.
×
Page 98
Suggested Citation:"3 Overview of Bureau of Justice Statistics Data Series." National Research Council. 2009. Ensuring the Quality, Credibility, and Relevance of U.S. Justice Statistics. Washington, DC: The National Academies Press. doi: 10.17226/12671.
×
Page 99
Suggested Citation:"3 Overview of Bureau of Justice Statistics Data Series." National Research Council. 2009. Ensuring the Quality, Credibility, and Relevance of U.S. Justice Statistics. Washington, DC: The National Academies Press. doi: 10.17226/12671.
×
Page 100
Suggested Citation:"3 Overview of Bureau of Justice Statistics Data Series." National Research Council. 2009. Ensuring the Quality, Credibility, and Relevance of U.S. Justice Statistics. Washington, DC: The National Academies Press. doi: 10.17226/12671.
×
Page 101
Suggested Citation:"3 Overview of Bureau of Justice Statistics Data Series." National Research Council. 2009. Ensuring the Quality, Credibility, and Relevance of U.S. Justice Statistics. Washington, DC: The National Academies Press. doi: 10.17226/12671.
×
Page 102
Suggested Citation:"3 Overview of Bureau of Justice Statistics Data Series." National Research Council. 2009. Ensuring the Quality, Credibility, and Relevance of U.S. Justice Statistics. Washington, DC: The National Academies Press. doi: 10.17226/12671.
×
Page 103
Suggested Citation:"3 Overview of Bureau of Justice Statistics Data Series." National Research Council. 2009. Ensuring the Quality, Credibility, and Relevance of U.S. Justice Statistics. Washington, DC: The National Academies Press. doi: 10.17226/12671.
×
Page 104
Suggested Citation:"3 Overview of Bureau of Justice Statistics Data Series." National Research Council. 2009. Ensuring the Quality, Credibility, and Relevance of U.S. Justice Statistics. Washington, DC: The National Academies Press. doi: 10.17226/12671.
×
Page 105
Suggested Citation:"3 Overview of Bureau of Justice Statistics Data Series." National Research Council. 2009. Ensuring the Quality, Credibility, and Relevance of U.S. Justice Statistics. Washington, DC: The National Academies Press. doi: 10.17226/12671.
×
Page 106
Suggested Citation:"3 Overview of Bureau of Justice Statistics Data Series." National Research Council. 2009. Ensuring the Quality, Credibility, and Relevance of U.S. Justice Statistics. Washington, DC: The National Academies Press. doi: 10.17226/12671.
×
Page 107
Suggested Citation:"3 Overview of Bureau of Justice Statistics Data Series." National Research Council. 2009. Ensuring the Quality, Credibility, and Relevance of U.S. Justice Statistics. Washington, DC: The National Academies Press. doi: 10.17226/12671.
×
Page 108
Suggested Citation:"3 Overview of Bureau of Justice Statistics Data Series." National Research Council. 2009. Ensuring the Quality, Credibility, and Relevance of U.S. Justice Statistics. Washington, DC: The National Academies Press. doi: 10.17226/12671.
×
Page 109
Suggested Citation:"3 Overview of Bureau of Justice Statistics Data Series." National Research Council. 2009. Ensuring the Quality, Credibility, and Relevance of U.S. Justice Statistics. Washington, DC: The National Academies Press. doi: 10.17226/12671.
×
Page 110
Suggested Citation:"3 Overview of Bureau of Justice Statistics Data Series." National Research Council. 2009. Ensuring the Quality, Credibility, and Relevance of U.S. Justice Statistics. Washington, DC: The National Academies Press. doi: 10.17226/12671.
×
Page 111
Suggested Citation:"3 Overview of Bureau of Justice Statistics Data Series." National Research Council. 2009. Ensuring the Quality, Credibility, and Relevance of U.S. Justice Statistics. Washington, DC: The National Academies Press. doi: 10.17226/12671.
×
Page 112
Suggested Citation:"3 Overview of Bureau of Justice Statistics Data Series." National Research Council. 2009. Ensuring the Quality, Credibility, and Relevance of U.S. Justice Statistics. Washington, DC: The National Academies Press. doi: 10.17226/12671.
×
Page 113
Suggested Citation:"3 Overview of Bureau of Justice Statistics Data Series." National Research Council. 2009. Ensuring the Quality, Credibility, and Relevance of U.S. Justice Statistics. Washington, DC: The National Academies Press. doi: 10.17226/12671.
×
Page 114
Suggested Citation:"3 Overview of Bureau of Justice Statistics Data Series." National Research Council. 2009. Ensuring the Quality, Credibility, and Relevance of U.S. Justice Statistics. Washington, DC: The National Academies Press. doi: 10.17226/12671.
×
Page 115
Suggested Citation:"3 Overview of Bureau of Justice Statistics Data Series." National Research Council. 2009. Ensuring the Quality, Credibility, and Relevance of U.S. Justice Statistics. Washington, DC: The National Academies Press. doi: 10.17226/12671.
×
Page 116
Suggested Citation:"3 Overview of Bureau of Justice Statistics Data Series." National Research Council. 2009. Ensuring the Quality, Credibility, and Relevance of U.S. Justice Statistics. Washington, DC: The National Academies Press. doi: 10.17226/12671.
×
Page 117
Suggested Citation:"3 Overview of Bureau of Justice Statistics Data Series." National Research Council. 2009. Ensuring the Quality, Credibility, and Relevance of U.S. Justice Statistics. Washington, DC: The National Academies Press. doi: 10.17226/12671.
×
Page 118
Suggested Citation:"3 Overview of Bureau of Justice Statistics Data Series." National Research Council. 2009. Ensuring the Quality, Credibility, and Relevance of U.S. Justice Statistics. Washington, DC: The National Academies Press. doi: 10.17226/12671.
×
Page 119
Suggested Citation:"3 Overview of Bureau of Justice Statistics Data Series." National Research Council. 2009. Ensuring the Quality, Credibility, and Relevance of U.S. Justice Statistics. Washington, DC: The National Academies Press. doi: 10.17226/12671.
×
Page 120
Suggested Citation:"3 Overview of Bureau of Justice Statistics Data Series." National Research Council. 2009. Ensuring the Quality, Credibility, and Relevance of U.S. Justice Statistics. Washington, DC: The National Academies Press. doi: 10.17226/12671.
×
Page 121
Suggested Citation:"3 Overview of Bureau of Justice Statistics Data Series." National Research Council. 2009. Ensuring the Quality, Credibility, and Relevance of U.S. Justice Statistics. Washington, DC: The National Academies Press. doi: 10.17226/12671.
×
Page 122
Suggested Citation:"3 Overview of Bureau of Justice Statistics Data Series." National Research Council. 2009. Ensuring the Quality, Credibility, and Relevance of U.S. Justice Statistics. Washington, DC: The National Academies Press. doi: 10.17226/12671.
×
Page 123
Suggested Citation:"3 Overview of Bureau of Justice Statistics Data Series." National Research Council. 2009. Ensuring the Quality, Credibility, and Relevance of U.S. Justice Statistics. Washington, DC: The National Academies Press. doi: 10.17226/12671.
×
Page 124
Suggested Citation:"3 Overview of Bureau of Justice Statistics Data Series." National Research Council. 2009. Ensuring the Quality, Credibility, and Relevance of U.S. Justice Statistics. Washington, DC: The National Academies Press. doi: 10.17226/12671.
×
Page 125
Suggested Citation:"3 Overview of Bureau of Justice Statistics Data Series." National Research Council. 2009. Ensuring the Quality, Credibility, and Relevance of U.S. Justice Statistics. Washington, DC: The National Academies Press. doi: 10.17226/12671.
×
Page 126
Suggested Citation:"3 Overview of Bureau of Justice Statistics Data Series." National Research Council. 2009. Ensuring the Quality, Credibility, and Relevance of U.S. Justice Statistics. Washington, DC: The National Academies Press. doi: 10.17226/12671.
×
Page 127
Suggested Citation:"3 Overview of Bureau of Justice Statistics Data Series." National Research Council. 2009. Ensuring the Quality, Credibility, and Relevance of U.S. Justice Statistics. Washington, DC: The National Academies Press. doi: 10.17226/12671.
×
Page 128
Suggested Citation:"3 Overview of Bureau of Justice Statistics Data Series." National Research Council. 2009. Ensuring the Quality, Credibility, and Relevance of U.S. Justice Statistics. Washington, DC: The National Academies Press. doi: 10.17226/12671.
×
Page 129
Suggested Citation:"3 Overview of Bureau of Justice Statistics Data Series." National Research Council. 2009. Ensuring the Quality, Credibility, and Relevance of U.S. Justice Statistics. Washington, DC: The National Academies Press. doi: 10.17226/12671.
×
Page 130
Suggested Citation:"3 Overview of Bureau of Justice Statistics Data Series." National Research Council. 2009. Ensuring the Quality, Credibility, and Relevance of U.S. Justice Statistics. Washington, DC: The National Academies Press. doi: 10.17226/12671.
×
Page 131
Suggested Citation:"3 Overview of Bureau of Justice Statistics Data Series." National Research Council. 2009. Ensuring the Quality, Credibility, and Relevance of U.S. Justice Statistics. Washington, DC: The National Academies Press. doi: 10.17226/12671.
×
Page 132
Suggested Citation:"3 Overview of Bureau of Justice Statistics Data Series." National Research Council. 2009. Ensuring the Quality, Credibility, and Relevance of U.S. Justice Statistics. Washington, DC: The National Academies Press. doi: 10.17226/12671.
×
Page 133
Suggested Citation:"3 Overview of Bureau of Justice Statistics Data Series." National Research Council. 2009. Ensuring the Quality, Credibility, and Relevance of U.S. Justice Statistics. Washington, DC: The National Academies Press. doi: 10.17226/12671.
×
Page 134
Suggested Citation:"3 Overview of Bureau of Justice Statistics Data Series." National Research Council. 2009. Ensuring the Quality, Credibility, and Relevance of U.S. Justice Statistics. Washington, DC: The National Academies Press. doi: 10.17226/12671.
×
Page 135
Suggested Citation:"3 Overview of Bureau of Justice Statistics Data Series." National Research Council. 2009. Ensuring the Quality, Credibility, and Relevance of U.S. Justice Statistics. Washington, DC: The National Academies Press. doi: 10.17226/12671.
×
Page 136
Suggested Citation:"3 Overview of Bureau of Justice Statistics Data Series." National Research Council. 2009. Ensuring the Quality, Credibility, and Relevance of U.S. Justice Statistics. Washington, DC: The National Academies Press. doi: 10.17226/12671.
×
Page 137
Suggested Citation:"3 Overview of Bureau of Justice Statistics Data Series." National Research Council. 2009. Ensuring the Quality, Credibility, and Relevance of U.S. Justice Statistics. Washington, DC: The National Academies Press. doi: 10.17226/12671.
×
Page 138
Suggested Citation:"3 Overview of Bureau of Justice Statistics Data Series." National Research Council. 2009. Ensuring the Quality, Credibility, and Relevance of U.S. Justice Statistics. Washington, DC: The National Academies Press. doi: 10.17226/12671.
×
Page 139
Suggested Citation:"3 Overview of Bureau of Justice Statistics Data Series." National Research Council. 2009. Ensuring the Quality, Credibility, and Relevance of U.S. Justice Statistics. Washington, DC: The National Academies Press. doi: 10.17226/12671.
×
Page 140
Suggested Citation:"3 Overview of Bureau of Justice Statistics Data Series." National Research Council. 2009. Ensuring the Quality, Credibility, and Relevance of U.S. Justice Statistics. Washington, DC: The National Academies Press. doi: 10.17226/12671.
×
Page 141
Suggested Citation:"3 Overview of Bureau of Justice Statistics Data Series." National Research Council. 2009. Ensuring the Quality, Credibility, and Relevance of U.S. Justice Statistics. Washington, DC: The National Academies Press. doi: 10.17226/12671.
×
Page 142
Suggested Citation:"3 Overview of Bureau of Justice Statistics Data Series." National Research Council. 2009. Ensuring the Quality, Credibility, and Relevance of U.S. Justice Statistics. Washington, DC: The National Academies Press. doi: 10.17226/12671.
×
Page 143
Suggested Citation:"3 Overview of Bureau of Justice Statistics Data Series." National Research Council. 2009. Ensuring the Quality, Credibility, and Relevance of U.S. Justice Statistics. Washington, DC: The National Academies Press. doi: 10.17226/12671.
×
Page 144
Suggested Citation:"3 Overview of Bureau of Justice Statistics Data Series." National Research Council. 2009. Ensuring the Quality, Credibility, and Relevance of U.S. Justice Statistics. Washington, DC: The National Academies Press. doi: 10.17226/12671.
×
Page 145
Suggested Citation:"3 Overview of Bureau of Justice Statistics Data Series." National Research Council. 2009. Ensuring the Quality, Credibility, and Relevance of U.S. Justice Statistics. Washington, DC: The National Academies Press. doi: 10.17226/12671.
×
Page 146
Suggested Citation:"3 Overview of Bureau of Justice Statistics Data Series." National Research Council. 2009. Ensuring the Quality, Credibility, and Relevance of U.S. Justice Statistics. Washington, DC: The National Academies Press. doi: 10.17226/12671.
×
Page 147
Suggested Citation:"3 Overview of Bureau of Justice Statistics Data Series." National Research Council. 2009. Ensuring the Quality, Credibility, and Relevance of U.S. Justice Statistics. Washington, DC: The National Academies Press. doi: 10.17226/12671.
×
Page 148
Suggested Citation:"3 Overview of Bureau of Justice Statistics Data Series." National Research Council. 2009. Ensuring the Quality, Credibility, and Relevance of U.S. Justice Statistics. Washington, DC: The National Academies Press. doi: 10.17226/12671.
×
Page 149
Suggested Citation:"3 Overview of Bureau of Justice Statistics Data Series." National Research Council. 2009. Ensuring the Quality, Credibility, and Relevance of U.S. Justice Statistics. Washington, DC: The National Academies Press. doi: 10.17226/12671.
×
Page 150
Suggested Citation:"3 Overview of Bureau of Justice Statistics Data Series." National Research Council. 2009. Ensuring the Quality, Credibility, and Relevance of U.S. Justice Statistics. Washington, DC: The National Academies Press. doi: 10.17226/12671.
×
Page 151
Suggested Citation:"3 Overview of Bureau of Justice Statistics Data Series." National Research Council. 2009. Ensuring the Quality, Credibility, and Relevance of U.S. Justice Statistics. Washington, DC: The National Academies Press. doi: 10.17226/12671.
×
Page 152
Suggested Citation:"3 Overview of Bureau of Justice Statistics Data Series." National Research Council. 2009. Ensuring the Quality, Credibility, and Relevance of U.S. Justice Statistics. Washington, DC: The National Academies Press. doi: 10.17226/12671.
×
Page 153
Suggested Citation:"3 Overview of Bureau of Justice Statistics Data Series." National Research Council. 2009. Ensuring the Quality, Credibility, and Relevance of U.S. Justice Statistics. Washington, DC: The National Academies Press. doi: 10.17226/12671.
×
Page 154
Suggested Citation:"3 Overview of Bureau of Justice Statistics Data Series." National Research Council. 2009. Ensuring the Quality, Credibility, and Relevance of U.S. Justice Statistics. Washington, DC: The National Academies Press. doi: 10.17226/12671.
×
Page 155
Suggested Citation:"3 Overview of Bureau of Justice Statistics Data Series." National Research Council. 2009. Ensuring the Quality, Credibility, and Relevance of U.S. Justice Statistics. Washington, DC: The National Academies Press. doi: 10.17226/12671.
×
Page 156
Suggested Citation:"3 Overview of Bureau of Justice Statistics Data Series." National Research Council. 2009. Ensuring the Quality, Credibility, and Relevance of U.S. Justice Statistics. Washington, DC: The National Academies Press. doi: 10.17226/12671.
×
Page 157
Suggested Citation:"3 Overview of Bureau of Justice Statistics Data Series." National Research Council. 2009. Ensuring the Quality, Credibility, and Relevance of U.S. Justice Statistics. Washington, DC: The National Academies Press. doi: 10.17226/12671.
×
Page 158
Suggested Citation:"3 Overview of Bureau of Justice Statistics Data Series." National Research Council. 2009. Ensuring the Quality, Credibility, and Relevance of U.S. Justice Statistics. Washington, DC: The National Academies Press. doi: 10.17226/12671.
×
Page 159
Suggested Citation:"3 Overview of Bureau of Justice Statistics Data Series." National Research Council. 2009. Ensuring the Quality, Credibility, and Relevance of U.S. Justice Statistics. Washington, DC: The National Academies Press. doi: 10.17226/12671.
×
Page 160
Suggested Citation:"3 Overview of Bureau of Justice Statistics Data Series." National Research Council. 2009. Ensuring the Quality, Credibility, and Relevance of U.S. Justice Statistics. Washington, DC: The National Academies Press. doi: 10.17226/12671.
×
Page 161
Suggested Citation:"3 Overview of Bureau of Justice Statistics Data Series." National Research Council. 2009. Ensuring the Quality, Credibility, and Relevance of U.S. Justice Statistics. Washington, DC: The National Academies Press. doi: 10.17226/12671.
×
Page 162
Suggested Citation:"3 Overview of Bureau of Justice Statistics Data Series." National Research Council. 2009. Ensuring the Quality, Credibility, and Relevance of U.S. Justice Statistics. Washington, DC: The National Academies Press. doi: 10.17226/12671.
×
Page 163
Suggested Citation:"3 Overview of Bureau of Justice Statistics Data Series." National Research Council. 2009. Ensuring the Quality, Credibility, and Relevance of U.S. Justice Statistics. Washington, DC: The National Academies Press. doi: 10.17226/12671.
×
Page 164


–3– Overview of Bureau of Justice Statistics Data Series

SOME OF THE DATA COLLECTIONS maintained by the Bureau of Justice Statistics (BJS) are very recent innovations whereas others were developed as the agency took its current form over the course of the 1970s. By comparison, at least one of BJS's collections can trace its origin to the turn of the 20th century (and, by extension, to the 1850 decennial census). Its portfolio includes those that have been successfully repeated in subsequent years (and hence have developed series continuity), but it also includes one-shot efforts that could have been repeated but were not because of budget or other constraints. In their content and structure, they range from extensively developed population surveys to targeted administrative questionnaires filled out by institution managers to hand-coded summaries of court docket folders—and, accordingly, range considerably in their associated level of expense. As illustration of the varying costs of BJS's data series, Table 3-1 summarizes BJS's expected spending in fiscal year 2008.

This chapter provides a brief overview of BJS's major data collection efforts, divided into four major topic areas: victimization (Section 3–A), corrections (3–B), law enforcement (3–C), and adjudications (3–D). (In recent years, BJS has undertaken a series of data collections specifically focusing on justice issues on American Indian reservations and tribal lands; these collections slightly overlap the major topic areas and are separately described in Box 3-1.) Within each of these sections, a table illustrates the years of collection for the various series under that topic heading. As noted in Chapter 1, these summaries are not intended to be full dossiers on the collections, their uses, and their associated methodological challenges, but rather a general orientation. In particular, our interim report (National Research Council, 2008b) describes the development and protocols of the National Crime Victimization Survey (NCVS) in considerably more detail than we attempt in this more limited treatment.

Table 3-1 Estimated Funding for Bureau of Justice Statistics Criminal Justice Statistics Program, Fiscal Year 2008
(Estimated funding in thousands of dollars)

Victimization
  National Crime Victimization Survey—Collection: 18,700
  National Crime Victimization Survey—Redesign Effort: 3,900
Law Enforcement
  Census of State and Local Law Enforcement Agencies: 905
Prosecution and Adjudication Statistics
  Civil Trial Court Cases: 340
  Court Statistics Project: 415
  National Judicial Reporting Program (Year 1): 340
  State Court Processing Statistics: 300
  Survey Development, Two New Collections: 300
Corrections
  Annual Probation and Parole Statistics: 175
  Annual Survey of Jails: 230
  Capital Punishment Statistics: 260
  Census of Probation and Parole Agencies: 250
  Deaths in Custody Reporting Program: 553
  National Corrections Reporting Program: 647
  National Prisoner Statistics: 130
  State Prison Expenditures: 300
Federal Justice Statistics Program: 800
Criminal Justice Employment and Expenditures: 222
Firearm Background Check Statistics: 360
Tribal Statistics
  Tribal Criminal History Improvement Program: 704
  State Tribal Crime Reports: 145
State Justice Statistics Program
  State Statistical Analysis Centers: 2,300
  Technical Assistance to SACs/Multi-State Projects: 1,400
Publication and Dissemination: 2,872
Management, Administration, and Joint Federal Statistics Efforts: 2,131
Total: 38,679

NOTE: Expenditures for National Criminal History Improvement Program grants and for data collections pursuant to the Prison Rape Elimination Act of 2003 are funded through separate lines in the BJS budget.
SOURCE: Adapted from table provided by BJS.
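As the table shows, the NCVS dominates the program's budget: collection and redesign work together account for roughly 58 percent of the fiscal year 2008 total. The following is a minimal sketch (ours, not part of the report) that recomputes a few of these shares from the Table 3-1 figures, given in thousands of dollars.

```python
# Illustrative only: selected line items transcribed from Table 3-1
# (thousands of dollars), used to show how unevenly FY 2008 funding
# is spread across BJS collections.
FY2008_TOTAL = 38_679  # program total from Table 3-1

selected_items = {
    "NCVS (collection + redesign)": 18_700 + 3_900,
    "Census of State and Local Law Enforcement Agencies": 905,
    "National Corrections Reporting Program": 647,
    "State Court Processing Statistics": 300,
    "National Prisoner Statistics": 130,
}

for name, amount in sorted(selected_items.items(), key=lambda kv: -kv[1]):
    share = 100 * amount / FY2008_TOTAL
    print(f"{name:52s} {amount:>7,}K  ({share:4.1f}% of total)")
```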

We close the chapter in Section 3–F with general assessments of BJS's portfolio and the structure of BJS collections within that portfolio.

3–A VICTIMIZATION

The President's Commission on Law Enforcement and Administration of Justice (1967) that developed the justice system "funnel" model we use as a framework in this report (Section 2–A) and recommended the creation of what would become BJS also pioneered a new approach to studying crime. The commission sponsored the National Opinion Research Center (NORC) to survey the members of 10,000 households on their experiences as victims of crime and violence, and the commission then compared the results to the reported-to-police estimates from the Uniform Crime Reporting (UCR) program. This prototype National Survey of Criminal Victims demonstrated to the commission that "for the Nation as a whole there is far more crime than ever is reported" (President's Commission on Law Enforcement and Administration of Justice, 1967:v):

Burglaries occur about three times more often than they are reported to police. Aggravated assaults and larcenies over $50 occur twice as often as they are reported. There are 50 percent more robberies than reported. In some areas, only one-tenth of the total number of certain kinds of crimes are reported to the police.

As a consequence of the commission's report, the Omnibus Crime Control and Safe Streets Act of 1968 specifically mandated the new Law Enforcement Assistance Administration (LEAA) to "collect, evaluate, publish, and disseminate statistics and other information on the condition and progress of law enforcement in the several States" (P.L. 93-83 § 515(b); see also U.S. Census Bureau, 2003:A1-5). This data-gathering authority was invoked to begin pilot work and implementation of what would become the National Crime Survey; later, at the culmination of an extensive redesign process, this survey was renamed the National Crime Victimization Survey. Drawn from an original mandate to study the progress of law enforcement, the NCVS was designed to do so by asking respondents about victimization incidents generally, whether or not they were reported to authorities. Accordingly, the NCVS presented the unique ability to shed light on the "dark figure of crime"—the phrase coined by Biderman and Reiss (1967) to describe criminal incidents that are not reported to police.

Table 3-2 shows the years of collection of the NCVS and, more specifically, the topic supplements to the NCVS. As we will discuss, there are some instances in which content from a supplement was subsequently integrated into the core NCVS; hence, the table's suggestion that a supplement was not repeated in later years does not necessarily mean that the topic was dropped.

Box 3-1 Bureau of Justice Statistics Collections on Tribal Justice

Tribal justice agencies are included in several of BJS's principal data series such as the Census of State and Local Law Enforcement Agencies (Hickman, 2003). However, on tribal lands, "criminal jurisdiction . . . is divided among the Federal, State, and tribal governments" depending on the "nature of the offense, whether the offender or victim was a tribal member, and the State in which the crime occurred" (Perry, 2005:1). In recent years, BJS has developed one-shot and continuing collections on the justice authorities that operate in American Indian tribal lands. BJS typically has worked with the U.S. Department of the Interior's Bureau of Indian Affairs on developing address and contact lists; the U.S. Department of Justice also maintains an Office of Tribal Justice that facilitates interactions between the Justice Department and the tribes, and the Federal Bureau of Investigation (FBI) shares law enforcement authority on tribal lands.

Conducted on a one-shot basis to date, in 2002, the Census of Tribal Justice Agencies (Perry, 2005) was the U.S. Department of Justice's first comprehensive effort to document tribal justice agencies as systems in their own right. The major difference between this collection and BJS's standard series is that it sought to articulate the use of "indigenous forum" arrangements (e.g., councils of elders or "sentencing circles") that are distinct from court proceedings, and that may combine adjudication and law enforcement functions on reservation lands. For those tribes that have developed specific law enforcement agencies, the census sought staffing and policy information (including the level of interface and cross-deputization with nontribal authorities). Questions on the census were also intended to provide information on tribes' criminal history record-keeping and ability to provide crime reports to federal efforts such as the FBI's National Crime Information Center and National Sex Offender Registry. The census achieved participation from 314 of 341 federally recognized tribes; however, "participation by Alaska Native tribes or villages was not extensive enough to enable their inclusion" in the results of the census (Perry, 2005:iii). BJS contracted with Falmouth Institute and Policy Studies, Inc., as the data collection agent for the census.

In 1998, BJS began specifically collecting information on tribal jails as a component of the Annual Survey of Jails; the collection has since developed into a separate Survey of Jails in Indian Country that continues on an annual basis. This collection mirrors the content of the Survey of Inmates of Local Jails (Section 3–B.2) but concentrates on those facilities and detention centers operated by the federal Bureau of Indian Affairs or by individual tribal authorities. The Survey of Jails in Indian Country also includes questions on facility programs and services, such as health care and counseling, but these questions typically are asked only in selected years. See, e.g., Minton (2006, 2008) for reports of survey results.

BJS's National Criminal History Improvement Program of grants to assist local law enforcement agencies develop criminal history and other information databases has recently expanded to include a tribal justice–specific component. Perry (2007) summarizes work in the first few years of the program, 2004–2006.
Dubbed the Tribal Criminal History Record Improvement Program, the program’s grant solicitation for 2008 puts particular priority on “enhancing automated identification systems, records of protective orders involving domestic violence and stalking, sex offender records, [and] DWI/DUI conviction information,” as well as developing tribal interfaces to federal background check data. The experiences of American Indians living on and off reservation lands in the justice system have been analyzed by BJS staff using the National Crime Victimization Survey and other resources; see, e.g., Greenfeld and Smith (1999) and Perry (2004).

Table 3-2 Bureau of Justice Statistics Data Collection History and Schedule, Victimization, 1981–2009

Series: Years Data Collected (Total Years)
National Crime Victimization Survey: annually, 1981–2009 (29)

NCVS Supplements or New Content Areas
  Crimes Against Disabled: 2007–2009 (3)
  Cybercrime: 2001–2004 (4)
  Hate Crimes: 1999–2009 (11)
  Identity Theft: 2004–2009 (6)
  Police-Public Contact Survey/Police Use of Force: 1996, 1999, 2002, 2005, 2008 (5)
  School Crime Supplement: 1989, 1995, 1999, 2001, 2003, 2005, 2007, 2009 (8)
  Supplemental Victimization Survey (stalking): 2006 (1)
  Vandalism: 1992–2007 (16)
  Workplace Violence Supplement: 2002 (1)
  City-Level Victimization Surveys (12): 1998 (1)

NOTES: Preliminary versions of NCVS questions on crimes against the disabled were asked as early as 2000, but only those used starting in 2007 were meant for or suitable for producing regular estimates.
SOURCE: Bureau of Justice Statistics.

In its role as an indicator of crime levels in the United States, the NCVS is often compared to data from the UCR program maintained by the Federal Bureau of Investigation (FBI); we discuss the UCR and related programs in more detail in Section 4–C.

3–A.1 National Crime Victimization Survey

Since the outset, BJS has commissioned the U.S. Census Bureau as the data collection agent for the NCVS; the NCVS is one of the major federal surveys administered by the Census Bureau's Demographic Surveys Division. The NCVS is a household survey using a rotating panel sample of addresses, meaning that addresses are chosen to be eligible for interviewing for a certain number of interviews over a fixed period of time. Currently, contacts are made for interviews at sample addresses for 3 years, seven interviews at 6-month intervals.1 As sample addresses complete their time in sample, they are replaced with new ones. When the NCVS began in 1972, it was administered to 72,000 households—a large sample, meant to produce reliable estimates of year-to-year change in victimization as well as information on relatively rare crime types. However, sample size reductions (for purposes of cost savings) since the 1980s have reduced the sample size of the NCVS by almost half—in 2005, the NCVS was administered to about 38,600 households or 67,000 people. The sample sizes and response rates for the NCVS between 1996 and 2006 are shown in Table 3-3.

Although the current sample size qualifies the NCVS as a large data collection program, occurrences of victimization are essentially a rare event relative to the whole population: many respondents to the survey do not have incidents to report when they are contacted by the survey. Consequently, as we described in Chapter 1, the reduced sample size (combined with generally low and decreasing estimated overall victimization rates) is such that only a large percentage change in violent crime victimization rates—at least 8 percent—is a statistically significant year-to-year change. Indeed, as noted in the U.S. Office of Management and Budget (2007:8) annual review of statistical program funding, "cost cutting measures applied to the NCVS continue to have significant effects on the precision of the estimates—year-to-year change estimates are no longer feasible and have been replaced with two-year rolling averages" in BJS reports on victimization.

1 The sample is further divided into six "rotation groups," and each of these into six "panels." One panel from each of the rotation groups is designated for interviewing each month, hence the "rotating panel" nomenclature. In large part, the use of this rotating panel structure derives directly from the choice and retention of the Census Bureau as data collector for the NCVS; to achieve some efficiencies in collection, such as sharing a pool of interviewers, some design features of the NCVS were chosen to emulate those of the Census Bureau's Current Population Survey (CPS), then the Census Bureau's largest intercensal survey (National Research Council, 2008b:123; see also Cantor and Lynch, 2000:107).
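To make the precision point concrete, the following sketch works through the kind of comparison involved, using illustrative rates and standard errors that are our own assumptions rather than published NCVS figures: a two-sample test on the year-to-year change, and the two-year rolling average that BJS reports turned to once single-year change estimates became infeasible.

```python
import math

# Hypothetical violent victimization rates (per 1,000 persons age 12 or older)
# and standard errors; these are stand-ins, not actual NCVS estimates.
rate_prev, se_prev = 21.1, 0.9   # year t - 1
rate_curr, se_curr = 21.0, 0.9   # year t

# Two-sample z-test for the year-to-year change (ignoring the covariance
# induced by the overlapping panel, which a full analysis would include).
diff = rate_curr - rate_prev
se_diff = math.sqrt(se_prev**2 + se_curr**2)
z = diff / se_diff
print(f"change = {diff:+.1f} per 1,000, z = {z:.2f}")   # |z| < 1.96: not significant

# Smallest change detectable at the 5 percent level with these errors.
min_detectable = 1.96 * se_diff
print(f"minimum detectable change ~ {min_detectable:.1f} per 1,000 "
      f"({100 * min_detectable / rate_prev:.0f}% of the year t-1 rate)")

# Two-year rolling average, the summary BJS victimization reports moved to
# when single-year change estimates were no longer feasible.
print(f"two-year rolling average: {(rate_prev + rate_curr) / 2:.1f} per 1,000")
```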

Table 3-3 Number of Households and Persons Interviewed by Year, 1996–2006

Year   Household Sample Size (Response Rate %)   Person Sample Size (Response Rate %)
1996   45,000 (93)                               85,330 (91)
1997   42,910 (95)                               79,470 (90)
1998   43,000 (94)                               78,900 (89)
1999   43,000 (93)                               77,750 (89)
2000   43,000 (93)                               79,710 (90)
2001   44,000 (93)                               79,950 (89)
2002   42,000 (92)                               76,050 (87)
2003   42,000 (92)                               74,520 (86)
2004   42,000 (91)                               74,500 (86)
2005   38,600 (91)                               67,000 (84)
2006   38,000 (91)                               67,650 (86)

NOTE: These sample sizes correspond to the number of separate households and persons designated for contact in a particular year. Participation rates for a particular year would be roughly double these, accounting for two interviews with sample addresses in the same year.
SOURCE: Bureau of Justice Statistics (2006b, 2008c).

A multiyear redesign effort, culminating in 1992 with the first collection using all of the new procedures (and the renaming of the survey to NCVS), focused principally on implementing a screening procedure. The first part of an NCVS interview is the screening questionnaire, which uses a series of carefully constructed questions to elicit counts—but not yet full information—about crime victimization incidents in the past 6 months. After this screener has been completed, the NCVS interviewer guides the respondent through the completion of a detailed incident report on the circumstances of each incident counted by the screener. This interviewing process is repeated for each person age 12 or older in the household at the sample address (although only the first respondent is asked about general characteristics of the household). Because the number of victimizations experienced by respondents varies—and, with it, the number of incident reports that must be completed—the length of time that it can take to complete an NCVS interview can also vary. BJS estimates that the average face-to-face interview lasts 26 minutes (Bureau of Justice Statistics, 2008d:10).

Over time, the mode of NCVS interviewing and the use of interviews to calculate estimates has shifted. BJS still insists that the first interview with a sample household be conducted in person by a Census Bureau enumerator. Beginning in 1980, BJS began to permit every other interview (after the first contact) to be conducted via phone; by 2003, NCVS interviewers were being advised to complete their interviews by phone "whenever possible" (U.S. Census Bureau, 2003:A1-11).
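The two-stage structure of the interview described above (screener first, then one detailed incident report per counted incident) can be sketched roughly as follows; the class and field names are invented for illustration and are not NCVS variable names.

```python
# Rough sketch of the two-stage interview flow; names are illustrative only.
from dataclasses import dataclass, field
from typing import List

@dataclass
class IncidentReport:
    """Detailed follow-up completed for each incident counted by the screener."""
    incident_type: str        # e.g., "burglary", "aggravated assault"
    month_of_incident: int    # within the 6-month reference period
    reported_to_police: bool

@dataclass
class PersonInterview:
    age: int                  # only household members age 12 or older are interviewed
    screener_count: int = 0   # incidents elicited by the screening questionnaire
    incident_reports: List[IncidentReport] = field(default_factory=list)

def conduct_interview(person: PersonInterview, screener_hits: list) -> PersonInterview:
    """Screener first (counts only), then one detailed incident report per count;
    interview length therefore grows with the number of incidents reported."""
    person.screener_count = len(screener_hits)
    for hit in screener_hits:
        person.incident_reports.append(
            IncidentReport(incident_type=hit["type"],
                           month_of_incident=hit["month"],
                           reported_to_police=hit["reported"]))
    return person

# A respondent with one incident counted by the screener yields one incident report.
respondent = conduct_interview(
    PersonInterview(age=34),
    [{"type": "theft", "month": 2, "reported": False}])
print(respondent.screener_count, len(respondent.incident_reports))   # 1 1
```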

In the late 1990s and early 2000s, BJS also invested in the use of survey automation procedures, so that the NCVS and its supplements are fully electronic; face-to-face interviews are done by computer-assisted personal interviewing (CAPI) with the interviewer using a laptop computer, and telephone interviews can be completed using the same computer interface.2

In terms of how NCVS interviews result in final estimates, BJS currently produces "collection year" estimates from the NCVS, accumulating the data from all interviews completed in a particular year t. Because of the 6-month reference period of the survey, this means that a year t estimate from the NCVS includes events that may have occurred in the last half of year t − 1 and does not include all incidents that actually occurred in year t (since late year t incidents would only be picked up in interviews in year t + 1).

Until recently, a key feature of the NCVS was that the first interview with a sample household was not included in the estimates. Instead, it was withheld and used as a bounding interview: counts and incidents reported in the second interview could be checked against the bounding interview to correct for the same incidents being reported multiple times. As one of a bundle of cost-cutting measures, BJS and the Census Bureau began to include these "bounding interviews" in the production of estimates, beginning with estimates from the 2006 administration of the survey. As we discuss in Section 3–A.3, these changes produced anomalous results that led BJS to declare a "break in series" that prevents comparison with previous years' data.

NCVS results are annually described in two BJS report series, Criminal Victimization (e.g., Rand and Catalano, 2007) and Crime and the Nation's Households (e.g., Klaus, 2007). BJS staff have issued a number of "Special Reports" dedicated to analysis of particular content from the NCVS, such as the involvement of weapons and firearms in victimization incidents (Perkins, 2003), violence in the workplace (Duhart, 2001), reporting of rape or attempted rape to the police (Rennison, 2002b),3 and specific reports on victimization experiences by racial and ethnic groups (e.g., Rennison, 2002a; Harrell, 2007). A complete list of references to publications that have used and analyzed NCVS data is beyond the scope of this report, but some recent references suggest the breadth of application of the data. NCVS data have been probed to study the issue of violence and police response to incidents in disadvantaged areas and neighborhoods (Baumer, 2002; Baumer et al., 2003); the NCVS has also been used in lines of research on the reporting of crime by women, particularly of rape, to authorities (Baumer et al., 2003; Felson et al., 2005; Xie et al., 2006; Addington and Rennison, 2008) and the effect of victimization on residential turnover (Dugan, 1999; Xie and McDowall, 2008).

2 As mentioned in the next section, BJS and the Census Bureau attempted to do many of the NCVS telephone interviews from centralized Census Bureau call centers, but this practice was eliminated in the most recent set of cost-cutting measures.
3 Rennison (2002b) is officially designated a "Selected Findings" report rather than a "Special Report."
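The collection-year convention described above can be illustrated with a small sketch (ours, not BJS's production logic); it shows why an incident from late in year t − 1 enters the year t estimate, while an incident late in year t does not appear until collection year t + 1.

```python
from datetime import date

def collection_year_and_incident_year(interview_date: date, months_back: int) -> tuple:
    """Map an incident elicited in an interview to (collection year, incident year).

    Under the collection-year convention, an incident counts toward the year in
    which the interview occurred, while the incident itself happened
    `months_back` months earlier (within the 6-month reference period).
    Illustrative only.
    """
    assert 1 <= months_back <= 6
    collection_year = interview_date.year
    m = interview_date.month - months_back   # crude month arithmetic for the sketch
    incident_year = collection_year if m >= 1 else collection_year - 1
    return collection_year, incident_year

# An incident from August 2005 reported in a January 2006 interview is part of
# the 2006 collection-year estimate...
print(collection_year_and_incident_year(date(2006, 1, 15), months_back=5))   # (2006, 2005)
# ...while a November 2006 incident reported in a March 2007 interview does not
# enter the estimates until collection year 2007.
print(collection_year_and_incident_year(date(2007, 3, 15), months_back=4))   # (2007, 2006)
```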

Longer-term analyses have considered the comparability of NCVS with other sources of data on crime such as the UCR (Lynch and Addington, 2007) and the National Violence Against Women Survey (Rand and Rennison, 2005), and differential trends in violence by gender (Lauritsen and Heimer, 2008).

3–A.2 NCVS Supplements

Over time, NCVS's flexibility as a survey vehicle has been exploited to gather occasional or one-time data through survey supplements. These supplements have been supported by contributions from partner agencies or grants from other organizations; supplements have also been directly developed to respond to new mandates from Congress.

A battery of questions on crime against the disabled was developed in direct response to P.L. 105-301, the Crime Victims with Disabilities Awareness Act of 1998, which directed that the NCVS be used to measure "the nature of crimes against individuals with developmental disabilities" and "the specific characteristics of the victims of those crimes." The module of questions is meant to assess whether victims of crime were in poor health, had any physical or mental impairments, or had disabilities that affected their everyday life. They are also asked to judge if any of these provided an opportunity for their victimization.

Similarly, since 1999, the NCVS has collected data on hate crimes, a content area motivated by the Hate Crime Statistics Act of 1990 (P.L. 101-275). The hate crime questions were not a standalone module but rather a section added directly to the NCVS questionnaire.

BJS collaborated with its fellow Office of Justice Programs agency, the National Institute of Justice, to include a module of questions on crime in schools in the 1989 version of the survey (Bureau of Justice Statistics, 2008e:5). After a repeat administration in the mid-1990s, the National Center for Education Statistics (NCES) of the U.S. Department of Education has paid for BJS to include the School Crime Supplement (SCS) to the NCVS on a biennial basis. As described by the U.S. Census Bureau, Demographic Surveys Division (2007:51):

The supplement contains questions on preventative measures employed by the school to deter crime; students' participation in extracurricular activities; transportation to and from school; students' perception of rules and equality in school; bullying and hate crime in school; the presence of street gangs in school; availability of drugs and alcohol in the school; attitudinal questions relating to the fear of victimization in school; access to firearms; and student characteristics such as grades received in school and postgraduate plans.

The school crime questions were administered to all individual respondents in NCVS sample households who were between ages 12 and 18, "who were enrolled in primary or secondary education programs leading to a high school diploma, and who were enrolled in school sometime during the six months prior to the interview" (U.S. Census Bureau, Demographic Surveys Division, 2007:51).4 The SCS is administered to NCVS respondents (maintaining the age-12-and-older restriction of the main survey) who attend schools. In the 2005 administration of the SCS (running from January through June), 61.7 percent of the 11,525 eligible SCS respondents completed the questions (Bureau of Justice Statistics, 2008e:6). SCS data are distinct from, and provide a different vantage on crimes in school from, the School Survey on Crime and Safety that NCES contracts directly with the Census Bureau to construct; that survey is essentially an establishment survey, meant to be completed by principals of a nationally representative sample of public elementary and secondary schools. Results from the various administrations of the SCS, and related data resources, are described by Dinkes et al. (2007), and the SCS data have been used in analyses by, for example, Addington (2003).

Questions on experiences with identity theft were first added to the NCVS in July 2004. Whereas other NCVS supplements such as the SCS function as a true supplement to the NCVS interviewing experience—a standalone questionnaire that is meant to be answered by everyone meeting certain eligibility requirements—the identity theft "supplement" was built into the NCVS screening interview. The version of the identity theft questions used in 2004 is illustrated in Figures 3-1 and 3-2. As noted in Section 2–C.1, this set of identity theft questions is one of few available quantitative measures of the levels of some types of fraud. With additional sponsorship from the Federal Trade Commission and other bureaus within the Office of Justice Programs, BJS planned to field a more comprehensive identity theft supplement to the NCVS beginning in 2008 (Baum, 2007:4).

Another important supplement, the Police-Public Contact Survey (PPCS), stems from a brief provision in the Violent Crime Control and Law Enforcement Act of 1994 (P.L. 103-322 § 210402): "the Attorney General shall, through appropriate means, acquire data about the use of excessive force by law enforcement officers." As a direct result of this mandate, BJS developed the PPCS to measure the extent of all types of interactions between the police and members of the public (of which those involving "excessive force" are logically a subset).

4 However, "students who were home schooled were not included past the screening questions since it was determined that many of the questions in the SCS were not relevant to their situation" (Bureau of Justice Statistics, 2008e:5).

Figure 3-1 Module of questions on identity theft in the 2004 National Crime Victimization Survey (part 1)
NOTE: The questions are part of NCVS-1, the screening questionnaire part of the NCVS interview.

Figure 3-2 Module of questions on identity theft in the 2004 National Crime Victimization Survey (part 2)
NOTE: See Figure 3-1.

The survey was first conducted on a pilot basis in 1996 (Greenfeld et al., 1997); after refinement, it was fully fielded as an NCVS supplement in 1999 and has become a continuing occasional supplement. The survey gathers detailed information about the nature of police-citizen contacts, respondent reports of police use of force and their assessments of that force, and self-reports of provocative actions that respondents may have themselves initiated during the encounter. We return to discussion of the PPCS in Section 5–A.2 because of events that transpired in the release of data from the 2002 administration of the supplement; those events notwithstanding, PPCS data have driven useful analyses of racial profiling by, for example, Engel and Calnon (2004) and Engel (2005).

BJS fielded a Supplemental Victimization Survey (SVS) from January through June 2006; the report of findings from the supplement was released in January 2009 (Baum et al., 2009). The SVS was funded by the Justice Department's Office on Violence Against Women (OVW) and focused principally on the measurement of victimization by stalking or harassing behavior.

The SVS content was determined on the basis of a 1-day expert workshop convened by OVW and BJS and a subsequent year-long working group of OVW, BJS, and Census Bureau staff (Baum et al., 2009:10). An interesting feature of the SVS is that it deliberately did not use the term "stalked" (or variants thereof) until the final question. NCVS respondents were routed into the SVS based on their response to a screening question listing a number of behaviors (e.g., "leaving unwanted items, presents, or flowers") that "frightened, concerned, angered or annoyed" the respondent. Only in the final question were SVS respondents asked whether the behaviors they had just described constituted "stalking" (Baum et al., 2009:11–12).

The display of NCVS supplements in Table 3-2 favors supplements that have been mounted in the past 15 years, and misses some older one-shot supplements. Other NCVS supplements over the years have included:

• Crime seriousness: An early supplement gathered national data on the perceived seriousness of crime, information that has been used to differentially weight incidents to reflect their impact on the public. On a one-shot basis, BJS collaborated with the Office of Community Oriented Policing Services in conducting community safety surveys by telephone in 12 cities, wholly distinct from the NCVS (Smith et al., 1999).

• Attitudes and lifestyles: Another supplement gathered extensive data on the attitudes of individuals and the relationship between crime and how they conduct their lives (Murphy, 1976; Cowan et al., 1984).

• Workplace risk: A module of questions on nonfatal violence in the workplace was sponsored by the National Institute for Occupational Safety and Health (Centers for Disease Control and Prevention) in 2002.

• Harassment and stalking: As described above, the SVS was conducted from January through June 2006 on, as yet, a one-shot basis with sponsorship from OVW. Questions focused primarily on perceived experiences with harassment and stalking.

It is important to note two aspects of the existing set of NCVS supplements. The first is that they have been topic-based supplements—modules of additional questions asked to some or all NCVS respondents interviewed at a particular time. "Supplement," interpreted more broadly, could connote the addition of persons or households to the sample, such as adding sample in particular geographic areas to support subnational estimates or "targeting" additional sample units in particular age, race, or gender groups. The second noteworthy aspect of the current NCVS supplements is that the depiction in Table 3-2 may suggest more of a structure to the existing supplements than actually exists. Though some supplements have come to follow a regular pattern of inclusion in the NCVS, supplements are generally done on an as-available basis, requiring both funding from any external sponsoring agency and time to develop and test questionnaires.

As we discuss further in Section 3–F.2, there is currently no regular, rigorous schedule of ongoing NCVS supplements.

3–A.3 The NCVS "Break in Series"

We describe the methodology of the NCVS in greater detail in our interim report (National Research Council, 2008b), the executive summary of which is reprinted as Appendix B of this report. Hence, our description of the NCVS in this report is meant to be a brief synopsis rather than a comprehensive overview. However, in this report's description of BJS data series, we believe it is important to mention a complication concerning the 2006 data from the NCVS that was encountered as our interim report was in the end stages of production.

On December 12, 2007, BJS released its first NCVS estimates for data collected in 2006—doing so with a strong warning that these newest estimates were fundamentally different from, and incomparable to, previous years' estimates. In processing 2006 NCVS results, BJS and the Census Bureau detected "variation in the amount and rate of crime [that] was too extreme to be attributed to actual year-to-year changes." After consulting with individual external experts, BJS concluded that these differences were sufficiently large as to declare that "there was a break in series between 2006 and previous years that prevent[s] annual comparison of criminal victimization at the national level" (Rand and Catalano, 2007:1).

A technical note in the support documentation for the 2006 NCVS data, as logged in the National Archive of Criminal Justice Data (NACJD), indicates BJS's conclusion that (Bureau of Justice Statistics, 2008d):

[The break] was mainly the result of three major changes in the survey methodology:
1. introducing a new sample beginning in January 2006, based on the 2000 Decennial Census to account for shifts in population and location of households that occur over time
2. incorporating responses from households that were in the survey for the first time (called "bounding interviews") in the production of survey estimates
3. replacing paper and pencil interviewing (PAPI) with computer-assisted personal interviewing (CAPI).

On the first point, the Census Bureau "redesigns" samples for the household surveys that it performs under contract to other federal agencies following the completion of a new decennial census. The use of results from the 2000 decennial census to update survey samples (and derive the population "controls" used to weight sample survey data to reflect the whole population) was delayed for several reasons.

OVERVIEW OF BUREAU OF JUSTICE STATISTICS DATA SERIES 89 determination of exactly which census results—whether the initial census totals or figures that had been statistically adjusted for nonresponse—should be applied. Though a 1999 U.S. Supreme Court decision precluded the use of adjusted census numbers for purposes of congressional apportionment, it left open the possibility of adjustment for data used in legislative redistrict- ing, in deriving survey controls, or for other purposes. In a series of rec- ommendations, the Census Bureau ultimately decided against adjustment of 2000 census results for any purpose, but said determination required 2 years of additional research and analysis (National Research Council, 2004b). The change to a sample based on the 2000—and not the 1990—census began for the NCVS in January 2006. In their analyses, BJS and the Cen- sus Bureau concluded that the shift to the new sample had contributed to severely anomalous results (Bureau of Justice Statistics, 2008d): Of the new areas included in the 2006 sample, about two-thirds were in areas designated as rural areas. . . . Introduction of the new sample in rural areas showed that the rate of violent victimization increased by 62% between 2005 and 2006. However, there was very little change in rates of violent victimization for urban and suburban sample areas, and violent victimization rates for 2006 continuing areas (urban, suburban, and rural) were not significantly different from 2005. BJS and the Census Bureau concluded that the addition of the new sample might also lead to a more subtle effect on the estimates: “during every sam- ple redesign, the selection and integration of a new sample requires hiring and training interviewers to administer the survey in new areas.” Accord- ingly, the introduction of the new sample might have led to a different mix of new versus experienced interviewers and, accordingly, differences in the effectiveness in eliciting victimization incidents from respondents. The second cause for the “break in series”—the inclusion of first, bound- ing interviews—was one cost-cutting measure introduced in 2006 in order to remain within BJS’s budgetary resources. Another cost-cutting measure implemented at the same time was an across-the-board 14 percent cut in the NCVS sample size. The technical documentation note indicates (Bureau of Justice Statistics, 2008d): Because of telescoping and panel bias (sometimes called respondent fa- tigue), respondents tend to report more incidents of crime during the first interviews than in subsequent interviews. A weighting adjustment factor was applied to mitigate over-counting of crime. Despite these weighting adjustments, though, BJS and the Census Bureau were unable to fully parse the effects of including the first interviews. Finally, BJS and the Census Bureau concluded that at least part of the anomalous findings might be attributable to mode effects: differences in sur- vey response due to the medium through which questions are posed, such as face-to-face interviewing, self-response on paper forms, self-response by

telephone, or self-response via the Internet. For several preceding years, BJS had invested in converting the NCVS from a paper-based survey to a fully automated collection, with interviewers reading from and entering responses into an electronic version of the questionnaire on a laptop computer. (Since the first interview with a sample household must be completed in person, but later interviews may be completed by telephone, the fully automated NCVS is an example of both CAPI [first and any subsequent face-to-face interview] and computer-assisted telephone interviewing [CATI].) This automation work happened to be completed at exactly the same time as the other methodological changes (Bureau of Justice Statistics, 2008d):

In July 2006, NCVS was converted to a fully automated data collection. . . . Previous research suggested that computer-assisted telephone interviewing (CATI) enhances data accuracy and produces higher and more accurate estimates because the computer-based interviewing process ensures that correct skip patterns are followed so that respondents answer all relevant questions. [However,] limited time and financial resources prohibited the Census Bureau and BJS staff from fully assessing the effects of CAPI on the 2006 estimates.

In releasing its first report on 2007 NCVS data (Rand, 2008), BJS substantially softened the rhetoric suggesting an irrevocable break in series. Instead, the report tentatively characterizes the 2006 results as "a temporary anomaly in the data" and expresses "a high degree of confidence that survey estimates for 2007 are consistent with and comparable to those for 2005 and previous years." Generally, the report characterizes the still-not-fully-understood changes from 2005 to 2006 and again from 2006 to 2007 as "substantial fluctuations" that "do not appear to be due to actual changes in crime" (Rand, 2008:1, 2). In technical notes, the report summarizes evaluative work done by the Census Bureau on BJS's behalf that concludes that the sample size reduction and the inclusion of bounding interviews had little effect on NCVS estimates; it nudges toward concluding that the "hiring and training of new interviewers to administer the survey" as part of the redesigned sample in 2006 was a major factor in the spike in victimization rates observed in that year (Rand, 2008:10). The report notes that "BJS continues to work with the U.S. Census Bureau to better understand the impact of these [methodological] changes upon survey estimates" and indicates that adjusted estimates for 2006 and 2007 may be issued at a future date (Rand, 2008:2). In the interim, in a footnote (Rand, 2008:Note 2), BJS encourages "users . . . to focus on the comparison between 2005 and 2007 victimization rates until the changes to the NCVS in 2006 are better understood."

BJS's December 2007 announcement of the "break in series" in the NCVS was made as our panel's interim report on the NCVS was in the very late stages of review. In that report, we could only acknowledge the BJS announcement in a footnote (National Research Council, 2008b:86).
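To make the year-to-year comparisons at issue here concrete, the short sketch below illustrates the basic logic of comparing annual victimization rates per 1,000 persons. It is illustrative only: the counts, population totals, and standard errors are invented, and actual NCVS estimation relies on the survey's person weights and generalized variance functions rather than the simple independent-samples test shown here.

```python
# Illustrative only: BJS/Census Bureau estimation uses the full NCVS design
# (person weights, clustering, generalized variance functions); this sketch
# shows the basic logic of a rate-per-1,000 comparison with made-up numbers.
from math import sqrt

def victimization_rate(weighted_victimizations, weighted_population):
    """Victimizations per 1,000 persons age 12 or older."""
    return 1000.0 * weighted_victimizations / weighted_population

def z_for_change(rate1, se1, rate2, se2):
    """Two-sample z statistic for the difference between two annual rates,
    treating the years as independent (a simplification of NCVS practice)."""
    return (rate2 - rate1) / sqrt(se1 ** 2 + se2 ** 2)

# Hypothetical inputs, not NCVS figures.
rate_2005 = victimization_rate(4_800_000, 236_000_000)   # ~20.3 per 1,000
rate_2006 = victimization_rate(5_900_000, 239_000_000)   # ~24.7 per 1,000
z = z_for_change(rate_2005, 1.0, rate_2006, 1.1)          # assumed standard errors
print(f"2005: {rate_2005:.1f}, 2006: {rate_2006:.1f} per 1,000; z = {z:.2f}")
```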

OVERVIEW OF BUREAU OF JUSTICE STATISTICS DATA SERIES 91 Limited though it was, the brief footnote on the NCVS “break in series” was made as a comment to a pair of sentences that provide a good starting point for fuller discussion: The decision to include unbounded, first interviews in NCVS estimates was made as our panel was being established and assembled, and so we do not think it proper to second-guess it; we understand the fiscal constraints under which the decision was made. However, it serves as an example of a seemingly short-term fix with major ramifications, and it would have benefited from further study prior to implementation. We return to a discussion of the NCVS break in series at various points in Chapter 5, particularly Section 5–B.8. 3–B CORRECTIONS The maintenance of statistics on persons under correctional supervision in the United States dates back to the 1850 decennial census, giving correc- tions data the longest lineage of BJS data series. As discussed by National Research Council (2006:Sec. 3–D), the 1850 census was the first to give enumerators formal rules for determining residence. One of these was the specific direction to treat jailors and other superintendents of institutions as heads of “families,” counting prisoners under their supervision as members of the family; as the term was used in censuses of the period, “family” had no direct tie to blood relations. This practice continued in the next several censuses, with the 1880 and 1890 censuses introducing a special form for enumerators to record information on individual prisoners. In 1904, the newly permanent Census Bureau began the annual publication “Prisoners in State and Federal Institutions,” beginning to tally commitments to the institution in a calendar year rather than a single reference date (as in the decennial census). In a 1923 count, the Census Bureau began to count dis- charges from prison or jail, along with information on time served (Cahalan, 1986:1–2; see also Beattie, 1959). In 1950, authority for this annual collec- tion was transferred to the Federal Bureau of Prisons (BOP), which in turn was transferred to the LEAA. Contracting with the Census Bureau as data collector, BJS has conducted the collection as the still-continuing National Prisoner Statistics (NPS) series since 1973. Data on the correctional population has grown in importance and mean- ing, given the massive growth in that population since the 1970s; see Ta- ble 3-4. Counts of the prison population draw particular concern—tripling between 1980 and 1995 after decades of remarkable stability (Blumstein and Beck, 1999) before settling into slower rates of annual growth—as state governments have struggled to keep pace and develop facility capacity. In doing so, new and ever more varied styles of incarceration have developed, including use of privately built and operated facilities and community cor-

92 JUSTICE STATISTICS rectional facilities; some states have also relied on establishing contracts to house their prisoners in other states where capacity still exists. Significant though the prison population has become, the much larger number of peo- ple under probation supervision also rapidly escalated over the course of the 1980s. A major challenge for corrections systems (and corresponding chal- lenge of measurement) concerns the experience of the formerly incarcerated when sentences are completed or parole is granted, and prisoners reenter the community. BJS’s data collections in the area of correctional supervision (see Ta- ble 3-5) include censuses and surveys of prisons and jails, intended to moni- tor the stocks and flows of inmates within these facilities. They also include periodic surveys of the inmate population, which provide an opportunity to study criminogenic factors in their backgrounds. Recently, BJS has con- ducted a special Survey on Sexual Violence among inmates of prisons, jails, and juvenile correctional facilities pursuant to the Prison Rape Elimination Act of 2003; those collections, and the legislative act, are described in Sec- tion 5–A.1 rather than this chapter. BJS’s survey-based methods are supple- mented by collections of administrative data providing counts of inmates, probationers, and parolees. BJS also occasionally conducts specialized stud- ies of correctional populations; for instance, its studies of recidivism have provided valuable information about patterns of rearrest and reincarcera- tion among state prisoners released in 1983 and 1994. 3–B.1 Prisons National Corrections Reporting Program and National Prisoner Statistics As described in the beginning of the chapter, the NPS series continues annual collection of the numbers of prisoners in both state and federal prisons that were begun by the Census Bureau in 1926. The Census Bu- reau discontinued the publication of the series in 1946 (but continued some data collection), and authority for the series was shifted to BOP in 1950 (Cahalan, 1986:6). When the BJS predecessor, National Criminal Justice Information and Statistics Service, was formed in 1971, it took responsibil- ity for the series, engaging the Census Bureau as its data collection agent. In 1983, data collection for the NPS was combined with a parallel collection on parole—the Uniform Parole Reports (see Section 3–B.6)—and the re- sulting program was renamed the National Corrections Reporting Program (NCRP). In 1984, the Census Bureau began to collect data from federal prisons as well as state prisons, in addition to the California Youth Authority (Bureau of Justice Statistics, 2007d:3). NCRP collection is based on facility or administrative records, as re- ported by correctional authorities, and data are compiled on both an annual

Table 3-4 Estimated Number of Adults Under Correctional Supervision in the United States, 1980–2006

Year    Prison      Jail       Probation    Parole     Total
1980    319,598     182,288    1,118,097    220,438    1,840,400
1981    360,029     195,085    1,225,934    225,539    2,006,600
1982    402,914     207,853    1,357,264    224,604    2,192,600
1983    423,898     221,815    1,582,947    246,440    2,475,100
1984    448,264     233,018    1,740,948    266,992    2,689,200
1985    487,593     254,986    1,968,712    300,203    3,011,500
1986    526,436     272,735    2,114,621    325,638    3,239,400
1987    562,814     294,092    2,247,158    355,505    3,459,600
1988    607,766     341,893    2,356,483    407,977    3,714,100
1989    683,367     393,303    2,522,125    456,803    4,055,600
1990    743,382     405,320    2,670,234    531,407    4,350,300
1991    792,535     424,129    2,728,472    590,442    4,535,600
1992    850,566     441,781    2,811,611    658,601    4,762,600
1993    909,381     455,500    2,903,061    676,100    4,944,000
1994    990,147     479,800    2,981,022    690,371    5,141,300
1995    1,078,542   507,044    3,077,861    679,421    5,342,900
1996    1,127,528   518,492    3,164,996    679,733    5,490,700
1997    1,176,564   567,079    3,296,513    694,787    5,734,900
1998    1,224,469   592,462    3,670,441    696,385    6,134,200
1999    1,287,172   605,943    3,779,922    714,457    6,340,800
2000    1,316,333   621,149    3,826,209    723,898    6,445,100
2001    1,330,007   631,240    3,931,731    732,333    6,581,700
2002    1,367,547   665,475    4,024,067    750,934    6,758,800
2003    1,390,279   691,301    4,073,987    774,588    6,883,200
2003    1,390,279   691,301    4,120,012    769,925    6,924,500
2004    1,421,345   713,990    4,143,792    771,852    6,995,100
2005    1,448,344   747,529    4,166,757    780,616    7,051,900
2006    1,492,973   766,010    4,237,023    798,202    7,211,400

NOTE: Entries in "Total" column are rounded to the nearest 100 "because a small number of individuals may have multiple correctional statuses." Counts for probation, prison, and parole populations are for December 31 of each year; jail population counts are for June 30 of each year.

SOURCE: Table 6.1.2006, Sourcebook of Criminal Justice Statistics Online (http://www.albany.edu/sourcebook/wk1/t612006.wk1).
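The following minimal sketch illustrates the kind of growth calculation that Table 3-4 supports, using figures taken directly from the prison and total columns of the table; the function names are ours, and the computation is shown only to make the growth comparisons explicit.

```python
# A minimal sketch of the growth calculations Table 3-4 supports, using
# figures taken directly from the table (prison and total columns).
table_3_4 = {
    1980: {"prison": 319_598, "total": 1_840_400},
    1995: {"prison": 1_078_542, "total": 5_342_900},
    2006: {"prison": 1_492_973, "total": 7_211_400},
}

def growth_ratio(series, start, end):
    """Ratio of the end-year count to the start-year count for one series."""
    return table_3_4[end][series] / table_3_4[start][series]

print(f"Prison population, 1995 vs. 1980: {growth_ratio('prison', 1980, 1995):.2f}x")
print(f"Total supervised population, 2006 vs. 1980: {growth_ratio('total', 1980, 2006):.2f}x")

# Compound average annual growth over a period:
years = 2006 - 1980
annual = growth_ratio("total", 1980, 2006) ** (1 / years) - 1
print(f"Average annual growth in total supervision, 1980-2006: {annual:.1%}")
```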

94 Table 3-5 Bureau of Justice Statistics Data Collection History and Schedule, Corrections, 1981–2009 Series 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 00 01 02 03 04 05 06 07 08 09 Total Prisons Census of State and Federal Prison Facilities · · · • · · · · · • · · · · • · · · · • · · · · • · · · · 5 National Corrections Reporting Program · · • • • • • • • • • • • • • • • • • • • • • • • • • • • 27 National Prisoner Statistics • • • • • • • • • • • • • • • • • • • • • • • • • • • • • 29 State Prison Expenditures · · · · · · · · · · · · · · · • · · · · • · · · · • · · · 3 Survey of Inmates in Federal Correctional Facilities · · · · · · · · · · • · · · · · • · · · · · · • · · · · · 3 Survey of Inmates in State Correctional Facilities · · · · · • · · · · • · · · · · • · · · · · · • · · · · · 4 Jails Annual Jail Survey · • · • • • • · • • • • · • • • • • • • • • • • • • • • • 25 Annual Jail Survey of Indian Country · · · · · · · · · · · · · · · · · • • • • • • • • • • • • 12 Census of Local Jails · · • · · · · • · · · · • · · · · · • · · · · · · • · · · 5 Survey of Large Jails · · · · · · · · · · · · · · · · · · · · · · · • · · · · · 1 Survey of Lcoal Jail Inmates · · • · · · · · • · · · · · · • · · · · · • · · · · · · · 4 Probation, Parole, and Recidivism National Census of Parole Agencies · · · · · · · · · · • · · · · · · · · · · · · · · • · · · 2 National Parole Statistics • • • • • • • • • • • • • • • • • • • • • • • • • • • • • 29 National Probation Statistics • • • • • • • • • • • • • • • • • • • • • • • • • • • • • 29 National Survey of Adult Probationers · · · · · · · · · · · · · · • · · · · · · · · · · · · · · 1 Recidivism Studies · · • · · • · · · · · · · • · · · · · · • · · · · · · • • 6 Prison Rape Elimination Act of 2003 Collections Survey of Sexual Violence—Adult Facilities · · · · · · · · · · · · · · · · · · · · · · · • • • • • • 6 Survey of Sexual Violence—Juvenile Facilities · · · · · · · · · · · · · · · · · · · · · · · · • • • • • 5 National Inmate Surveys Prisons and Jails · · · · · · · · · · · · · · · · · · · · · · · · · · • • • 3 Juvenile Facilities · · · · · · · · · · · · · · · · · · · · · · · · · · · • · 1 Former Prisoners · · · · · · · · · · · · · · · · · · · · · · · · · · · • · 1 Other Capital Punishment • • • • • • • • • • • • • • • • • • • • • • • • • • • • • 29 Deaths in Custody · · · · · · · · · · · · · · · · · · · • • • • • • • • • • 10 NOTES: •, data collected. ·, data not collected. SOURCE: Bureau of Justice Statistics. JUSTICE STATISTICS

and a semiannual (midyear) basis. In principle, participating agencies are asked to complete a prison admission questionnaire (NCRP-1A) for each new prisoner entry during a reporting year and to send those questionnaires (or corresponding information from facility databases) to the Census Bureau on a flow basis. A prison release record (NCRP-1B) and parole exit record (NCRP-1C) are supposed to be kept on file for each prisoner; if the prisoner is released upon the end of a sentence, the NCRP-1B is to be completed. Otherwise, if the prisoner is placed on parole, the NCRP-1C is intended to be forwarded to the parole authority and sent to the Census Bureau when the person exits parole. If a parole exit results in a return to prison, both a parole exit and a new prisoner entry (1A) record are created (but not directly linked to each other; Bureau of Justice Statistics, 2007d:User Guide 3.4). In addition to basic summary counts of admissions and releases (disaggregated by gender and race), the NCRP compiles information on conviction offenses, sentence length, and completed jail time. NCRP-1D records collect annual end-of-year stock counts of prisoners, though they have only been completed by at most 26 states.

In its studies of the prison population, BJS typically distinguishes between "in custody" and "under jurisdiction" counts. The state or government that has legal authority over the prisoner (under jurisdiction) may transfer physical custody of a prisoner to another government (such as to deal with prison overcrowding). In generating jurisdiction counts, BJS's definition of prison is broader than the classic penitentiary model and includes other facilities where an inmate may be held for long durations, such as halfway houses, boot camps, and treatment centers. However, its custody counts typically exclude prison inmates who may be held in local jails (again, as may be done to deal with prison crowding issues) or privately operated facilities (Sabol and Couture, 2008:9).

Beginning with calendar year 2003, responding correctional systems were provided with a Web reporting option; agencies can also submit questionnaire information on paper forms or computer media. The Census Bureau enters into specific arrangements with each state to provide NCRP data; as of 2003, 41 states (accounting for about 90 percent of the state prison population5), the federal prison system administered by the Bureau of Prisons, and the California Youth Authority contributed data to NCRP (Bureau of Justice Statistics, 2007d).

After about a decade of issuing an annual bulletin summarizing characteristics of both prison and jail inmates as of the middle of the preceding year (e.g., Harrison and Beck, 2006; Sabol et al., 2007), BJS began the process

5 Calculation by panel staff based on 2005 prisoner counts for the nine noncontributing states (Arizona, Delaware, Idaho, Indiana, Kansas, Montana, New Mexico, Ohio, and Wyoming) reported by Sabol et al. (2007:Appendix Table 1).

of issuing separate reports for prisons and jails; the first such separate report, Sabol and Couture (2008), was released in June 2008 and summarized prisoner stocks as of mid-2007. BJS also issues an annual report on the year-end stock of prisoners (e.g., Sabol et al., 2007) that includes totals from a set of sources that only provide data on an annual, year-end basis. Separate from the NPS program, BJS receives year-end counts of prisoners from several sources. The U.S. Department of Defense Corrections Council supplies BJS with prisoner counts (disaggregated by demographic characteristics, branch of service, and basic sentence and offense information) for "persons held in U.S. military confinement facilities inside and outside of the continental United States." BJS receives similar data from U.S. Immigration and Customs Enforcement (ICE) on persons detained for immigration violations; this includes ICE-operated facilities as well as prisoners that ICE arranges to hold in government- or privately operated facilities. Finally, correctional departments in U.S. territories and commonwealths only provide information on the year-end, annual basis (Sabol et al., 2007:10).

Census of State and Federal Correctional Facilities

Every 5–6 years, the Census Bureau has conducted the Census of State and Federal Correctional Facilities for BJS. The Census Bureau uses data provided by the American Correctional Association to update files from its most recent facility census to develop the frame for a new census.

The intended scope of the census is all correctional facilities directly administered by state governments or the federal government and that are primarily intended to house state or federal prisoners. The NACJD codebook for the 2000 correctional facility census describes its scope and specific exclusions as follows (Bureau of Justice Statistics, 2004a:5):

The Census includes the following types of State, Federal, and private correctional facilities intended for adults but sometimes also holding juveniles: prisons, penitentiaries, and correctional institutions; boot camps; community corrections; prison farms; reception, diagnostic, and classification centers; road camps; forestry and conservation camps; youthful offender facilities (except in California); vocational training facilities; prison hospitals; and drug and alcohol treatment facilities for prisoners. . . . [It specifically excludes:] 1) private facilities not primarily for State or Federal inmates; 2) military facilities; 3) Immigration and Naturalization Service facilities; 4) Bureau of Indian Affairs facilities; 5) facilities operated by or for local governments, including those housing State prisoners; 6) facilities operated by the U.S. Marshals Service; 7) hospital wings and wards reserved for State prisoners; and 8) facilities that hold only juveniles.

Facility census data are collected through mailed questionnaires, which are

OVERVIEW OF BUREAU OF JUSTICE STATISTICS DATA SERIES 97 intended to be filled through reference to administrative records (at either the individual facility or state corrections department level). The census includes a battery of questions on the physical characteristics of the facility itself: age and type of the structure, physical security, capac- ity, operating costs, and new construction plans. The census asks for stock information on the composition of the inmate population as of a particular reference date, including breakdowns of inmates being held under contract with another state or authority and the number of juvenile (under age 18) inmates held in the facility. The census includes queries on the number and occupational category of facility staff, the number of misconduct reports filed against prison staff, and incidence of escape attempts or other distur- bances. Inmate interviewing programs (described below) are based in part on facility responses to census questions on the education and health services that are provided for inmates, though the practices reported in the facility- level census have been explored in specific BJS reports such as the analysis of prevalence and treatment programs for hepatitis B and C by Beck and Maruschak (2004). Similarly, Beck and Maruschak (2001) summarize the data on mental health services (and estimated prisoner/patient counts) from the prison census while James and Glaze (2006) study the inmate-reported data from BJS’s prison and jail inmate surveys described below. BJS’s report on the most recent prison census, held in 2005, is by Stephan (2008). In the 2005 administration of the survey, BJS and the Census Bureau used telephone follow-up with state corrections departments to update the list of prisons covered in 2000. The Illinois Department of Corrections did not participate in the 2005 census; the Census Bureau estimated Illinois re- sults using both data that had been supplied in the 2000 prison census as well as data accessible on the Illinois department’s website (Stephan, 2008:7). Surveys of Inmates in State and Federal Correctional Facilities BJS has commissioned the Census Bureau to conduct personal interviews with inmates of state prisons on an irregular basis, every 5–7 years, through the Survey of Inmates in State Correctional Facilities. More recently—first in 1991 with additional sponsorship from BOP and then again in 1997 and 2004—the Census Bureau expanded its interviewing to include prisoners in a sample of federal prisons. BJS and the Census Bureau refer to the broader collection effort as the Surveys of Inmates in State and Federal Correctional Facilities (SISFCF). The state prisoner and federal prisoner components of the SISFCF are drawn in similar ways, though the stratification schemes differ between the two groups (Bureau of Justice Statistics, 2007h:3–8): • The most recent BJS Census of State and Federal Correctional Facili- ties, updated to include known recently constructed facilities, is used

98 JUSTICE STATISTICS as the sampling frame for the state prison group. Male- and female- only institutions are sampled separately, dividing mixed-sex facilities between the two groups. Facilities are drawn with probability pro- portional to institution size from strata defined by geographic region (treating the large states of California, Florida, New York, and Texas as separate strata). (Some large facilities, particularly those report- ing that they provide selected health care services, are treated as self- representing and automatically included in the first stage of sampling.) A systematic sampling scheme and a random start time are then used to sequence interviews at chosen facilities. Within chosen facilities, interviewers are generally able to randomly select prisoners from a facility-provided list of inmates using a bed the previous night. • A BOP-generated facility list is the frame for the federal prisoner por- tion, with facilities chosen with probability proportional to size within strata defined by facility security level (five levels for male prisons, two for female prisons). Two male-only and one female-only facilities were selected with certainty for the sample. Because of added restrictions, BOP staff served as intermediaries in arranging interviews; they se- lected the sample of inmates (systematically, but with oversampling of nondrug offenders) and provided it to facilities 5–7 days before inter- viewing. In 2004, Census Bureau staff completed 14,499 interviews of state prisoners and 3,686 of federal prisoners. Each interview (using CAPI) is about an hour in length, including information on individual characteristics of prison inmates, current offenses and sentences, characteristics of victims, criminal histories, family background, gun possession and use, prior drug and alcohol use and treatment, and prison services. The personal history information included in SISFCF data has been used by BJS staff in several of its “Special Report” series (see Box 1-1 for a de- scription of this type of report). Several of these have focused on prisoner health issues (Maruschak, 2001; James and Glaze, 2006; Maruschak, 2008b) whereas others have focused on other characteristics of the convicted of- fender population, such as veteran status (Mumola, 2000b; Noonan and Mumola, 2007), educational attainment (Harlow, 2003), firearm use and acquisition (Harlow, 2001), and children of incarcerated parents (Mumola, 2000a; Glaze and Maruschak, 2007). 3–B.2 Jails In terms of understanding transitions and flows, local jails are a vitally important part of the criminal justice system. The definition of “local jail” that BJS uses in describing its Annual Survey of Jails (Bureau of Justice Statis- tics, 2007b:4) is telling, for the sheer range of listed functions:

OVERVIEW OF BUREAU OF JUSTICE STATISTICS DATA SERIES 99 Local jails: • receive individuals pending arraignment and hold them awaiting trial, conviction, or sentencing, • readmit probation, parole, and bail-bond violators and abscon- ders, • temporarily detain juveniles pending transfer to juvenile authori- ties, • hold mentally ill persons pending their movement to appropriate health facilities, • hold individuals for the military, for protective custody, for con- tempt, and for the courts as witnesses, • release convicted inmates to the community upon completion of sentence, • transfer inmates for Federal, State, or other authorities, • house inmates for Federal, State, or other authorities because of crowding of their facilities, • relinquish custody of temporary detainees to juvenile and medical authorities, • sometimes operate community-based programs as alternatives to incarceration, • and hold inmates sentenced to short terms (generally less than one year). Frase (1998:100) is more succinct, and blunt, in characterizing local jails as “the custodial dumping ground of last resort, when no other appropriate holding facility is available.” As the points in the preceding definition illus- trate, the incarcerated population housed in and cycling through local jails includes a mixture of short-term stays (e.g., pretrial or prearraignment hold- ing) and long-term stays (e.g., contractual arrangements to house convicted prisoners because of crowding in state prisons). The breadth of custodial arrangements accommodated by local jails and the dynamics of the jailed population make jails a critical feature of the justice system—albeit one that defies neat definition and measurement. Cahalan (1986:7) observes that, “apart from Census Bureau reports done at 10-year intervals, no national [statistical] reports had been done on jails” until LEAA began a program of jail surveys in 1970. As of that point, “the last Census Bureau report on jails to contain special criminal justice related information such as offenses or sentence data had been in 1933.” LEAA fielded initial jail surveys in 1970, 1972, and 1978; these early efforts mod- eled Census Bureau practice by trying to characterize the inmate population present on the day of the survey, rather than quantifying the flow into and out of jails over the course of the year. These initial surveys gave rise to BJS’s current program of jail studies, which is generally similar in structure to its core collections on prisons.
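The stock-versus-flow distinction noted above can be made concrete with a small worked example. The sketch below uses a steady-state approximation and hypothetical admission and length-of-stay figures (not BJS data) to show why a one-day jail count can be far smaller than the number of people who pass through a jail in a year.

```python
# Hypothetical illustration of the stock-versus-flow distinction: a one-day
# ("stock") count can be small even when annual admissions ("flow") are
# large, because many jail stays are short.
def average_daily_population(annual_admissions, avg_stay_days):
    """Approximate one-day inmate count implied by admissions and stay length
    (a steady-state approximation, not a BJS estimation method)."""
    return annual_admissions * (avg_stay_days / 365.0)

admissions = 20_000   # assumed annual bookings for a hypothetical jail
avg_stay = 15         # assumed average length of stay, in days
adp = average_daily_population(admissions, avg_stay)
print(f"Implied average daily population: {adp:.0f}")
# ~822 inmates on a given day, even though 20,000 people pass through in a year.
```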

100 JUSTICE STATISTICS Annual Survey of Jails and Census of Jails BJS’s principal data collection on local jails, with the institution as the unit of analysis, is the Annual Survey of Jails (ASJ), which is intended to collect data on facilities that are administered (either directly or under con- tract to a private firm) by county and municipal governments and that hold inmates for some period after their initial arraignment (i.e., those that typi- cally hold inmates for more than 48 hours). On an irregular basis—roughly every 5 years—the coverage of the ASJ is expanded to include a complete canvass of all known jails, and the results from this Census of Jails becomes the sampling frame for the annual ASJ. Since the ASJ series and periodic jail census began in 1982, the Census Bureau has been engaged as the data collection agent. The work developed from experimental efforts in the 1970s, when congressional interest in cor- rectional facility overcrowding led to a first jail census in 1970. The core content of the ASJ and the Census of Jails is the same, including facility characteristics (structure age, capacity and average daily capacity, and staffing) and inmate demographics (age/sex/race, legal status, and length of stay). In recent years, the ASJ, like the data collections in prisons, have de- veloped to include information on facility services and health care (including the prevalence of HIV/AIDS and tuberculosis in the inmate population). The ASJ and the Census of Jails are still conducted principally by mailout/mailback methodology, though the Census Bureau permitted Inter- net and electronic reporting beginning in 1999. In ASJ (noncensus) years, a sample of jails is drawn using stratified random sampling, using strata de- fined by the reported average daily inmate population in the last census, with some exceptions (for instance, some jails that are regional in scope rather than serving a single jurisdiction are automatically included in the sample, as are facilities that reported housing at least one juvenile offender in the most recent census). Stephan (2001) described the results of the 1999 Census of Jails, empha- sizing comparisons with the 1993 census. Similar questionnaire items in the jail census and the NPS instrument have been used to study HIV prevalence and testing regimes among the incarcerated population (Maruschak, 2001); however, a more up-to-date, electronic-only publication on HIV/AIDS (Maruschak, 2008a) uses only the prison data. Survey of Inmates in Local Jails BJS’s periodic Survey of Inmates in Local Jails (SILJ) collects data on the personal and family characteristics of jail inmates, past drug and alcohol use, history of physical and sexual abuse, and history of contact with the criminal justice system. The survey also probes inmates to provide information on

services offered by the jail system (e.g., health care) and other jail activities and programs. The survey relies on personal interviews with a nationally representative sample of almost 7,000 inmates; as of the 2002 version of the survey, the roughly hour-length interviews are now completed using CAPI. The survey is intended to be nationally representative of "persons held prior to trial and on those convicted offenders serving sentences in local jails or awaiting transfer to prison" (Bureau of Justice Statistics, 2006d:5). The most recent Census of Jails serves as the frame for the SILJ; in 2002, the sample of 7,750 inmates was drawn using a two-stage design, selecting jails from within strata defined on the basis of a jail's proportions of adult male, adult female, and juvenile inmates and then sampling within selected jails. Of the 7,750 names chosen for inclusion in the sample, 6,982 interviews were completed, with 263 inmates refusing to participate, 407 having exited the jail system between selection and interviewing, and 98 who could not be interviewed for medical or security reasons (Bureau of Justice Statistics, 2006d:5–6).

The SILJ is conducted periodically, if not regularly. Documentation for the 2002 implementation of the survey describes its frequency as every 5 to 6 years (Bureau of Justice Statistics, 2006d:5), though 7-year gaps have occurred. As of 2002, BJS contracts with the Census Bureau to conduct SILJ interviews.

BJS has issued both general summaries of characteristics of the jail population based on the SILJ (Harlow, 1998; James, 2004) and detailed probes of particular SILJ topic areas. For example, Maruschak (2006) details current medical problems reported by jail inmates, including assessments of whether the problems are related to fight- or accident-related injuries; Maruschak (2008b) performs a similar analysis based on prisoner survey data; and SILJ data were used in the analysis by James and Glaze (2006) of mental health problems and disorders among the incarcerated.

3–B.3 Custodial Conditions

Deaths in Custody

The Death in Custody Reporting Act of 2000 (P.L. 106-297) required that states provide the U.S. Justice Department with quarterly "information regarding the death of any person who is in the process of arrest, is en route to be incarcerated, or is incarcerated" at any correctional facility as a condition for receiving federal grant assistance.6 The law requires that, "at

6 Technically, the act was attached to the authorization for the broader Violent Offender Incarceration and Truth-in-Sentencing grant program, which dispersed $5.2 billion beginning in 1998 to the states to expand prison capacity. Correctional systems that accepted those funds had to agree to the Deaths in Custody reporting as a condition of the grant (Mumola, 2005:2).

102 JUSTICE STATISTICS a minimum,” this information include personal characteristics (name, age, gender, race, ethnicity) and details of the death (date, time, location, and brief description of circumstances). Coverage in Deaths in Custody reporting was added in stages, begin- ning with data on deaths in local jails in 2000. In 2001, state prisons were added; state juvenile correctional facilities followed in 2002, and in 2003 the program began attempting to measure deaths in the process of arrest. As described in Box 3-3, the act was up for reauthorization in 2007; H.R. 3971 reimplements the act with the added requirement of a Justice Depart- ment study of means for reducing deaths in custody. The updated legislation passed in the House of Representatives in January 2008; the Senate Judi- ciary Committee reported a modified version of the bill in late September 2008. Though the reauthorization is still pending, BJS has worked to ex- pand coverage to include deaths in ICE facilities (Sedgwick, 2008:2). Deaths in Custody data collection functions are shared by the Census Bu- reau and BJS (through its state Statistical Analysis Centers; see Section 4–A). On a quarterly basis, the Census Bureau collects inmate death records from each of the nation’s state correctional systems (adult and juvenile) and from local jails. Data coded from these records include the deceased’s personal characteristics (age, gender, and race/ethnicity), their criminal background (legal status, offense types, length of stay in custody), as well as details of the death itself (the date, time, location, and cause of each death, as well as infor- mation on autopsies and medical treatment provided for illnesses/diseases). BJS collects quarterly reports from state and local law enforcement agencies (known from the Census of State and Local Law Enforcement Agencies, as described below) on deaths incurred during the process of arrest. Though collected on a quarterly basis, reports and tabulations from the Deaths in Custody program are only reported in annual formats. To date, BJS has used the Deaths in Custody data to produce three analyt- ical reports on differing aspects of the data: an analysis of deaths concluded to be suicides or homicides (Mumola, 2005) was followed by a more gen- eral inventory of the medically determined causes of deaths recorded in the data (Mumola, 2007b), and finally a study making use of the newer data on deaths occurring in the process of arrest (Mumola, 2007a). BJS has since established a website page dedicated to statistical tables from the Deaths in Custody data (http://www.ojp.usdoj.gov/bjs/dcrp/dictabs.htm) which is up- dated with new annual counts. Survey on Sexual Violence and National Inmate Surveys The data collections on the incidence of rape and sexual violence in cor- rectional facilities, as mandated by the Prison Rape Elimination Act of 2003, are described and discussed in Section 5–A.1.
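Returning briefly to the Deaths in Custody reporting described above, the sketch below gives a rough illustration of how quarterly death records with the fields described there might be rolled up into the annual tabulations BJS publishes. The field names and example records are placeholders, not the actual Deaths in Custody file layout.

```python
# A sketch (not the actual Deaths in Custody file layout) of rolling up
# quarterly death records into annual counts by cause of death.
from collections import Counter
from dataclasses import dataclass

@dataclass
class DeathRecord:
    year: int
    quarter: int           # 1-4
    facility_type: str     # e.g., "local jail", "state prison"
    age: int
    sex: str
    race_ethnicity: str
    cause: str             # e.g., "illness", "suicide", "homicide"

def annual_counts_by_cause(records, year):
    """Collapse quarterly records into an annual tabulation by cause of death."""
    return Counter(r.cause for r in records if r.year == year)

# Hypothetical records for illustration only.
records = [
    DeathRecord(2006, 1, "local jail", 34, "M", "White", "suicide"),
    DeathRecord(2006, 3, "state prison", 58, "M", "Black", "illness"),
    DeathRecord(2006, 4, "state prison", 45, "F", "Hispanic", "illness"),
]
print(annual_counts_by_cause(records, 2006))  # Counter({'illness': 2, 'suicide': 1})
```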

3–B.4 Capital Punishment

A separate component of the NPS program, known as NPS-8, was designated in 1972 to collect annual data on prisoners under a death sentence, as well as transitions out of "death row" (e.g., through commutation or vacation of a capital sentence). By counting the number of executions performed, BJS's capital punishment program represents a continuation of a data series dating to 1930; the fuller detail on death sentencing and inmate characteristics began in 1972. The Census Bureau, as data collector for the NCRP, also collects the capital punishment data.

Part of the Capital Punishment data collection is an annual update of death penalty statutes in the various states; the Census Bureau sends a separate questionnaire to state justice departments, including questions on any actions by state supreme courts, the minimum age at which persons can be sentenced to death, and the methods of execution authorized by state law.

The codebook for a compilation of BJS's capital punishment data for the 1972–2006 time series notes several reasons why BJS's counts may be discrepant from those recorded by other authorities (Bureau of Justice Statistics, 2007c:4):

(1) NPS-8 adds inmates to the number under sentence of death not at sentencing but at the time they are admitted to a State or Federal correctional facility. (2) If in one year inmates entered prison under a death sentence or were reported as being relieved of a death sentence but the court had acted in the previous year, the counts are adjusted to reflect the dates of court decisions. (3) NPS counts are always for the last day of the calendar year and will differ from counts for more recent periods.

Prior to data collected in 2006, BJS summarized findings from the capital punishment data in an annual bulletin (Snell, 2006); it has since switched to electronic-only dissemination of spreadsheet tables (Snell, 2007, 2008). It has also registered a combined 1973–2005 data set in the NACJD (Bureau of Justice Statistics, 2007c), covering all persons on death row since 1972, capable of analysis by state, basic demographic characteristics, capital offense type(s), and status (e.g., still awaiting execution or removed from death sentence).

3–B.5 Inventory of State and Federal Corrections Information Systems

In 1998, BJS, the National Institute of Justice (NIJ), and the Corrections Program Office jointly sponsored a study by the Urban Institute on the general state of offender-based corrections information systems at the state and federal levels. The Urban Institute was specifically tasked to describe the capacity of these systems for record linkage and electronic exchange of records. In carrying out this study, the institute obtained the assistance of

the State-Federal Committee of the Association of State Corrections Administrators. The final report of this inventory of systems was issued as Bureau of Justice Statistics (1998b); to date, the study has not been repeated.

3–B.6 Probation and Parole

Some basic information on exit from parole status is collected in the NCRP (described in Section 3–B.1 above), which absorbed the former Uniform Parole Reports program in 1983. The Uniform Parole Reports series began on an experimental basis in 1966, coordinated by the National Council on Crime and Delinquency with financial support from the National Institute of Mental Health (Bureau of Justice Statistics, 2007d:4). It began by collecting data from selected state parole boards for which records were available, but developed nationwide coverage over several years, due in part to a feasibility study of yearly reporting funded by the LEAA in 1975 (Cahalan, 1986:7).

BJS sponsors an Annual Probation Survey and an Annual Parole Survey; collectively, they are described as the Probation and Parole Data Surveys. Between 1993 and 2005, BJS collected these data in-house, but it contracted with the Census Bureau as data collector from 1980 to 1992 and from 2006 to the present. The probation and parole surveys are establishment surveys, intended to be filled by agency authorities through reference to administrative records. According to the methodology note in Glaze and Bonczar (2007:9), this means that the 2006 version of the survey questionnaires was sent to 463 probation agencies7 and 54 parole agencies. About 13 probation agencies failed to supply data and a few others provided only partial returns, necessitating imputation procedures; the state of Illinois was the only parole authority not to report in 2006. As agency-level collections, the Probation and Parole Data Surveys focus principally on aggregate counts of entries and discharges, though some data are also collected on demographic characteristics and offense types of the agencies' service population; questions are also asked about the use of procedures such as electronic monitoring. BJS staff worked to expand the coverage of the Annual Probation Survey in the late 1990s, adding 175 agencies to the survey's frame between 1995 and 2006 (Glaze and Bonczar, 2007:9).

On a one-shot basis in the early 1990s, BJS conducted a fuller study of the probation population through a contract with the Census Bureau. In 1991, the agencies mounted a Census of State and Local Probation and Parole Agencies to generate a complete inventory of agencies operated by federal, state, and local governments. This census produced facility-level

7 In some states, local courts have the direct authority to supervise probationers. Hence, almost 70 percent of the 463 eligible reporting probation agencies in 2006 were concentrated in two states: 185 and 128 in Ohio and Michigan, respectively (Glaze and Bonczar, 2007:9).

OVERVIEW OF BUREAU OF JUSTICE STATISTICS DATA SERIES 105 data on staffing, expenditure, and basic procedures (e.g., frequency of drug or HIV testing required of probationers). However, the primary function of the census was to serve as the sampling frame for a one-shot Survey of Adults on Probation in 1995, the first nationally representative sample of probationers that had been drawn and analyzed to date. The personal in- terview with sampled probationers included detailed questions on drug and alcohol use, criminal history, and the extent of their contact with their su- pervising probation authorities. Save for a legislative mandate under the Prison Rape Elimination Act of 2003 to query a sample of the probation and parolee population about the incidence of sexual violence during im- prisonment, the 1995 survey is BJS’s only personal-interview measurement of community corrections to date. 3–B.7 Recidivism In 1983 and 1994, BJS tracked large samples of released prisoners for 3 years. BJS described the need for these studies as being motivated by “widespread demand for information on the topic of recidivism” (Bureau of Justice Statistics, 2002:1): Among the many information requests that come to the U.S. Depart- ment of Justice each day—from departments of corrections, elected of- ficials, policy makers, the media, members of the general public—one of the most frequent is for facts regarding recidivism. Legislators draft- ing community notification laws, for example, wish to know how often released sex offenders commit a new sex offense. Departments of cor- rections need to learn how the recidivism rate in their State compares to the national rate. Of special interest to the FBI is the extent to which released sex offenders become involved in criminal activity in States other than the State in which they had served time. This is important information relevant to the development of a national DNA registry. The databases compiled in the 1983 and 1994 studies drew from crim- inal history information recorded on “rap sheets” (Records of Arrests and Prosecutions) and derived multiple measures of recidivism or resumption of criminal activity. The 1983 study tracked about 16,000 prisoners released from 11 state corrections systems; the 1994 study reached 15 states and 38,624 prisoners. In both cases, the sample of states was purposive—that is, based on a state’s willingness and ability to cooperate—while the samples of prisoners within states was generally drawn based on stratification by most severe conviction offense. At the time these recidivism studies were conducted, BJS researchers had no access to the national criminal history record databases compiled by the FBI (see Section 4–B for subsequent developments). Participating correction departments turned over lists of all prisoners released in the reference year; BJS drew its samples and returned a list of identifiers to the state depart-

ments, asking for computerized rap sheets for those prisoners. Separately, BJS also provided the set of identifiers for sampled prisoners to the FBI, asking it to query its databases—particularly useful to get information on offenses committed outside the state that released the prisoner. The state and FBI records were combined to form a master database—in 1994, one that included 6,427 variables on the 38,624 prisoners, 6,336 of which provided criminal histories for up to 99 "arrest cycles" per prisoner (Bureau of Justice Statistics, 2002:6–7).8

The four recidivism measures derived for each prisoner were rearrest, reconviction, resentence to prison, and return to prison with or without a new prison sentence. The BJS report on the 1994 study (Langan and Levin, 2002:2) follows the description of these four measures with a disclaimer that the measures are likely to be underestimates:

To an unknown extent, recidivism rates based on State and FBI criminal history repositories understate actual levels of recidivism. The police agency making the arrest or the court disposing of the case may fail to send the notifying document to the State or FBI repository. Even if the document is sent, the repository may be unable to match the person in the document to the correct person in the repository or may neglect to enter the new information. For these reasons, studies such as this one that rely on these repositories for complete criminal history information will understate recidivism rates.

BJS conducted the recidivism studies in-house, with assistance in data collection and processing from the Regional Justice Information Service. Funding for the 1994 study was received, in part, from the FBI and the Corrections Program Office within the Office of Justice Programs (Langan and Levin, 2002:16).

On a one-time basis, in 1986, BJS drew from criminal history records to track a sample of convicted felons for 3 years upon their entry into probation. This collection generated estimates of the percentage of probationers who were rearrested, reconvicted, or reimprisoned for new crimes during the study period.

3–C LAW ENFORCEMENT

In its final report, the President's Commission on Law Enforcement and Administration of Justice (1967:10) was struck by the difficulty in studying "law enforcement" as a unified entity, where policy changes made on high directly affect the public experience at the street level:

8 Ten prisoners in the 1994 sample had more than 99 arrest cycles, and one had 176 different arrests on record. In these cases, the earliest arrests were dropped to fit the 99-maximum limit of the database (Bureau of Justice Statistics, 2002:7).

OVERVIEW OF BUREAU OF JUSTICE STATISTICS DATA SERIES 107 At the very beginning of the process—or, more properly, before the process begins at all—something happens that is scarcely discussed in lawbooks and is seldom recognized by the public: law enforcement pol- icy is made by the policeman. For policemen cannot and do not arrest all the offenders they encounter. It is doubtful that they arrest most of them. A criminal code, in practice, is not a set of specific instructions to policemen but a more or less rough map of the territory in which policemen work. How an individual policeman moves around that ter- ritory depends largely on his personal discretion. . . . Every policeman [is] an interpreter of the law. In a sense, it is difficult to draw a clear conceptual line between “crime statistics” and “law enforcement statistics” because the two concepts are so intertwined; much of the task of law enforcement is identifying and respond- ing to crime. Indeed, law enforcement agencies are a major provider of statistical data on crime. Still, for purposes of this report, we can define “law enforcement statistics” as those that describe the activity and social organization of law enforcement agents and agencies, where social organi- zation includes organizational structure, resources, personnel, policies, and tactics. Under this rubric, the mobilization of the police by citizens and the response of police to crime events would be considered part of law enforce- ment statistics. Law enforcement and the more general concept of “crime statistics” intersect when the police serve as the source of data to identify and characterize crimes. As we review in this section, and discuss elsewhere in this report, BJS’s data collections to date in the area of law enforcement (see Table 3-6 for a collection timeline) have heavily emphasized a top-level, management and administration focus. Though BJS has framed collections related to special- purpose agencies such as campus law enforcement departments or medi- cal examiners offices, the data content is administrative in focus, describing workforce levels, available resources, and general workload. Like the NCVS, BJS’s law enforcement data collections share some sub- stantive overlap with components of the FBI’s UCR program; the law en- forcement aspects of the UCR are summarized in Box 4-4. 3–C.1 Law Enforcement Management and Administrative Statistics The core BJS data collection in the area of law enforcement is the Law Enforcement Management and Administrative Statistics (LEMAS) survey of agency administrators that has been conducted roughly every 3 years since 1987. Most recently, BJS has used the Police Executive Research Forum (PERF) as the data collection agent for LEMAS. Langworthy (2002:23) observes that “the LEMAS survey has its roots in salary surveys conducted both by the Fraternal Order of Police (FOP) and the Kansas City Police Department (KCPD),” which were conducted

108 Table 3-6 Bureau of Justice Statistics Data Collection History and Schedule, Law Enforcement, 1981–2009 Series 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 00 01 02 03 04 05 06 07 08 09 Total Law Enforcement Management and Administrative Statistics (LEMAS) Survey · · · · · · • · · • · · • · · · • · • • · · • · · · • · · 8 Community Oriented Policing Supplement · · · · · · · · · · · · · · · · • · • · · · • · · · · · · 3 Census of State and Local Law Enforcement Agencies · · · · · • · · · · · • · · · • · · · • · · · • · · · • · 6 Special Agency and Service Agency Censuses Federal Law Enforcement Agency Census · · · · · · · · · · · · • · · • · • · • · • · • · • · • · 8 Campus Law Enforcement Survey · · · · · · · · · · · · · · • · · · · · · · · • · · · · · 2 Census of Law Enforcement Training Academies · · · · · · · · · · · · · · · · · · · · · • · · · • · · · 2 Census of Medical Examiner and Coroner Offices · · · · · · · · · · · · · · · · · · · · · · · • · · · · · 1 Census of Publicly Funded Forensic Crime Labs · · · · · · · · · · · · · · · · · · · · · • · · • · · · · 2 National Survey of DNA Laboratories · · · · · · · · · · · · · · · · · • · · • · · · · · · · · 2 Survey of Law Enforcement Gang Units · · · · · · · · · · · · · · · · · · · · · · · · · • · · · 1 State Police Traffic Stop Survey · · · · · · · · · · · · · · · · · · • · • · · • · · · · · 3 Traffic Stop Statistics · · · · · · · · · · · · · · · · · · · · · • · · · • · · · 2 NOTES: •, data collected. ·, data not collected. SOURCE: Bureau of Justice Statistics. JUSTICE STATISTICS

OVERVIEW OF BUREAU OF JUSTICE STATISTICS DATA SERIES 109 on an annual basis beginning in the early 1950s. Over the years, the two collections took different approaches, with the FOP survey emphasizing more complete coverage of police departments and the KCPD effort tar- geting large-jurisdiction departments (approximately 40) but expanding the range of questions. After the KCPD was forced to discontinue its collection, it collaborated with the Police Foundation and PERF on two general sur- veys of police operational and administrative practices in 1977 and 1981. BJS commissioned a study in 1983–1984 on the utility of a recurring sur- vey of police agencies; that study “established that there was considerable demand for comparative police organizational data captured on a recurring basis from both the police practice and research communities” (Langworthy, 2002:23–24, summarizing Uchida et al., 1984). Additional background on recent and historical survey series of law enforcement agencies is given by Maguire (2002). As the name suggests, the focus of the survey is on organizational and ad- ministration matters. In addition to acquiring counts of sworn and civilian employees and information on budgetary resources, the LEMAS survey in- strument queries agencies about whether they follow certain policies or have specific technical resources. For example, the instrument asks about use of academies and special curriculum for training new recruits and equipment provided to officers (e.g., distribution of weapons or armor to officers and placement of computers in patrol cars). To show the basic nature of the ques- tionnaire, 2 of the 11 pages of the 2003 LEMAS questionnaire are excerpted in Figures 3-3 and 3-4. As the figures suggest, the LEMAS instrument is an establishment survey that is intended to be filled out with relatively little need to refer to available records; questions are generally multiple choice. Under the current LEMAS framework, agencies with 100 or more sworn officers as of the most recent Census of State and Local Law Enforce- ment Agencies (see next section) are always included in the sample as self- representing units. In 2003, this included 574 local police departments, 332 sheriffs’ offices, and the 49 state police agencies. A stratified random sample (by type of agency, size of service population, and number of sworn offi- cers) of agencies with fewer than 100 sworn personnel makes up the rest of the LEMAS sample, as non-self-representing units. In 2003, 2,199 agencies were selected for inclusion. An additional 25 agencies had been designated for inclusion but had either “closed, outsourced their operations, or were operating on a part-time basis,” ruling them out of scope. Of the 3,154 agencies contacted by mail in 2003, 2,869 responded to the survey, for a 91 percent response rate (Bureau of Justice Statistics, 2006c:4–5). As of the 2003 administration of the LEMAS survey by PERF, agen- cies were allowed to respond to the mailed questionnaire by any of several modes: mail, fax, or Internet. In the case of Internet responses, entries could be typed into a fillable PDF form; however, Internet respondents were re-

Figure 3-3 Law Enforcement Management and Administrative Statistics questionnaire, 2003, p. 7

Figure 3-4 Law Enforcement Management and Administrative Statistics questionnaire, 2003, p. 8

In the case of Internet responses, entries could be typed into a fillable PDF form; however, Internet respondents were required to type the ID number from the printed questionnaire received by mail into the electronic form.

BJS has reported results from the LEMAS survey in large publications covering major segments of the sample—agencies with 100 or more officers in the 2000 administration of the survey (Reaves and Hickman, 2004) and separate Sheriffs' Offices (Hickman and Reaves, 2006b) and Local Police Departments (Hickman and Reaves, 2006a) reports from the 2003 data. LEMAS data have also spawned specific BJS reports on the adoption of community-oriented policing practices (Hickman and Reaves, 2001), the frequency of citizen complaints of police use of force (Hickman, 2006), and long-term trends specific to large-city police departments (Reaves and Hickman, 2002b).

3–C.2 Census of State and Local Law Enforcement Agencies

Conducted on a 4-year cycle, the Census of State and Local Law Enforcement Agencies (CSLLEA) serves primarily to produce the sampling frame for the main LEMAS survey. It also provides the frame for some of the special-agency censuses described in the next section. The CSLLEA is sometimes known, and is archived in the NACJD, as the Directory of Law Enforcement Agencies (or the Directory Survey) for its comprehensive focus, providing data on all state and local law enforcement agencies. Its intent is to gather information on "all police and sheriffs' departments that were publicly funded and employed at least one full-time or part-time sworn officer with general arrest power" (Bureau of Justice Statistics, 2003a).

The 2000 version of the questionnaire, administered by the Census Bureau as data collector, is reproduced in full in Figures 3-5 and 3-6. As a frame- or directory-building operation, the CSLLEA is a short, two-page questionnaire. The questionnaire includes standard items on the number of sworn and civilian personnel and budget level (with a specific question on drug asset forfeiture); question 5, on functions "perform[ed] on a routine basis," is one that can be used to determine the presence of some of the policies or practices that may be queried in greater detail in the special-agency censuses.

Between the 2000 and 2004 administrations of the CSLLEA, BJS changed data collection agents for the census, switching from the Census Bureau (2000) to the National Opinion Research Center (NORC; 2004). In both years, responses were permitted by mail, fax, or Internet. In 2004, NORC and BJS developed the contact list for the CSLLEA by updating the 2000 directory with lists of agencies requesting an Originating Agency Identifier (ORI) from the FBI since 2000, as well as lists provided by Peace Officer Standards and Training offices.

Although its principal purpose is internal—to provide the basis for sub-

OVERVIEW OF BUREAU OF JUSTICE STATISTICS DATA SERIES 113 sequent surveys—BJS has issued bulletins following the completion of the CSSLEA, providing general statistics on the comparative size and workforce of agencies (Reaves and Hickman, 2002a; Reaves, 2007). 3–C.3 Special-Agency and Service-Agency Censuses BJS has conducted several data collections on special-focus law enforce- ment agencies (e.g., police forces maintained by colleges and universities) as well as agencies that support law enforcement in various ways (e.g., med- ical examiners’ and coroners’ offices). These special agency censuses tend to follow the basic mold and cover information similar to the LEMAS sur- vey, though they are not “branded” as a part of or supplement to LEMAS. They typically tend to be censuses that are intended to capture data on the agencies of a particular type and, as such, function largely as a directory or catalog of agencies. Though this directory-building function could be the basis for follow-up sample surveys (asking, perhaps, more extensive ques- tions on agency policies, practices, and experiences), this typically has not been done; in reference to these data collections, the “survey” label is used when a “census” label might be more appropriate. Survey of Campus Law Enforcement Agencies The original 1995 Survey of Campus Law Enforcement Agencies was motivated by concern over the coverage of college police departments in the CSLLEA and LEMAS. By their design and their focus on law enforcement agencies affiliated with governmental units, campus police forces commonly fall out of the LEMAS scope. “Because LEMAS includes only a small number of agencies serving public colleges and universities in its sample and does not include any of those at private institutions,” BJS concluded that a special sur- vey of college campuses was warranted (Bureau of Justice Statistics, 1997c). BJS worked with the International Association of Campus Law Enforcement Administrators (IACLEA) in developing the 1995 survey; contact informa- tion for an IACLEA representative was included on the 1995 questionnaire, though BJS handled the data collection in-house. The collection focuses on 4-year institutions with 2,500 or more stu- dents, excluding the U.S. military academies, professional schools, and for- profit schools. For the 2004 administration, the scope was increased to include 2-year public colleges with enrollments of 10,000 or more (Reaves, 2008). Similar to LEMAS content, data are collected on agency personnel, expenditures and pay, arrest powers (e.g., whether limited to on-campus in- cidents), operations, equipment, computers and information systems, poli- cies, and special programs; college-specific questions ask for enrollments of full- and part-time students as both undergraduates and graduates.

Figure 3-5 Census of State and Local Law Enforcement Agencies questionnaire, p. 1

Figure 3-6 Census of State and Local Law Enforcement Agencies questionnaire, p. 2

116 JUSTICE STATISTICS To date, the Survey of Campus Law Enforcement Agencies has only been performed in 1995 and 2004; descriptions at the time of the initial 1995 collection imply that it was hoped that the survey could be repeated as early as 1997, but this was apparently not done until 2004. Federal Law Enforcement Agency Census BJS’s Federal Law Enforcement Agency Census, conducted every 2–3 years since 1993, concentrates on federal agencies with general law enforce- ment and criminal investigation authority. In the 2004 version of the survey, this broad definition covered 65 federal agencies, including 27 offices of in- spector general at cabinet departments or independent agencies; however, it was not meant to cover the U.S. armed forces, and the Central Intelligence Agency and the Transportation Security Administration’s Federal Air Mar- shal program were excluded “because of classified information restrictions.” Agencies were asked to report on the number of officers assigned to various duties, such as police patrol or security and protection. Officer counts are meant to include “personnel with Federal arrest authority who were also authorized (but not necessarily required) to carry firearms while on duty” (http://www.ojp.usdoj.gov/bjs/pub/ascii/fleo04.txt). Forensic and DNA Crime Laboratories In 1998–1999, NIJ provided funding for BJS to conduct the National Study of DNA Laboratories, as part of NIJ’s larger DNA Laboratory Im- provement program. Conducted in-house by BJS staff, the survey of DNA laboratories was repeated in 2001; in this second administration, BJS sup- plemented its frame from the 1998 wave by adding laboratories that had applied for grants from NIJ and checking against lists of participants in the FBI’s Combined DNA Index System repository (Steadman, 2002:2, 7). The survey focused on publicly operated forensic crime laboratories that per- form DNA analyses. According to its NACJD codebook (Bureau of Justice Statistics, 2003c) The survey included questions about each lab’s budget, personnel, pro- cedures, equipment, and workloads in terms of known subject cases, un- known subject cases, and convicted offender DNA samples. The survey was sent to 135 forensic laboratories, and 124 responses were received from individual public laboratories and headquarters for statewide forensic crime laboratory systems. The responses included 110 publicly funded forensic laboratories that performed DNA testing in 47 states. In 2002 and 2005, BJS set out to conduct a fuller Census of Publicly Funded Forensic Crime Laboratories, including those that may not perform DNA testing. In addition to information on the range of services provided by the laboratories, a major focus of the censuses was on workload and backlog

of pending cases. Though labeled as the 2002 and 2005 censuses, the data collection for these studies was actually conducted in 2003–2004 and 2006–2007, respectively, for facilities known to exist in the nominal (2002 or 2005) year. In 2002, the Survey Research Laboratory of the University of Illinois at Chicago was awarded a grant to conduct the census; in 2005, the data collection grant was won by Sam Houston State University. In both cases, the universities consulted with the American Society of Crime Laboratory Directors on questionnaire content and development of frames and mailing lists (Bureau of Justice Statistics, 2005b, 2008b). The 2002 and 2005 studies were summarized in BJS reports by Steadman (2002) and Peterson and Hickman (2005), respectively.

Census of Law Enforcement Training Academies

To date, BJS has twice conducted a Census of Law Enforcement Training Academies, obtaining information on the number and types of staff employed at these training facilities, their budget and funding sources, the number of trainees, and their general policies and practices. In addition to these basic organizational data, the survey collected information on training curriculum issues critical to current law enforcement policy development; for instance, questions asked whether the nature of terrorism and tactics to respond to terrorist attacks are part of the training program. The initial 2002 administration of the collection was partially funded by the Justice Department's Office of Community Oriented Policing Services (COPS); the COPS office also provided input on the questionnaire. The collection was repeated in 2006.

The 2002 census found "a total of 626 law enforcement academies operating in the United States [that] offered basic law enforcement training to individuals recruited or seeking to become law enforcement officers. This includes 274 county, regional, or State academies, 249 college, university, or technical school academies, and 103 city or municipal academies" (Hickman, 2005b:1). The report summarizing the collection notes only that the list of agencies "was compiled from a variety of sources, including professional associations, State law enforcement training organizations, and existing law enforcement data collections" (Hickman, 2005b:21).

BJS contracted with PERF to conduct the data collection and consulted with the International Association of Directors of Law Enforcement Standards and Training (IADLEST); IADLEST was also enlisted to provide a supporting letter to bolster participation and help with nonresponse follow-up efforts.

118 JUSTICE STATISTICS Census of Medical Examiner and Coroner Offices Conducted once to date, in 2004, the Census of Medical Examiner and Coroner Offices was conducted by RTI International under contract to BJS; the National Association of Medical Examiners and the International Asso- ciation of Coroners and Medical Examiners were enlisted to help with the development of the questionnaire and to encourage individual offices to re- spond to the query. The Centers for Disease Control and Prevention (CDC) generated an initial list of offices (because CDC regularly compiles morbid- ity and mortality data from state and local offices) to contact in the survey, and the list and contact information was updated by RTI. RTI developed mixed-mode response options for the census; in addition to mailed paper questionnaires, individual coroner offices were permitted to respond through an online website or by fax. In all, 1,998 offices responded to the census (85.9 percent response rate). Much like the other special-agency censuses, the medical examiner cen- sus focused on administrative characteristics such as staffing levels, expen- ditures, and workload; general findings from the collection are reported by Hickman et al. (2007). One particular line of inquiry in the data collection concerned the number of unidentified human remains in the custody of each office. Curiously, when BJS developed a special “Fact Sheet” on unidenti- fied human remains (Hughes, 2007), it did so after BJS obtained access to the FBI’s Unidentified Person File (a voluntary reporting system) and the National Center for Health Statistics’ National Death Index. Through the National Death Index, BJS derived a time series for the span 1980–2004, but the fact sheet made no attempt to compare the latest of these estimates (based on individual death records reported by state vital statistics offices) with the agency-level totals from BJS’s own census of coroner offices. Special-Agency Censuses Under Development In 2007 and 2008, BJS filed Information Collection Review packages to the U.S. Office of Management and Budget (OMB) for several additional special-agency censuses. The Census of Law Enforcement Aviation Units follows up on findings from the 2003 LEMAS survey that about 250 ser- vice units provide helicopter or fixed-wing aircraft service for state and local law enforcement agencies. BJS also filed a request to obtain clearance from OMB to conduct the Survey of Law Enforcement Gang Units. The support- ing statement for that collection indicates that the data collection is being initiated, in part, because of a department-level antigang initiative (“one of the current top priorities for the Attorney General and the Department of Justice is the development of more effective programs to prevent gang vio- lence and enforce anti-gang laws when such violence does occur”).

3–C.4 Police Traffic Stop Data

NCVS Supplement: Police-Public Contact Survey

As described in Section 3–A.2, the PPCS supplement to the NCVS grew directly out of a legislative mandate to study "excessive force by law enforcement officers" (P.L. 103-322).9 Faced with this mandate, BJS adopted a strategy that it would later use—albeit on a much larger scale—when organizing data collections to respond to the Prison Rape Elimination Act of 2003 (see Section 5–A.1). BJS convened a workshop on police use of force in May 1995 to highlight challenges in systematic measurement of the phenomenon (McEwen, 1996). BJS then began with an administrative, facility-level study; BJS and NIJ jointly funded a study by the International Association of Chiefs of Police that contacted 110 agencies in 1995 and about 30 agencies in both 1996 and 1997. From those contacts, it was concluded that "the police use of force rate was 4.19 per 10,000 responded-to-calls for service, or 0.0419 percent," in 1996 (Henriquez, 1999:21; see also Fyfe, 2002). At the same time, BJS set into motion a plan to gather data through direct survey interviewing. The PPCS was constructed and fielded on a pilot basis in 1996, evolving into a triennial collection. In building the PPCS, BJS approached the problem of public interactions with police more broadly than the "excessive force" text of the act envisioned. The pilot PPCS (and its successors) was fielded "with the goal of better understanding the types and frequency of contacts between the police and the public, and the conditions under which force may be threatened or used" (U.S. Government Accountability Office, 2007:7).

As we discuss further in Section 5–A.2, disputes over the release of data from the 2002 PPCS, and in particular the evidence it presented about differential treatment by race during traffic stops, led to the termination of a BJS director.

State Police Traffic Stop Data Collection Procedures

BJS staff have also periodically compiled what might be considered metadata related to traffic stops. In 1999, 2001, and 2004, staff contacted the nation's 49 primary state police agencies10 with a questionnaire asking about policies for recording data on the race and ethnicity of persons stopped for traffic violations. This effort does not collect, and does not intend to collect, actual traffic stop records or even counts of traffic stops, but merely whether demographic data are routinely recorded and whether those data are electronically accessible.

9 The same law also requires the attorney general to "publish an annual summary" of these data (42 USC § 14142).
10 "Hawaii does not have a state police agency" (Bureau of Justice Statistics, 2006c:3).
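The two forms in which the IACP figure is quoted—a rate per 10,000 responded-to calls and a percentage—are the same quantity expressed at different scales. The short sketch below uses invented counts chosen only so that the arithmetic reproduces the published 1996 rate; they are not the study's actual tallies.

```python
# Convert an incident count and a call-for-service volume into the two
# equivalent figures quoted in the text: a rate per 10,000 calls and a
# percentage. The counts are illustrative, not the IACP study's raw numbers.
force_incidents   = 4_190
calls_for_service = 10_000_000

rate_per_10k = force_incidents / calls_for_service * 10_000
percentage   = force_incidents / calls_for_service * 100

print(f"{rate_per_10k:.2f} per 10,000 responded-to calls")  # 4.19
print(f"{percentage:.4f} percent")                          # 0.0419
```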

120 JUSTICE STATISTICS 3–D ADJUDICATION, PROSECUTION, AND DEFENSE Detailed as they are, the legal duties of BJS do not explicitly mention the judicial processing of criminal trials. However, BJS’s general charge to statistically document the “operations of the criminal justice system at the Federal, State, and local levels” does clearly give BJS the mandate to generate statistics on the operations of the courts, because the courts are an integral part of the system. This is a task that is as difficult as it is unusual, from an operational standpoint: a small federal agency tasked with being a data collector on a separate branch of government with highly decentralized operations that vary greatly by locality. Article III of the U.S. Constitution established the federal court system, specifically creating the U.S. Supreme Court and reserving to Congress the authority to create lower federal courts. The modern federal judiciary in- cludes 94 U.S. District Courts and 13 U.S. Courts of Appeals, as well as U.S. Bankruptcy Courts and Courts of Claims and International Trade.11 The federal courts have primary jurisdiction in cases involving federal laws and treaties, as well as disputes between multiple states. Data from the federal court system are collected and disseminated by the Administrative Office of the U.S. Courts. The powers and areas of jurisdiction that are not explicitly assigned to the federal courts are the province of the state court system. Individual state constitutions and laws create the network of courts of original jurisdiction for civil and criminal cases, as well as appellate structures. States vary greatly in their court organizational structures, in the number of original courts, and in the number of appellate layers. Though many of the states have a single court of last resort (e.g., the state supreme court) and a single intermediate court of appeals, this is not a universal rule: as of 2007, 11 states have no intermediate court of appeals, so that appeals from district and other trial courts are appealed directly to the state supreme court.12 Likewise, some states have two courts of last resort, one for civil and one for criminal cases. In addition to appellate structure, individual state court systems also vary in the degree to which specific legal matters are distributed to special- jurisdiction courts, such as family courts, juvenile courts, and probate courts. 11 The U.S. Court of Veterans’ Appeals, the U.S. Court of Military Appeals, and the U.S. Tax Court are considered “Article I” or legislative courts because they have been created by Congress but are not vested with full judicial power to decide questions of federal and constitutional law. 12 Those 11 states are Delaware, Maine, Montana, Nevada, New Hampshire, North Dakota, Rhode Island, South Dakota, Vermont, West Virginia, and Wyoming. In recent years, the Nevada Supreme Court has argued for the creation of an intermediate appellate court, given its steadily increasing workload and legal mandate to hear and consider all cases filed (Supreme Court of Nevada, 2007). Since 1999, the seven-member Supreme Court has dealt with its workload by dividing into three-judge panels, rotating between Carson City and Las Vegas, to hear many cases.

The state court system is sufficiently complex that even the most basic summary of the scope of the system—the number of "courts" it encompasses—is difficult to characterize. The National Center for State Courts (NCSC), with which BJS works on various collections, estimates that the system includes approximately 16,000 "courts" (LaFountain et al., 2007:9):

    However, this number may be somewhat misleading and is not derived from any universally agreed upon definition. For example, Texas, the second-largest state, considers each judgeship in the state to be a court and thereby reports over 3,300 trial courts statewide. Conversely, California, the largest state, has 58 superior courts in its trials system.

State courts are also inherently difficult to conceptualize and measure because their basic structures and jurisdictions are subject to change. The example of California is useful again; the state's current 58 county-level superior courts were reformed between 1998 and 2001, following passage of a constitutional amendment via 1998's Proposition 220. The amendment permitted counties to consolidate the operations of then-existing mixed-jurisdiction superior and municipal trial courts into a single, unified superior court with jurisdiction over all civil and criminal cases.

The measurement of activity in the judicial branch, and particularly the state court system, has been a long-standing challenge, and previous attempts by the Census Bureau and the FBI to do so have been relatively short-lived. In the 1930s and early 1940s, the Census Bureau attempted such measurement, and the introduction to the Census Bureau's 1933 "Judicial Criminal Statistics" report explained the goal and expressed great hope for the collection (quoted in Cahalan, 1986:4–5):

    It is the purpose of the Census Bureau, through cooperation with the several States, to develop a national system of collecting judicial criminal statistics which will be mutually advantageous to the States and the Federal government. . . . It is hoped that eventually each State will adopt the Census forms and classifications. If this is done, one report for the court will suffice for the State and for the Federal government, the statistics of different States will be compiled on the same basis, and needless duplication of work and expense will be avoided.

The Census Bureau's intent was to collect, "by offense, the number of persons prosecuted, the disposition made of prosecutions, and the sentences imposed on convicted persons" (Beattie, 1959:584). However, citing the major difficulty of obtaining comparable data from the states and incomplete responses (from at most 32 states), the Census Bureau discontinued the data series in the early 1940s. Likewise, the FBI UCR program asked police departments to submit information about judicial disposition of arrests beginning in 1955, but the practice was abandoned after 1977 (Cahalan, 1986:5).

Although court information systems have improved over time, the mea-

122 JUSTICE STATISTICS surement of even basic parts of the adjudication and prosecution systems presents a formidable challenge. An unusual government document—five agencies, including BJS, coauthoring a two-page summary—illustrates the point: It describes the difficulties involved in comparing case processing statistics even at the federal level.13 The summary attributes the incompara- bility of processing statistics across data sources to fundamental definitional differences, from the classification of offenses and definition of “defendants” to differences in labeling types of case dispositions (U.S. Department of Jus- tice, 1998). Table 3-7 illustrates the collection dates for BJS’s data series in the area of adjudication, and the balance of this section describes the principal series. Two collections—the Civil Justice Survey of State Courts and BJS’s work with the Federal Justice Statistics Resource Center—have already been dis- cussed at more natural points in the narrative, in Section 2–C.2 and Box 3-4, respectively. Several of BJS’s projects related to adjudication have been con- ducted with or by NCSC; in particular, NCSC’s Court Statistics Project is the source of monitoring data on caseloads and completed cases within the state court systems. We describe NCSC and the project more completely in Box 3-2. 3–D.1 National Judicial Reporting Program BJS contracts with the Census Bureau to compile court record data from felony trial courts in a sample of counties through the National Judicial Re- porting Program (NJRP), a biennial collection. The NJRP provides national estimates of the demographic characteristics, conviction offense type(s), and sentence imposed. When selected to participate in the sample, jurisdic- tions can provide records data in a variety of formats (electronic and paper, with electronic submissions accounting for 97 percent of data received in the 2004 NJRP), which the Census Bureau then keys, codes, and formats. “State courts were the source of NJRP data for about 44% of the 300 coun- ties sampled [in 2004]. For other counties, sources included prosecutors’ offices, sentencing commissions, and statistical agencies” (Bureau of Justice Statistics, 2007e:5). For the 2002 and 2004 NJRP (which used the same sample), the sample of counties was drawn by assigning each state a “cost factor”—with values 1 (low), 3 (moderate), or 5 (high)—based on the estimated cost of collecting data in those counties in 2000. These cost factors were then used in com- bination with county populations from the 2000 census to define 20 strata, 13 The coauthoring agencies were the Administrative Office of the U.S. Courts, the Executive Office for the U.S. Attorneys, BOP, the U.S. Sentencing Commission, and BJS. Because the document was issued with “U.S. Department of Justice” as the header, that label is used as the author in the citation.

Table 3-7 Bureau of Justice Statistics Data Collection History and Schedule, Adjudication, 1981–2009
(Each row runs year by year from 1981 through 2009, left to right.)

Federal Justice Statistics: • • • • • • • • • • • • • • • • • • • • • • • • • • • • •  (total 29)
National Judicial Reporting Program: · · • · • • · • · • · • · • · • · • · • · • · • · • · • ·  (total 14)
State Court Annual Caseload Statistics: · · · · · · · · · · · · · • • • • • • • • • • • • • • • •  (total 16)
State Court Processing Statistics: · · · · · · · • · • · • · • · • · • · • · • · • · • · • ·  (total 11)
Census of State Court Prosecutors: · · · · · · · · · · · · · · · · · · · · · · · · · · • · ·  (total 1)
Civil Justice Survey: · · · · · · · · · · · • · · · • · · · · • · · · • · · · ·  (total 4)
National Prosecutors Survey: · · · · · · · · · • · • · • · • · · · · • · · · • · • · ·  (total 7)
National Survey of Indigent Defense Services: · • · · · • · · · · · · · · · · · · • · · · · · · · • · ·  (total 4)
Prosecution of Felony Arrests: · • · · · • • • · · · · · · · · · · · · · · · · · · · · ·  (total 4)
State Court Organization: · · · · · · • · · · · · • · · · · • · · · · · • · · · · •  (total 5)

NOTES: •, data collected. ·, data not collected.
SOURCE: Bureau of Justice Statistics.

Box 3-2 The Court Statistics Project and the National Center for State Courts

Much of the Bureau of Justice Statistics' (BJS's) work with the state courts in the Court Statistics Project has been conducted by the nonprofit National Center for State Courts (NCSC). NCSC was founded in 1971 on the recommendation of Chief Justice Warren Burger at a national conference of the judiciary. Since 1978, it has been headquartered in Williamsburg, Virginia, and is governed by a Board of Directors elected by state court administrators and chief justices. The NCSC's mission is to improve state court administration by serving as a clearinghouse of information, including training and development of performance standards.

NCSC's Court Statistics Project (called the State Court Statistics Project by BJS) began in 1978; it receives financial support from BJS for its work on collecting data on state court caseloads. Results from the Court Statistics Project are posted and maintained on the NCSC website; the URL http://www.courtstatistics.org redirects browsers to the specific site. The core reports from the Court Statistics Project are the annual Examining the Work of State Courts (LaFountain et al., 2007) and State Court Caseload Statistics (Court Statistics Project, 2006), both of which are now maintained and updated in electronic format on the website. Both of these report series are branded and identified as NCSC or Court Statistics Project reports and not BJS reports, though a BJS logo is included and extensive links to related BJS reports are included in the electronic documents.

The Court Statistics Project compiles data on cases filed and disposed in state appellate and trial courts from the 50 states, the District of Columbia, and Puerto Rico. Coverage of trial court caseloads is not limited to criminal cases; for instance, entries for civil cases, traffic and other violations, domestic relations, and juvenile cases are also recorded. However, data in the project are limited to aggregate counts of cases and not specific characteristics of cases.

BJS also periodically sponsors a Survey of State Court Organization that is conducted by NCSC, with assistance from the Conference of State Court Administrators; it is conducted every 5–7 years, most recently in 2004. Though the survey includes requests for administrative counts (e.g., number of judges and personnel), the emphasis of the study is on changes to the structure of the state courts. The 2004 survey, in particular, attempted to query court systems about their processing of domestic violence cases and their adoption of specialized courts to handle certain case types. Results from the survey are summarized in regular BJS reports on State Court Organization (Rottman and Strickland, 2006; Langton and Cohen, 2007), which draw extensively on NCSC data and the flowcharts that the center maintains to illustrate the court structures in individual states.

Among other activities, the NCSC worked with BJS on a study of habeas corpus petitions filed by state prisoners in federal court challenging their sentences on the basis of violations of constitutional rights (Hanson and Daley, 1995). NCSC also initiated and organized a 10-year effort that led to the publication of Trial Court Performance Standards (TCPS). The TCPS effort was conducted with funds from the Bureau of Justice Assistance, and the formal result of the work is a four-volume report issued by an advisory commission organized by NCSC.
The standards are intended to give individual courts a metric to compare their own activities within such performance areas as ensuring access to justice and public trust and confidence. The TCPS initiative is described more fully in an NCSC-hosted website (http://www.ncsconline.org/D_Research/tcps/index.html), and Keilitz (2000) describes remaining challenges in converting the standards into practices, including the fuller development of statistical measures and indices.

OVERVIEW OF BUREAU OF JUSTICE STATISTICS DATA SERIES 125 which were constructed to give the 75 largest counties—which “account for a disproportionately large amount of serious crime in the Nation”—“a greater chance of being selected than the remaining counties.” The final sam- ple consisted of 300 counties (out of 3,141 county-level equivalents in the United States), 58 from among the largest 75 counties; some selected coun- ties that declined to participate were replaced by other counties. On the ba- sis of this sample, Census Bureau staff examined case-level data for 471,646 convicted felons sentenced in 2004, of which about 70 percent originated in the most populous counties (Bureau of Justice Statistics, 2007e:4). BJS summarizes findings from the NJRP on sentence length in its bul- letin series Felony Sentences in State Courts (e.g., Durose and Langan, 2007) and, earlier, Felony Sentences in the United States (e.g., Brown and Langan, 1999). In recent incarnations, spreadsheet tables of key results are presented on the BJS website at http://www.ojp.usdoj.gov/bjs/abstract/scscfst.htm. 3–D.2 State Court Processing Statistics Originally developed in 1982 and known, through 1994, as the National Pretrial Reporting program, the State Court Processing Statistics (SCPS) pro- gram provides data on the criminal justice processing of felony defendants in a sample of large counties. The program prospectively tracks felony defen- dants from charging by the prosecutor until disposition of their cases or for a maximum of 12 months. Data are obtained on demographic characteristics, arrest offense, criminal justice status at time of arrest, prior arrests and con- victions, bail and pretrial release, court appearance record, rearrests while on pretrial release, type and outcome of adjudication, and type and length of sentence. In at least one instance, the standard SCPS program has been augmented to cover special case types: in 1998, records were drawn from 40 large urban counties on juveniles facing felony charges in adult criminal courts (Strom et al., 1998; Rainville and Smith, 2003). The documentation for a 1990–2004 compilation of SCPS data on the NACJD (Bureau of Justice Statistics, 2007g:3) summarizes the data collec- tion’s content: This data collection effort was undertaken to determine whether accu- rate and comprehensive pretrial data can be collected at the local level and subsequently aggregated at the national level. The data contained in this collection provide a picture of felony defendants’ movements through the criminal courts. Offenses were recoded into 16 broad cat- egories that conform to the Bureau of Justice Statistics’ crime defini- tions. Other variables include sex, race, age, prior record, relationship to criminal justice system at the time of the offense, pretrial release, detention decisions, court appearances, pretrial rearrest, adjudication, and sentencing. The unit of analysis is the defendant.
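Because the SCPS extract is organized at the defendant-case level, a minimal sketch of what such a record might look like is shown below. The field names, categories, and example values are illustrative stand-ins, not the actual SCPS variable names or codes documented in the NACJD codebook.

```python
from dataclasses import dataclass
from typing import Optional

# Illustrative fields only; the real SCPS variables, offense recodes, and
# value labels are defined in the NACJD codebook cited in the text.
@dataclass
class DefendantCase:
    county: str                          # one of the sampled large counties
    filing_month: str                    # e.g., "2004-05" (May of an even year)
    arrest_offense: str                  # recoded into 16 broad BJS categories
    sex: str
    race: str
    age: int
    prior_convictions: int
    status_at_arrest: Optional[str]      # e.g., probation, parole, pretrial release
    pretrial_released: bool
    rearrested_pretrial: bool
    adjudication: Optional[str]          # e.g., dismissal, guilty plea, trial verdict
    sentence_type: Optional[str]
    sentence_months: Optional[int]       # None if unresolved at the 12-month cutoff

# A defendant is tracked from filing to disposition or for at most 12 months,
# so the adjudication and sentencing fields can legitimately be missing:
open_case = DefendantCase("County A", "2004-05", "robbery", "M", "white", 27, 2,
                          "probation", True, False, None, None, None)
```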

The sampling scheme for the SCPS is unusual; in what follows, we describe the procedure and counts used in the 2004 SCPS, but the general design of the collection has been similar in previous years. The intended universe that the program is meant to reflect is "felony court filings during the month of May in even numbered years from 1990–2004 in the 75 most populous counties in the United States" (Bureau of Justice Statistics, 2007g:4). Why the 75 most populous counties were chosen is not entirely clear (though the 75 largest counties are said to "account for more than a third of the United States population and approximately half of all reported crimes," and availability of records is also likely a factor), nor is it clear why May is the targeted month. The sampling scheme designed by the Census Bureau calls for 40 counties to be chosen from the 75 in the first stage, 10 with certainty ("because of their large number of court filings") and the others drawn from three strata based on their levels of filings. The court system in each of the chosen counties was asked to provide "data for every felony case filed on selected days during" May 2004; the 10 high-filing counties chosen with certainty were asked to provide only 5 days' worth of filings, while other counties were asked for 10 or 20 days' worth of filings. In the end, data were collected for 15,761 felony cases, out of an estimated 57,497 May 2004 cases in all 75 large counties (Bureau of Justice Statistics, 2007g:4–5).

Since the inception of SCPS, BJS has contracted with the Pretrial Justice Institute (PJI) as the data collector for the program. Founded in 1977, PJI was known as the Pretrial Services Resource Center until 2007. BJS opened the SCPS data collection contract to competition in 2006, and PJI was again selected as the collector. According to PJI's website, PJI "completely redesigned the internal project management processes, moved to online data collection and submission, and for the first time, accepted large data sets extracted from jurisdictions' information management systems" between 2006 and 2008.14

In early 2008, BJS issued a "redesign solicitation" request for proposals, asking for bids to "re-conceptualize SCPS to take into account the increasing levels of automation in state courts and other enhanced collection mechanisms that have occurred since the late 1980s" (CFDA No. 16.734). The solicitation also candidly describes "several important limitations" to the current SCPS:

    First, the SCPS project covers case processing in the Nation's 75 most populous counties. It does not have the capacity to make national or county level inferences about felony case processing or pretrial release. Secondly, the current SCPS sampling strategy of selecting only 40 of the Nation's 75 most populous counties and requesting participating SCPS jurisdictions to provide less than a whole month of felony filing

14 See http://www.pretrial.org/AnalysisAndResearch/StateCourtProcessingStatistics/Pages/default.aspx.

OVERVIEW OF BUREAU OF JUSTICE STATISTICS DATA SERIES 127 data (e.g., 5 or 10 business days) introduces certain levels of sampling error into the data collection process. Lastly, SCPS does not collect several key data elements that potentially play a crucial role in pretrial decision-making. The decision to restrict the SCPS sample to 40 of the Nation’s 75 most populous counties, confine felony filing data to less than a whole month, and limit the types of data collected were primarily driven by time and cost restraints and the difficulties inherent in obtaining court case processing data. Specific goals called for in the redesign are to “develop and test alternative sampling strategies that allow for periodic modular enhancements of SCPS” and to take “advantage of automated systems of case management, state criminal history depository programs, and administrative jail systems.” PJI’s website acknowledges that its bid, in partnership with Justice Management Institute and the National Association of Pretrial Services Agencies, was se- lected in September 2008.15 SCPS data collection will be suspended in 2009 during the redesign, the first break in the series since 1984. BJS reports based on SCPS data include Cohen and Reaves (2006), Reaves (2006), and Cohen and Reaves (2007). 3–D.3 National Prosecutors Survey The National Prosecutors Survey (NPS) asks chief prosecutors in state court systems to report management information, resources, and policies; in its content and focus, it is very analogous to the LEMAS survey. The codebook for the NACJD filing of the 2005 NPS summarizes the content as follows: The [NPS’s] purpose was to obtain detailed descriptive information on prosecutors’ offices, as well as information on their policies and prac- tices. Variables cover staffing, funding, special categories of felony pros- ecutions, caseload, juvenile matters, work-related threats or assaults, the use of DNA evidence, and community-related activities, such as involve- ment in neighborhood associations. Most recently, BJS has used the NORC to collect NPS data. In 2005, the NPS sample was drawn from a frame of 2,400 prosecuto- rial districts handling felony cases that was assembled by the Census Bureau. Districts were grouped into five strata on the basis of 2004 population es- timates; “within each stratum, districts were systematically selected for the sample,” yielding a final sample of 310 offices, 307 of which responded to the mail survey (Bureau of Justice Statistics, 2007f:4). The periodicity—and the scope—of the prosecutor survey have been ir- regular. First conducted in 1990, the survey was performed every 2 years 15 PJI also indicated that it plans to partner with the Urban Institute specifically to improve the sampling strategy underlying SCPS; see http://www.pretrial.org/TechnicalAssistance/Pages/ OurProjects.aspx.

128 JUSTICE STATISTICS but, more recently, it has been performed every 5 years or so. Moreover, the NPS has most frequently been conducted as a sample of prosecutor of- fices but also, occasionally, as a full census. The 2001 collection was the first intended to be a complete enumeration of all prosecutors’ offices (on the order of 2,400), whereas the 2005 version was a sample of 310 offices. BJS’s supporting statement in requesting clearance for the 2007/2008 ver- sion of the NPS (see Box 5-4; ICR 200704-1121-004) reflects the confused nature of the collection; though generally maintaining the “National Sur- vey of Prosecutors” nomenclature, it also refers to conducting this version as a “National Census of Prosecutors” or “National Census of State Court Prosecutors.” The need—or even potential use—for this collection to sup- port more detailed, targeted surveys in subsequent years is not mentioned in BJS’s argument. Though it approved the collection, OMB chided BJS for the uncertain periodicity and requested feedback on “the magnitude and nature of change identified from 2001 and 2005 to the present,” as well as fuller articulation of the utility of the resulting data, prior to future collections. Because of the “census” nature of the 2007/2008 survey, BJS scaled back the level of information requested in the most recent administration of the survey, with the objective of capping the burden on responding prosecutor offices at 30 minutes. Previous versions of the survey went into somewhat fuller detail, including questions on handling of cases involving juveniles and civil actions filed against prosecutors. 3–D.4 National Survey of Indigent Defense Systems In 1996, BJS published a “Selected Findings” report (Smith and DeFrances, 1996) that pieced together the limited glimpses of what its exist- ing data systems revealed about the use of indigent defense systems: that is, the use of court-appointed legal representation and legal defender services by defendants unable to afford them. In particular, the report drew from questions on the NPS (on the availability of public defender or assigned counsel arrangements in their jurisdictions) and BJS’s prison and jail inmate surveys (asking about the inmates’ own representation). On the basis of this first effort, BJS sponsored NORC to conduct a Na- tional Survey of Indigent Defense Systems; it was BJS’s first structured survey of such defense agencies since two studies in the early 1980s. Though “Na- tional” in label, the survey was restricted to agencies within the 100 most populous counties in the United States, as of 1997 population estimates. The survey was conducted in two stages in 1999–2000; an initial “county survey” sent to county governments asked for their assistance in identifying indigent criminal defense programs in their area, and the more detailed 141-question “program survey” was then sent to identified programs. In instances where the first survey suggested that such programs were solely administered by

OVERVIEW OF BUREAU OF JUSTICE STATISTICS DATA SERIES 129 the state government, the program survey was sent to the appropriate state department. DeFrances and Litras (2000) summarize the survey as a whole, while DeFrances (2001) focuses on the 21 states that are the sole funders of indigent defense services. The 2007 BJS Census of Public Defender Offices covered publicly funded indigent defense offices, not the full range of service providers. 3–E OTHER DATA COLLECTIONS For the sake of completeness, Table 3-8 summarizes the data collection years of miscellaneous activities that do not fall neatly into the major cate- gories covered in this chapter. Some of these are related to BJS’s criminal history improvement grant programs, which we describe more fully in Chap- ter 4. 3–F ASSESSMENT OF THE PORTFOLIO It is not difficult to find references to and uses of the major BJS data se- ries in the academic literature. It is also clear that BJS data series and results frequently garner the attention of one critical audience and user—the U.S. Congress—as we describe in Box 3-3. BJS’s data collection programs range widely in their scope and universe size—from the sprawling and nationally representative NCVS to the subset of law enforcement agencies that operate dedicated gang units—and so vary in their level of expense. They vary in methodology from hand-coding of paper court dockets to online question- naire completion by facility administrators. One thing that we think clear from a review of BJS’s entire data collec- tion portfolio is that it is a solid body of work, generally well justified by public information needs or required by law. It represents a better-than- good-faith effort by the agency to marshal data relevant to an astoundingly large substantive mandate, given that fiscal resources typically have been less than commensurate. Within its resources and the topics it has chosen to address, BJS has done well in the sense that nothing in its portfolio is obvi- ously frivolous, wasteful, or inconsistent with its legal mandates. Certainly, however, not all of BJS’s individual data series are equally influential, and there are some important topics (such as those described in Section 2–C) on which BJS currently collects little or no data. Our review of the existing data collections of the Bureau of Justice Statis- tics yields some basic observations about the major topic segments of the portfolio: • BJS’s key data series in the area of victimization, the NCVS, is its most expensive, most flexible, and most scrutinized collection. It is also, ar-

Table 3-8 Bureau of Justice Statistics Data Collection History and Schedule, Criminal History Improvement and Miscellaneous Studies, 1981–2009
(Each row runs year by year from 1981 through 2009, left to right.)

Criminal Justice Agency Survey: • • • • • • • • • • • • • • • • • • • • · · · · · · · · ·  (total 20)
Domestic Violence Processing: · · · · · · · · · · · · · · · · · · · · · • · · · · · · ·  (total 1)
Domestic Violence Recidivism: · · · · · · · · · · · · · · · · · · · · · · • · · · · · ·  (total 1)
Expenditure and Employment Statistics: • · · · • • • • • • • • • • • • • • • • • • • • • • • • •  (total 26)
Firearm Inquiry Statistics: · · · · · · · · · · · · · · · · · · • • • • • • • • • • •  (total 11)
Inventory of Correctional Information Systems: · · · · · · · · · · · · · · · · · • · · · · · · · · · · ·  (total 1)
Justice Assistance Data Survey: · · · · · · · · · • · · · · · · · · • · · · · · · · · · ·  (total 2)
National Study on Campus Sexual Assault: · · · · · · · · · · · · · · · · • · · · · · · · · · · · ·  (total 1)
Offender-Based Transaction Statistics: · · • • • • • • • • · · · · · · · · · · · · · · · · · · ·  (total 8)
Survey of Cybercrime on Businesses: · · · · · · · · · · · · · · · · · · · · · · • · · • · · ·  (total 2)
Survey of State Criminal History Information Systems: · · · · · · • · • · • · • · • · • · • · • · • · · • · · ·  (total 10)
Survey of State Procedures on Firearm Sales: · · · · · · · · · · · · · · • • • • • • • • • • • · · · ·  (total 11)

NOTES: •, data collected. ·, data not collected.
SOURCE: Bureau of Justice Statistics.

OVERVIEW OF BUREAU OF JUSTICE STATISTICS DATA SERIES 131 Box 3-3 Congressional Uses of Bureau of Justice Statistics Data BJS reports and data are frequently cited in congressional debates and statements. In many instances, the citation is used to establish some basic fact, as grounding for new legislative initiatives; accordingly, they are frequently referenced in “whereas” or “findings” clauses. For instance, in the 110th Congress, H.R. 4611—the End Racial Profiling Act of 2007—includes among its prefatory findings a basic recounting of findings from the 2002 Police-Public Contact Survey supplement to the NCVS: (12) A 2005 report of the Bureau of Justice Statistics of the Department of Justice on citizen-police contacts that occurred in 2002 [(Durose et al., 2005)], found that, although Whites, Blacks, and Hispanics were stopped by the police at the same rate– (A) Blacks and Hispanics were much more likely to be arrested than Whites; (B) Hispanics were much more likely to be ticketed than Blacks or Whites; (C) Blacks and Hispanics were much more likely to report the use or threat- ened use of force by a police officer; (D) Blacks and Hispanics were much more likely to be handcuffed than Whites; and (E) Blacks and Hispanics were much more likely to have their vehicles searched than Whites. (Section 5–A.2 describes the referenced report more fully, and the controversy that surrounded it; Judiciary Committee chairman John Conyers cited specific figures from Durose et al. (2005) in his remarks on introducing the bill [Congressional Record, December 13, 2007, p. 2576].) Likewise, H.R. 5654—the Families Beyond Bars Act of 2008—begins noting that: Congress finds as follows: (1) The Bureau of Justice Statistics estimates that 1,500,000 children in the United States have at least one incarcerated parent, and an esti- mated 10,000,000 more individuals have at least one parent who was incarcerated at some point during the individual’s childhood. (2) In 2006, the Bureau of Justice Statistics estimated that 75 percent of incarcerated women were mothers, two-thirds of whom were mothers of children under the age of 18, and an estimated 32 percent of incarcer- ated men were fathers of children under the age of 18. . . . (4) The Bureau of Justice Statistics estimates that children with imprisoned parents may be almost 6 times more likely than their peers to be incar- cerated. BJS findings are used similarly in floor debates. One such example arose during debate on H.R. 3992, the Mentally Ill Offender Treatment and Crime Reduction Reauthorization and Improvement Act of 2008, on January 23, 2008. The remarks of Rep. Bobby Scott (D-Va.) included the following paragraph (Congressional Record, January 23, 2008, p. 426): A 2006 report by the United States Department of Justice Bureau of Justice Statistics entitled “Mental Health Problems of Prison and Jail Inmates” suggests that the criminal justice system has become, by default, the primary caregiver of the most seriously mentally ill individuals. [The report referenced is James and Glaze (2006) and uses data from the Survey of Inmates in State and Federal Correctional Facilities, 2004, and the Survey of Inmates in Local Jails, 2002.] The bureau reports that over one-half of the prison and jail population of this country is mentally ill. More specifically, 56 percent of State prisoners, 45 percent of Federal prisoners, and 64 percent of jail inmates have some degree of mental illness. 
Occasionally, the findings from one BJS report lead to new suggestions for data collection and analysis. The BJS recidivism work mentioned in the Second Chance Act of 2007

(see Box 3-5) is one such example. Another is the early 2008 debate on reauthorizing the Death in Custody Reporting Act, first enacted in 2000. In debate on H.R. 3971, both the chairman and ranking minority member of the House Judiciary Subcommittee on Crime argued for improved data collection. Chairman Bobby Scott (D-Va.) outlined the state of knowledge before the original act and basic findings from BJS's original data collection under the act (Congressional Record, January 23, 2008, pp. 428–429):

    Before the enactment of the Death in Custody Act of 2000, States and localities had no uniform requirements for reporting the circumstances surrounding the deaths of persons in their custody, and some had no system for requiring such reports. The lack of uniform reporting requirements made it impossible to ascertain how many people were dying in custody and from what causes, although estimates by those concerned suggested that there were more than 1,000 deaths in custody each year, some under very suspicious circumstances. . . .

    Since the enactment [of the Death in Custody Reporting Act] in 2000, the Bureau of Justice Statistics has compiled a number of statistics detailing the circumstances of prisoner deaths, the rate of deaths in prison and jails, and the rate of deaths based on the size of various facilities and so forth. But the most astounding statistic reported since the enactment of the [act] is the latest Bureau of Justice statistics report dated August 2005, which shows a 64 percent decline in suicides and a 93 percent decline in homicides in custody since 1980 [(Mumola, 2005)]. Those statistics showing a significant decline in the death rate in our Nation's prisons and jails since stricter oversight has been in place suggest that the oversight measures, such as the Death in Custody Reporting Act, play an important role in ensuring the safety and security of prisoners who are in the custody of State facilities.

Other congressional requests in enacted laws for new BJS data collection efforts include "a study of the criminal misuse of toy, look-alike and imitation firearms, including studying police reports of such incidences [and reporting] on such incidences relative to marked and unmarked firearms" (1988; P.L. 100-615) and the addition of questions on disabilities (1998; P.L. 105-301) and crimes against seniors (2000; P.L. 106-534) to the NCVS. Of course, many such suggestions in legislative bills are never enacted. Prior to the establishment of the U.S. Department of Homeland Security, the proposed Barbara Jordan Immigration Reform and Accountability Act in the 107th Congress (H.R. 3231) would have markedly increased BJS's scope. In abolishing the existing Immigration and Naturalization Service and distributing its functions elsewhere in the Justice Department, it would have created a separate Office of Immigration Statistics within BJS. The bill passed in the House but did not advance beyond referral to the Senate's Judiciary Committee. Examples of proposed new inquiries in bills introduced in the 110th Congress include H.R.
259, which would create “a task force within the Bureau of Justice Statistics to gather information about, study, and report to the Congress regarding, incidents of abandonment of infant children.” This would involve “collecting information from State and local law enforcement agencies and child welfare agencies regarding incidents of abandonment of an infant child by a parent of that child,” including “the demographics of such children and such parents” and “the factors that influence the decision of such parents to abandon such children.” Similarly, H.R. 3187 would require BJS to “conduct a study to determine the extent to which methamphetamine use affects the demand for (and provision of) oral health care in correctional facilities,” including statistical information on the financial impact of “meth mouth” treatment on corrections budgets.

OVERVIEW OF BUREAU OF JUSTICE STATISTICS DATA SERIES 133 guably, the agency’s most underutilized collection, with its capacity to achieve its potential undercut by scarce resources, diminishing sample size, and—to a degree—a lack of innovation in analyzing and promot- ing the data. Though much methodological research was conducted in the early years of the survey and again during the late-1980s redesign, a major threat to the NCVS is stagnation in content and methodology. • BJS’s data series in corrections are a good and successful example of a well-designed and integrated system of collections, carefully delineat- ing information to be obtained by different methodologies (personal interviewing and facility or administrative records) at certain time in- tervals for a range of facilities (prisons, jails, support agencies). The set of collections is designed such that “censuses” build the frame for subsequent, more detailed “surveys,” in a way that is blurred in other areas of the BJS portfolio. Going forward, the challenge for BJS’s efforts in the general area of corrections is one of expanding its cover- age to include prisoner reentry issues, generally, and improving on its previous solid (but infrequent) studies of recidivism. • BJS’s work in law enforcement is hindered by a sharp and overly re- strictive focus on management and administrative issues; its analysis of law enforcement generally lacks direct connection to data on crime, much less providing the basis for assessing the quality and effectiveness of police programs. It is also in the area of law enforcement, with the proliferation of numerous special-agency censuses and little semblance of a fixed schedule or interconnectedness of series, where the need for refining the conceptual framework for multiple data collections is most evident. • Critique of BJS’s work in adjudication is more a reflection on the gen- eral difficulty of measurement in the justice system than a criticism of BJS. Information systems in state court systems—and, indeed, the structure and jurisdiction of those courts—vary strongly in their acces- sibility and sophistication. The dominant impression that comes from looking at BJS’s statistical series in the courts is that of the agency (with its data collection partners) doing the best it can with what it has. That said, there are numerous areas where improvement is needed: bolster- ing the adjudication series’ basis in statistical sampling and patching important gaps in statistical coverage of the justice system funnel (par- ticularly declinations to prosecute and out-of-court settlements) would dramatically upgrade the relevance and utility of BJS’s data series. Generally, our major concerns with the shape of BJS’s data portfolio and our suggestions for improvement can be grouped under two broader themes that we discuss more completely in the balance of this section:

134 JUSTICE STATISTICS • BJS’s data collections are mainly cross-sectional in nature and focus on relatively narrow, individual parts of the justice system. The coverage that is attained through these cross-sectional series is extensive, but knowledge of longitudinal flows and progressions through (and out of) the justice system is comparatively scant. • Reflecting its cross-sectional nature, the major fault of BJS’s data col- lection portfolio is not that any individual component is deficient but that the portfolio lacks a sense of integration and cohesiveness. New data series that have been added are generally important, but how they fit within broader conceptual frameworks and what they uniquely con- tribute to knowledge of crime and justice is not always well articulated. 3–F.1 Lack of Longitudinal Series Finding 3.1: BJS currently gathers data about the criminal justice system but it does so on an institution-by-institution basis (po- lice, courts, corrections) using varying units of analysis (crimes, individuals, cases) and sometimes varying time periods and sam- ples. This approach provides good cross-sectional assessments of parts of the system, but makes it difficult or impossible to an- swer questions about the flow of individuals from arrest through eventual exit from the system. Yet people exit the system at many different stages in ways that are ill-understood but consequen- tial for the effectiveness and fairness of criminal justice system processes. The cross-sectional approach misses the interfaces be- tween the institutions, such as the large but unknown number of individuals who are arrested but not prosecuted. The elegance of the funnel model of the criminal justice system is its longitudinal, progress-over-time structure and the way that it focuses atten- tion on the system as a whole. However, as the coverage bars in Figure 2-2 make clear, there exist no longitudinal data that actually follow the flow of individuals (or cases) through all steps of the system. BJS develops and main- tains a large set of data series that describe the basic features of the sequence of events in the criminal justice system. These data series cover various dimensions of law enforcement, prosecution and pretrial services, adjudi- cation, sentencing and sanctions, corrections, and recidivism. A complex set of institutions operating at the local, state, and federal levels is covered by these data series, generating a wealth of information about the staffing and caseloads of those institutions. However, these data are generally cross- sectional “slices” of information at various points in the justice system that do not permit an assessment of experiences in the system as a whole, from initial contact (arrest) through placement in correctional supervision to, per- haps, reentry into the community.

OVERVIEW OF BUREAU OF JUSTICE STATISTICS DATA SERIES 135 BJS has developed one major continuing data resource that permits ex- amination of some of the processes that influence felony case dispositions from case filing through sentencing. The BJS-sponsored Federal Justice Statistics Resource Center (see Box 3-4) uses the “defendant-case” as its unit of analysis, and the data files linked in this collection indicate the flow of these “defendant-cases” from step to step. However, these data have the significant limitation of including only federal matters, not those in the state courts which handle the vast majority of criminal and civil justice cases. On a one-time basis in 1988, BJS mounted an effort that linked different stages of the justice system. BJS traced background and demographic infor- mation for about half of the cases in a sample of 33 large urban counties that involved a murder charge (brought in 1988 or earlier) and that were disposed during 1988. The resulting sample of 9,576 murder defendants contained information on the circumstances of the crime, the relationship between victim and defendant, and the disposition of the case; the sample was meant to be representative of the circumstances surrounding murder cases in the 75 most populous counties of the nation. The data were sum- marized by Dawson and Boland (1993), and the data set (Bureau of Justice Statistics, 1996) was used in subsequent analyses: on murder involving fam- ily members and dependents (Dawson and Langan, 1994) and the particular circumstances of spousal murder cases (Langan and Dawson, 1995). The ability to follow persons from initial contact with the arrest through exit from the system is important for understanding the fairness and effec- tiveness of the criminal justice system at all stages of its operations. Hence, developing the longitudinal structure of BJS data should be a high priority. Although the creation of new data collections, explicitly designed to collect information on longitudinal flows, is one approach to improvement in this area, it is also important to note that some important progress along these lines can be made without new series, by working within the framework of BJS’s existing data series and data archives. Emphasize Flows in Current BJS Series Within its existing data series, BJS could make strides to provide some empirical insight on gross flows in the system through improvements we suggest elsewhere in this report. BJS could more effectively use its surveys, particularly the NCVS, to examine points of contact throughout the justice system; the PPCS is a useful model in this regard by examining public inter- action with law enforcement, but targeted modules could also query about experiences with adjudication or correctional systems. BJS’s one-time effort to study victim, incident, and offender characteristics of murder cases that were adjudicated and disposed in 1988 in large urban counties (Bureau of Justice Statistics, 1996) is a possible model that could be considered in revis-

136 JUSTICE STATISTICS Box 3-4 The Federal Justice Statistics Resource Center of the Urban Institute The Bureau of Justice Statistics (BJS) contracts with the Urban Institute to maintain the Federal Justice Statistics Resource Center (FJSRC) website, an attempt to consolidate available data from federal agencies and courts. By combining series, the intent is to describe all steps of the processing funnel (see Figure 2-1) for suspects and defendants faced with federal charges. Significantly, the project does not cover processing in state courts, but it does attempt to make definitions consistent with those used in BJS’s collections on state court processing. The FJSRC takes a “defendant-case”—the combination of a defendant (either a person or a corporation) and a particular case—as a unit of analysis. The Urban Institute’s FJSRC staff receive regular extracts from the case management systems of participating federal agencies, corresponding to different stages of the criminal justice process: • Arrest—The U.S. Marshals Service Prisoner Tracking System includes arrests made by all federal law enforcement agencies (e.g., Customs and Border Protection, Bureau of Alcohol, Tobacco, and Firearms, and the Marshals Service itself) and bookings by the Marshals Service. Separate data are obtained from the Drug Enforcement Administration’s Defendant Statistical System. • Prosecution—FJSRC works with data from the Executive Office for U.S. Attorneys Central System and Central Charge files to create six analysis files: matters filed, matters concluded, cases filed, cases terminated, charges filed, and charges disposed. • Pretrial Release—Data from the U.S. Probation and Pretrial Service System documents any pretrial hearings, detentions, and releases of federal defendants between the time of an initial interview to disposition in district court. FJSRC uses extracts from these data to form three analysis files: defendants interviewed, investigated, or otherwise entering pretrial services; defendants terminating periods of pretrial supervision; and defendants under active pretrial supervision. • Adjudication—Separate analysis files on cases filed, cases terminated, and cases pending for each year are derived from the Criminal Master Files of the Administrative Office of the U.S. Courts. • Sentencing—The U.S. Sentencing Commission’s Monitoring Data Base is used to extract information on sentences reviewed under the terms of the Sentencing Reform Act of 1994; sentencing data may not be fully complete, because they are limited to cases obtained by the commission. • Appeals—The Administrative Office of the U.S. Courts obtains docket information on appeals filed and appeals terminated from the U.S. Courts of Appeal. • Corrections—Federal Bureau of Prisons data are processed to form annual analysis files for three cohorts: offenders entering prison, offenders imprisoned, and offenders released from prison. The Post-Conviction Federal Probation Supervision Information System of the U.S. Probation and Pretrial Service System is mined to produce files for three similar cohorts: persons entering active probation supervision, persons under supervision, and persons terminating supervision (whether successfully or unsuccessfully). Though branded “a project of the Bureau of Justice Statistics,” the FJSRC online presence is hosted on Urban Institute servers at http://fjsrc.urban.org. The site includes capability to construct simple tables based on individual analysis files for each year. (continued)

Box 3-4 (continued)

Subpages on "Publications" lead to links to reports on the main BJS website, particularly the Compendium of Federal Justice Statistics (Bureau of Justice Statistics, 2006a), Federal Criminal Justice Trends (Motivans, 2006), and—most recently—Federal Justice Statistics (Motivans, 2008). Some reports using the Federal Justice Statistics Program, such as Sabol et al. (2000) on offenders returning to federal prison after a first release, are authored by Urban Institute staff (and credited as such) but are released as BJS bulletins or special reports, whereas others are prepared by BJS staff (e.g., Scalia, 1996, 1999, 2000, 2001).

ing the SCPS series, making at least occasional special efforts to follow cases forward (through disposition) or backward (to recover victim and incident data). Similarly, improvement of knowledge about the justice "system" as a whole would benefit from focused attention on "leaks" and diversions in the justice system model such as declinations to prosecute cases and out-of-court settlement arrangements; we discuss such issues in more detail below in Section 3–F.3. In Section 4–B.3, we discuss one other major effort that is within BJS's grasp for understanding longitudinal flows—making use of the criminal history record databases that it supports through the National Criminal History Improvement Program for research purposes.

BJS's correctional data collections have generally emphasized stocks of incarcerated populations: inmate counts and demographic breakdowns at annual or midyear levels and more detailed cross-sectional inmate-level information from the inmate surveys. Over the past 30 years, this stock information has been a critical policy interest as the growth in the correctional population has been propelled by increases in prison admission rates and increases in time served (Blumstein and Beck, 1999, 2005). These trends have involved a large rise in admission rates for drug offenses and a large increase in time served for violent offenses. An ideal set of correctional data series should yield high-quality stock information—specifically, yearly counts of the jail, parole, and prison population for each state, for detailed demographic groups.

Recommendation 3.1: BJS's goal in providing statistics from basic administrative data on corrections should be the development of a yearly count of correctional populations capable of disaggregation and cross-tabulation by state, offense categories, and demographic groups (age, race, gender, education).

However, ideal corrections data would include at least as much attention to flows and transitions in the correctional population as to stocks and levels. The current National Corrections Reporting Program has been used to estimate admission and release rates (entry and exit) but not transition rates at each stage of criminal processing: arrest to conviction to commitment to

138 JUSTICE STATISTICS prison to parole release (unsupervised status), and so on. It would be useful for BJS to explore ways to use its existing data (with, perhaps, slight modifi- cation) to regularly produce estimates of transition rates as a counterpart to its regular stock data. Recommendation 3.2: BJS should produce yearly transition rates between steps in the corrections process capable of disag- gregation and cross-tabulation by state, offense categories, and demographic groups. Although the corrections data are a prominent example of an area where greater emphasis on flows would be beneficial, the same guidance also ap- plies to other changes in status in the justice system. These include filing of charges (transition from law enforcement operations to adjudication) and conviction (transition from court processing to correctional handling). Facilitate Linkage in Existing Data Sets In terms of improvements that can be made in the data-processing and archival process, it should be noted that BJS and its public data warehouse, the NACJD, have taken a number of steps to increase the utility and accessi- bility of the data. Through the creation of multiyear compilations for major series, they have made it easier to link BJS data sets over time and with each other to add value to the information. NACJD staff also developed a “cross- walk” file (Bureau of Justice Statistics, 2004d) that approximates the linkage between the FBI’s ORI “geography” (law enforcement agency jurisdictions, used in UCR and National Incident-Based Reporting System data) and stan- dard geographic boundaries; this file facilitates linkage of LEMAS, UCR, Census Bureau, and other data. Such steps to make it easier to work with BJS data files and facilitate rich analyses can and should be taken. Specif- ically, a standard-format NCVS could be assembled across the entire time series to facilitate long-term trend analyses of these data; the same could be done for other data series including the jail and prison inmate surveys or LEMAS. Moreover, linkage of individuals in the NCRP across years would be very useful in approximating a recidivism study; the public cannot do this because they do not have access to inmate identifiers but it could be done for BJS by the NACJD. The linking of individual-level data collections in the corrections area such as the NCRP or the inmate surveys to many of the facility-based data collections, such as the Census of Adult Correctional Facilities, would leverage the data in both series. The information needed to create these linkages—linking units across time and collections—is generally available within the series’ structures, but this and other information is not now available because of fear of violating confidentiality. To be sure, methods of releasing link-capable data sets that protect the confidentiality of respondents is a major challenge, along with

OVERVIEW OF BUREAU OF JUSTICE STATISTICS DATA SERIES 139 the basic logistics of linkage. To be most useful, agreements must be reached between BJS and its data providers—particularly the Census Bureau—as to how to make linked data available to the public without making the process so irksome as to make it unworkable. Greater use of the Census Bureau’s dedicated research centers may be useful in this regard, but the cumbersome- ness of the Census Bureau’s access process is discouraging to potential users; new approaches to these issues should be pursued. Recommendation 3.3: BJS should explore the possibilities of increasing the utility of their correctional data collections by fa- cilitating the linkage of records across the data series. For ex- ample, the ability to link records from the Recidivism Studies or from NCRP to the Census of Adult Correctional Facilities (CACF) would increase the ability to understand how correc- tional facilities contribute to recidivism. Develop Additional Panel Surveys The ideal tool for studying longitudinal experiences in the justice system would be a data series directly designed for that purpose. However, the major impediment to creating such a series is the same reason why extensive longitudinal information does not exist in current data. A basic, logical need in order to approach an ideal measurement of experiences in the system is a systematic “tracking number” attached to a single person (or, more generally, actor) at all stages in the system. To be effective in studying all types of judicial resolutions, such a tracking number would have to be attached to a defendant at a very early stage and maintained. To get a handle on dynamics of the courts, a systematic tracking number assigned near the time of filing might be sufficient, yet a number would practically have to be assigned at booking in order to study the full pretrial mechanisms. Such a tracking number does not currently exist; indeed, common iden- tifiers generally do not exist between broad steps of the process (such as linking court records with later corrections records). This problem is exac- erbated by state-to-state variation in the quality and completeness of elec- tronic case management systems. The lack of a tracking number that facil- itates use of a person as the unit of analysis—logging all charges attached to an individual and following what happens to those charges throughout the system—obviously hinders longitudinal studies. But it is also part of the basic flaw we have already noted in BJS’s adjudication portfolio: the lack of a tracking number complicates the problem of selecting nationally represen- tative samples of judicial proceedings. At this time, given the current level of automation at the various levels of courts, there is basically no alternative to examination of individual “jackets” or file records in the courts.
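To make the tracking-number idea concrete, the sketch below shows how stage-to-stage transition proportions of the kind called for in Recommendation 3.2 could be tabulated if a shared identifier existed across agency files. It is purely illustrative: the file names, column names, and category codes are hypothetical, not actual BJS or state data systems.

    # Purely illustrative sketch; file names, columns, and codes are hypothetical.
    # Assumes one record per person per file and a shared tracking identifier,
    # which is precisely what current justice data systems lack.
    import pandas as pd

    arrests = pd.read_csv("arrests.csv")            # person_id, arrest_date
    courts  = pd.read_csv("dispositions.csv")       # person_id, filing_date, disposition
    prisons = pd.read_csv("prison_admissions.csv")  # person_id, admission_date

    # Follow an arrest cohort forward through filing, conviction, and commitment.
    cohort = (arrests
              .merge(courts, on="person_id", how="left")
              .merge(prisons, on="person_id", how="left"))

    n_arrested  = len(cohort)
    n_filed     = cohort["filing_date"].notna().sum()
    n_convicted = (cohort["disposition"] == "convicted").sum()
    n_committed = cohort["admission_date"].notna().sum()

    print(f"arrest -> filing:     {n_filed / n_arrested:.1%}")
    print(f"filing -> conviction: {n_convicted / max(n_filed, 1):.1%}")
    print(f"conviction -> prison: {n_committed / max(n_convicted, 1):.1%}")

In practice, persons generate multiple arrests and charges, identifiers differ across agencies, and disposition coding varies by state, so any real implementation would begin with the record-linkage and standardization work described above rather than a simple merge.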

140 JUSTICE STATISTICS Though the lack of a tracking number is a formidable challenge, we sug- gest that assessment of the fairness and quality of the justice system would be substantially improved by longitudinal studies: Recommendation 3.4: BJS should develop an approach to mea- sure the experiences of individuals through the criminal justice system on a prospective, longitudinal basis, beginning as early as practicable in the process (arrest) and ending with their eventual exit (ranging from early dismissal of charge through completion of sentence). Practically, the most feasible approaches for developing panel studies that track the same individuals over time lie at either “end” of the justice system funnel model. On the input end, the flexible survey vehicle that is the NCVS could serve as the input to a follow-on study of crime victims’ experiences. Very little research has made use of the current panel structure of the NCVS and its repeated interviews at the same household address; in part, this is due to the cumbersome structure of the data files as well as the use of the address as the unit of analysis rather than the individual. An add-on to the NCVS could serve as the starting point for a concerted effort to follow a sample of individuals (even if they move from an NCVS-sample household) over time for a pure analysis of their experiences with other parts of the justice system and any subsequent victimizations; current data systems based on the justice system funnel model tend to lose focus on victims of crime after the decision to report or not report incidents to police. Recommendation 3.5: BJS should develop an approach to mea- sure the victimization experiences of individuals on a prospec- tive, longitudinal basis, beginning from a focal victimization and following the victim forward in time measuring subsequent victimizations and possible consequences of victimization. The NCVS may be used to recruit respondents to a panel survey of crime victims. BJS’s survey of prison inmates could be the springboard to a parallel panel survey: Recommendation 3.6: BJS should develop a panel survey of peo- ple under correctional supervision to understand how individu- als move between institutional and community settings, and to understand the social contexts of correctional supervision. The respondents might enter the panel survey through the 5-yearly SIS- FCF. An initial survey could mirror and expand content in the existing in- mate survey about the prisoner’s history of offending and experience in the courts. Those sampled prisoners within, say, a year of their release date could be reinterviewed annually over the next 3 years. The panel com- ponent of the survey might begin with an interview immediately prior to

prison release. After release, the survey might measure aspects of community supervision, correctional programming (including educational and vocational training received while under supervision in addition to drug programming), offending and other risky behaviors, victimization, housing, family relationships, and employment. With this population, survey attrition is a formidable challenge. The survey interviews might thus be linked to administrative records to provide additional information about further contacts with police and corrections. The panel would be refreshed with each new inmate survey.

3–F.2 Lack of Conceptual Frameworks

Our review of BJS data programs in this chapter demonstrates the wide range and breadth of BJS's data collections; for a small statistical agency, BJS's level of quality output is certainly impressive. That said, the second broad critique we raise concerning BJS's portfolio is that it lacks a sense of cohesiveness in some respects. The volume of BJS's data holdings is such that it can be overwhelming; the differences between different data series, and the unique value of a particular series to describe specific phenomena, are not always immediately clear. Parts of BJS's data portfolio have an unmistakable—and unfortunate—"scattershot" feel to them, whether because the interrelationships between series are not clear or because they seem to lack a well-expressed and common technical basis. Generically, we characterize these problems as lack of conceptual frameworks across the full suite of data series, within broad topic areas, and even within highly related series.

Develop a Blueprint of Existing Data Collections

BJS's mapping of its data series to the justice system funnel (Figure 2-2) that it presented to the panel is a particularly interesting document because it is a first step toward something that is absent from BJS's strategic plan (Bureau of Justice Statistics, 2005a) and other planning documents: an articulation and assessment of the extent to which steps and processes in the justice system are covered and explained by BJS data. BJS's current strategic plan cites the number of data series that the agency produces—whether on an ongoing basis or as a special request—as two of the benchmark measures by which BJS evaluates its own effectiveness (Bureau of Justice Statistics, 2005a:17):

Core and recurring series conducted The number of data collection series scheduled to be conducted during a particular calendar year and the number actually conducted.

Special analyses conducted BJS periodically conducts special collections or analyses for specific purposes, such as a collaborative effort

142 JUSTICE STATISTICS with other Federal agencies or fulfilling a congressional mandate. The number of special analyses conducted is maintained as an indicator of the utility of specific datasets for unanticipated requirements. That BJS can cite its “maintain[ance of] over three dozen major statistical series” (Bureau of Justice Statistics, 2005a:1) as a measure of the agency’s vibrancy and activity is certainly true, to a point. However, that specific claim in the strategic plan has a second clause—that the three dozen series are “designed to cover every stage of the American criminal and civil justice system”—that is not fully expressed, save for reference to the broad topic areas of the series. Specifically, the plan does not explain: • The unique design features of specific data collections, their method- ology (even in capsule form, such as personal interview, facility inter- view, or reference to facility or administrative records), or their capac- ity to describe actions at multiple stages of justice system processing; • Goals for key activities and programs; • Priorities across the programs, including the identification of coverage gaps or the development of specific data resources to fill them; • Milestones for key programs, such as the implementation of census- updated samples in the NCVS or developments in securing corrections data from frequently nonresponding jurisdictions; or • Evaluative criteria for “success” of individual data series and topic-area groups of data series (separate from evaluative criteria for the agency as a whole). Accordingly, we recommend: Recommendation 3.7: To be useful, a BJS strategic plan must articulate a blueprint of interrelated data collection and prod- uct activities, including both current and potentially new data products. This blueprint would be used to evaluate new oppor- tunities. For data collections that are in development, such a blueprint would de- tail the steps necessary to carry out the work and the timing of the steps; this would include any pilot or small-scale collection used to assess the fea- sibility of the full collection. Of course, we recognize that this concept for a “strategic plan” may not necessarily square with the templates for strategic plans that may be imposed on individual agencies by their parent depart- ments or by government-wide standards. Specific nomenclature aside, what we recommend is that BJS expand on its mapping of data series to the jus- tice system sequence of events as a first step in such a “blueprint” planning document.

OVERVIEW OF BUREAU OF JUSTICE STATISTICS DATA SERIES 143 Core-Supplement Designs for Major Surveys and the Scope of Law Enforcement Data One of the class of designs we suggest as a possibility for the NCVS in our interim report (National Research Council, 2008b:90) is a core-supplement framework: Some surveys have a set of questions that are consistently asked of all respondents, sometimes labeled the “core.” The full survey question- naire contains core questions and a rotating set of supplement questions. Scheduled supplements allow topical reports from the survey, enriching the breadth of reports. These supplements might change over time, to reflect the changing nature of crime. As we described in the interim report, the United Kingdom’s Home Office has adopted a core-supplement strategy for the British Crime Survey (BCS), its analogue to the NCVS. Though we did not embrace a full adoption of the BCS methodology—in particular, we are reluctant to abandon the NCVS’s repeated panel design, with multiple contacts of the same household, for the BCS’s cross-sectional sample—we think that there is much to be gained from a core-supplement strategy. Accordingly, we reiterate and reaffirm a recom- mendation from our interim report (National Research Council, 2008b:Rec. 4.3): Recommendation 3.8: BJS should make supplements a regular feature of the NCVS. Procedures should be developed for solic- iting ideas for supplements from outside BJS and for evaluating these supplements for inclusion in the survey. (It follows that similar outreach beyond BJS for possible topic supplements— and funding for said supplements—would also be beneficial for BJS’s other major data series.) The SCS is a good example of the kind of topic supplements BJS should seek for the NCVS. The supplement provides information on a class of crime and violence that is a clear issue of continuing public concern, and so new SCS data are regularly awaited and analyzed in concert with related non- BJS data sources. Functionally, the SCS is a useful example because it wins the attention of (and a source of funding from) an executive department other than the Justice Department, and both BJS and the National Center for Education Statistics benefit from methodological and technical interchange in planning and designing the supplement. Moreover, we think that a concerted effort to refine a relatively small core set of questions and build support for regularly scheduled supple- ments would benefit other BJS surveys besides the NCVS. Based on BJS’s presentations to the panel, it is clear that BJS is giving the collection of law enforcement–related data a higher priority in its portfolio. We suggest that applying a core-supplement framework to LEMAS and related surveys

144 JUSTICE STATISTICS would have the benefit of correcting the particularly scattershot appearance of the numerous special-agency censuses and surveys that BJS conducts, from campus law enforcement agencies to police aviation units. A core- supplement approach to the NCVS and the LEMAS survey, among others, would also be instrumental in permitting BJS to expand beyond the manage- ment and administration focus that prevails in its current collections. On the first point—imposing an organizational framework on BJS’s law enforcement surveys—our suggestions are intended to put a structure on BJS’s law enforcement collections such as exists in its corrections data. The corrections data mix various approaches, asking administrative-type census queries of all agencies while collecting a much wider range of items in the inmate surveys. A major difference with the corrections data is that the ba- sic unit of analysis changes between the collections—institutions or correc- tional agencies in the census-type studies, individual inmates in the surveys— whereas BJS’s law enforcement surveys are all focused on institutions or agencies as respondents. The problem with the numerous special-agency censuses is not that they are too costly; they are relatively low in cost because of their targeted nature and finite universes. Nor, to be clear, is it that their focus on smaller num- bers of specialized agencies necessarily makes them less important. They provide value in filling in some major gaps left in the broader CSLLEA, no- tably the array of federal law enforcement agencies; collections such as the campus law enforcement surveys are important steps in more complete un- derstanding of the prevalence and powers of security services maintained by nongovernmental institutions. Instead, the primary weakness of the special-agency collections is that the appearance of myriad, not-obviously- connected data series contributes to a perception that BJS is distracted and trying to do too much at the same time. Moreover, the special-agency in- ventories appear to serve two basic objectives—a genuine collection of infor- mation on policies, procedures, and resources of highly specialized agencies and a “frame-building” function to update and maintain an inventory of law enforcement–related offices—neither of which is fully articulated. Un- like the corrections arena, these law enforcement “censuses” develop survey frames as the basis for administering more detailed information on a rep- resentative sample yet do not obviously result in actual surveys; content is geared toward high-level characteristics and multiple-choice categories are geared more toward quick questionnaire completion times rather than fur- thering knowledge on law enforcement. And, from a frame-building per- spective, the collections suffer from the appearance of being ad hoc mea- sures, without (generally) a clear idea of if or when the information will be used in a later administration of the census or will feed into larger efforts such as the general LEMAS survey or the CSLLEA.

OVERVIEW OF BUREAU OF JUSTICE STATISTICS DATA SERIES 145 Finding 3.2: The multitude of scattershot “census” studies of specific law enforcement agency types (e.g., campus law enforce- ment, medical examiners, training academies) detracts from the appearance of a coherent measurement program in the area of law enforcement. Instead, the impression left is that these “cen- suses” are sporadic inventories or catalogs of particular agency types with no obvious internal consistency. We suggest that the main LEMAS survey be recast as a core-supplement design—identifying a core set of questions to provide critical information on a timely basis while offering the flexibility to add supplemental questions to query local departments about emerging issues. However, in this case, “supplement” should be interpreted to include both expansion of sample— for instance, to expand the collection of information from medical examiner offices on a systematic basis—as well as expansion of content through topic modules. The effect of this effort would be the more aggressive develop- ment of a LEMAS “brand” (or, better, a general law enforcement statistics “brand”) within BJS’s portfolio, creating and reinforcing the position that the collections are part of a cohesive whole. The expansion of sample to include specialized agencies of a particular type need not be annual—and, indeed, would likely not be annual; what we suggest is development of a calendar for these special collections (and, as appropriate, negotiation of ongoing sponsorship arrangements with other Justice Department and government agencies). By corollary, we suggest that if slots cannot be developed and found for one of these special-agency col- lections within a 5-year time horizon (and that there is little prospect for repeating the collection within 10 years), then its value is likely to be so limited that it should be discontinued. Recommendation 3.9: To maximize both utility and timeliness of information, the LEMAS survey should be conducted as core- supplement design in the context of a continuous data collec- tion. Recommendation 3.10: To improve the utility of censuses of law enforcement agencies, BJS should develop an integrated concep- tual plan for their periodicity, publish a 5-year schedule of their publication, and integrate their measurement into the LEMAS as supplements. The adoption of a core-supplement strategy for LEMAS is consonant with a recommendation by a predecessor National Research Council panel, the Committee to Review Research on Police Policy and Practices, which rec- ommended that BJS implement “an enhanced, yearly version” of the current LEMAS survey (National Research Council, 2004a:107). In particular, that committee noted that “the research utility of the survey would be enhanced

146 JUSTICE STATISTICS by ensuring that a panel of consistently surveyed agencies be maintained within the framework of the survey sample.” The committee further recom- mended attention to the quality of the census or directory survey that serves as the LEMAS sampling frame, and that BJS conduct follow-up studies of the validity of agency responses to LEMAS queries—all of which remain useful and sound suggestions that would helpfully develop a research and evaluation base for BJS programs (as we discuss further in Chapter 5). The second point we make in suggesting a reorganization of the law en- forcement surveys is that it is an important starting point in expanding the scope of BJS’s collections in the general area of law enforcement. A look at BJS’s portfolio (and Figure 2-2) leaves the unfortunate impression that the state of knowledge about “law enforcement” generally can be equated with the head- and resource-count totals in the LEMAS survey and agency censuses. Law enforcement statistics within BJS have been largely defined by the specific LEMAS data collection vehicle, and not a substantive definition of the activities and actors that constitute law enforcement. It would benefit BJS and the consumers of their data if law enforcement were defined sub- stantively and all available data collections were used to illuminate this area of the criminal justice system. Data of the sort produced by the current LEMAS and BJS’s other law enforcement collections are valuable to states and localities for comparative purposes, for planning and for justifying requests for grants and assistance, and for maintaining accreditation as standing agencies. Information on other jurisdictions that have taken a particular approach—the use of tasers or other nonlethal-force technologies, for instance—make it easier to justify adoption of those technologies in similar jurisdictions. (Of course, to be most help- ful in this regard, current data are more compelling than those that may be 3 to 4 years old.) Knowledge of other law enforcement agencies that have adopted particular new approaches also gives late adopters the chance to learn from the experiences of their earlier-adopting peers. Police agen- cies are also interesting from the organizational standpoint because they are not purely static; there are “births” and “deaths” among agencies that are important to understand, such as the merger of separate city and local de- partments in Charlotte and Mecklenburg County, North Carolina, and In- dianapolis and Marion County, Indiana. They are also useful in providing context on current events or policy matters; for instance, current studies of the resources for and demands on campus law enforcement are important contexts in developing policy responses to college campus shootings. Like- wise, documentation on the current status and operational backlogs faced by forensic crime laboratories is important for state and local policy makers as crime scene evidence weighs larger in the public imagination and in court proceedings.

OVERVIEW OF BUREAU OF JUSTICE STATISTICS DATA SERIES 147 That said, use of the existing LEMAS-type data for advancing knowledge about law enforcement and the administration of justice has generally been limited. One important exception is Wilson (2005), who used LEMAS data from 1997 and 1999, in conjunction with other survey data of police orga- nizations, to analyze the adoption and implementation of community polic- ing (COP) techniques. His models suggest that implementation was most strongly related to whether the police department is located in the western United States and the age of the police department as an organization; re- ceipt of federal funding to support COP implementation was found to be only a weak predictor of effective adoption. Useful though LEMAS data are for benchmarking and cross-agency com- parison, BJS’s challenge in furthering the law enforcement part of its portfo- lio is generating more information on law enforcement that is more relevant to its practitioners than a narrow focus on management permits. A charac- teristic of BJS’s reports on its LEMAS-type data that is particularly telling is that they consistently stop short of drawing linkages to crime data; agencies are also asked relatively few questions about counts or characteristics of in- cidents, with questions oriented more to the presence or absence of policies. To be sure, this sidesteps the major problems of assuming an evaluative (or even regulatory) role of individual departments, but it also causes BJS re- ports to be silent on the most basic notions of effectiveness of police policies or personnel decisions. The sharp management focus—absent a connec- tion to crime data—thus generates numerous facts but not always insights. For example, the report of the most recent campus law enforcement cen- sus (Reaves, 2008) reveals the universities with the largest number of sworn officers and the degree of implementation of blue-light emergency phones and in-field computers, but it makes no attempt to assess the relationship between agency staffing levels and either the number of reported crimes or service calls. Though it does usefully devote a page to the mechanisms by which campus crimes are reported to the U.S. Department of Education un- der the 1990 Clery Act,16 it raises but does not quantify some interesting features such as the nature of relationships and interactions between campus 16 Provisions requiring that postsecondary education institutions compile and disclose regu- lar statistics on campus crime and security were first written into law in 1990’s Crime Awareness and Campus Security Act (P.L. 101-542). Institutions are formally required to produce annual counts of crimes reported to campus security authorities or to local police agencies, whether the incidents occurred on campus, on noncampus buildings or property, or on relevant pub- lic property. These disclosures, and summary statements of security programs, are required as a condition for participation in federal financial aid programs. In 1998, the crime reporting provisions were renamed the Jeanne Clery Disclosure of Campus Security Policy and Campus Crime Statistics Act in memory of a Lehigh University freshman who was murdered in her campus residence hall room in 1986. The Clery Act is codified at 20 USC § 1092(f) and the U.S. Department of Education’s rules for compliance with the act are promulgated at 34 CFR § 668.46. See U.S. Department of Education, Office of Postsecondary Education (2005) for additional detail.

148 JUSTICE STATISTICS law enforcement agencies and the state or local police forces within their area. One example of a law enforcement data collection that could address a wider range of issues about the effectiveness of police work and the nature of police interactions with the public is that suggested by our predecessor Committee to Review Research on Police Policy and Practices (National Re- search Council, 2004a:163, 164). That panel recommended that BJS be given support to “develop and pilot test in a variety of police departments a system to document information applications of police authority.” Such a system would provide data on police activities that stop short of invok- ing the criminal process (and the later stages of the funnel model), such as “simply making their presence or interest known to potential troublemak- ers, stopping and questioning them, persuading, advising, commanding, or threatening them, or referring problems to other agencies.” The commit- tee described the difficulties involved in creating such a system as “truly daunting,” but nonetheless noted its potential value in developing a com- plete picture of police activities. A related direction for expanding coverage of policing activity more generally—and a possibility for a LEMAS supple- ment as an initial step—is to collect data on the use and extent of private security agencies and processes. Suggesting broader and more detailed data collection from law enforce- ment agencies is easy, but implementing such a suggestion is far from easy. A Justice Research and Statistics Association (2003) summary of a series of focus groups organized by the Illinois and Pennsylvania state Statistical Anal- ysis Centers identified four principal and perennial obstacles to “buy-in” by law enforcement agencies to wider data collection efforts: • Inadequate resources for departments to assemble responses and com- ply with multiple data collections; • Increased demands on time; • Fear of negative publicity, particularly if new data are not strictly com- parable to old data; and • Continual changes in direction in collection and use of data, and pro- liferation of data collection requests to address the “next high visibility problem.” Cognizant of these constraints, we suggest the creation of a major, new law enforcement–related data set—but one that draws from existing resources— in Section 4–C.4. Short of dramatically expanded collection directly from law enforcement and increasing the number of survey questionnaires that departments are expected to complete, there is much that BJS can do to expand its data on policing issues. Part of BJS’s work in the law enforcement area should

OVERVIEW OF BUREAU OF JUSTICE STATISTICS DATA SERIES 149 be making more effective analytical use of existing data systems—the mea- sures collected by the FBI’s UCR program and BJS’s own NCVS—to inform law enforcement. For example, the Law Enforcement Officers Killed and Assaulted data collected by the FBI could be used in concert with LEMAS data to say more about deaths of and assaults on law enforcement officers. (The same might be said about the Deaths in Custody program described in Section 3–B, but that data collection is very recently begun and it may need further development.) BJS should also seek ways to exploit relevant non–Justice Department data in its analyses, including the aforementioned campus crime data compiled by the U.S. Department of Education. However, returning to a main point of this section, a critical area for BJS to improve its information on law enforcement is making better use of NCVS to study related issues through structured and recurring supplements. Using the NCVS to study law enforcement is decidedly not a novel concept; indeed, the original statutory authority to start the National Crime Survey in the first place was a clause approving the collection of data on “the con- dition and progress of law enforcement” in the United States. Generally, Lynch (2002:62–63) argues that the NCVS “can tell us a great deal about the performance of the police industry and citizens’ perceptions of it.” In particular: The [NCVS incident form] includes questions on whether the police found out about the victimization incident, and, if so, whether they re- sponded when called, their response time, and the various activities they engaged in at the scene. These activities include taking a report, gath- ering evidence, interviewing witnesses, and notifying the victim about further processing of the case. . . . [With this detail, NCVS] data can be used to identify the subpopulations and situations involved when police mobilization and service (or the perceptions of police service) differ. Among the most important distinctions to be made is the type of vic- timization that prompted the call for service. The NCVS has the added advantage of allowing analysts to “define their own classes of crime”—such as “domestic violence, crime at school or at work, crime in the neighborhood, crime in public places, interracial crime, and intraracial crime”—that may be “more meaningful” than the common UCR classifications (Lynch, 2002:63). Hence, Some of the specific issues that [NCVS-based analyses of police issues] could address include: • Changes in the percent of criminal victimization reported to the police by racial and ethnic group and type of crime, with impor- tant attributes of the crime (e.g., degree of injury or amount of loss) and the victim (e.g., age) held constant. • Changes in the percent of reported criminal victimization events to which the police responded by sending an officer who made contact with the victim.
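To indicate how direct such tabulations would be once suitable incident-level extracts are in hand, the following sketch computes weighted police-reporting rates by crime type and victim subgroup. The variable names are hypothetical stand-ins, not the actual NCVS field names.

    # Illustrative only; variable names are hypothetical stand-ins for NCVS fields.
    import pandas as pd

    incidents = pd.read_csv("ncvs_incident_extract.csv")
    # expected columns: crime_type, victim_race, reported_to_police (0/1), incident_weight

    def weighted_rate(group):
        # Weighted share of incidents in the group that were reported to police.
        w = group["incident_weight"]
        return (group["reported_to_police"] * w).sum() / w.sum()

    reporting_rates = (incidents
                       .groupby(["crime_type", "victim_race"])
                       .apply(weighted_rate)
                       .rename("share_reported"))
    print(reporting_rates)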

150 JUSTICE STATISTICS • Changes in the activity of the police at the scene and afterward by type of crime and demographic group, holding constant other relevant attributes of the event. • Changes in the recovery of stolen property by demographic group, type of theft, and other attributes of the victim and the offense. • Changes in the outcomes of victimization events, e.g., injury or loss, by whether the police were mobilized and by their actions after mobilization. • Generic area estimates of mobilization and service that would identify areas according to their size and position in the metropoli- tan area, e.g., towns of 10,000 to 25,000 people outside of a large city, or central cities of between 50,000 and 75,000 people. BJS’s fielding of the PPCS is arguably its richest and most probing col- lection related to law enforcement behaviors and actions (we discuss the supplement further in Section 5–A.2). The use of the NCVS as a “citizen’s survey” and an indirect measure of the effects of justice system actors on the public at large should be further developed. Recommendation 3.11: The NCVS (and its supplements) should be more effectively used as a tool for studying law enforcement, both in terms of the types of crime that are reported (and not reported) to police and the action that results from the reporting of a crime (e.g., the Police-Public Contact Survey). Enhance Technical Framework Within Series—Sampling and Adjudication A basic trait, and problem, with BJS’s data collections in adjudication can be phrased very simply: Finding 3.3: BJS’s current approach to data collection in adjudi- cation lacks an effective basis in sampling. Blunt though this statement is, we do not intend it to be interpreted as being unduly harsh. As we noted above, the reason for this lack of an effective basis in sampling is fairly clear: The collections developed from having to work with select jurisdictions for which records could be made available for analysis. BJS and its data collection providers must continue to work within the confines of available records and the confines of court information processing systems that may vary greatly within and between states. To their credit, BJS and its providers are candid in their methodological notes and reports about the design of the court record collections. In partic- ular, NCSC attached a useful graphical device—miniature state-level maps with colored shading indicating those states providing the relevant records— to every trend line in its extensive reports from the ongoing Court Statistics Project (LaFountain et al., 2007). Convenient though this is (and certainly

superior to extended footnotes), flipping through the multiyear trends in caseloads for particular case types necessarily means coming across dispute types with near-complete coverage (aggregate counts of incoming cases by year or counts of judges, covering over 40 states) and minimal coverage (local ordinance violations, reported by about 5 states).

Where BJS has been able to design collections in adjudication, the combination of available resources has yielded designs whose representativeness is questionable at best. The "sampling" scheme of the SCPS series is such that it is intended to be representative of "felony court filings during the month of May in even numbered years from 1990–2004 in the 75 most populous counties in the United States" (Bureau of Justice Statistics, 2007g:4). That is to say, SCPS data are not nationally representative, and the program's design—undoubtedly driven by the ability and willingness of jurisdictions to participate—makes it difficult to generalize to all felony filings. Though the most populous counties account for the bulk of felony filings, it is unclear how representative they are of the complete national experience. Moreover, the arbitrary selection of May as the target month raises the possibility of seasonality or other temporal effects in case filing that might make May unrepresentative of the rest of the year.

Variety in the nature and development of state court record systems is a long-standing concern and an obstacle that cannot be wished away overnight. That said, processing systems continue to develop, and BJS and its partners need to be aware of the state of development in those systems so that samples of courts (and their records) can be chosen more rigorously, avoiding potential biases that may be induced by overemphasizing states where access to records is convenient as well as those that may be due to temporal effects in filing.

Recommendation 3.12: As court records become more accessible through computerized case management systems, BJS should implement more rigorous methods of probability sampling in its adjudication series.

In our assessment, BJS currently has good access to state court systems through its collaboration with NCSC and other data providers, as well as through its contacts with the BJS-funded state Statistical Analysis Centers. Improvements in the adjudication series depend on cultivating and extending those partnerships.

Recommendation 3.13: To inform future revisions to its adjudication portfolio and to more efficiently acquire and work with court data in the future (including longitudinal analysis), BJS should develop a research program to build representative samples of courts and to assess strategies for collection of case records.
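As one illustration of what Recommendation 3.12 might look like in practice, the sketch below draws a probability-proportional-to-size (PPS) sample of counties from a frame of felony filings, so that selection probabilities (and hence analysis weights) are known. It is offered only as a sketch under assumed inputs, not as a prescribed design; the frame file and its columns are hypothetical.

    # Sketch only; the frame file and its columns are assumed, not actual BJS inputs.
    import numpy as np
    import pandas as pd

    frame = pd.read_csv("county_filing_frame.csv")   # columns: county, felony_filings
    n_sample = 40
    rng = np.random.default_rng(20090501)

    # Systematic PPS selection: random start, fixed skip through cumulative sizes.
    frame = frame.sort_values("felony_filings", ascending=False).reset_index(drop=True)
    cum = frame["felony_filings"].cumsum().to_numpy()
    total = cum[-1]
    interval = total / n_sample
    hits = rng.uniform(0, interval) + interval * np.arange(n_sample)
    # Very large counties may be hit more than once; np.unique keeps each once.
    selected = np.unique(np.searchsorted(cum, hits))

    # Inverse selection probabilities become base weights (certainty counties get 1).
    prob = np.minimum(1.0, n_sample * frame["felony_filings"].to_numpy() / total)
    sample = frame.iloc[selected].assign(base_weight=1.0 / prob[selected])
    print(sample)

Spreading data collection across months of the year, or sampling reference months, would similarly address the seasonality concern raised above.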

152 JUSTICE STATISTICS The steps that BJS has taken to redesign the SCPS program during 2009—with the explicit goal of improving the sampling structure from the current month-of-May-for-large-counties plan, and ideally supporting sub- national estimation—are very heartening in this regard. In recent years, it was unclear whether BJS or its data collection contractor, PJI, had made a concerted effort to assess the ability of local jurisdictions to participate (and sustain participation) in SCPS data collection; such an assessment should surely accompany a reconceptualization of SCPS. In turn, this work could be instrumental to the development of a wider jurisdiction-based data col- lection system linking cases or persons across decision points in time. Developing a more rigorous sampling for primary data collection in ad- judications is a top priority. In line with the principles and practices ex- pected of statistical agencies (see, in particular, Section 5–B.7), a secondary but essential step for BJS is to evaluate the quality of the information it obtains through SCPS and related collections. Particularly to the extent that case characteristics are coded from documentary files by court administrative staff, completing forms that describe the transactions regarding an individual defendant, it would be useful to implement periodic evaluation and verifi- cation procedures. SCPS documentation does not indicate the use of audits or spot checks for completeness and accuracy of provided records, and such information is important to building the credibility of data series. 3–F.3 Improving Statistical Coverage of the Justice System We have already noted in Section 2–C certain missing topics in BJS’s statistical coverage of events in the justice system, and our comments on building a framework for BJS’s law enforcement collections in Section 3–F.2 also point out areas in which the topic coverage of the agency’s existing surveys can be improved by adding supplements. We close our assessment of BJS’s overall portfolio by noting areas within the scope of the agency’s existing collections where expanded coverage and greater depth would be beneficial. However, the same point that we made in introducing Section 2–C ap- plies here: A discussion such as this risks descending into a “wish list” for a statistical agency facing mounting costs and a flat budget, for there are al- ways areas of interest under the general heading of crime and justice where more data are highly desirable. The intention here is to encourage strate- gic thinking on some particularly high-priority areas while emphasizing the ways in which existing data collections can be used and adapted. Such ef- forts, we believe, would expand the constituencies for BJS data, inform pol- icy, and draw BJS into valuable relationships with other public agencies. The two areas we discuss in more detail in this section are acute needs where objective information from BJS would have high value. One stems

from the basic conceptual flaw in the funnel model of justice processing, which tacitly treats correctional supervision as an ending state; however, the challenges of prisoners exiting corrections and reentering the general population will be critically important to policy makers in the coming years. The other is one of the most substantial "leaks" in the funnel: cases that drop out of the system because prosecutors decline to pursue legal proceedings or when settlements are reached out of court.

Reentry and Recidivism

BJS correctional data have made vital contributions to research on trends and disparities in criminal punishment in the United States and on the consequences of increasing incarceration rates. One research literature has used the NCRP/NPS series (see Section 3–B.1) to study trends in imprisonment and corresponding effects on the economy. National time series of imprisonment rates have been associated with trends in crime and the economy (this research is discussed by Chiricos and Delone, 1992, and Harcourt, 2006); an alternative design has examined panels of states (e.g., Bridges and Crutchfield, 1988; Jacobs and Carmichael, 2001). Similar research has been directed at the social dimensions and consequences of incarceration rates, studying prison admission rates calculated from the NCRP for panels of detailed demographic groups (Western et al., 2006). National and state imprisonment series have also been used extensively to study the effects of incarceration on crime rates; recent contributions include Levitt (1996), Useem et al. (2001), and Johnson and Raphael (2006a).

Another thread of research has shifted from studying the scale and effect of aggregate levels of imprisonment to examining variation in imprisonment in the population. BJS has a long-standing interest in racial disparities in incarceration, publishing long historical series on state-level prison admission rates for blacks and whites (Langan, 1988), and regularly publishing imprisonment rates for blacks, whites, and Hispanics. The Surveys of Inmates of State and Federal Correctional Facilities have been used to construct detailed incarceration rates by age, race, sex, and levels of schooling (Western, 2006). A widely cited BJS study has extended the usual focus on incarceration rates, using the inmate surveys to estimate lifetime risks of incarceration (Bonczar and Beck, 1997; Bonczar, 2003). Life-table estimates of these lifetime risks showed that African American men, at current levels of incarceration, face a 28 percent chance of going to state or federal prison. The BJS report on lifetime risks spurred other research using the Surveys of Inmates that estimated more detailed figures for specific birth cohorts and at different levels of schooling (Pettit and Western, 2004). The analysis has been extended further to study children's risk of parental incarceration (Wildeman, 2009).
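The life-table logic behind such lifetime-risk estimates can be stated compactly; the notation below is a generic sketch of the method rather than a reproduction of the exact formulas in the BJS reports. Let h(a) denote the age-specific rate of first admission to prison among persons never previously incarcerated and m(a) the age-specific mortality rate; then the probability of reaching age a alive and never incarcerated, and the implied cumulative lifetime risk L, are

\[
S(a) \;=\; \prod_{x<a} \bigl[1 - h(x)\bigr]\bigl[1 - m(x)\bigr],
\qquad
L \;=\; \sum_{a} h(a)\, S(a).
\]

Roughly speaking, the 28 percent figure cited above is the value of L that results when current age-specific first-admission and mortality rates for African American men are held fixed over the lifetime of a synthetic cohort.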

Against this backdrop, the two recidivism studies conducted by BJS in 1983 and 1994 have been influential in structuring research on the relationship between crime and incarceration—and hinting at a major looming challenge for policy makers. The studies, which followed two cohorts of prison releasees in selected states for 3 years and recorded their subsequent patterns of arrest and reincarceration, demonstrated that around 60 percent of those coming out of state prison were rearrested within 3 years of release. These special studies of recidivism yielded several widely cited BJS reports. The recidivism microdata were also made publicly available and have been widely studied by researchers and policy analysts (e.g., Solomon et al., 2005; Travis and Visher, 2005).

About 700,000 people are now released annually from state and federal prison (Sabol et al., 2007:1). Another 5 million are currently under some kind of community supervision, either on parole (800,000) or on probation (4.2 million) (Glaze and Bonczar, 2007:2). The most recent estimates of recidivism, from 15 states in 1994, suggest that about two-thirds of prison releasees will be rearrested within 3 years and a quarter will be reincarcerated with a new sentence (Langan and Levin, 2002). The significant growth of imprisonment rates over the past several decades has highlighted the policy and social science challenges presented by historically large cohorts of released prisoners and has increased the urgency of developing techniques for community reintegration in order to deter recidivism.

Between the 1994 recidivism study and BJS's recent reorganization to elevate "recidivism, reentry, and special projects" as a program priority, BJS's direct role in reentry issues has been outpaced by research efforts mounted by other units in the Justice Department and by external researchers. For example, NIJ funded a major evaluation of the Serious and Violent Offender Reentry Initiative (SVORI), a collaborative grant program funded by five cabinet departments that instituted reentry programs in 69 sites around the country.17 SVORI programs included in-prison training prior to release as well as postrelease services, including substance abuse and mental health treatment, housing assistance, and faith-based programs. The multisite evaluation is intended to identify effective approaches; findings from the evaluation in progress are described by Lattimore et al. (2005) and Lattimore et al. (2004). Other researchers have begun to intensively study the consequences of incarceration for the employment, family, and health outcomes of men and women released from prison; see, for example, Pager (2003), Lopoo and Western (2005), Kling (2006), and Johnson and Raphael (2006b).

17 Specifically, the Departments of Education, Health and Human Services, Housing and Urban Development, Justice, and Labor contributed funding to SVORI programs. The NIJ-funded Multi-site Evaluation of SVORI Programs was administered by RTI International and the Urban Institute.

Though its direct work in the field has been limited to date, BJS can and should be a major source of quantitative information on prisoner reentry, recidivism, and community-based supervision issues. Nearly 80 percent of prisoners are released to community supervision, and so data collections on prisoner reentry would significantly extend the coverage of parolees. More than this, regular data collections on reentry and recidivism would advance the core charge of compiling statistics on crime and analyzing its correlates. In the current period of historically unprecedented incarceration rates, reentry and recidivism data would also offer valuable information about the social impacts of incarceration, an area now regarded as being of pressing policy significance. Data collection efforts should evolve in response to social trends and their policy context; developing a program on reentering prisoners reflects the new reality of large cohorts of releases, and the significance of these cohorts for crime and social cohesion in the general population.

Recommendation 3.14: BJS should mount a feasibility study of the flow of individuals between correctional supervision and community settings. Repeated interviews of samples of about-to-be-released prisoners that track their successes and failures in reintegrating with the community would enhance understanding of this critical policy issue.

In early 2008, the Second Chance Act became law, instituting and authorizing a wide variety of reentry programs. Included in its provisions is a section defining a role for BJS data collections; see Box 3-5. The act permits BJS to routinize its earlier recidivism studies, calling for BJS to conduct them on a triennial basis. In the legislative context, of course, authorization is different from appropriation, and the act's language that BJS "may conduct research" rather than "shall" stops short of a direct mandate. Still, the act is an important signal that correctional programming will likely gain renewed support over the coming decade, and that a BJS role is both expected and required. The act also authorizes NIJ to carry out research studies along these lines; exactly how large efforts such as a major recidivism study would be divided between and administered by a statistical agency (BJS) and a research agency (NIJ) would need to be carefully determined.

Although BJS's coverage of custodial correctional populations is strong and it has a fairly complete picture of annual flows of persons moving in and out of prison, its current coverage of the population released from incarceration is seriously incomplete. Hence, both the higher priority given to these issues and the congressional authorization are welcome developments. As BJS approaches the problems of recidivism and prisoner reentry—significant frontier issues that should weigh heavily in the agency's strategic planning—we suggest some possible topics that could form part of this planning and shape expanded data collections:

Box 3-5  The Second Chance Act of 2007

Signed into law on April 9, 2008, and codified as 42 USC § 17551, the Second Chance Act of 2007 directs that the Bureau of Justice Statistics (BJS):

may conduct research on offender reentry, including—
(1) an analysis of special populations (including prisoners with mental illness or substance abuse disorders, female offenders, juvenile offenders, offenders with limited English proficiency, and the elderly) that present unique reentry challenges;
(2) studies to determine which offenders are returning to prison, jail, or a juvenile facility and which of those returning offenders represent the greatest risk to victims and community safety;
(3) annual reports on the demographic characteristics of the population reentering society from prisons, jails, and juvenile facilities;
(4) a national recidivism study every 3 years;
(5) a study of parole, probation, or post-incarceration supervision populations and revocations; and
(6) a study concerning the most appropriate measure to be used when reporting recidivism rates (whether rearrest, reincarceration, or any other valid, evidence-based measure).

In debating the act on April 12, 2007, bill sponsor Edward Kennedy (D-Mass.) cited existing BJS corrections data series in arguing for the bill's merits (Congressional Record, pp. 4430–4431):

• "Large prison populations and high recidivism rates place heavy burdens on prisons, communities, and taxpayers. Of the 2.2 million persons housed in prisons today—an average annual increase of 3 percent in the past decade—97 percent will be released into the community. Overcrowding continues to plague the system. State prisons are operating at full capacity and sometimes as much as 14 percent above capacity, and Federal prisons are 34 percent above capacity. In 2005, prison populations in 14 States rose at least 5 percent. Recidivism and inadequate reentry programs add to the problem. Over 600,000 prisoners are released each year, but two-thirds of them are arrested again within 3 years."

• "According to a recent Bureau of Justice Statistics report, of the approximately 50 percent of prisoners who met the criteria for drug dependence or abuse, less than half participated in drug treatment programs since their admission to prison."

• "The Bureau of Justice Statistics reports that only 46 percent of incarcerated individuals have a high school diploma or its equivalent. The limited availability of education and vocational training programs exacerbates the problem. Only 5 percent of jail jurisdictions offer vocational training, and 33 percent of jurisdictions offer no educational or vocational training at all."

• Reinstitute a sample survey of probationers and parolees: As described in Section 3–B.6, BJS's current coverage of persons under community supervision is limited to administrative data collected through the Annual Probation and Parole Surveys administered to supervising agencies. These surveys provide counts of probationers and parolees disaggregated by race, sex, and offense category. However, these administrative data are extremely limited for studying reentry and recidivism; they provide little information about the conditions of supervision or the circumstances of success or failure in individual cases, and they yield no information about past criminal history. Direct interviewing of a sample of probationers has been conducted only once by BJS, in 1995, yet it is this kind of rich information on the experiences of those who have already reentered the community that would be most valuable in shaping emerging reentry strategies. Occasional surveys of the parole and probation populations would improve understanding of the process of reentry, recidivism, and successful reintegration into the community.

A reinstituted representative sample of probationers and parolees should elicit data on risk factors (schooling, social background, health status, and criminal history, for example), the conditions of community supervision and its intensity, and participation in assistance programs. Information on the spatial distribution of parole and probation populations (e.g., distance from previous "home" communities prior to incarceration and limitations on geographic mobility) would also likely advance understanding of recidivism and reentry. The current administrative data also do not speak to the circumstances of arrest, revocation, conviction, or incarceration of parolees and probationers. Regular statistics are collected on arrest and recommitment to prison, but there is little detail on technical violations or the administrative procedure of parole and probation revocation; a survey program, complemented by revision of the content of the administrative survey questionnaire, could help fill these gaps.

Survey costs for this population are likely to be substantial, and meeting these costs will likely require long-range planning and partnering with other agencies. Still, because the population of released prisoners is now so large, agencies within the Departments of Health and Human Services, Education, Housing and Urban Development, Labor, or Veterans Affairs (as well as other statistical agencies such as the Census Bureau and the Bureau of Labor Statistics) may have shared interests in the probation and parole populations and be a source of input and funding.

• Routinize the national recidivism studies: As acknowledged in the debate on the Second Chance Act and exemplified by a direct request in the act's language, the BJS recidivism studies of 1983 and 1994 have made major contributions to understanding of the postprison experiences of state prisoners and should be conducted on a more regular basis. Recidivism studies also significantly expand the empirical scope of BJS data collections by including those who have completed their sentences and are no longer under any kind of supervision at the time of their return to incarceration.

The previous recidivism studies were based on a large and complex record linkage effort that joined correctional records to arrest and court data. Although exemplary, the paradigm could be pushed further with either survey interviews or linked records covering several terms of prison incarceration. (This was typically infeasible in the recidivism studies because of the relatively short 3-year follow-up period.) Survey data from released prisoners would be particularly informative about the social context of recidivism, describing in greater detail the economic and social situation of those coming out of prison.

• Measuring jail flows: A point inherent in our suggestion to improve measurement of transitions in BJS's existing corrections data (Section 3–F.1) is worth reemphasizing here. Not as much is known about the persons passing through the nation's jail system as BJS's data reveal about the federal and state prisons. While about 700,000 people annually enter and exit prison, some 10 million people are estimated to pass through local jails in a given year. Although jail incarceration is likely common for released prisoners, and releasees might cycle in and out of jail before returning to prison, there are no national statistics to document the pattern. Likewise, relatively little is known about the frequency with which the same individuals go in and out of the jail system—for instance, whether frequent contacts with law enforcement and numerous short spells spent in jails or police lock-ups constitute a de facto form of community supervision.

• Alternative approaches to studying the unsupervised population: Just as prison sentences expire, so too do sentences of probation or other community supervision. As rules for probation supervision change (in part due to state efforts to grapple with the growing costs of corrections), the size of the released and unsupervised population is growing. Some may argue that those who have "maxed out" of prison or corrections supervision fall outside the statistical jurisdiction of BJS, but the point remains that the unsupervised population may be at high risk of rearrest, and their criminal histories place them at risk of an array of diminished life chances. If BJS data collections are going to be significantly informative about recidivism and reentry, the released but unsupervised population should be contemplated as a target for new data collection. Unlike parolees, the unsupervised present acute difficulties for data collection. Here, linking criminal justice to noncriminal justice (say, social welfare) administrative records may provide one promising path for data collection.

• Studying the demographic significance of the penal system: While the topics for BJS strategic planning might speak to the challenges of understanding recidivism and reentry, they also speak to the broader demographic significance of the penal system. As an institutionalized influence on the life course and spatial mobility, the penal system now commands a large demographic influence that is mostly hidden from the nation's statistical system. Though prisoners and other institutionalized populations were counted in the 2000 decennial census and are included in the group quarters component of the Census Bureau's American Community Survey, little is known about the geographic areas from which they originate or about their lives immediately before institutionalization. By trying to capture the flow of people into and out of prison, and their return, the reentry and recidivism perspective highlights the significant influence of the prison on basic population processes.

The ideas discussed here suggest a variety of new BJS activities focused on formerly incarcerated men and women. In the current climate of tight budget constraints, and short of the authorizations in the Second Chance Act actually yielding significant appropriations of new funds, the birth of extensive new data collections seems unlikely. Still, BJS should think opportunistically about (a) partnering with other agencies, (b) linking records across databases, and (c) augmenting existing administrative surveys of probation and parole agencies.

First, BJS should study the possibility of partnering with other statistical agencies. Such partnerships would provide two kinds of benefits. Items about involvement in the criminal justice system could be added to household and other surveys, expanding understanding of the reach of the justice system in the noninstitutionalized population. For example, questions about prior arrests or incarceration could be asked of the large samples in the American Community Survey, the Current Population Survey, or the NCVS. Because of the missions (and already large scope) of these surveys, such additional queries would not be highly detailed; however, they would provide general indicators at the national level and present the opportunity for some disaggregation by geography and demographic subgroups. In return, BJS is uniquely placed to provide detailed information about the institutionalized population. Questions about education, health status, aging, or demography, for example, could all be of interest to other agencies that have largely focused on the noninstitutionalized population. We return to this point in Section 5–B.11.

Second, record linkage holds great promise for expanding understanding of how people move in and out of institutional settings, pass through the formal labor market, and use social services. Those released from custodial supervision may be easier to track through their contacts with criminal justice and other public agencies than through surveys. Thus, linking administrative records may yield special benefits in the study of recidivism and reentry.

Such a system could connect, for example, records from different files of the NCRP, providing longitudinal records of movements from prison to parole and back to prison. In contrast to the special studies of recidivism among release cohorts in 1983 and 1994, linked NCRP records would provide an automatic and ongoing measure of reimprisonment and of successful parole completion. In its most ambitious implementation, a system of linked records could join criminal justice data to social service data such as unemployment insurance records or welfare enrollment. A broad linked system of administrative records would help place released prisoners in a much broader social context, providing measures of their legitimate earnings, poverty status, and use of social services.

There are two main obstacles to exploiting the potential of record linkage for gathering data on recidivism and reentry. Severe practical difficulties are associated with matching records from different databases for the same individual. Because identifiers differ across databases, record linkage is expensive and prone to error. A unified system of identifiers for a range of databases—say, all BJS correctional microdata collections—would unlock the potential of record linkage. The practical challenges to a unified system are substantial, but we urge BJS to explore concrete steps in this direction. The other obstacle to large-scale record linkage, particularly linking criminal justice to social service records, relates to privacy protections. Because of the sensitivity of the linked data, BJS and cooperating agencies would need to take special steps to protect the confidentiality of records. Some form of institutional review, coupled with monitoring of data security and the use of research data centers, may provide a process for ensuring the privacy of individuals recorded in a linked system of administrative records.
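To make the mechanics concrete, the sketch below shows the simplest deterministic version of such a linkage: joining a file of prison releases to a file of subsequent admissions on a shared within-state identifier and computing a reimprisonment rate for a fixed follow-up window. It is illustrative only; the file names, column names, and the assumption of a clean shared identifier are hypothetical rather than actual NCRP fields, and in practice identifiers differ across databases, so probabilistic matching and extensive cleaning would be required.

import pandas as pd

# Hypothetical inputs: one release record per person, plus a file of later
# prison admissions. Column names are illustrative, not real NCRP variables.
releases = pd.read_csv("releases_1994.csv", parse_dates=["release_date"])
admissions = pd.read_csv("admissions.csv", parse_dates=["admission_date"])

# Deterministic linkage on an assumed shared within-state identifier.
releases = releases.drop_duplicates(subset=["state", "inmate_id"])
linked = releases.merge(admissions, on=["state", "inmate_id"], how="left")

# Keep only admissions that occur after the release being followed.
linked = linked[linked["admission_date"] > linked["release_date"]]

# Flag a return to prison within a 3-year follow-up window.
window = pd.Timedelta(days=3 * 365)
linked["readmitted_3yr"] = (linked["admission_date"] - linked["release_date"]) <= window

# Collapse to one row per released person: any readmission within the window?
per_person = linked.groupby(["state", "inmate_id"])["readmitted_3yr"].max()

# Persons with no later admission never enter per_person and so count as not
# readmitted; the denominator is the full set of released persons.
rate = per_person.sum() / len(releases)
print(f"3-year reimprisonment rate: {rate:.1%}")

Even this toy version makes plain where the difficulty lies: every step depends on an identifier that is consistent across files, which is precisely what a unified system of identifiers for BJS correctional microdata would provide.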

Finally, administrative surveys of correctional, probation, and parole agencies could be expanded to incorporate explicitly the policy interest in recidivism and reentry. Administrative surveys have been relatively inexpensive and accurate sources of counts of different correctional populations. The surveys could speak more directly to interests in recidivism and reentry by obtaining more detailed information about program participation and conditions of supervision. Enrollment counts of correctional populations in specific programs, together with program spending and staffing information, would help measure the resources applied to reintegration and criminal desistance. Some of this information is already reflected in surveys of correctional institutions, though spending on different categories of correctional programs is largely unmeasured. To advance understanding of the criminal justice system as an institutionalized influence on population processes (births, deaths, and migration), the administrative surveys could also usefully collect more detailed demographic data. In addition to information about race and sex, which is currently reported for the prison, probation, and parole populations, a more detailed survey could obtain population counts by race and sex for given age and education groups. Spatial data describing, say, counties of origin and destination for entering and released prisoners would also help map the spatial distribution of the criminal justice system's reach into the community; the accuracy of those data would, of course, have to be evaluated.

Understanding Prosecution and Declination Decisions

The adjudication phases of the crime funnel model are areas of major "leaks" that are not well understood or measured. The percentage of cases that actually go to trial (entering the formal court system, where they might be more readily followed and tracked) can be small, and that percentage can vary strongly by jurisdiction and by type of court. Mechanisms for resolution and alteration through "bargaining" at various stages are not well understood or studied: "charge bargaining" between attorneys prior to filing, "plea bargaining" as an alternative to trial, and "sentence bargaining" during or after trial.

The prosecution function or component in the funnel includes the charging decision (including plea bargaining), the filing decision, the pretrial custody decision, tests of evidentiary strength, and all pretrial motions (e.g., discovery). In addition to information on these decisions, a useful statistical system describing the prosecutorial function would contain data on the social organization of the prosecutorial and defense processes. This would include resources, such as the number of staff, but also the way in which those staff are assigned and organized. Is the chief prosecutor elected or appointed? Is the staff specialized by stage of litigation or by crime type? Is the indigent defense bar staffed by public defenders or by court-appointed attorneys? Since the nation has both state and federal justice systems, statistical systems describing the prosecutorial function would need to address both levels.

Currently, BJS describes the state-level prosecution function with two different but related data collections—SCPS and NPS—which we summarized in Section 3–D; federal prosecution is described by the Federal Justice Statistics Program. There was also a one-time survey of indigent defense in 1999. Some observations on the ways in which these collections cover (and do not cover) important parts of the prosecutorial function follow:

• The SCPS series does not cover a number of the decisions included in the prosecutorial function. The most important omission is the declination decision, wherein the prosecutor decides not to file charges on arrests brought by the police or from some other source. This decision is not reviewable by anyone, and it is the single greatest source of prosecutorial discretion. Commenting on 1996 data from the NJRP that preceded SCPS, Forst (2000:26) lamented that:

[The program] gives no information about cases rejected or dropped by prosecutors; more than a few people might like to know why over 80% of all arrests for motor vehicle theft fail to end in conviction, and why about 60% of all arrests for robbery and burglary fail as well.

On the one hand, since data collection in SCPS begins at filing, all of the decisions made prior to filing are lost. On the other hand, many important decisions made after filing are captured in this data collection. For those cases filed, it is possible to assess the amount of charge mobility that occurs from arrest to filing, through adjudication and sentencing. There is also extensive information on pretrial custody decisions and status and on the time that it takes to complete stages of processing, as well as some criminal history data.

• The key limitation of the NPS in understanding trends in prosecution is that it is to prosecution what the LEMAS survey is to law enforcement: a strictly partial look at basic administrative and management information. As an establishment survey, it has the same potential respondent selection problems (effects) as other such surveys; there is also a certain inherent amount of noncomparability across prosecutorial units, because one office can serve a single county and another an entire state. For what information it does provide on the dynamics of personnel and workload in prosecutors' offices, the NPS has suffered as a measurement device because of its unstable periodicity (shifting from a 2-year to a 5-year cycle). It has also been unstable in the degree to which it has been conducted and treated as a sample or a census, as we described in Section 3–D; in the "census" years, content is particularly pared back, excluding all but the basic administrative questions.

• The Federal Justice Statistics Program, operated by the Urban Institute with BJS sponsorship, links administrative records across decision points in the federal justice system. It does provide the "flow" data that the "funnel" promises, and it does cover more of the decisions made by prosecutors than the SCPS series (including the declination decision). However, it is strictly limited to the federal justice system.

There are senses in which prosecution and prosecutorial decisions should be amenable to data collection efforts, among them the fact that basic concepts and definitions are relatively invariant. In broad strokes, Forst (2000:22) observes that changes in prosecution "have mostly followed rather than led developments outside the prosecution domain. The basic nature and goals of prosecution, the role of the victim as witness in a matter between the state and defendant, the essential steps in processing cases through the courts and systems of public accountability have all remained fundamentally unchanged over the past 30 years." However, a major reason for this resistance to procedural change—the relative insularity of prosecutors, as opposed to the police and elected officials whose work has a larger profile—also serves to create a culture that works against openness in providing data.

By and large, "the prosecutor's work is invisible to the public at large" (Forst, 2000:23), and the prevailing inclination is to keep things that way. Another reason for prosecutorial insulation is adherence to a basic maxim of their adversarial culture: "Do not divulge the particulars of your case to anyone who is not in a position to help you win it." Accordingly, prosecutors "typically see little to gain and considerable risk in divulging any information that is not required by law" (Forst, 2000:25).

As a means to "improve the systems by which prosecutors are held accountable" and to make the operations of their offices more transparent, Forst (2000:42) argues for:

the annual publication of uniform office performance statistics and a formal periodic survey of all who depend on prosecutors: victims, witnesses, judges, police, defense bar, and the general public. Private sector organizations have long used surveys to obtain systematic feedback about the effectiveness of service delivery, including measures of consumer satisfaction about specific elements of service. Police departments and other public agencies are turning increasingly to such assessment systems, and so can prosecutors. An effort along these lines should be coordinated by the Bureau of Justice Statistics to minimize political adulteration of the system, perhaps in collaboration with national associations of district attorneys and state attorneys general. Such an idea seems no more farfetched than that of a uniform crime reporting system with the cooperation of virtually all 20,000 independent police departments in the United States, a program that has been operating for most of the twentieth century.

That vision remains far off. However, as noted above in Section 3–D, BJS's investment in redesigning the SCPS program raises interesting possibilities. In working with local jurisdictions to participate in SCPS-type collections, it will also be important to assess whether prosecutors' offices may be able to provide similar types of information. This is particularly the case if methods for working with electronic submissions from court and prosecutor databases continue to develop.

As a short-term measure—and consonant with our advice to expand the concept of "law enforcement" data beyond the strict management focus of LEMAS (Section 3–F.2)—BJS should consider low-cost means to gather at least some procedural information in its existing NPS. A question or set of questions asking for basic counts of resolutions reached by the prosecutor's office within some time window—ideally broken down to include cases handled through alternative dispute resolution techniques such as mediation—would provide a partial picture of prosecutorial activity, but a fuller one than currently exists.
