Ensuring the Quality, Credibility, and Relevance of U.S. Justice Statistics

–5–
Principles and Practices: BJS as a Principal U.S. Federal Statistical Agency

OUR CHARGE DIRECTS US to provide guidance to the Bureau of Justice Statistics (BJS) regarding its strategic priorities and goals. Before doing so, it is important to consider the functions—and expectations—of BJS from a higher, agency-level perspective. One important filter through which to view the priorities and operations of BJS is its role as one of the principal statistical agencies in the U.S. federal statistical system. Relative to other countries, the U.S. federal statistical system is highly decentralized. Whereas other countries vest primary authority for the collection and dissemination of statistical data in a single agency—the Australian Bureau of Statistics, Statistics Canada, and Statistics Netherlands, for example—authority for the production of official statistics in the United States is divided across numerous agencies.1 These agencies are by no means equal in terms of their staffing levels and budgetary resources.

1 The statistical system of the United Kingdom is also frequently cited as an example of centralization; it is currently in a state of change. Effective April 2008, a new Statistics and Registration Service Act formally abolished the legal role of the Office for National Statistics (ONS), previously the United Kingdom's dominant statistical agency. ONS functions continue, but the office is now a subsidiary of the Statistics Board, created as an independent corporate body to serve as the arbiter and producer of official statistics in the country. As discussed in our panel's interim report (National Research Council, 2008b), the British Crime Survey is an example of a United Kingdom data collection that is not conducted by ONS or the Statistics Board; it is administered by the Home Office.

Figure 5-1 Estimated direct funding levels for principal federal statistical agencies, fiscal year 2008
NOTES: NSF, National Science Foundation. Including the costs associated with the decennial census would add $797.1 million to the Census Bureau total.
SOURCE: U.S. Office of Management and Budget (2007:Table 1).

As shown in Figure 5-1, the three largest statistical agencies—the Census Bureau, the Bureau of Labor Statistics, and the National Center for Education Statistics—dominate the others in terms of resources, even though the subject-matter portfolios of the smaller agencies—justice, transportation, agriculture, and so forth—are undeniably important. It is appropriate, in the panel's judgment, to evaluate BJS in the context of the larger federal statistical system, especially the principal statistical agencies whose primary mission is the collection and dissemination of statistical information. (There are 60–70 other federal agencies that spend more than $500,000 per year on statistical information dissemination but whose program duties outweigh their statistical focus.) The panel benefited from a preexisting, fully vetted set of evaluative criteria for a federal statistical agency. The observations that we make in this chapter are generally structured around the Principles and Practices for a Federal Statistical Agency, a white paper of the Committee on National Statistics (CNSTAT) (National Research Council, 2005b). Principles and Practices articulates the basic functions that are expected of a unit of the U.S. federal statistical system; it also outlines ideals for the relationship between individual statistical agencies and their parent departments. As such, it has been widely used by various statistical agencies in their interactions with Congress and with officials in their departments. Indeed, BJS has already embraced such evaluative criteria: a summary version of the Principles and Practices is featured prominently on the front page of BJS's current strategic plan (Bureau of Justice Statistics, 2005a) and as a top-level link on BJS's website (under "BJS Statistical Principles and Practices").

The two sections of this chapter assess BJS, its products, and its performance relative to the Principles and Practices of a Federal Statistical Agency; each subsection begins with a précis of the relevant descriptive text from the fourth edition of Principles and Practices (National Research Council, 2009). In the course of this review, we provide extended discussion of two "flashpoints" in recent BJS experience—the circumstances surrounding release of data from the 2002 Police-Public Contact Survey (PPCS) that led to the dismissal of a BJS director, and the reporting requirements imposed by the Prison Rape Elimination Act of 2003—that are particularly relevant to examination of the major principles of a statistical agency. We defer conclusions and assessments based on the chapter as a whole to Chapter 6, a more comprehensive statement on strategic goals for BJS.

5–A PRINCIPLES OF A FEDERAL STATISTICAL AGENCY

5–A.1 Trust Among Data Providers

A federal statistical agency must have the trust of those whose information it obtains.
Data providers, such as respondents to surveys and custodians of administrative records, must be able to rely on the word of a statistical agency that the information they provide about themselves or others will be used only for statistical purposes. An agency earns the trust of its data providers by appropriately protecting the confidentiality of responses. Such protection, in particular, precludes the use of individually identifiable information maintained by a statistical agency—whether derived from survey responses or another agency's administrative records—for any administrative, regulatory, or law enforcement purpose. (National Research Council, 2009:5–6)

In a democracy, government statistical agencies depend on the willing cooperation of resident respondents to provide information about themselves and their activities. This willingness requires assurance that their information will not be used to intervene in their lives in any way. When respondents to BJS data collections provide data to the agency (or contractors representing the agency), they are told that their individual data will never be used to harm them; they are told that the purposes of the data collection will be
fulfilled by publicly available statistical results; they are informed that the agency is an independent statistical organization transcending the current administration in power; and they are informed that their data will be kept confidential. Agencies that fulfill such pledges build, over time, a sense of trust with their data providers that the agency's intentions are benign and that the agency respects their rights as data providers. When data providers suspect political interference, the trust that their reports are being used appropriately—solely to create statistical information—can be shaken, and restoring that trust can be a much slower process than its destruction.

BJS has many different target populations of data providers. Some are very large (e.g., the entire U.S. household population for the National Crime Victimization Survey (NCVS) or the full set of state courts of general jurisdiction), whereas others are quite small (e.g., state-level departments of correction or federal prisons). Small populations of data providers generally are asked repeatedly for data in ongoing BJS series. In turn, these data providers are often more interested in the outcome of the data collections and may use the statistical information for their own purposes.

From its review of BJS documents, knowledge of its data sets, and interactions with its respondents, the panel concludes that BJS and its data collection agents are generally very diligent in preserving the confidentiality of responses from respondents. This is particularly true for the NCVS, the effectiveness of which is wholly predicated on building trust and rapport between interviewer and respondent in order to obtain full and accurate accounts of victimization incidents. The use of respondents' data solely for statistical purposes is generally well known and clearly presented.
However, this is only "generally" true: in the panel's judgment, there is one flagrant exception. The reporting requirements of the Prison Rape Elimination Act of 2003 (PREA) oblige BJS to violate the principle of trust among its institutional data providers. Specifically, the provision of information to a statistical agency is fundamentally different from the provision of information to a regulatory or enforcement agency. Regulatory agencies, by their very nature, have the goal of intervening in individual activities when those activities are found to violate rules sanctioned by the government. The crux of the problem is that the PREA reporting requirements assign to BJS a quasi-regulatory role, directly using data collected from responding institutions to impose sanctions. In the remainder of this section, we describe this breach of principle by recounting the history and implementation of the PREA reporting requirement.

Historical Development of the Prison Rape Elimination Act

In Farmer v. Brennan (511 U.S. 825), the U.S. Supreme Court ruled that "deliberate indifference" to serious health and safety risks by
prison officials constitutes a violation of the Eighth Amendment protection against cruel and unusual punishment. The particular case in Farmer involved a preoperative transsexual prisoner who was raped and beaten shortly after transfer to a federal penitentiary; the Court's ruling vacated lower court rulings that had rejected the plaintiff's argument on the grounds that prison officials had not been demonstrated to be criminally reckless.

By 2002–2003, the general problem of sexual assault in prison drew legislative interest in Congress. Ultimately, the legislative initiative produced PREA. The final act is lengthy, including specification of grant monies targeted at reduction strategies, the establishment of a national commission, and the adoption of national standards. However, in this section, we focus on the specific demands put on BJS by a section of the act covering "national prison rape statistics, data, and research"—reporting requirements that, in certain respects, run counter to the proper and accepted role of a federal statistical agency.

In the 107th Congress, identical versions of a proposed "Prison Rape Reduction Act" were introduced in both houses (H.R. 4943 and S. 2619). On the occasion of the introduction of the measure in the Senate, cosponsor Sen. Edward Kennedy (D-Mass.) described what little was known quantitatively about the extent of sexual assault in U.S. prisons:2

Prison rape is a serious problem in our Nation's prisons, jails, and detention facilities. Of the two million prisoners in the United States, it is conservatively estimated that one in ten has been raped. According to a 1996 study, 22 percent of prisoners in Nebraska had been pressured or forced to have sex against their will while incarcerated [(Struckman-Johnson et al., 1996)].3 Human Rights Watch recently reported "shockingly high rates of sexual abuse" in U.S.
prisons [(Human Rights Watch, 2001)].4

Cosponsor Sen. Jeff Sessions (R-Ala.) concurred and briefly described the statistical analysis section of the bill:5

Some studies have estimated that over 10 percent of the inmates in certain prisons are subject to rape. I hope that this statistic is an exaggeration.…

2 Congressional Record, June 13, 2002, p. S5337.
3 Struckman-Johnson et al. (1996:69–70) distributed questionnaires (for response by mail) to all inmates and staff at two maximum security men's prisons, one minimum security men's prison, and one women's facility, all of which are "in the state prison system of a rural Midwestern state." The state is not explicitly identified, but later discussions of the results acknowledged Nebraska as the survey site. In all, 1,801 prisoners and 714 staff members at these facilities were eligible to participate; 528 inmates and 264 staff members responded.
4 No formal survey or statistical data collection was used by Human Rights Watch (2001); instead, the report's observations were based on written reports from about 200 prisoners responding to announcements in publications and leaflets.
5 Congressional Record, June 13, 2002, pp. S5337, S5338.
[This] bill will require the Department of Justice to conduct statistical surveys on prison rape for Federal, State, and local prisons and jails. Further, the Department of Justice will select officials in charge of certain prisons with an incidence of prison rape exceeding the national average by 30 percent to come to Washington and testify to the Department about the prison rape problem in their institution. If they refuse to testify, the prison will lose 20 percent of certain Federal funds.

In both chambers, the legislation was referred to Judiciary subcommittees, and no further action was taken (save that the Senate Judiciary Committee held a hearing on the bill on July 31, 2002). In the 108th Congress, legislation identical to the previous bill was introduced by Rep. Frank Wolf (R-Va.) and Rep. Bobby Scott (D-Va.) in the House as H.R. 1707 on April 9, 2003.6 However, deliberations between members and staff in both chambers were progressing toward a revised, bipartisan proposal, and these deliberations resulted in rapid passage of the bill. On June 11, 2003, the House Subcommittee on Crime, Terrorism, and Homeland Security replaced the existing text of H.R. 1707 with substitute language and favorably reported it to the full Judiciary Committee. In turn, the Judiciary Committee approved the revised bill on July 9. The Judiciary Committee's report on the bill, H.Rept. 108-219, offers no explanation for the revised wording in the BJS data collection section of the act. On July 21, Sen. Sessions introduced S. 1435—consistent with the revised House language,7 but now bearing the name "Prison Rape Elimination Act." Upon introduction, the bill was immediately passed by unanimous consent without debate or amendment; the House took up the Senate bill on July 25 and passed it without objection; and the bill was signed on September 4, becoming Public Law 108-79.
Text of the Act and Reporting Requirements

Box 5-1 shows the alterations to the section of PREA concerning BJS data collection between its original introduction in the 107th Congress and final passage. Both the original and final versions of the bill establish a Review Panel on Prison Rape; the original would have administratively housed the Review Panel in BJS, while the final version makes it an organ of the Justice Department. To be clear, it is important to note that the Review Panel is more limited in scope than the National Prison Rape Elimination Commission created by other sections of the act. The Review Panel's work is structured around the BJS work, while the formally appointed Commission

6 A variant on the same bill, with the same reporting requirements on BJS, was introduced on April 10, 2003, as H.R. 1765 but progressed no further than referral to committee.
7 Judiciary Committee Chairman James Sensenbrenner (R-Wisc.) described the Senate bill as "substantively identical to H.R. 1707" in his floor remarks on passage of the act (Congressional Record, July 25, 2003, p. 7765).
Box 5-1
Statistical Reporting Provisions of Original and Final Versions of the Prison Rape Elimination Act

The following excerpt compares text from Section 2 of H.R. 4943 (107th Congress) and Section 4 of S. 1435 (108th Congress), the latter of which was enacted as Public Law 108-79. Subsections (d) and (e) on contracts and authorization of appropriations are omitted. Deletions from the earlier version are marked in strikethrough text; additions in the newer version are shown in italic type.

NATIONAL PRISON RAPE STATISTICS, DATA, AND RESEARCH.

ANNUAL COMPREHENSIVE STATISTICAL REVIEW-

IN GENERAL- The Bureau of Justice Statistics of the Department of Justice (in this section referred to as the 'Bureau') shall carry out, for each calendar year, a comprehensive statistical review and analysis of the incidence and effects of prison rape. The statistical review and analysis shall include, but not be limited to the identification of the common characteristics of— inmates who have been involved with prison rape, both victims and perpetrators both victims and perpetrators of prison rape; and prisons and prison systems with a high incidence of prison rape.

CONSIDERATIONS- In carrying out paragraph (1), the Bureau shall consider— how rape should be defined for the purposes of the statistical review and analysis; how the Bureau should collect information about staff-on-inmate sexual assault; how the Bureau should collect information beyond inmate self-reports of prison rape; how the Bureau should adjust the data in order to account for differences among prisons as required by subsection (c)(3); the categorization of prisons as required by subsection (c)(4); and whether a preliminary study of prison rape should be conducted to inform the methodology of the comprehensive statistical review.
SOLICITATION OF VIEWS- The Bureau of Justice Statistics shall solicit views from representatives of the following: State departments of correction; county and municipal jails; juvenile correctional facilities; former inmates; victim advocates; researchers; and other experts in the area of sexual assault.

SAMPLING TECHNIQUES- The analysis under paragraph (1) shall be based on a random sample, or other scientifically appropriate sample, of not less than 10 percent of all Federal, State, and county prisons, and a representative sample of municipal prisons. The selection shall include at least one prison from each State. The selection of facilities for sampling shall be made at the latest practicable date prior to conducting the surveys and shall not be disclosed to any facility or prison system official prior to the time period studied in the survey. Selection of a facility for sampling during any year shall not preclude its selection for sampling in any subsequent year.

SURVEYS- In carrying out the review required by this subsection and analysis under paragraph (1), the Bureau shall, in addition to such other methods as the Bureau considers appropriate, use surveys and other statistical studies of current and former inmates from a sample of Federal, State, county, and municipal prisons. The Bureau shall ensure the confidentiality of each survey participant.
PARTICIPATION IN SURVEY- Federal, State, or local officials or facility administrators that receive a request from the Bureau under subsection (a)(4) or (5) will be required to participate in the national survey and provide access to any inmates under their legal custody.

REVIEW PANEL ON PRISON RAPE-

ESTABLISHMENT- To assist the Bureau in carrying out the review and analysis under subsection (a), there is established, within the Bureau Department of Justice, the Review Panel on Prison Rape (in this section referred to as the 'Panel').

MEMBERSHIP-

COMPOSITION- The Panel shall be composed of 3 members, each of whom shall be appointed by the Attorney General, in consultation with the Secretary of Health and Human Services.

QUALIFICATIONS- Members of the Panel shall be selected from among individuals with knowledge or expertise in matters to be studied by the Panel.

PUBLIC HEARINGS-

IN GENERAL- The duty of the Panel shall be to carry out, for each calendar year, public hearings concerning the operation of each entity identified in a report under clause (ii) or (iii) of subsection (c)(2)(B) the three prisons with the highest incidence of prison rape and the two prisons with the lowest incidence of prison rape in each category of facilities identified under subsection (c)(4). The Panel shall hold a separate hearing regarding the three Federal or State prisons with the highest incidence of prison rape. The purpose of these hearings shall be to collect evidence to aid in the identification of common characteristics of inmates who have been involved in prison rape, both victims and perpetrators both victims and perpetrators of prison rape, and the identification of common characteristics of prisons and prison systems with a high incidence of prison rape that appear to have been successful in deterring prison rape.
TESTIMONY AT HEARINGS-

PUBLIC OFFICIALS- In carrying out the hearings required under subparagraph (A), the Panel shall request the public testimony of Federal, State, and local officials (and organizations that represent such officials), including the warden or director of each prison, who bears responsibility for the prevention, detection, and punishment of prison rape at each entity, and the head of the prison system encompassing such prison, who bear responsibility for the prevention, detection, and punishment of prison rape at each entity.

VICTIMS- The Panel may request the testimony of prison rape victims, organizations representing such victims, and other appropriate individuals and organizations.

FAILURE TO TESTIFY- If, after receiving a request by the Panel under subparagraph (B)(i), a State or local official declines to testify at a reasonably designated time, the Federal funds provided to the entity represented by that official pursuant to the grant programs designated by the Attorney General under section 9 shall be reduced by 20 percent and reallocated to other entities. This reduction shall be in addition to any other reduction provided under this Act.
SUBPOENAS-

ISSUANCE- The Panel may issue subpoenas for the attendance of witnesses and the production of written or other matter.

ENFORCEMENT- In the case of contumacy or refusal to obey a subpoena, the Attorney General may in a Federal court of appropriate jurisdiction obtain an appropriate order to enforce the subpoena.

REPORTS-

IN GENERAL- Not later than March June 30 of each year, the Bureau Attorney General shall submit a report on the activities of the Bureau (including the Review Panel) and the Review Panel, with respect to prison rape, for the preceding calendar year to– Congress; and (B) the Attorney General; and the Secretary of Health and Human Services.

CONTENTS- The report required under paragraph (1) shall include— with respect to the effects of prison rape, statistical, sociological, and psychological data; and with respect to the incidence of prison rape— statistical data aggregated at the Federal, State, prison system, and prison levels; an identification of the Federal Government, if applicable, and each State and local government (and each prison system and institution in the representative sample) where the incidence of prison rape exceeds the national median level by not less than 30 percent; and an identification of jail and police lockup systems in the representative sample where the incidence of prison rape is significantly avoidable. a listing of those institutions in the representative sample, separated into each category identified under subsection (c)(4) and ranked according to the incidence of prison rape in each institution; and an identification of those institutions in the representative sample that appear to have been successful in deterring prison rape; and a listing of any prisons in the representative sample that did not cooperate with the survey conducted pursuant to section 4.
DATA ADJUSTMENTS- In preparing the information specified in paragraph (2), the Bureau shall, not later than the second year in which surveys are conducted under this Act, Attorney General shall use established statistical methods to adjust the data as necessary to account for exogenous factors, outside of the control of the State, prison system, or prison, which have demonstrably contributed to the incidence of prison rape differences among institutions in the representative sample, which are not related to the detection, prevention, reduction and punishment of prison rape, or which are outside the control of the State, prison, or prison system, in order to provide an accurate comparison among prisons. Such differences may include the mission, security level, size, and jurisdiction under which the prison operates. For each such adjustment made, the Bureau Attorney General shall identify and explain such adjustment in the report.

CATEGORIZATION OF PRISONS- The report shall divide the prisons surveyed into three categories. One category shall be composed of all Federal and State prisons. The other two categories shall be defined by the Attorney General in order to compare similar institutions.
has a broader charge to develop national standards for the detection and prevention of sexual violence in correctional facilities. It also appears that one of the intended roles of the Review Panel was to "assist" BJS in its data collection efforts (as is explicitly stated in both versions of the bill). This assistance function is consistent with concerns expressed at a congressional hearing on the bill, arguing that BJS should have an advisory group to work out definitional issues in measuring prison rape (U.S. House of Representatives, Committee on the Judiciary, 2003:19).

The critical difference in the legislative texts in Box 5-1 lies in the reporting requirements to support public hearings by the Review Panel. The original proposal called for public hearings with officials from institutions with high and low incidences of prison rape (facilities "where the incidence of prison rape exceeds the national median level by not less than 30 percent" and facilities "where the incidence of prison rape is significantly avoidable"). However, the final law directs that—each year, for different facility types—the facilities with the three highest and two lowest incidence rates be summoned to appear at hearings. Comparing the different versions of section (b)(3)(C) in Box 5-1, the original version of the act threatened institutions that refused to testify before the Review Panel with a 20 percent reduction in federal grant monies. The final version of the bill removed that threat but granted the Review Panel full subpoena power.8 In addition to identifying the highest- and lowest-ranked institutions, the final legislative text also required the Review Panel (presumably using BJS's work) to provide a complete listing of all the facilities in the sample, "ranked according to the incidence of prison rape."
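The practical difference between the two selection rules can be illustrated with a small sketch. The facility names and incidence rates below are invented for illustration and are not BJS data or methodology; the point is only that the original median-based rule flags a number of facilities that depends entirely on the shape of the rate distribution, whereas the final rule always names exactly five facilities per category.

```python
# Illustrative sketch only: invented facility names and rates,
# not BJS data or methodology.
from statistics import median

# Hypothetical incidence rates (e.g., allegations per 1,000 inmates).
rates = {
    "Facility A": 1.2, "Facility B": 4.8, "Facility C": 0.0,
    "Facility D": 9.5, "Facility E": 2.1, "Facility F": 6.3,
    "Facility G": 0.4,
}

def flag_above_median(rates, pct=0.30):
    """Original H.R. 4943 rule: flag every facility whose rate exceeds
    the national median by more than `pct` (30 percent). The number of
    facilities flagged depends on the shape of the distribution."""
    cutoff = median(rates.values()) * (1 + pct)
    return sorted(f for f, r in rates.items() if r > cutoff)

def rank_extremes(rates, n_high=3, n_low=2):
    """Final PREA rule (applied within each facility category): the
    three highest- and two lowest-incidence facilities are identified
    each year, so exactly five facilities are always named."""
    ranked = sorted(rates, key=rates.get, reverse=True)
    return ranked[:n_high], ranked[-n_low:]
```

With these invented rates the median rule happens to flag three facilities, but a more skewed distribution could flag many more, or none at all, while the ranking rule always produces the same fixed number of summonses.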
The original designation of "high" prison rape incidence—a value more than 30 percent greater than the national median—was a curious and intriguing one. Depending on the distribution of incidence rates across facilities, the criterion might have obliged the Review Panel to hear from an unworkably large number of parties, and perhaps that consideration drove the revision. Alternatively, singling out "the" highest-rate facilities may have been viewed by legislators as more consistent with the themes of accountability and action (as with the change in nomenclature from a "Prison Rape Reduction" to a "Prison Rape Elimination" Act). From the record, it is unclear exactly how and why the change came about. Indeed, both Rep. Scott's prepared statement for the Judiciary Committee markup of the bill on July 9, 2003 (H.Rept. 108-219, p. 114), and his floor statement on the Senate bill

8 Although the text does not appear in H.R. 4943 in Box 5-1, Corlew (2006) notes that the bill as originally proposed "would have granted a ten percent funding increase to prison systems that, because of their high percentages of prison rape, were required to provide testimony to the Review Panel." Both the American Correctional Association and the Association of State Correctional Administrators objected that this provision "appeared to reward undeserving systems"—another reason for the change to subpoena authority in the final bill text.
on July 25 (Congressional Record, p. H7764), refer to "conduct[ing] public reviews of institutions where the rate of prison rape is 30% above the national average rate"—even though that provision no longer existed in the revised language.

The principal congressional hearing on the bill was held before the House Judiciary Subcommittee on Crime, Terrorism, and Homeland Security on April 29, 2003. Being a House hearing, the bill referred to at the hearing was the original version of the legislation, with the 30-percent-above-median reporting requirement. At that hearing, the only discussion of BJS's reporting role was raised by then-principal deputy attorney general for the Office of Justice Programs (OJP) Tracy Henke, and that concern came as a brief ending to her opening statement. Although Henke's remarks hinted at the inappropriateness of BJS's use of data for administrative and regulatory purposes, the specific objection was raised to the original bill's vague definition of low-prevalence facilities (U.S. House of Representatives, Committee on the Judiciary, 2003:13–14):

I know my time is up, but real quickly, sir, another concern to the Department is that the Department believes that [it is of the utmost importance that9] the integrity of the statistical collection and analysis by the Bureau of Justice Statistics be preserved. The legislation currently requires BJS not only to collect but also to analyze data and produce reports on that analysis in a very short timeframe. We recognize the need for quick access to this information, but it must be balanced by providing BJS the opportunity to accurately and sufficiently analyze the data collected. Finally, the law authorizing BJS prohibits BJS from gathering data for any use other than statistical or research purposes.
By requiring BJS to identify facilities "where the incidence of prison rape is significantly avoidable," the legislation calls for BJS to make judgments about what level of prison rape is "significantly avoidable." This responsibility goes beyond BJS's authorized statistical role.

BJS Data Collections and Reports in Support of the Act

In response to the enactment of PREA, BJS organized a series of data collection efforts, summarized in Bureau of Justice Statistics (2004c), that have been characterized as "a quantum leap in methodology and our knowledge about the problem" of prison rape (Dumond, 2006). The main efforts in the PREA-related data collections are an annual administrative-records-based inventory dubbed the Survey of Sexual Violence (SSV) and a recurring National Inmate Survey program. For the SSV, BJS contracted with the U.S.

9 This grammatical insertion uses the wording of Henke's prepared statement, printed in the hearing record after the spoken remarks (U.S. House of Representatives, Committee on the Judiciary, 2003:16).
The grounds for criticism of this extremely scant statement are numerous:

- The proposed collection shares with its fellow special-agency censuses a lack of clarity over whether it is intended as a "survey" or a "census." Throughout the rest of the document, and in the title of the collection, "survey" is used; in the Universe and Respondent Selection section, "census" suddenly becomes the preferred choice.
- Regardless of the "survey" or "census" label, the primary source of contact information is the existing LEMAS survey; even if the aviation unit study is meant as a census, the method of constructing its frame/address list from LEMAS should be described in more detail.
- Part A suggests that the LEMAS listings would be supplemented by listings from the Airborne Law Enforcement Association and the International Association of Chiefs of Police; the coverage properties of those lists are not discussed, nor is there any hint of how many additional units might be added from them.
- The restriction to agencies with 100 or more officers is not mentioned elsewhere, nor is it described further.
- The statement gives no indication of whether and how the contact strategy differs from that of the main LEMAS collection or, indeed, of who will carry out the collection.
- Likewise, any formal connection to the basic LEMAS survey (e.g., whether the results of the aviation-specific study might be used to revise questions on the main survey) is unspecified.
- The reference to providing "on-site assistance" is vague—does it refer to follow-up by a field interviewer?
- The reference to response rates in previous law enforcement surveys is interesting but unpersuasive; a better point of comparison would be similarly scoped attempts to canvass special units within departments, rather than the main LEMAS survey.
- The final section, on testing of procedures, is particularly uninformative.
How were the pilot jurisdictions chosen? Were there any difficulties with the questionnaire, such as terminology usage? Did specific comments from the pilot respondents lead to changes in the contact strategy? The Law Enforcement Aviation Unit ICR is an example of particularly weak justification and technical specification statements, but a reading of other BJS-prepared ICRs shows similar deficiencies. BJS's request for clearance of the 2007 Survey of Law Enforcement Gang Units (ICR 200705-1121-001) shared some gross features of the aviation unit ICR, again using the "survey" nomenclature but describing the effort as a "nationwide census of all law enforcement gang units operating within police agencies of 100 or more officers." The supporting statement for the gang unit study does not explain whether any data sources besides previous LEMAS returns are to be used to build the frame of dedicated gang units, leaving it unclear whether the collection is indeed a census (a canvass of all known gang units) or a survey (probability sample). In another example, the section on testing of procedures in the ICR for the 2007/2008 National Survey of Prosecutors says that "the survey instrument was previously pretested with 310 jurisdictions during the 2005 data collection whereby BJS received a 99% response rate" (ICR 200704-1121-004). However, other portions of the statement make clear that the newer 2007/2008 version was purposely designed as a complete census of prosecutor offices, meaning that questions were revised and the number of questions was scaled back. Because this makes the newer survey different in scope and character from the 2005 version, the 2005 response rate—though impressive—says little about experience in pretesting the revised questionnaire.
As is true of other statistical agencies facing tight resources, BJS has been forced into an overriding focus on basic production of a set of data series and standard reports, at the expense of research, development, and innovation. As we discussed in Section 3–F.2, the performance measures in BJS's strategic plan are largely ones of volume and throughput—counts of file accesses on the NACJD, the number of reports and supporting material accessible on the BJS website, the number of data collections performed or updated each year—that lack a forward-looking focus on improvements in methodology and options for improving content. A statistical agency should be among the most intensive and creative users of its own data, not only to formally evaluate the quality and properties of its data series but also to understand the findings from those data and shape future refinements. BJS's "Special Reports" series has, in the past, gone into depth on topics not routinely studied in the agency's standard reports or taken unique looks at BJS data, such as age effects in intimate partner violence (Rennison, 2001), the interaction between alcohol and criminal behavior (Greenfeld, 1998; Greenfeld and Henneberg, 2001), and the prevalence of ever having served time in prison among the U.S. population (Bonczar and Beck, 1997; Bonczar, 2003). These reports have also provided some opportunity for BJS analysts to use multiple BJS data sets or to combine BJS data with non-BJS data sets in interesting ways: to study educational attainment in the correctional population, Harlow (2003) drew on data from BJS's prisoner and jail inmate surveys, its 1995 Survey of Adults on Probation, the Current Population Survey of the Bureau of Labor Statistics, and the 1992 National Adult Literacy Survey of the National Center for Education Statistics.
Zawitz and Strom (2000) combined data from the NCVS and multiple data series from the National Center for Health Statistics to describe both lethal and nonlethal violent crime incidents involving firearms. Greenfeld (1997) combined information from the UCR, the NCVS, and BJS's corrections and adjudications series to summarize the state of quantitative information on sex offenses, including rape and sexual assault. Moreover, in fairness, BJS deserves credit for several innovative tacks it has taken. Although full use of electronic questionnaires took considerable time, BJS and the NCVS were (through BJS's work with the Census Bureau) relatively early adopters of computer-assisted methods among major federal household surveys. And, though we have argued at length that the reporting requirements are inappropriate, BJS's work on data collections in support of PREA led the agency to make great strides in the use of ACASI and other techniques for interviewing on sensitive topics. BJS has also demonstrated itself to be effective and innovative in developing data collection instruments
to confront very tough methodological problems: identity theft, hate crimes, police-public contact, and crimes against the developmentally disabled. But innovative in-house data analyses by BJS have slowed in recent years as the focus on production has increased and resources have tightened; major methodological innovations such as the use of ACASI were possible because PREA carried with it substantial funding. BJS's need to update longstanding products and keep activities in place, for basic organizational survival, has too frequently trumped innovative research and intensive exploration of new and emerging topic areas. Indeed, the principal means for identifying "emerging data needs" cited in BJS's strategic plan is not examination of the criminological literature or frequent interaction with criminal justice practitioner communities, but rather "emerging data needs as expressed through Attorney General priorities and Congressional mandates" (Bureau of Justice Statistics, 2005a:32).18 In our assessment, the lack of a research program (and of the capacity for a research program) puts BJS and its data products at risk of growing stagnant and becoming less relevant.

Finding 5.9: The active investigation of new ways of measuring and understanding crime and criminal justice issues is a critical responsibility of BJS. The agency has lacked the resources needed to fully meet this responsibility and, for some issues, has fallen behind in developing such innovations.

Finding 5.10: BJS has lacked the resources to sufficiently produce new topical reports with data it currently gathers. It also lacks the resources and staff to routinely conduct methodological analyses of changes in the quality of its existing data series and to fully document those issues.
Instead, the BJS production portfolio is limited primarily to a routine set of annual, biannual, and periodic reports and, for some topics, the posting of updated data points in online spreadsheets. In our interim report, we made specific recommendations to stimulate research directly related to the NCVS, specifically calling for BJS to initiate studies of changes in the survey reference period, improvements to sample efficiency, effects of mixed-mode data collection, and studies of nonresponse bias (National Research Council, 2008b:Recs. 4.2, 4.7, 4.8, 4.9). In response, BJS quickly issued requests for proposals for external researchers to conduct such studies, and it has also signaled its intent to conduct a survey design competition to evaluate broad redesign options (Rec. 5.8 in National Research Council, 2008b). This laudable reaction is a step toward laying out more concrete options for and future activities related to

18 "In addition," the plan notes shortly thereafter, "BJS staff meet regularly with Federal, State, and local officials to identify emerging data needs or desirable modifications to existing collection and reporting programs" (Bureau of Justice Statistics, 2005a:32).
the NCVS, BJS's largest data program, but a fuller research program is critical to future-oriented option development for BJS's non-NCVS programs. It is also critical to avoiding implementation problems such as those experienced in the 2006 administration of the NCVS. As we noted in our interim report, "design changes made (or forced) in the name of fiscal expediency, without grounding in testing and evaluation, are highly inadvisable" (National Research Council, 2008b:83). To this end, a short recommendation that we offered in our interim report (National Research Council, 2008b:Rec. 4.1) is worth formally restating here:

Recommendation 5.13: BJS should carefully study changes in the NCVS survey design before implementing them.

It follows that this guidance applies to changes to other BJS data collections as well, and that such evaluative studies are not possible without the resources necessary to make innovative research a priority for the agency. Congress and the administration cannot reasonably expect BJS to shoulder daunting data collection requests without the agency engaging in ongoing research, development, and evaluation. Going forward, a key priority should be detailed error analysis of the NCVS to gauge how large a problem survey nonobservation may be in specific socioeconomic subgroups, as the basis for understanding where improvements may most properly be made. On a related matter, BJS research activities should also be directed at improving outreach to and data collection coverage of groups that are traditionally hard to reach by survey methods; such groups include new immigrants, persons and households where English is not the primary spoken language, young minorities in urban centers, and the homeless.
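One simple starting point for the subgroup nonobservation analysis discussed above is a coverage-ratio check, comparing the achieved sample's composition against an external population benchmark. The sketch below is purely illustrative and not a BJS procedure; the group labels and all numbers are hypothetical.

```python
# Illustrative sketch (not BJS code): compare a survey's achieved sample
# composition against an external population benchmark to flag subgroups
# where nonobservation (noncoverage plus nonresponse) may be largest.
# All group labels and shares below are hypothetical.

benchmark_share = {        # e.g., population shares from a census benchmark
    "urban_18_29": 0.12,
    "non_english_hh": 0.08,
    "all_other": 0.80,
}
respondent_share = {       # composition of the achieved survey sample
    "urban_18_29": 0.08,
    "non_english_hh": 0.04,
    "all_other": 0.88,
}

def coverage_ratios(resp, bench):
    """Ratio < 1 flags subgroups underrepresented among respondents."""
    return {g: resp[g] / bench[g] for g in bench}

for group, ratio in coverage_ratios(respondent_share, benchmark_share).items():
    flag = "  <- underrepresented" if ratio < 0.9 else ""
    print(f"{group}: coverage ratio {ratio:.2f}{flag}")
```

A ratio well below 1 does not by itself establish nonresponse bias, but it identifies where targeted outreach or bias studies would pay off first.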
Recommendation 5.14: BJS should study the measurement of emerging or hard-to-reach groups and should develop more appropriate approaches to sampling and measurement of these populations.

In the following, we suggest a few selected areas for a BJS research program. These should not be interpreted as the only, or necessarily the most pressing, research priorities, but we believe they are all important directions. In terms of methodological innovations, BJS should consider greater use of model-based estimation. In our interim report, we recommended investigating such modeling for the generation of subnational estimates from the NCVS (National Research Council, 2008b:Rec. 4.5); improving the spatial and, perhaps, temporal resolution of estimates from the NCVS remains the highest priority in this regard, but the methodology could be brought to bear in other areas. The development of small-area estimates is particularly pressing because the agency is often criticized for being unable to speak to subnational areas. Modeling can also refer to the use of multivariate analyses to control for factors that mask real changes in the phenomenon of
interest. Just as many economic indicators are adjusted for inflation or seasonal fluctuation, it would make sense to adjust crime rates for factors that mask important variation. Age-adjusting crime rates, for example, would help separate the effects of macro-level social changes (over which one has little control) from more troubling and actionable changes in the incidence of crime. The same can be said of incarceration rates: adjusting admission rates for the volume of crime would provide a perspective on the use of incarceration not available in simple population-based rates. Model-based estimates should certainly be used when right or left censoring is known to make the observed data incomplete and inaccurate. For years, BJS published estimates of time served in prison based on exiting cohorts even though it knew that this approach seriously underestimates time served; in that case, model-based estimates would almost certainly have been more accurate than the raw data-based estimates. However, greater use of model-based estimates must be approached with caution, for several reasons. One is the challenge of interpretation: modeling may not be understood by many consumers of BJS data. This may be largely a presentational problem, solvable by presenting the estimates simply and providing the detailed description of the modeling elsewhere. The use of double-decrement life tables by Bonczar and Beck (1997) (later updated by Bonczar, 2003) is a good illustration of how modeling can be presented in BJS reports. Another challenge is that models are always based on assumptions, which can be more or less accurate or robust (and there can be wide disagreement over which assumptions are defensible). Hence, situations in which the choice of assumptions may be interpreted as reflecting political or other bias should be avoided.
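The age-adjustment idea above can be made concrete with direct standardization: weight each age group's rate by a fixed "standard" population age mix, so that trend comparisons reflect changing age-specific risk rather than an aging population. The sketch below is a hypothetical illustration; all rates and population shares are invented.

```python
# A minimal, hypothetical sketch of direct age standardization of a
# crime rate. Holding the age mix fixed (a "standard" population)
# separates changes in age-specific risk from demographic shifts.
# All rates and shares below are invented for illustration.

def crude_rate(rates, pop_shares):
    """Observed aggregate rate: age-specific rates weighted by the
    actual population age distribution."""
    return sum(rates[a] * pop_shares[a] for a in rates)

def age_adjusted_rate(rates, std_shares):
    """Direct standardization: age-specific rates weighted by a fixed
    standard population's age distribution."""
    return sum(rates[a] * std_shares[a] for a in rates)

# Hypothetical victimization rates per 1,000 persons, by age group:
rates = {"12-24": 60.0, "25-49": 25.0, "50+": 8.0}

shares_1995 = {"12-24": 0.30, "25-49": 0.45, "50+": 0.25}  # younger mix
shares_2005 = {"12-24": 0.22, "25-49": 0.43, "50+": 0.35}  # older mix

print(crude_rate(rates, shares_1995))        # higher: more people at high-risk ages
print(crude_rate(rates, shares_2005))        # lower, though age-specific risk is unchanged
print(age_adjusted_rate(rates, shares_1995)) # constant under either population mix
```

With identical age-specific rates in both years, the crude rate still falls as the population ages; the standardized rate correctly shows no change in underlying risk.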
A more basic, but still complex, methodological effort would be for BJS to invest in the creation and revision of basic classifications and typologies for crime and criminal justice matters. Its role in coordinating information from a variety of justice-related agencies and in promoting standards through National Criminal History Improvement Program–type grants for improvement of source databases gives BJS unique advantages in taking on such an effort. The classification of "index crimes" used in the UCR has changed little in 80 years and remains the nation's major crime classification; its implicit ranking of which crimes are most serious is central to the definitions used in the NCVS and other BJS collections. Yet interest in crime and the amount of information available on crime have changed greatly over those 80 years, and the basic classification of crime should be revisited to keep pace with these changes. BJS should also invest some effort in developing denominators for risk rates that are more reflective of the at-risk population. Major cities, for example, are disadvantaged in the annual crime rankings of jurisdictions based on UCR data because their rates are based upon their residential population—a
base that excludes the commuters, shoppers, and culture seekers who contribute to the numerators of the rates. Likewise, incarceration rates based on the entire population are technically correct but may be misleading, because the very young and very old are not at risk. The generation of risk rates should not be restricted to data generated by BJS but should draw on other data as long as the quality and periodicity of those data are acceptable. Reporting estimates from BJS's inmate surveys only as proportions of the prison population misses a great opportunity to understand much better how the nation uses its prison resources; incarceration rates computed against the general household population (as in Bonczar, 2003) may be uniquely informative.

5–B.9 Strong Internal and External Evaluation Program

Statistical agencies that fully follow [this set of prescribed practices] will likely be in a good position to make continuous assessments of and improvements in the relevance and quality of their data collection systems.… Regular, well-designed program evaluations, with adequate budget support, are key to ensuring that data collection programs do not deteriorate. (National Research Council, 2009:47, 48)

The practice of instituting a strong internal and external evaluation program is a new addition to the fourth edition of Principles and Practices of a Federal Statistical Agency. It is similar to the practice of an ongoing research program (Section 5–B.8) but has slightly different connotations, emphasizing not only continuous quality assessment of individual data collection programs but also periodic examination of the quality and relevance of an agency's entire data collection portfolio.
It is very much to BJS's credit with respect to this practice that it has periodically sought the advice of external users and methodologists on specific methodological problems, that it engaged in the intensive rounds of testing and evaluation that led to the redesigned NCVS in the early 1990s, that it regularly receives feedback on data quality from its state SAC network and JRSA, and that it actively sought and encouraged this panel's review of the full BJS portfolio. Like other small statistical agencies, BJS is limited by available resources in its ability to mount large-scale evaluation efforts. Still, attention to internal and external evaluation is critical. Indeed, some of the guidance we offer in this report—for instance, on emphasizing the flows from step to step in the justice system within existing BJS data sets and facilitating linkage between current data sets (Section 3–F.1)—depends critically on careful evaluation of the strengths and limitations of current data collections and structures as a first step. One general direction for improvement by statistical agencies, including BJS, is greater attention to known data quality issues and comparisons with
other data resources as part of the general documentation of data sets. BJS reports are generally careful to include a concise methodology section, and the public-use data files accessible at the NACJD typically include additional detail in their codebooks. Still, as a general practice, BJS should work to improve the documentation on its major data holdings that is directly accessible from BJS. This could include developing and making available technical reports based on specific user experiences and providing direct links to technical reports by the Census Bureau (and other BJS-contracted data collection agents) on the development of specific survey instruments. As part of an evaluation program, it would also be useful for BJS to move beyond examinations of individual series and undertake critiques of the relative quality of multiple sources. This work should be done in partnership with other statistical agencies or data users, as we describe below in Section 5–B.11 for comparing BJS's prison and jail censuses with the data quality and resolution provided by the Census Bureau's ACS. Other examples of multiple-source evaluation include:

- examination of differences between homicide rates computed from UCR data and those from the cause-of-death data coded in the vital statistics compiled by the National Center for Health Statistics;
- reconciliation of the number of gunshot victims known to the police (or measured in emergency room admissions data) with the number of self-reported gunshot victims in the NCVS (see, e.g., Cook, 1985); and
- examination of the reasons why serious-violence victimization rates from the NCVS and School Crime Supplement differ from those derived from CDC's Youth Risk Behavior Surveillance System.
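The first comparison in the list above amounts to computing the same rate from two independent measurement systems and studying the gap. The sketch below is a hypothetical illustration of the arithmetic only; the counts and population are invented, and a real reconciliation would also have to align definitions, reference periods, and geographic coverage.

```python
# Hypothetical sketch of a multiple-source comparison: homicide rates
# per 100,000 computed from two independent systems (police-reported
# counts vs. cause-of-death vital records). Counts and population are
# invented; the point is the comparison structure, not the numbers.

def rate_per_100k(count, population):
    """Rate per 100,000 persons."""
    return count / population * 100_000

population = 300_000_000    # hypothetical national population
ucr_count = 16_500          # hypothetical police-reported homicides
vitals_count = 17_800       # hypothetical death-certificate homicides

ucr_rate = rate_per_100k(ucr_count, population)
vitals_rate = rate_per_100k(vitals_count, population)

print(f"UCR-based rate:     {ucr_rate:.2f} per 100,000")
print(f"Vitals-based rate:  {vitals_rate:.2f} per 100,000")
print(f"Ratio (vitals/UCR): {vitals_rate / ucr_rate:.3f}")  # gap to be explained
```

A persistent ratio above or below 1 is the starting point for evaluation: is the gap due to coverage, definitions, or reporting practices in one of the systems?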
5–B.10 Professional Advancement of Staff

To develop and maintain a high-caliber staff, a statistical agency must recruit and retain qualified people with the relevant skills for its efficient and effective operation, including analysts in fields relevant to its mission (e.g., demographers, economists), statistical methodologists who specialize in data collection and analysis, and other specialized staff (e.g., computer specialists). (National Research Council, 2009:12)

At the panel's request, BJS supplied biographical information for its staff members as of fall 2008. A total of 32 of the 53 staff members hold positions with labels connoting direct statistical work (statistician, senior statistician, or branch chief); 12 have doctoral degrees (with an additional five listed as Ph.D. candidates), and nearly all list master's degrees. However, none holds a doctoral or master's degree in statistics, although two statisticians have completed master's degrees in the Joint Program in Survey Methodology of the University of Maryland, the University of Michigan, and Westat.
Indeed, the only formal statistics degree on the full BJS staff is a bachelor's degree, held by a specialist on the support staff. Not surprisingly, advanced degrees in criminology (or criminal justice) and sociology abound, though other fields such as social psychology, social welfare, and public affairs are also represented. The statistician ranks in BJS also include one holder of a law degree. Our review of the staff biographies—and of BJS's publications, throughout this report—suggests a very capable and dedicated staff, with a median length of service of about 8 years and including several career staff members of 20 years or more. Our intent is not to impugn the good work of the BJS staff. However, in Section 5–B.7 and in our interim report, we commented on the need for more highly skilled technical leaders within BJS; we think this is necessary to put BJS on a better footing in dealing with its external data collection agents, to cultivate a climate of research and innovation, and to safeguard the continued credibility and quality of BJS data. Going further, we suggest that BJS would benefit from additional staff expertise in mathematical and survey statistics; computer science and database management are also notable gaps in staff expertise, given the agency's role in executing grants to improve criminal justice databases and the importance of record linkage for conducting longitudinal studies of flows in the justice system.

Recommendation 5.15: BJS must improve the technical skills of its staff, including mathematical statisticians, computer scientists, survey methodologists, and criminologists.

At the same time, the panel notes that the problem of recruiting technical staff is a large one for all statistical agencies.
The agencies in the federal statistical system that seem to do better on this score are those that actively support advanced degrees among their junior staff—that is, making human capital investments in bachelor's-level staff and assisting their graduate studies so as to yield more technically astute staff in 2–4 years. In addition, agencies have sponsored dissertation fellowships using their own data, drawing on the contact with the Ph.D. candidate to recruit talented staff.

5–B.11 Coordination and Cooperation with Other Statistical Agencies

Although agencies differ in their subject-matter focus, there is overlap in their missions and a common interest in serving the public need for credible, high-quality statistics gathered as efficiently and fairly as possible. When possible and appropriate, federal statistical agencies should cooperate not only with each other, but also with state and local statistical agencies in the provision of data for subnational areas. (National Research Council, 2009:13)
There are some valuable and mutually productive partnerships between BJS and other statistical agencies. These include relatively long-term arrangements, such as the National Center for Education Statistics' sponsorship of the School Crime Supplement, as well as one-time collaborations, such as a joint report by BJS and CDC staff on findings from the NCVS on injuries sustained in the course of violent crime victimizations (Simon et al., 2001). BJS has also enjoyed some collaborative work with the National Center for Health Statistics, including use of vital statistics data collected from state public health departments and registrars. BJS has also, on occasion, worked with agencies that are not principal statistical agencies but that conduct statistical work; for instance, BJS sponsored the Consumer Product Safety Commission to add a Survey of Injured Victims of Violence as a module to the commission's National Electronic Injury Surveillance System—a sample of hospitals that provide their emergency department records for coding and analysis (Rand, 1997). Of course, BJS's most intensive relationship with another statistical agency is with the Census Bureau. Although there are some cooperative aspects of the partnership between the two agencies, the panel believes that there are fundamental strains in the relationship. One is that, as noted in the preceding section, BJS has lacked the strong statistical expertise to fully engage with Census Bureau staff on design (and redesign) issues, and so its role in modifying the NCVS to fit within budgetary constraints has largely been one of deciding which Census Bureau–developed cost-saving options are least objectionable. Another element of strain is discussed in our interim report (National Research Council, 2008b:Sec.
5–D): the failure of the Census Bureau to provide transparency in its costs and charges for data collection to BJS (or its other federal agency sponsors), which makes assessment of the trade-offs between survey costs and errors impossible. Agencies that contract out much of their work—and BJS is one of the extreme cases within the statistical system in that regard—can easily evolve into ones where contract management is the dominant focus. Although more (and more sophisticated) technical staff will not solve BJS's budget problems, they can make BJS a stronger partner to the other statistical agencies with which it works. On substantive grounds, an important area in which a healthy BJS–Census Bureau relationship and collaboration would be beneficial is the reconciliation of BJS's corrections data series with the Census Bureau's measures of the correctional institution population. The American correctional apparatus has grown enormously since the mid-1970s; there are now on the order of 2.3 million persons in prison or jail, and the incarceration rate has grown fourfold since 1980. Another 800,000 people are on parole, and 4.2 million are on probation. Virtually all the growth in incarceration since 1980 has been among those with less than a high school education. In this context, the
BJS data collections are a valuable supplement to the large Census Bureau household surveys, which are drawn exclusively (or nearly so) from the noninstitutional household population. BJS collections on the population under correctional supervision are not just an important part of an accounting of the criminal justice system but an increasingly important part of the nation's accounting for the population as a whole. The groups overrepresented in prison populations—minority men under age 40 with little schooling—are also significantly undercounted in household surveys and other data collections from the general population. The Census Bureau's ACS now contains the detailed social, demographic, and economic questions that were traditionally asked of a sample of the population through the "long form" questionnaire of the decennial census. When the ACS entered full-scale collection earlier this decade, it also included coverage of the group quarters (nonhousehold) population, including prisoners. The first 3-year-average estimates from the ACS for areas with populations of 20,000–65,000 became available only in 2008, and the first 5-year-average estimates for all geographic areas (including those under 20,000 population) are slated for release only in 2010. Hence, the properties of these estimates—much less their accuracy for segments of the relatively small group quarters population—are only beginning to be studied and understood. Going forward, an important question will be how the most accurate picture of the prison and jail population can be derived, balancing the ACS estimates with the annual count (and basic demographic) information from BJS's prison and jail censuses and the detailed information available from BJS's inmate surveys.
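One standard way to "balance" two measurements of the same population total, as contemplated above, is precision weighting: combine the sources with weights inversely proportional to their variances. This is offered only as an illustration of the general idea, not as a BJS or Census Bureau method, and the estimates and standard errors below are invented.

```python
# Hedged illustration of precision (inverse-variance) weighting for
# combining two measurements of the same quantity -- say, an ACS
# group-quarters estimate with sampling error and a census-style count
# with its own assumed uncertainty. All numbers are hypothetical.

def precision_weighted(estimates_and_ses):
    """Combine (estimate, standard_error) pairs by inverse-variance
    weighting; returns the combined estimate and its standard error."""
    weights = [1.0 / se**2 for _, se in estimates_and_ses]
    total_w = sum(weights)
    combined = sum(w * est for w, (est, _) in zip(weights, estimates_and_ses)) / total_w
    return combined, total_w ** -0.5

acs_estimate = (2_250_000, 60_000)  # hypothetical ACS estimate and SE
census_count = (2_310_000, 15_000)  # hypothetical census count, assumed SE

combined, se = precision_weighted([acs_estimate, census_count])
print(f"combined estimate: {combined:,.0f} (SE {se:,.0f})")
```

The combined estimate sits closer to the more precise source, and its standard error is smaller than either input's; the hard part in practice is assigning a defensible uncertainty to the census-style count, whose errors are mostly nonsampling.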
5–C SUMMARY

The panel believes that BJS and DOJ should continually examine BJS's fulfillment of the principles and practices of a federal statistical agency. Our panel's review found that the perceived independence of the agency was severely shaken by recent events. We found that the trust of data providers is threatened when BJS directly assists regulatory activities. We also found that a renewed emphasis on increasing the technical and research skills of BJS's staff is needed.