Ensuring the Quality, Credibility, and Relevance of U.S. Justice Statistics
We rejected explicitly suggesting that some data series be cut in order to pay for others, but certainly not because the current collections are perfect; the improvements we suggest in the recommendations are testimony to that. Rather, we cannot accept the underlying premise that BJS can achieve its legislated goals by cutting programs. In our assessment, it can be stated as a fact: BJS has been given more responsibilities than can be achieved with current resources. The resources provided to BJS to carry out its work are not commensurate with the breadth—and importance—of the responsibilities assigned to the agency by its authorizing legislation.
Because of this, the agency has for some years walked a fine line of small cuts to sample size or measurement, short delays in publication, and temporary hiring freezes—each of these tolerable in itself, but accumulating over the years to the point that core functions have broken down. On a routine basis, decisions must be made to address certain responsibilities and defer others; trade-offs must be made in the periodicity and completeness of data collections. Maintenance and continuation of existing data collections must also be balanced against the need to comply with directives from Congress and the Department of Justice, complicating resource allocation decisions and the setting of priorities. Such decisions are hardly unique to BJS—at some point, all organizations must make such trade-offs—but BJS’s mismatch between resources and responsibilities makes the decisions particularly difficult.
Thus, in setting priorities, BJS directors have perforce had a short time horizon—responding to the most pressing immediate demands even though those decisions may have negative long-term consequences for individual data collections and the health of the agency. Certainly, amid the year-to-year juggling of data series to keep production moving, longer-term investments in research and innovation become difficult or impossible to make. The most striking example of the consequences of this extremely tough climate is the current state of the NCVS: what was once, clearly, the best victimization survey in the world is now unable to satisfy its basic function of providing annual estimates of level and change in common-law crime. This decay happened gradually as BJS administrators attempted to respond to immediate exigencies, aggravated by an overly broad mandate. Each single cut in sample size, or other cost-cutting measure, was justifiable given the alternatives available at the time. Cumulatively—as demonstrated most vividly by the declared “break in series” with the 2006 NCVS data—they led to the conclusion that “the [current] NCVS is not achieving and cannot achieve BJS’s legislatively mandated goals” (National Research Council, 2008b:Finding 3.1).