
Ballistic Imaging (2008)

Chapter: 6 Operational and Technical Enhancements to NIBIN



As discussed in Chapter 1, the committee's interpretation of our charge is focused on offering advice on three basic policy options: maintain the National Integrated Ballistic Information Network (NIBIN) system as it is, enhance the NIBIN system in several possible ways (without expanding its scope to include new and imported firearms), or establish a national reference ballistic image database as a complement or adjunct to the current NIBIN. The first of these options may readily be viewed as something of a "straw man," particularly given the open-ended nature with which we were asked to consider enhancements or improvements to NIBIN. No program is perfect: there is always opportunity for refinement and improvement, and such is the case with NIBIN. The underlying concepts of NIBIN are sound—facilitating transfer of information between geographically dispersed law enforcement agencies and giving those agencies access to technology that could generate investigative leads that would otherwise be impossible. However, the program falls short of its potential in several respects, and this chapter proposes some directions for improvement.

After briefly reviewing other perspectives that have been raised about improving the content and performance of the NIBIN system (Section 6–A), our comments focus on possible and suggested enhancements. The second section (6–B) considers operational enhancements, those that concern the administration of the program and the use of the system in general. The third section (6–C) considers technical enhancements, those that deal with the specific technology used by the NIBIN program; this section builds on Chapter 4's discussion of the current Integrated Ballistics Identification System (IBIS) platform.

In phrasing some of our recommendations, we opt for generic descriptions—"ATF and its NIBIN contractors" or the "NIBIN technical platform"—since they describe functionality that should apply regardless of the specific platform or vendor. One major possible enhancement of interest to the committee—a change in the basic imaging standard from two-dimensional photography to three-dimensional topography—is not discussed here; instead, we give the topic more detailed examination in Chapters 7 and 8.

6–A Other Perspectives on NIBIN Enhancement

The Bureau of Alcohol, Tobacco, Firearms, and Explosives (ATF) and the NIBIN program have made strides to gather feedback on system procedures and performance from the user base, efforts for which they should be commended. Formally, forums for the gathering of feedback have included periodic meetings of the ATF-established NIBIN Users Congress since November 2002; users are also asked to serve as regional outreach coordinators, providing a sounding board for comments both informally and through the user group sessions. Based on the user group meetings, ATF and Forensic Technology WAI, Inc. (FTI), periodically update (and describe progress in addressing) a "top 10" list of user concerns and suggestions for improving NIBIN and the IBIS platform. In addition, NIBIN program staff periodically collect reports from the regions on indicators of system usage—e.g., cross-regional searches and number of correlation requests that have not been reviewed by local sites—that go beyond the monthly operational statistics.

The committee chair and staff attended the sixth NIBIN Users Congress meeting at FTI's U.S. training center in Largo, Florida, in October 2004. That session suggested a strong commitment among program managers and local users to making the system work more effectively as a key part of routine investigations. Concerns expressed at the meeting ranged from time-consuming software glitches (e.g., the focus jumping to the top of the list when an already-viewed comparison report is deleted rather than advancing to the next line) to serious interface issues (e.g., problems with the lighting filter on the microscope, particularly for side light images, that led some agencies to jury-rig fixes using Post-It notes to get acceptable images). This particular session came in the wake of the rollout of a new version of IBIS software meant to be compliant with federal government and Department of Justice cybersecurity requirements. The switch to the new version was problematic and debilitating in some sites, effectively shutting down evidence entry for days or weeks; user feedback helped assess the scope of the implementation problems and can suggest better practices for future major revisions. Some of the enhancements we suggest below reflect comments from the Users Congress meeting, as well as other observations from committee member visits to local NIBIN installations.

Another source of commentary on specific enhancements to improve NIBIN is the operational audit of the program conducted by the U.S. Department of Justice, Office of Inspector General (2005). The audit offered 12 formal recommendations to ATF; see Box 6-1. The audit included examination of a complete snapshot of the NIBIN database and its attempt to link NIBIN data to Uniform Crime Reports data based on Originating Agency Identifier (ORI) codes: hence the specific recommendations to ensure ORI reporting (about 55,000 records in the databases had missing ORI codes) and to resolve the specific identification glitch detected for cases in Colorado and Rhode Island. The Inspector General report also offers sound advice to evaluate the user base for the portable Rapid Brass Identification (RBI) units, which have the potential for permitting cartridge case entries by other agencies without a full IBIS set-up but which have been found to be problematic by previous users. We generally concur with the Inspector General's recommendations and advance some themes from those recommendations in our own guidance below. As noted in Box 6-1, ATF reviewed a draft of the Inspector General's audit and was asked for comment; the agency indicated partial or full concurrence with all 12 specific recommendations.

BOX 6-1
Recommendations from 2005 U.S. Department of Justice Inspector General Audit of NIBIN Program

Based on its review of NIBIN practices, the U.S. Department of Justice, Office of Inspector General (2005) offered 12 specific recommendations to ATF in its audit report:

1. Determine whether additional IBIS equipment should be purchased and deployed to high-usage nonpartner agencies, or whether equipment should be redistributed from the low-usage partner agencies to high-usage nonpartner agencies.

2. Provide additional guidance, training, or assistance to the partner agencies that indicated they did not perform regional or nationwide searches because they either lacked an understanding of the process or lacked manpower to perform such searches.

3. Ensure that NIBIN partner agencies enter the [Originating Agency Identifier (ORI)] number of the contributing agency for all evidence entered into NIBIN.

4. Resolve the duplicate case ID number issue in the NIBIN database for the Colorado Bureau of Investigation–Montrose and the Rhode Island State Crime Laboratory.

5. Research the reasons why 12 agencies have achieved high hit rates with relatively low number of cases entered into NIBIN and share the results of such research with the remaining partner agencies.

6. Establish a plan to enhance promotion of NIBIN to law enforcement agencies nationwide to help increase participation in the program. The plan should address steps to: (1) increase the partner agencies' use of the system, (2) increase the nonpartner agencies' awareness and use of the system, and (3) encourage the partner agencies to promote the NIBIN program to other law enforcement agencies in their area.

7. Determine whether new technology exists that will improve the image quality of bullets enough to make it worthwhile for the participating agencies to spend valuable resources to enter the bullet data into NIBIN, and deploy the technology if it is cost-effective.

8. Perform an analysis of the current [Rapid Brass Identification (RBI)] users, and any other potential users, to determine if they would use an improved system enough to warrant the additional cost. If the analysis concludes that another system would be cost-effective, then ATF should pursue funding to obtain the system.

9. Provide guidance to partner agencies on the necessity to view correlations in a timely manner and to ensure that correlations viewed in NIBIN are properly marked.

10. Monitor the nonviewed correlations of partner agencies and take corrective actions when a backlog is identified.

11. Research ways to help the partner agencies eliminate the current backlog of firearms evidence awaiting entry into NIBIN. The research should consider whether the partner agencies can send their backlogged evidence to the ATF Laboratories or to other partner agencies for entry into NIBIN, and whether improvements to the efficiency of NIBIN would facilitate more rapid and easy entry of evidence.

12. Coordinate with Department of Justice law enforcement agencies that seize firearms and firearms evidence to help them establish a process for entering the seized evidence into NIBIN.

Asked to review a draft of the audit report, ATF noted its partial or full concurrence with every recommendation; the ATF response comprises Appendix XV of the audit report.

SOURCE: Text of recommendations excerpted from U.S. Department of Justice, Office of Inspector General (2005).

6–B Operational Enhancements

Suggesting operational enhancements to the NIBIN program is a complicated task due to the program's very nature. At its root, NIBIN is a grant-in-aid program that makes ballistic imaging technology available to law enforcement agencies to an extent that would not be possible if departments had to acquire the necessary equipment on their own. However, although ATF provides the equipment, the state and local law enforcement agencies must supply the resources for entering exhibits and populating the database. Accordingly, the incentive structures are complex: promoting top-down efforts by NIBIN administration to stimulate NIBIN entry necessarily incurs costs for the local departments. So, too, does suggesting that local NIBIN partners make concerted outreach efforts to acquire and process evidence from other agencies in their areas. The benefits that may accrue can be great, providing the vital lead that may put criminals in jail or generating the spark that may solve cold cases. Yet those benefits are not guaranteed, and the empirical data needed to inform the tradeoffs—on the number and nature of queries or on the success of NIBIN in making "warm" hits where there is some (but perhaps weak) investigative reason to suggest links between incidents—are not collected.

Accordingly, our suggested operational enhancements follow two basic themes. First, the process for acquiring evidence should be improved and, when possible, streamlined in order to promote active participation by NIBIN partners and to make ballistic imaging competitive with DNA and other types of analysis for scarce forensic laboratory resources. Second, the NIBIN management must have the information and resources necessary to allocate and reallocate equipment to agencies in order to maximize system usage.

6–B.1 Priority of Entry

In suggesting ways to improve the entry of evidence, a natural place to start is to suggest a prioritization or a structure for entry: which types of ballistics evidence, generally or from specific types of crimes, should be given top priority in order to maximize chances of obtaining hits and generating leads? On this point, the current composition of the NIBIN database suggests preferences that have emerged among partner agencies: more cartridge casings are entered than bullets and, in both instances, exhibits from test firings of recovered weapons are more frequently entered than individual specimens recovered as evidence from crime scenes.

Recommendation 7 of the Inspector General audit of NIBIN (U.S. Department of Justice, Office of Inspector General, 2005) urges a general reconsideration of the imaging of bullets, motivated by survey responses from agencies about why they do not enter bullet evidence. Reasons cited for not entering bullets into NIBIN included the time-consuming and difficult nature of acquiring bullets, as well as a perceived low probability of success in generating hits. It has also become common practice by NIBIN users to acquire only firing pin and breech face images from cartridge casings and not the ejector marks when those are available. From observations of NIBIN sites, this seems to be largely due to the added time required to acquire that image (free-hand tracing of the region of interest), even though some research described in Section 4–E documents increased chances of generating hits when all three images are collected. Understanding that decisions on entry priorities must be made at the local level, as determined by available resources, we suggest one basic ordering.

Recommendation 6.1: In managing evidence entry workload, NIBIN partner sites should give highest priority to entering cartridge casings collected from crime scenes, followed by bullet evidence recovered from crime scenes.

This recommendation is based in part on the findings of our study of completed hits in 1 year's worth of operational data from NIBIN; evidence suggests that the prompt acquisition and processing of cartridge case evidence results in the greatest number of hits. We do not discount the importance of the hits that arise from the entry of specimens test fired from firearms recovered by the police; links drawn to past cases (and past crimes) can be very useful in effective prosecution of criminal suspects. However, we believe that the system's greatest benefit may come from its use as a tool for working with active, open case files, generating investigative leads that may lead to the apprehension of at-large suspects rather than confirming other offenses associated with a gun (and suspect) already in police custody.

Though our committee's focus on a national reference ballistic image database has led us to focus more on the imaging of cartridge cases than bullets, we give the entry of evidence bullets a slight edge in priority over the entry of nonevidence (test-fired) cartridge casings. This again favors emphasizing the use of NIBIN in the most active crime investigations. However, this choice will ultimately be contingent on continuing improvements to the technology, streamlining the image acquisition process and improving comparison results for bullets. (We discuss related concerns on the tension between entering bullet and casing evidence in Section 6–B.3.)

A rough priority order for the entry of evidence would be the following: (1) cartridge case evidence recovered at crime scenes, (2) bullet evidence recovered at crime scenes, (3) casings test fired from weapons recovered by police that will not be destroyed or removed from circulation (i.e., must be returned to owner), (4) bullets test fired from weapons recovered by police that will not be destroyed or removed from circulation, (5) casings from weapons recovered at crime scenes that are to be destroyed, (6) bullets from weapons recovered at crime scenes that are to be destroyed, and (7) evidence entries that are archival in nature (e.g., working through and modernizing a backfile).
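This rough ordering is simple enough to encode in a local triage or case-management tool. The following is a minimal sketch of one way a partner site might rank a backlog of pending entries by the suggested priority; the exhibit categories, field names, and the triage function are hypothetical illustrations and are not part of IBIS or NIBIN software.

# Hypothetical sketch of the rough entry-priority order suggested above.
# Exhibit categories and field names are illustrative, not IBIS/NIBIN fields.
ENTRY_PRIORITY = {
    ("casing", "crime_scene"): 1,
    ("bullet", "crime_scene"): 2,
    ("casing", "test_fire_returned"): 3,   # recovered weapon must be returned to owner
    ("bullet", "test_fire_returned"): 4,
    ("casing", "test_fire_destroyed"): 5,  # recovered weapon to be destroyed
    ("bullet", "test_fire_destroyed"): 6,
    ("casing", "archival"): 7,             # backfile modernization
    ("bullet", "archival"): 7,
}

def triage(queue):
    """Order a backlog of pending exhibits by the suggested entry priority."""
    return sorted(queue, key=lambda e: ENTRY_PRIORITY.get((e["kind"], e["source"]), 99))

backlog = [
    {"case": "07-1123", "kind": "bullet", "source": "crime_scene"},
    {"case": "07-0988", "kind": "casing", "source": "test_fire_returned"},
    {"case": "07-1201", "kind": "casing", "source": "crime_scene"},
]
for exhibit in triage(backlog):
    print(exhibit["case"], exhibit["kind"], exhibit["source"])

In this sketch the crime scene casing is queued first, the crime scene bullet second, and the test-fired casing last, mirroring the order suggested above.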

6–B.2 Expanding System Usage

Hits are only possible in the NIBIN system if evidence is entered into the database, and local departments will only put priority on entering evidence into NIBIN if they see tangible benefit in the form of hits. In this circle, we believe that it is important that the potential for NIBIN to generate active investigative leads be the primary emphasis; to the extent that NIBIN entry is viewed as drudgery or simply "feeding the beast" to no apparent end, participation will wane.

Recommendation 6.2: In order to promote wider use of NIBIN resources and to ensure that entry of ballistics evidence into NIBIN is a high priority, ATF should work with state and local law enforcement agencies to encourage them to incorporate ballistic imaging as a vital part of the criminal investigation process. This work should include early and continued involvement of agency forensic staff in working with detectives on cases involving ballistics evidence and regular department reviews of NIBIN-related cases.

This kind of promotion should include encouragement of programs like the Los Angeles Police Department's "Walk-In Wednesdays," a designated time for detectives to consult with firearms examiners and IBIS technicians, enter evidence into NIBIN, and analyze resulting comparison results. The lessons learned in areas like Boston (as described in Appendix A), where cross-jurisdictional NIBIN searches have proven highly successful, should also be studied and disseminated to the broader NIBIN partner base.

Through its "Hits of the Week" program, the central NIBIN program administration has provided limited anecdotal data on the system's performance in jurisdictions and in solving a variety of crime types. These kinds of case stories can serve to instill confidence in the system and promote continued "buy-in" by NIBIN partner sites. As described in Chapter 5, though, the "Hits of the Week" most often chronicle cases in which NIBIN analysis is only brought into play when a firearm—and frequently a suspect—is in custody. The "Hits of the Week" that speak to links between evidence casings and bullets are less satisfying as short anecdotes because they typically have to be left unresolved, noting that "investigation is continuing" or that leads are being followed up. The NIBIN program would be well served by adding to the staccato "Hits of the Week" more detailed investigative studies of completed cases that describe the contribution of NIBIN-generated leads.

On the subject of hits, the NIBIN program has the capacity to make a simple change that may help participation by overcoming an odd quirk and subtle disincentive in the current structure.

Recommendation 6.3: A separate count variable of cross-jurisdictional hits should be added to the system's basic operational statistics, crediting both the originating jurisdiction of linked evidence and the site that confirms the hit.

As described in Box 5-3, the NIBIN program currently credits completed "hits" to the site that actually completes the microscopic examination that confirms the match. In many cases, matches will be made between pieces of evidence within the same agency and the same NIBIN site. However, other hits may be made locally (including evidence from nonpartner agencies submitting evidence to a NIBIN site), regionally, or cross-regionally. Both agencies are instructed to mark completed hits in their system, but only the agency confirming the hit is supposed to report it to NIBIN management. Moreover, "if a hit occurs between two sites, the information is not transferred to the other site by the system. Rather, the other site must be [separately] notified to create the hit in its own database" (U.S. Department of Justice, Office of Inspector General, 2005:110).

It is a serious impediment that data on interagency hits are not automatically or systematically recorded as part of the NIBIN program's default operational statistics; without that information, it is difficult to have a complete sense of the system's usage. But the current asymmetric definition of a hit also sharply undercuts the "network" aspect of NIBIN: Agencies that serve as good partners (or who take the trouble to route evidence to NIBIN partners in their area) by entering their data in a timely fashion should receive credit when their effort bears fruit, even if the hit is actually made in another place. Ideally, tabulations should be made not only of hits across NIBIN sites but across different ORI codes as well, in order to better detect current nonpartners who might benefit from NIBIN equipment installation.

Alternatively, the NIBIN definitions of a "hit" could be revised to be symmetric, crediting both the source(s) and the verifier of evidence matches. However, this change is undesirable because it would double-count (or more) the number of NIBIN-generated investigative leads.
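A minimal sketch of the kind of tally Recommendation 6.3 envisions follows: each hit is still counted once, but credit accrues to both the originating site and the confirming site. The record fields and ORI values are hypothetical illustrations, not an existing NIBIN report format.

from collections import Counter

# Hypothetical hit records: the ORI of the site that entered the earlier
# (linked) evidence and the ORI of the site that confirmed the hit by microscopy.
hits = [
    {"originating_ori": "MA0130100", "confirming_ori": "MA0130100"},  # within-site hit
    {"originating_ori": "RI0070000", "confirming_ori": "MA0130100"},  # cross-jurisdictional hit
    {"originating_ori": "CO0430100", "confirming_ori": "RI0070000"},
]

hit_total = len(hits)          # each hit is counted once in the system totals
credit = Counter()             # but both sites receive credit for the linkage
cross_jurisdictional = 0
for h in hits:
    credit[h["confirming_ori"]] += 1
    if h["originating_ori"] != h["confirming_ori"]:
        credit[h["originating_ori"]] += 1
        cross_jurisdictional += 1

print("total hits:", hit_total)
print("cross-jurisdictional hits:", cross_jurisdictional)
print("site credit:", dict(credit))

Because the total count is kept separate from the per-site credit, this approach avoids the double-counting problem that a fully symmetric hit definition would create.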

6–B.3 Improving Image Entry Protocols

The acquisition of evidence into NIBIN can be very time consuming, particularly for bullet evidence. Even for cartridge casings, the mechanics of positioning evidence under the microscope and taking the images is only a part of the time demand. The time needed to collect the images may be topped by the time needed to clean, prepare, and mount the evidence; the time to prepare necessary paperwork, notes, and reports on entry; the time to prepare written reports on possible and completed hits; and the filing (or refiling) of evidence into storage. There is a need for the acquisition process to be routinized and rigorous; analysis is for naught if anything in the acquisition process compromises the chain of evidence and renders the exhibits inadmissible in court.

When local agencies have affirmed a commitment to ballistic imaging as part of their analyses and revised procedures for the entry and filing of evidence, streamlined procedures have been developed to make NIBIN entry more rapid. A notable example of this type of procedural review was completed by the New York City Police Department (NYPD), which reviewed its evidence processing routines and revamped them into the "Fast Brass" system (see Box 6-2). Building from models like the New York example, other departments may find ways to work through existing backlogs and realize more benefits from their NIBIN participation.

Recommendation 6.4: State and local law enforcement agencies should be encouraged to streamline the ballistic image acquisition process and reporting requirements as much as possible, in order to facilitate rapid data entry and avoid evidence backlogs.

The California technical evaluation of a potential state reference ballistic image database made reference to low levels of bullet hits achieved by the NYPD. The ATF critique of the technical evaluation attributed this to one part of the Fast Brass process: the department's policy of entering only casings if both bullets and casings are recovered from the same crime scene. Thompson et al. (2002:17) commented that "ATF utilizes both the bullet and cartridge casing entry aspects of IBIS, and we recommend that our NIBIN partner agencies do the same in entering their crime gun evidence." They argue that the NYPD policy jeopardizes the chances to make hits in crimes where casing evidence is not likely to be recovered: "drive-by shootings in which the bullets are found at the scene but the casings remain in the shooter's vehicle, for example."

BOX 6-2
New York City Police Department "Fast Brass" Processing

The New York City Police Department policy is to enter ballistics evidence into its IBIS within 24 to 48 hours of its delivery to the department's crime lab. A typical IBIS entry workload is on the order of 10–40 bullets and 100–150 cartridge casings per week.

In 2002, faced with an IBIS entry backlog of about 1,300 cases, the department sought to streamline its entry process to eliminate redundancy. The resulting "Fast Brass" process pared the inventory and case note report filed for ballistics evidence to a limit of one page and required a full report (of less than five pages) only for IBIS-generated hits. In cases in which multiple bullets or casings were recovered and all were of the same type and caliber, the Fast Brass rules put priority on immediately entering only one of the exhibits (presumably, the one judged to have the clearest toolmarks). Phased in over the course of 2003, the new Fast Brass protocols succeeded in eliminating the IBIS entry backlog; about 9,650 items were entered into IBIS, and 310 hits were achieved in 2003, compared with 8,400 items and 195 hits in 2002.

Another evidence protocol maintained by the NYPD is based on a prioritization of resources and assessment of current system performance: if both bullets and casings are recovered from the crime scene and they are of the same caliber, only the casings are entered into IBIS. Of the nearly 1,400 IBIS hits obtained by the NYPD from October 1995 through December 2004, fewer than 10 were generated by bullet evidence—hence a higher priority on cartridge case entry.

SOURCE: McCarthy (2004).

It is impossible to fully evaluate the tradeoff between entering bullets and entering casings without a line of empirical research that is lacking at present: When both casings and bullets are recovered from the same scenes or collected in test firings and both are entered into NIBIN, how do relative scores and ranks on the cartridge case markings compare to those for bullets? Further work in this area could also help finalize a priority for exhibit entry, as described in Recommendation 6.1, suggesting how potential gains in generating hits compare with the resource efficiencies inherent in favoring the entry of casings over bullets.

6–B.4 Formalize Best Practices

One of our committee's plenary meetings was held in the Phoenix metropolitan area, where several NIBIN sites at various levels of jurisdiction—state police, county sheriffs, and municipal police departments—are clustered. Another of our meetings included presentations by the NYPD and officials from the Boston area, commenting on usage of ballistic imaging technology in that area. In addition, each member of the committee and its staff visited at least one NIBIN site or IBIS installation. Our discussions at these sites corroborate what is evident from NIBIN operational data, including the analysis done in the Inspector General audit of NIBIN (U.S. Department of Justice, Office of Inspector General, 2005). That is, active participation in NIBIN and image entry into the system spans a continuum, from vigorous users who put high priority on use of the system to agencies for which data entry (like the resulting number of hits) is much more limited.

In the preceding sections we have touched on some of the reasons for this variability, including the time-consuming nature of bullet entry and perceptions of limited payoff in terms of confirmed hits; our recommendations in the rest of this chapter try to address some other points of aggravation raised by NIBIN users. As we noted above, ATF has done a commendable job in soliciting feedback from its users, and it is important that this continue. But we also believe that it is important that—drawing on local users' experience—NIBIN management take a detailed look at sites that have most successfully and productively used the system. Through such a review, it would be useful to distill "best practices" by high-achieving agencies—for example, means of obtaining high-level commitment by agency officials, methods for working through returned lists of comparison scores, or interacting with detectives and beat officers—for dissemination to all NIBIN partners.

Recommendation 6.5: Local NIBIN experience should be a basis of research and development activities by ATF, its contractors, and the National Institute of Justice. Local experience could usefully contribute to such efforts as "best practices" for image acquisition, investigative strategies, data archiving standards, and the development and refinement of NIBIN computer hardware and software.

6–B.5 Entry of Multiple Exemplars

Although many of our recommendations are intended to make NIBIN image acquisition less burdensome, there is one point on which we believe that a slight loss in efficiency will ultimately lead to greater effectiveness in producing investigative leads.

This point is the question of what evidence should be entered into NIBIN when multiple exhibits are possible, whether this is because multiple pieces of evidence are recovered from the same crime scene or because a gun is recovered by police (and hence can be test fired more than once). In some instances, this may also include crimes for which a firearm is left at the crime scene, as well as spent bullets or casings, but a suspect may not yet be apprehended. For expedience, some agencies may only enter a single exhibit that a firearms examiner or an IBIS operator judges to be the "best" marked of the exhibits; we have observed this kind of assessment at individual law enforcement agencies, and it is also the standard practice for New York's Combined Ballistic Identification System (CoBIS) reference database when more than one casing is included as the required ballistic sample.

[Footnote: On one of our site visits, we discussed some archival cases and retrieved some exhibits from the archives for reanalysis. For one of these instances, both a firearm and a spent casing were recovered from a crime scene; only the casing obtained by test firing the weapon (using department protocol ammunition) had been entered into NIBIN, not the actual crime scene evidence.]

A recurring message from the studies of IBIS performance reviewed in Chapter 4—as well as our committee's own experimental work, discussed in Chapter 8—is that ammunition type is extremely consequential in obtaining high-probability matches. The NIBIN program maintains a list of standard protocol ammunition for various firearms calibers. This protocol ammunition is meant to provide the best conditions for depositing toolmarks, and that is an important consideration. However, George (2004a:288) phrased a fundamental point most clearly and bluntly: "criminals do not feel obligated to use the ammunition our laboratory equipment may prefer, and firearms submitted at a later date may have a different brand of ammunition than was used at earlier, unknown, crime scenes." Based on the sizable impact ammunition type appears to play in IBIS comparison scores, and on the inherent variability in the production of marks from shot to shot, we conclude that the NIBIN system's ability to generate hits is hindered by policies of including only a single exemplar for cases.

Nennstiel and Rahm (2006b:29, 30) noted the tension inherent in this choice. Concluding that IBIS comparison performance degrades with database size, they argued that the size of a caliber group in a database "should be achieved as small as possible"; in addition to basic database filtering, they suggested that this could be achieved by rotating evidence out of the database after a certain time period. That said, they argued, "multiple ammunition specimens of the same test firearm should be used for an electronic comparison," as their study indicated that this increases the success rate in finding hits. Moreover, "if available, there should be more than one single (two, or, better yet, three) specimens of the same unrecovered firearm included in the setting up of the open case databases"; they acknowledge that this directly contradicts the guidance to keep databases as small as possible, but that the gain in performance merits the additional entry.

De Kinder (2002a:200) reached the same conclusion. Although his remarks apply specifically to the construction of a reference ballistic image database, they apply equally to NIBIN:

Entry of more than one specimen per firearm, when possible, accounts for the variability inherent in firings using different ammunition makes. In regular casework [in Belgium], we use about two to three different brands of ammunition to account for a different metallic composition of the primer (brass and nickel) and the bullet (brass, nickel and lead). This will substantially ease later microscopic comparisons. It also accounts for some variation in the ductility of the primer material. As the presence of a good quality mark out of the headstamped areas is needed, three cartridges are fired with each brand of ammunition. More experience has to be acquired with automated comparison systems to see what number of test firings has to be performed.

Argaman et al. (2001:270) documents similar protocols used by the Israel National Police, entering two cartridge casings when possible and possibly more. "Although inputting two [casings] almost doubles entry time and increases workload, the authors believe that the benefits outweigh the costs." "The two [casings] entered into the system should differ from each other as much as possible" in order "to increase the chances of finding a match (a possible hit) and for better evaluation of the correlation results."

The ATF critique of the California technical evaluation (see Section 4–G) discounted the findings of a strong ammunition effect. "It is worth noting that ammunition difference is not necessarily prohibitive to the discovery of a hit; most of the hits at ATF labs are between evidence from different ammunition manufacturers" (Thompson et al., 2002:16). No exact data was provided in support of this assessment. However, disputing an assertion that large databases necessarily drown out potential hits by forcing potential matches lower in the list of rankings, Thompson et al. (2002:19) commented that, "in actual fieldwork, IBIS correlation scores seem to actually improve with 'sister' test casings acquired, as the computer refines its search capability."

Practically, of course, there is a limit to how much data local agencies should (or will want) to enter for particular cases; in a shooting incident involving a semiautomatic firearm, where a dozen or more casings may be recovered from the scene, basic resource constraints will preclude entering all of the possible evidence into NIBIN. However, it makes intuitive sense to include more than one to maximize the chances of finding connections to other incidents that might involve the same gun.

Likewise, in test firing a weapon in police custody, all manner of variations are possible, and we do not suggest that agencies try to anticipate every possible shooting condition. What we do suggest is that more than one exhibit be put into NIBIN, ideally representing some span of ammunition makes.

Recommendation 6.6: The NIBIN program should consider a protocol, to be recommended to partner sites, for the entry of more than one exhibit from the same crime scene or test firing when more than one is available. For crime scene evidence, more than one exhibit—but not necessarily all of them—should be entered, rather than having examiners or technicians select only the "best" exemplar. For test-fired weapons, it is particularly important to consider entering additional exhibit(s) using different ammunition brands.

To be truly effective, this recommendation necessarily entails a basic technical enhancement to the current IBIS platform; see Recommendation 6.10. Some of the usability enhancements suggested in Recommendation 6.13 also complement the notion of multiple exemplars.

6–B.6 Reallocation of NIBIN Resources

The final operational enhancement we suggest is an echoing of Recommendation 1 in the Inspector General audit of NIBIN (U.S. Department of Justice, Office of Inspector General, 2005). The NIBIN program does have procedures in place for monitoring low-usage sites and sending warning messages. As ATF commented in its reply to a draft of the audit report, "consideration must be given to the availability of IBIS technology to law enforcement agencies that reside in regions that historically have low usage based on the amount of firearms crimes" (U.S. Department of Justice, Office of Inspector General, 2005:131). That is, ATF is aware that a strict quota of evidence entries per month is an unfair benchmark, since agencies vary in the number of gun crimes (and hence the number of possible NIBIN entries) they encounter. That said, systemic low usage should be grounds for reallocation of scarce program resources to other agencies who can be more effective partners in the system.

Recommendation 6.7: Priority for dispensing NIBIN system technology should be given to high-input environments. This entails adding machines (and input capacity) to sites that process large volumes of evidence and especially to sites that lack their own NIBIN installations but that routinely and regularly submit evidence to regional NIBIN sites for processing. For NIBIN partner agencies with low volume of entry of crime scene evidence, the ATF should continue to develop its procedures for reallocating NIBIN equipment to higher performance environments.

6–C Technical Enhancements

Several of the technical enhancements we suggest deal with the specific functionality and interface of the current IBIS platform; others are broader in scope and speak to the type of information that should be recorded for the NIBIN system as a whole. Put another way, these recommendations are not a "to do" list for the current IBIS or its developers, but will require collaboration between system developers, NIBIN management, and the program's user base.

A common theme of our technical recommendations extends from our general assessment of the IBIS platform in Section 4–F: that it is a sorter and a tool for search that is commonly, and unfortunately, confused with a vehicle for verification; the two are very different functions. The recommendations we offer are meant to improve the system's effectiveness as an engine to search and process large volumes of data and to give its users more flexibility to explore possible connections between cases.

6–C.1 The Language of "Correlation"

We begin with a matter that is inherently technical, even though it does not deal directly with computer hardware or software: It is an issue of nomenclature, of what to call the basic process performed by the IBIS technology. As described in Chapter 4, Forensic Technology WAI, Inc., and the IBIS user base describe the process as "correlation," even though system training materials repeatedly stress that the actual correlation "scores" are of little consequence and that what matters is the rank of particular exhibits. We avoid using "correlation" throughout this report, describing the algorithm and process as "comparison" instead. In statistics, and as has seeped into common parlance, the correlation coefficient measures the strength of linear association between two random variables. Scaled in magnitude to fall between 0 (no linear relationship) and 1 (a perfect linear relationship), the correlation coefficient provides a clear and easy to understand measure of association. That IBIS uses the same term in labeling its scores imparts to the process—however subtly—an undue degree of quantitative confidence. This is not to say that the IBIS procedures are either unreliable or unsophisticated; indeed, we argue quite the opposite in Chapter 4.

To fully warrant the term correlation, the scores reported by ballistic imaging systems would have to have the same easily understood interpretation as a correlation coefficient; this is almost certainly an unrealizable goal. Absent that, what would be helpful is any kind of benchmark or context that can be attributed to system-reported scores.

Recommendation 6.8: Normalized comparison scores—such as statistical correlation scores, which scale to fall between 0 and 1—are vital to assign meaning to candidate matches and to make comparison across searches. Though current IBIS scoring methods may not lend themselves directly to mathematically normalized scores, research on score distributions in a wide variety of search situations should be used to provide some context and normalization to output correlation scores. Possible approaches could include comparing computed pairwise scores with assessments of similarity by trained firearms examiners or empirical evaluation of the scores obtained in previous IBIS searches and confirmed evidence "hits."
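One way to give raw scores such context is purely empirical: place each returned score within the distribution of scores observed in comparable prior searches. The following is a minimal sketch of that idea; the reference scores and the percentile function are hypothetical illustrations, not part of the IBIS scoring methodology.

from bisect import bisect_left

# Hypothetical pool of raw breech face comparison scores collected from
# previous searches of the same caliber partition (known nonmatching pairs).
reference_scores = sorted([38, 41, 47, 52, 55, 58, 61, 64, 70, 77, 85, 92, 104, 118])

def empirical_percentile(score, reference):
    """Fraction of previously observed scores that fall below this score."""
    return bisect_left(reference, score) / len(reference)

for raw in (55, 96, 133):
    p = empirical_percentile(raw, reference_scores)
    print(f"raw score {raw:4d} -> exceeds {p:.0%} of reference scores")

A raw score that exceeds nearly all of the reference scores for that search context carries a meaning that the raw number alone does not, which is the kind of benchmark the recommendation asks the program to develop.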

6–C.2 Collecting the Right Data

Audit Trail

As discussed in Chapter 5, it is impossible to make a full evaluation of the NIBIN program and its effectiveness because the data that are systematically collected on system performance are far too limited. The monthly operational reports that are reviewed by the NIBIN program consist of basic counts of evidence (entered that month and cumulative) and of completed hits. Even within this extremely limited set of variables, the information collected is not rich enough to answer important questions, such as whether hits are more often realized when connecting two pieces of crime scene evidence or in linking a crime scene exhibit to one test fired from a recovered weapon. Completely absent from the standard operational statistics are any indicators of the searches performed by the system (save for the fact that the entry of every piece of evidence should incur a local search by default).

Certainly, some of the data that one would like to have to evaluate the system's effectiveness are not items that can or should be maintained within the IBIS platform; these items include any of the indications of the quality of the investigative leads generated by completed hits, whether an arrest was made in a particular case (or cases), and whether convictions are achieved. But we believe that IBIS at present is too "black box" in nature and that it is not amenable to analysis or evaluation; the system should be capable of generating a fuller audit trail and operational database than the inadequate monthly summaries currently generated and assembled by NIBIN program staff.

Recommendation 6.9: ATF should work with its NIBIN contractor to ensure that the system's hardware and software systems generate an audit trail that is sufficient to adequately evaluate system usage and effectiveness. In most cases, these data should be generated automatically by the software; however, others will require changes to the software so that data may be entered manually (as is currently the case with the recording of hits). The data items that should be routinely tallied and evaluated include (but are not limited to):

• counts of manually requested database searches, such as those against other regions or the nation as a whole;

• information on the origin of the case with which a hit is detected (not just the case number and agency that detects and verifies the hit); and

• characteristics of cases in which a possible match is deemed sufficiently strong to request the physical evidence for direct comparison by an examiner, including the "correlation" scores and ranks for the match, an indicator of which image(s) motivated the request, and an indicator of the disposition of the case (either a hit or a nonhit).
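As a concrete, purely illustrative rendering of what such an audit trail might capture, the sketch below defines one possible per-search record; the field names are hypothetical and do not describe an existing IBIS or NIBIN log format.

from dataclasses import dataclass
from datetime import datetime
from typing import List, Optional

@dataclass
class SearchAuditRecord:
    # Illustrative fields only; not an existing IBIS/NIBIN log format.
    timestamp: datetime
    requesting_ori: str                 # agency that initiated the search
    scope: str                          # "local", "regional", "multiregion", or "national"
    exhibit_id: str
    images_compared: List[str]          # e.g., ["breech_face", "firing_pin", "ejector"]
    top_rank_scores: List[int]          # scores of the highest-ranked candidates
    physical_comparison_requested: bool = False
    disposition: Optional[str] = None   # "hit", "nonhit", or None if still open

record = SearchAuditRecord(
    timestamp=datetime(2007, 3, 14, 10, 25),
    requesting_ori="AZ0072300",
    scope="regional",
    exhibit_id="AZ-2007-00311-C1",
    images_compared=["breech_face", "firing_pin"],
    top_rank_scores=[212, 148, 133],
    physical_comparison_requested=True,
    disposition="nonhit",
)
print(record)

Records of roughly this shape, generated automatically at search time, would allow the kinds of tallies listed above to be produced without relying on manual monthly summaries.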

Ammunition Type

The previous recommendation addressed our concern that the NIBIN machinery does not currently produce the right operational data for effective analysis. We now turn to how the system could benefit from collection of a fundamental variable during the demographic entry stage of image acquisition. In our observations of IBIS at work, a major deficiency in the current set-up is the inability to specify what is known about the ammunition used in the exhibit. Some information about ammunition make can be entered in a "notes" field on the demographic entry screen, but ammunition brand and type should be a standard variable that agencies can use in filtering or sorting their comparison score reports (see Recommendation 6.13). It could also be used as a presorting variable to narrow down the search space before initiating a manual search, as might be desirable in following up a series of shootings for which links and common features are suspected in advance. In Recommendation 6.6, we urge the entry of multiple exemplars, particularly involving the use of multiple ammunition types when test firings from a weapon are possible. Having ammunition as a viewable variable would be invaluable in interpreting the results of comparison runs in cases where multiple exemplars are in the database.

In offering this recommendation, we recognize that it is not as simple a fix as it may appear. To promote more consistent entry, headstamp information would likely have to be entered using a drop-down list, which could be lengthy and would have to adjust to changes in the ammunition market (as is the case with built-in lists of firearms manufacturers). The best way to implement this change, including the easiest spot in the data entry process in which to insert the new item, should be determined on the basis of feedback from NIBIN users.

Recommendation 6.10: ATF and its technical contractors should facilitate the entry of ammunition brand information for exhibits, when it is known or apparent from the specimens. In consultation with its NIBIN user base, ATF should also consider allowing entry of other relevant fields, such as the composition of the primer and the nature of the jacketing of the bullet.
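A minimal sketch of how a structured ammunition field could support the filtering and presorting described above follows; the brand list, field names, and presort function are hypothetical illustrations, not existing IBIS data elements.

# Hypothetical exhibit metadata with a structured ammunition field, as urged
# in Recommendation 6.10; brands and field names are illustrative only.
AMMUNITION_BRANDS = ["Remington", "Winchester", "Federal", "CCI", "PMC", "unknown"]

exhibits = [
    {"id": "E-101", "caliber": "9mm Luger", "ammo_brand": "Winchester", "primer": "nickel"},
    {"id": "E-102", "caliber": "9mm Luger", "ammo_brand": "Remington",  "primer": "brass"},
    {"id": "E-103", "caliber": ".40 S&W",   "ammo_brand": "Winchester", "primer": "brass"},
]

def presort(exhibits, caliber, ammo_brand=None):
    """Narrow the candidate list by caliber and, optionally, by ammunition brand."""
    candidates = [e for e in exhibits if e["caliber"] == caliber]
    if ammo_brand:
        candidates = [e for e in candidates if e["ammo_brand"] == ammo_brand]
    return candidates

print(presort(exhibits, "9mm Luger", ammo_brand="Winchester"))

Entering the brand from a controlled list, rather than as free text in a notes field, is what makes this kind of narrowing reliable.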

6–C.3 Improving Search Strategies and Server Workload

Refinement to the image acquisition process—making it more accurate and less burdensome—is critical to full use of NIBIN resources. So, too, are refinements to the nature of searches conducted. To be most effective, searches have to be easy to specify (if they are not automatic) and must be relevant and important to the local law enforcement agencies using the system. We do not suggest or advocate that nationwide searches against the whole NIBIN database should be routine and default, but we do concur with the Inspector General audit of NIBIN that it is important that agencies have the knowledge and training to initiate nationwide searches if conditions in a case warrant a sweeping search.

It is not surprising that agencies rarely conduct national searches given that, at present, a national search must be carried out by searching each NIBIN region separately. What is disturbing about some agency responses to the Inspector General's survey is that some partners use only the default local search because they do not know how to initiate wider searches or because they consider those searches irrelevant. Accordingly, we echo the Inspector General's Recommendation 2 and amplify it. As a matter of routine, we believe that NIBIN management should periodically conduct national or multiregional searches on samples of evidence, both to get a sense of the ease with which those searches can be conducted and to determine whether the searches indicate possible (or spurious) matches.

Recommendation 6.11: Even though national or cross-regional searches against the NIBIN database may be rare, the capacity for such a search to be conducted should exist and should be well communicated to NIBIN partner agencies. A protocol for national or multiregion searches, whether initiated by individual agencies or in regular system checks by ATF, should be promulgated, with an eye toward providing some investigative spark in open but cold crime investigations.

In consultation with its user base, the NIBIN program should also work to ensure that the default searches performed by the system are adequate for user needs. This entails periodically reviewing the region and partition structure of the NIBIN database; it may also involve working with IBIS developers to define easily accessible "shortcut" searches, rather than work through display maps and a drop-down list every time a certain search region is desired.

Recommendation 6.12: Based on information from NIBIN users, ATF and its technical contractors should:

• regularly review the partition structure of the NIBIN database (which defines the default search space for local agencies) for its appropriateness for partner agencies' needs, and

• develop methods for flexible and user-designed searches that may be more useful to local agencies than the default partitions. These types of searches could be based on the frequency of contacts between local law enforcement agencies or intelligence on the nature and dynamics of known gun market corridors, among other possibilities.

Additional flexible search possibilities could include searches in areas of known gang activity or between jurisdictions where connections were successfully made in previous investigations.
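A user-designed search of this kind can be thought of as little more than a named set of contributing agencies that supplements the default regional partitions. The sketch below illustrates that idea; the partition names, ORI codes, and function are hypothetical illustrations, not features of the current IBIS platform.

# Hypothetical user-defined search partitions, each a named set of agency ORI
# codes, supplementing the default regional partitions.
CUSTOM_PARTITIONS = {
    "interstate_corridor": {"MA0130100", "RI0070000", "CT0009400", "NY0303000"},
    "metro_task_force": {"AZ0072300", "AZ0071800", "AZ0070000"},
}

def search_space(partition_name, all_exhibits):
    """Return the exhibits whose contributing agency falls within the named partition."""
    oris = CUSTOM_PARTITIONS[partition_name]
    return [e for e in all_exhibits if e["ori"] in oris]

exhibits = [
    {"id": "E-201", "ori": "MA0130100"},
    {"id": "E-202", "ori": "AZ0072300"},
    {"id": "E-203", "ori": "RI0070000"},
]
print([e["id"] for e in search_space("interstate_corridor", exhibits)])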

A peculiar and disturbing finding from the U.S. Department of Justice Inspector General audit of NIBIN is that there are NIBIN partner agencies that enter exhibits into the database but do not regularly (or ever) review the comparison scores that are returned by the NIBIN regional servers. It is difficult to say why this is the case. In part, though, it may be due to the structure of the NIBIN database itself, funneling all evidence and comparison requests through IBIS correlation servers at three ATF laboratories. It is unrealistic to expect completely instantaneous results, even if each site had its own servers (which we do not suggest). Yet the distributed nature of the network necessarily involves some considerable amount of waiting: waiting for new images and requests to be uploaded to the servers, waiting for comparison routines to be performed, and waiting for comparison scores and images to be pushed back to the local installations.

Our committee and staff site visits included trips to two of the ATF laboratories; at both we saw the general slow-down at IBIS stations when the local NIBIN sites were "polled" for new images and processing was being performed. Given the time involved, it is not difficult to imagine local agency staff moving on to other duties rather than waiting on returned results. Again, we do not suggest that there is necessarily anything wrong with the NIBIN program's strategy of consolidating servers at a limited number of sites, and we do not suggest that this strategy and the waiting time that it incurs is the complete, direct cause of agencies not following up comparison score results. What we do suggest is that NIBIN management must also periodically consider whether the regional server workload is balanced so that the time from image acquisition to comparison score results is as small as possible for NIBIN users.

6–C.4 User Improvements for NIBIN as a Search Tool

As we discuss in Section 4–F and above in this chapter, we think that the NIBIN program and the IBIS platform would be best served by breaking away from a strict top-10, verification-focused posture; it is best conceived as a tool for search, analysis, and discovery. The current IBIS is fairly rigid in its structure, affording users little or no flexibility in defining the reports that are generated by the system or the interface they view on screen. Comparison scores are reported in a basic spreadsheet layout, and users are effectively limited to choosing which column to sort, which row to highlight, and which row (exhibit-to-exhibit) comparison to pull up for viewing. No graphical indication of the distribution of scores is provided (as might be useful to see clear "breaks" or gaps in the scores), and it can be difficult to see where a particular exhibit (or set of exhibits) falls in the rankings across the different scores.

As another example, the IBIS Multiviewer interface allows users to see several exhibit-to-exhibit comparisons at once, showing the images in an array; however, useful text or labels of what exhibits or cases are currently being shown in the Multiviewer are lacking. Moreover, the Multiviewer comparisons are anchored to the reference exhibit that was run in the comparison request; as examiners peruse multiple images, it would be useful to pull up pairs of nonreference exhibits from the score results for closer examination, to find possible "chains" of three or more same-gun exhibits found in the same set of scores. The enhancements we suggest include some user-interface modifications that would make the IBIS platform more useful for analysis, but the list is not meant to be exhaustive of all such modifications.

Recommendation 6.13: To enhance the NIBIN technical platform as an analytical tool, ATF and its technical contractors should:

• allow users to filter and sort the returned lists of comparison scores and ranks by such variables as gun type, ammunition type, reporting agency, and date of entry;
• use persistent highlighting or coloring to allow users to readily see the relative positioning of specific exhibit(s) across the rankings for different marks (e.g., to be able to see where the top five exhibits by breech face score fall in the rankings by firing pin, or to see where multiple exhibits from the same case lie in any of the rankings);
• use visual cues to alert reviewers of comparison scores that exhibits have already been physically examined and deemed a hit (or examined and found not to be a hit);
• permit flexibility in the Multiviewer screen (on which multiple images can be displayed in an array) so that two nonreference exhibits can easily be compared side by side, thus permitting easier examination of chains of potentially linked exhibits; and
• permit flexibility in specifying the printed reports produced by the system so that listings of multiple exhibits are more informative than the current exhibit/case number and score layout.

6–C.5 Side Light Images

Although IBIS computes comparison scores for the breech face impression using an image taken with a center ring light, examiners generally prefer visually examining the alternative image taken with a side light when reviewing potential comparisons. The side light image is a representation more akin to what examiners are able to see looking directly at a cartridge casing through a microscope; the side light adds contrasts that give a better sense of depth and of the texture of the primer surface. Given this preference, George (2004a:288) argued for additional work on imagery akin to the side light image: "[FTI] needs to develop images which are more compatible with those the user actually views on the comparison microscope. The user must be able to visually eliminate or associate candidates in order to have any level of confidence that a match is not being overlooked."

We agree that users should have a clearer visual benchmark to consider when examining comparison score results, even if the actual image acquired by the system for use in deriving signatures and computing scores is different and taken under conditions most favorable to the comparison process. However, we also suggest that IBIS developers explore ways to make use of the auxiliary information collected in the side light image: Methods for computing an alternative comparison score based on the side light image should be developed and tested to see how they perform relative to the IBIS-standard methodology using the center light image.
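To make the kind of experimentation we have in mind concrete, the following is a minimal, purely illustrative sketch (in Python, using only NumPy). The use of normalized cross-correlation, the 0-1000 rescaling, and the function and variable names are our own assumptions for illustration; this is not the proprietary IBIS scoring method. The same routine could be run once on a pair of center light images and once on the corresponding side light images, so that the resulting scores and rankings can be compared over a test set of known matches and nonmatches.

    import numpy as np

    def illustrative_score(img_a: np.ndarray, img_b: np.ndarray) -> float:
        # Hypothetical stand-in for a comparison score: normalized cross-correlation
        # of two equally sized, registered grayscale images, rescaled to a 0-1000
        # range for readability. This is NOT the proprietary IBIS scoring method.
        a = img_a.astype(float).ravel()
        b = img_b.astype(float).ravel()
        a -= a.mean()
        b -= b.mean()
        denom = np.linalg.norm(a) * np.linalg.norm(b)
        if denom == 0.0:
            return 0.0
        return max(0.0, float(np.dot(a, b)) / denom) * 1000.0

    # Hypothetical usage with stand-in arrays; in practice these would be the center
    # light and side light acquisitions of a reference exhibit and a candidate
    # exhibit, registered to a common orientation and cropped to the same region.
    rng = np.random.default_rng(0)
    center_ref = rng.integers(0, 256, (256, 256)).astype(float)
    center_cand = center_ref + rng.normal(0.0, 10.0, (256, 256))
    side_ref = rng.integers(0, 256, (256, 256)).astype(float)
    side_cand = side_ref + rng.normal(0.0, 10.0, (256, 256))

    print("center light score:", round(illustrative_score(center_ref, center_cand)))
    print("side light score:  ", round(illustrative_score(side_ref, side_cand)))

In a fuller prototype, the rankings produced by the two lighting modes over a library of known same-gun pairs would be tabulated and compared directly, which is the kind of evaluation the recommendation below calls for.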

Recommendation 6.14: Because the side light image of the breech face impression area is more consistent with firearms examiners' usual view of ballistics evidence—and may be the basis for pulling potential matches for direct physical examination—the side light imagery should be a more vital part of the NIBIN process. Users should have the option to view (if not actually capture) the side light image before acquiring the center light image, for easier inspection of the casing's alignment and basic features. IBIS developers should experiment with comparison scores and rankings based on the side light image, and compare those with scores using the standard center light image.

6–C.6 Operator Variability

In the current IBIS system, users entering images into the system are confronted with several system-computed default suggestions—on image focus, image lighting, and the suggested placement of region-of-interest delimiters. Users have the capacity to adjust or override these defaults. In our site visits, we observed a variety of such adjustments, less often on image focus but much more frequently on the intensity of lighting. At some sites, operators would increase the lighting slightly because their firearms examiners found the slightly brighter images easier to work with; at other sites, operators would do exactly the opposite. The exact placement of region-of-interest delimiters is obviously crucial to subsequent comparisons, as it dictates the image content used to derive a mathematical signature, but the effects and tolerances of the other user-adjustable parts of the acquisition process are not well documented. Research along these lines—for instance, looking at the impact on scores when comparison images are lightened or darkened by degrees—should be conducted and used to promulgate best practices throughout the NIBIN system.

[Footnote: On a site visit to the New York Police Department, we had the opportunity to try one such adjustment. We requested that an examiner acquire breech face and firing pin images from the same cartridge casing three times. Twice, the examiner entered the image as normal, adjusting the lighting slightly if he deemed it appropriate; this allowed us to see a near-perfect match (and resulting score). The third acquisition was set several steps brighter than the examiner would ordinarily prefer, though it was far short of complete saturation and a pure-white image. Both scores were fairly robust to the lighting change; the two normal-lighting images were returned as the top-ranked pair on both scores, with breech face and firing pin scores of 315 and 351, respectively. The scores against the over-bright image degraded only slightly for the breech face but more so for the firing pin—302 and 282, respectively—but they were still comfortably the number-2 ranked comparison.]

FTI is continuing to develop a new system, dubbed BrassTRAX, that is very literally more of a "black box" than the current IBIS/BRASSCATCHER
platform; as described in Box 4-1, the system is already being positioned as the next-generation IBIS. Physically, the unit is a box with only one spot for entry or adjustment: A cartridge casing is inserted into the tray at one corner of the box. The equipment then automatically handles all parts of image acquisition (save for demographic data entry), including the alignment and rotation of the casing. Development of such a platform is intriguing, but—consistent with Recommendation 6.14—it is important that users also be comfortable with viewing and interpreting the imagery generated by the system.

As complete automation of the image acquisition process continues to evolve—reducing the effect of operator variability—it is particularly important that systems be developed with procedures for routine calibration and validation. System performance over time in processing known, standard exhibits should be a regular part of system monitoring, and the capacity for logging these calibration data in a simple and recoverable manner (for subsequent analysis) should be a priority. Further specification of calibration and validation routines should make use of exhibits that can be entered and compared at different points in time and at different NIBIN sites, including ongoing efforts by the National Institute of Standards and Technology to develop a "standard bullet" and a "standard casing" as known measurement standards.

6–C.7 Revisiting the Comparison Process and 20 Percent Threshold

Finally, we turn to a critical part of the current process: the coarse comparison pass, in which all eligible exhibits are compared with the reference exhibit using a rougher comparison score, and only the top 20 percent of scores (for any of the types of markings) are retained for subsequent processing. As discussed in Chapter 4, this threshold was originally intended as a computational aid, restricting the pool of candidates for more detailed comparison beyond the prefiltering imposed by subsetting the database by demographic data (e.g., incident date and caliber family). However, the major analyses of IBIS performance described in Chapter 4—particularly the George (2004a, 2004b) studies, in which the coarse comparison step was completely waived—demonstrate that the sharp thresholding does cause known sister exhibits to be excluded from consideration. We see the same behavior in our own analyses in Chapter 8. In some of the experiments we performed, loss of potential matches was virtually guaranteed: The database was small and heavily concentrated with sister exhibits from the same guns, and so the imposition of any threshold or removal of exhibits from final consideration would incur some losses. But we also observed known sister exhibits to be screened out by the coarse comparison pass in runs against much larger segments of the New York CoBIS database.
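The mechanics of the coarse pass, and how a known sister exhibit can be screened out, can be sketched in a few lines. The sketch below is a simplified stand-in under our own assumptions (per-mark rough scores held in plain dictionaries, a single reference exhibit, retention of the union of the top 20 percent on each mark); it is not the actual IBIS implementation.

    import math

    def coarse_pass(rough_scores: dict[str, dict[str, float]],
                    keep_fraction: float = 0.20) -> set[str]:
        # rough_scores maps each mark type (e.g., "breech_face", "firing_pin") to
        # {exhibit_id: rough comparison score against the reference exhibit}.
        # An exhibit survives if it ranks in the top `keep_fraction` on ANY mark;
        # only survivors would go on to receive full comparison scores.
        retained: set[str] = set()
        for scores in rough_scores.values():
            k = max(1, math.ceil(len(scores) * keep_fraction))
            retained.update(sorted(scores, key=scores.get, reverse=True)[:k])
        return retained

    # Hypothetical example: ten candidate exhibits and two marks. Exhibit "E7" is a
    # known sister exhibit that ranks third on both marks, just outside the top 20
    # percent (two of ten), so it is screened out before full scoring.
    rough = {
        "breech_face": {f"E{i}": s for i, s in enumerate([90, 85, 40, 38, 37, 36, 35, 60, 33, 32])},
        "firing_pin":  {f"E{i}": s for i, s in enumerate([20, 88, 86, 30, 29, 28, 27, 70, 26, 25])},
    }
    survivors = coarse_pass(rough)
    print(sorted(survivors))   # ['E0', 'E1', 'E2']
    print("E7" in survivors)   # False: the sister exhibit never gets a full score

Even in this toy example, an exhibit ranked third on every mark never receives a full comparison score when only the top two of ten candidates are retained per mark; the same mechanism, applied to real rough scores over a large database, is what screens out the known sister exhibits described above.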

Figure 4-2 shows the basic printed report generated by IBIS, the top 10 ranked pairings by the different cartridge case markings. Reported prominently on the sheet is a sample size of 12,353. In discussing this type of report with other parties—such as investigating detectives, departmental superiors, and legal counsel—the meaning of "sample size" can be explained relatively easily as (roughly) the subset of the database matching the reference exhibit in caliber. But no information is readily provided on the effective sample size that is most relevant to the scores presented on the page—the number of exhibits retained after the coarse pass, for which the full scores were computed. That this effective sample size can be as small as 2,470 would be surprising, and potentially misleading, to observers without a detailed knowledge of all the steps in the IBIS comparison process.

We do not argue that there is anything inherently wrong with a first, coarse cut of the database or the specific method used; however, research should still be done to determine whether 20 percent is an appropriate cutoff, balancing gains in processing time against the potential to miss hits. We also believe that NIBIN users should have the capacity to easily adjust the threshold level when regenerating comparison score results. Particularly if circumstances lead to court trials where an IBIS-suggested linkage is the primary (or very important) evidence, it would behoove agencies and examiners to be able to demonstrate that the suggested pairing came about in a process where all eligible exhibits were subjected to the same scoring and ranking, rather than roughly 20 percent of them. As with national and cross-regional searches, we also suggest that 100 percent full-comparison requests (that is, waiving the coarse comparison entirely) should be performed by NIBIN management as a matter of routine research and evaluation.

Recommendation 6.15: In light of improvements in computer processing time, the relatively ad hoc choice of 20 percent of potential exhibit pairs from the coarse comparison step should be reexamined. IBIS developers should consider removing the 20 percent threshold restriction, or revising the percentage cut, if doing so does not seriously degrade search time over moderate database sizes. In any event, IBIS developers should make it easier for local agencies to adjust the threshold level or to waive the coarse comparison pass altogether if specific investigative cases warrant a full, unfettered regional search of evidence at the expense of some processing speed.
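As a small arithmetic check on these figures, and as an illustration of what an adjustable or waivable threshold would mean for the effective sample size, the fragment below treats the retention fraction as a parameter. The counts 12,353 and 2,470 are those discussed above; the function, its truncation behavior, and the output format are our own hypothetical choices rather than a description of how IBIS computes the cut.

    def effective_sample_size(demographic_subset: int, keep_fraction: float) -> int:
        # Approximate number of exhibits retained for full scoring after the coarse
        # pass, assuming roughly `keep_fraction` of the demographic subset survives.
        return int(demographic_subset * keep_fraction)

    subset = 12_353  # exhibits matching the reference exhibit's caliber family
    for fraction in (0.20, 0.50, 1.00):
        n = effective_sample_size(subset, fraction)
        print(f"threshold {fraction:.0%}: full scores computed for {n:,} of {subset:,} exhibits")
    # At 20 percent this yields about 2,470 exhibits, consistent with the figure
    # discussed above; at 100 percent the coarse pass is effectively waived.

Printing both counts on the report, the caliber-family subset and the post-coarse-pass total, would remove much of the ambiguity about what "sample size" means on the current output.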
