Ballistic Imaging (2008)

Suggested citation: National Research Council. 2008. Ballistic Imaging. Washington, DC: The National Academies Press. doi: 10.17226/12162.

PART II Current Ballistic Imaging and Databases

4 Current Ballistic Imaging Technology

“It has been common practice for firearms examiners to maintain an ‘open-case file’ of physical evidence from unsolved crimes, sorted by caliber,” Thompson et al. (2002:8) note of the traditional approach to generating investigative leads through firearms identification. “When faced with a crime on which little evidence was available, the examiner would then go to the storage area for evidence from unsolved cases and choose some potentially similar cases for examination of the originals.” This process can be extremely time consuming—not only the direct examination of evidence, but also the steps of retrieval, filing, and reporting. “Because of the time required for the manual comparison of evidence, the effectiveness of this method can be severely limited by the staffing and workload of an agency’s examiners (which determines how much time examiners have to search the open-case file).”

In this chapter, we briefly review the background of imaging technology in firearms identification (Section 4–A), the basic structure of Integrated Ballistics Identification System (IBIS) equipment (4–B), and the manner in which the IBIS equipment is used to acquire images (4–C). Section 4–D discusses what is publicly known about IBIS procedures for scoring, ranking, and analysis, which are crucial to assessing the capability of the IBIS platform to “scale up” to meet the demands of a much larger database. Section 4–E reviews the major studies that have been conducted to date on IBIS performance, particularly with large-scale databases or datasets consisting of test fires from new weapons. Section 4–F presents basic assessments of the current technology (specific recommendations related to IBIS usage are in Chapter 6). An appendix to the chapter, Section 4–G, summarizes and elaborates on technical evaluation tests performed on IBIS by the state of California.

Because it is important to consider IBIS and the National Integrated Ballistic Information Network (NIBIN) together, a summary and our conclusions on the evidence in this chapter are in Chapter 6, together with those from Chapter 5.

4–A Background

Contemporary ballistic imaging technology is the latest step in a gradual move over several decades to use technology to make it easier to maintain and search open case files of ballistics evidence, including cases distant in time. During the 1970s, calls were made to develop automated index systems to assist examiners in search, as well as to explore new directions for the imaging of ballistics evidence. Biasotti (1970:12) made an early call for a computer-based open case file that would permit examiners to describe observed class characteristics “for all rifled weapons [and] unidentified bullets and cartridge cases” in a central repository. However, in this early vision, imaging was not considered; instead, characteristics were to be expressed using an alphanumeric string (e.g., FW105-100-1357-20-0102-001-001), coding such factors as the measured caliber and land widths of bullet evidence. When new evidence arrived, a query on the database could then determine whether cases with similar class characteristics or modi operandi were on file. On the technical side, other researchers suggested the utility of more high-powered microscopy techniques for the comparison of ballistics evidence, including several papers arguing for the use of scanning electron microscopy (Gardner, 1979; Goebel et al., 1980; Grove et al., 1972). Grove et al. (1972:20) considered scanning electron microscopy “ideally suited for firing pin impression examination because of its ability to reveal topographical features at the base of the impression.” The researchers examined “series of up to 50 rounds” from “numerous .32 caliber semi-automatic pistols,” analyzing the first, second, tenth, “and in some instances the fiftieth firing pin impression.” “In all the firing pin impressions examined, a match could be made using a criteria of 4 or more points of identification,” whereas “no points of identification” could be found for firings from different guns; moreover, they concluded that “the first and fiftieth impressions can be matched.”

As summarized by Grove et al. (1972:20), scanning electron microscopy “consists basically of a finely focused beam of electrons which sweeps over the sample surface. This primary electron beam causes the formation of low energy electrons (secondary electrons) due to interaction with the sample surface. These secondary electrons are then collected and displayed on a cathode ray oscilloscope producing an image that gives extremely good topographical information with great depth of field.”

In the 1960s, the Los Angeles Police Department (LAPD) developed a “Balliscan” camera designed specifically to photograph the exterior surface of a bullet, using a rotated slit to expose the film as a drum turned the bullet at the same speed. Blackwell and Framan (1980) suggested an Automated Firearms Identification System for the analysis of bullet evidence, based on the consecutive matching striations methodology of Biasotti (1959) and utilizing scanned versions of Balliscan images as the image data. Though they sketched a schematic diagram for such a system and did some preliminary analysis of bullets used in the Biasotti (1959) study, no apparent further action on developing the system was taken.

In 1989 the Federal Bureau of Investigation (FBI) announced a program called DRUGFIRE, which used a system for acquiring images from cartridge evidence. A few years later, the Bureau of Alcohol, Tobacco, Firearms, and Explosives (ATF) adopted the BULLETPROOF system for imaging bullet evidence, marketed by what is now Forensic Technology WAI, Inc. (FTI), of Montréal, Canada, as the basis for its CEASEFIRE network. As described in more detail in Chapter 5, the two databases operated in parallel for several years until CEASEFIRE evolved into the NIBIN program, using as its platform the IBIS formed by combining BULLETPROOF with a BRASSCATCHER apparatus for imaging cartridge casings. IBIS was made the technical base for the new NIBIN database, and the major ballistic image databases in operation today (including NIBIN and the state reference ballistic image databases in Maryland and New York) use IBIS. IBIS is also in use by law enforcement agencies in several foreign countries; through IBIS, FTI is essentially the only provider of ballistic imaging technology.

At root, the IBIS platform combines a microscope with a camera that acquires two-dimensional greyscale images of bullet and cartridge case evidence; features of the traditional comparison microscope can then be emulated using the images, and the images can be compared with each other to assess similarity. Box 4-1 makes an important note about current usage of the term “IBIS.”

4–B IBIS Equipment

Formally, IBIS represents the integration of two separate systems. The BULLETPROOF microscope and comparison apparatus for acquiring images from bullets was developed first, beginning in 1991. It was augmented in 1995 by BRASSCATCHER, which adapted the apparatus to work with cartridge case evidence (McLean, 1999).

A subsidiary of McDonnell-Douglas, Corp., later produced the Balliscan camera based on the LAPD design (Blackwell and Framan, 1980). Balliscan images became prominent in later years because images made following the assassination of Robert F. Kennedy were reexamined by firearms examiners in the mid-1970s, in support of the work of the U.S. House Select Committee on Assassinations.

BOX 4-1
“IBIS” Terminology

As of January 2007, Forensic Technology WAI, Inc. (FTI), repositioned its line of products to emphasize its existing BulletTRAX-3D platform and developing BrassTRAX-3D platform for the acquisition of three-dimensional measurements from bullets and cartridge cases, respectively. Both of these are said to constitute the “IBIS-TRAX 3D” line, and FTI has begun referring to these as IBIS (e.g., it formally refers to the product as “IBIS BulletTRAX-3D”). The IBIS described in this chapter—based on two-dimensional photography—has now been designated the “IBIS Heritage Series” on the firm’s Web site (http://www.fti-ibis.com), and FTI suggests that the two-dimensional product is no longer actively marketed.

Though the name has now been linked with the new three-dimensional products, we use the term “IBIS” throughout this report to refer exclusively to the two-dimensional photography system, dating from the combination of the separate BRASSCATCHER and BULLETPROOF components and running through version 3.4 of the IBIS software. We do so because of the context of our study, which includes offering advice on the existing National Integrated Ballistic Information Network (NIBIN) and suggesting enhancements to it: the entire infrastructure of NIBIN is built on the two-dimensional photography IBIS. What is now dubbed the “IBIS Heritage Line” is in fact the current platform deployed to NIBIN partners; accordingly, it is the appropriate benchmark of comparison for our study.

Likewise, the experimental research conducted by the National Institute of Standards and Technology (NIST) in support of the committee’s work—described in Chapter 8—compared the current two-dimensional IBIS to a prototype three-dimensional acquisition system. This is because we consider three-dimensional topographic measurement as a possible enhancement within the current NIBIN system, so that it is appropriate to get a sense of how well the three-dimensional measurements and scores compare with the two-dimensional IBIS currently used in NIBIN.

Most of the IBIS installations under the NIBIN program take the form of Remote Data Acquisition Stations (RDASs). One component of an RDAS is the Data Acquisition Station (DAS), a microscope with two built-in cameras mounted to it (one for bullets and one for cartridge cases). The RDAS also includes a computer so that demographic data associated with a case (e.g., gun caliber, date of crime, and firing pin shape) can be entered by an operator.

These auxiliary data might more accurately be described as metadata, but we retain “demographic data” as common usage in the field.

The microscope cameras display their output on the computer monitor, so that the operator can determine how the image will be acquired, as described below. In an RDAS, the computer also serves as a Signature Analysis Station (SAS), where the results of comparisons with other images can be reviewed. However, the key component that an RDAS lacks is the “correlation server” that processes results from acquired images and compares them against other cases in a database. An RDAS alone must transmit its images to a correlation server for processing and await the results from the server. Standalone systems that include a correlation server along with the base RDAS equipment are referred to as hubs.

As discussed in Chapter 5, the NIBIN program also makes use of three other related FTI products in addition to the base IBIS RDAS. As it is currently structured, all comparisons of images are routed through correlation servers (separate from an IBIS hub) located in ATF’s three national laboratories. To ease the task of reviewing results from image comparisons, FTI also markets Matchpoint systems—essentially, the computer hardware and software of a SAS, except that they are not built into the same physical cabinet as the DAS in an RDAS. Finally, several NIBIN sites make use of Rapid Brass Identification (RBI) units, portable suitcase-size microscope setups that allow technicians to acquire breech face and firing pin images in the field, including at crime scenes. RBI units are meant only for the acquisition of images (and their transmittal, through an RDAS, to a correlation server), not for reviewing the results of image comparisons.

4–C Data Acquisition

The obvious first step in working with IBIS (and NIBIN, using the IBIS platform) is to have bullet or cartridge case evidence to enter into the system. This evidence may be bullets or casings recovered at crime scenes, or it may be test firings from weapons obtained by the police in the course of investigations. In the first case, casings and (particularly) bullets present challenges because they may be damaged and may require cleaning prior to examination and entry. In the case of test firings, the ammunition used in the firings—typically done into a water tank, to facilitate capture of the undamaged bullet—is a critical choice.

Rector (2002) considers the effect of one cleaning process on IBIS performance for matching bullet evidence. An ultrasonic bath—in which high-frequency sound waves produce vapor bubbles in a liquid—can be used to dislodge some foreign materials that can prove stubborn to conventional means, including soil and drywall. However, Rector (2002) observed that immersion in an ultrasonic cleaner for longer periods of time (up to 30 minutes) generally reduced IBIS scores and that the surface etching done by the cleaner was directly visible on lead bullets.

To the greatest extent possible, examiners prefer to match the test fire ammunition to the ammunition used in crimes involving a suspected gun; in order of suitability, from most to least, De Kinder (2002b:9) characterizes the typical preference hierarchy for test firing:

• ammunition from the same lot as the recovered bullets and casings;
• ammunition of the same brand and make as the recovered bullets and casings;
• ammunition from the same manufacturer as the recovered bullets and cartridge cases, having the same primer or bullet jacket composition but not necessarily being exactly the same type; and
• ammunition having the same primer or bullet jacket composition, but not necessarily being from the same manufacturer.

Often, however, no such information is available—and in the context of creating a reference ballistic image database (RBID), it can never be known what type or lot of ammunition will be used with a new firearm. To address ambiguous cases, ATF recommends certain “protocol ammunition” for particular calibers to its NIBIN agencies in the hopes of “[giving] the best chance overall for [test-fire] items to find matching evidence bullets and casings in a database.” The protocol ammunition is chosen to be “intermediate in recording toolmarks and impression hardness,” having bullet metal and primer surfaces that are neither too hard nor too soft for registration of marks (Thompson et al., 2002:15).
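The hierarchy can be read as a simple ordered ranking rule. The sketch below encodes it as a scoring function; the Ammunition record, its field names, and the sample lots are hypothetical stand-ins, and collapsing "primer or bullet jacket composition" into a single field is a simplification for illustration.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Ammunition:
    # Illustrative fields only; "composition" stands in for the primer or
    # bullet jacket composition named in the hierarchy.
    manufacturer: str
    brand: str
    lot: str
    composition: str

def test_fire_preference(candidate: Ammunition, evidence: Ammunition) -> int:
    """Return 1 (best) through 4 per the De Kinder (2002b:9) hierarchy,
    or 5 if the candidate meets none of the listed criteria."""
    same_make = (candidate.manufacturer == evidence.manufacturer
                 and candidate.brand == evidence.brand)
    if same_make and candidate.lot == evidence.lot:
        return 1   # same lot as the recovered items
    if same_make:
        return 2   # same brand and make
    if (candidate.manufacturer == evidence.manufacturer
            and candidate.composition == evidence.composition):
        return 3   # same manufacturer and composition, type may differ
    if candidate.composition == evidence.composition:
        return 4   # same composition, any manufacturer
    return 5

# Pick the most suitable lot on hand (all names are hypothetical).
evidence = Ammunition("Winchester", "115gr FMJ", "LOT-A", "lead styphnate")
stock = [Ammunition("Speer", "115gr FMJ", "LOT-X", "lead styphnate"),
         Ammunition("Winchester", "115gr FMJ", "LOT-B", "lead styphnate")]
best = min(stock, key=lambda a: test_fire_preference(a, evidence))
print(best.lot)   # LOT-B: same brand and make beats same composition alone
```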

4–C.1 Mounting of Evidence and Demographic Data Entry

To begin an IBIS entry, operators open a “case,” which can contain one or more constituent “exhibits,” bullets or cartridge casings. A case can also include information about a firearm, if it has been recovered. Links suggested by IBIS in comparing exhibits are made between exhibits and not cases as a whole. Although the case identification number is displayed in a column when comparison results are returned for analysis, the system does not make it readily apparent where individual exhibits from a particular case fall in the list of rankings reported by IBIS.

IBIS training materials emphasize the importance of correct entry of auxiliary, context data about evidence and exhibits, the “demographic data.” For cartridge case markings, the training guide indicates that “automatic correlation requests use all of the following demographic information”—occurrence date, caliber, firing pin shape, and event type—“to select the test candidates from the database,” and all these pieces of information are described as “crucial for the correlation process” (Forensic Technology WAI, Inc., 2002a:2-10, 3-2). IBIS defines six basic event types, four for exhibits related to crime and two for test firings. The crime-related event codes are homicide (HOM), assault with a deadly weapon (ADW), other crime (OTH), and unknown (UNK). The distinction between the two test fire events is whether the firearm is retained by police (and hence is out of circulation on the street) or whether it is returned to the owner after firing; these are coded as TF and TFR, respectively. (The basic manner in which the demographic data are used as filters is described in Section 4–D.1.) Bullet exhibits are linked to operator-entered information on certain general rifling characteristics that can be derived from the bullet and can narrow down the database search. These include caliber, twist (the orientation of the land and groove impressions, left or right, when looking from the base of the bullet), and the number of lands and grooves on the bullet. The composition and type (e.g., jacketed or hollow point) of the bullet may also be recorded.

Although accurate demographic data entry is essential to the IBIS comparison process, the physical positioning of bullet or cartridge evidence under the microscope (and camera) is crucial to the acquisition of quality, comparable images. Indeed, Tontarski and Thompson (1998:644) observe that “the greatest initial concern using this technology was whether or not different examiners could enter projectile and cartridge casing images in a sufficiently consistent way for the database to be able to locate a match.” Though they go on to assert that “the equipment’s image capturing system and its robust algorithm have all but eliminated operator variability as a concern,” proper positioning of exhibits is still emphasized in IBIS training, and some studies (e.g., Chan, 2000) suggest that substantial misalignment can still cause problems in comparison. In its documentation, FTI suggests standardized protocols for orienting evidence that have also been adopted as standards by the NIBIN program. For instance, a cartridge bearing roughly horizontal breech face marks across the primer surface is supposed to be oriented so that the marks are as flat (not at an angle) as possible, rotated so that the ejector mark on the cartridge rim is in the southern hemisphere of the image. If the cartridge shows evidence of a firing pin drag mark, where the pin has scraped against the surface, the cartridge is supposed to be rotated so that the drag mark is at or around the 3 o’clock position.
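To make the data-entry step concrete, the following sketch collects the cartridge case filter fields and the six event codes described above into a record type. The field names and types are illustrative assumptions, not IBIS's actual schema.

```python
from dataclasses import dataclass
from datetime import date
from enum import Enum

class EventType(Enum):
    """The six IBIS event codes described above."""
    HOM = "homicide"
    ADW = "assault with a deadly weapon"
    OTH = "other crime"
    UNK = "unknown"
    TF = "test fire, firearm retained by police"
    TFR = "test fire, firearm returned to owner"

@dataclass
class CartridgeCaseDemographics:
    # The four filters the training guide names for cartridge cases;
    # field names and types are assumptions, not IBIS's actual schema.
    occurrence_date: date
    caliber: str
    firing_pin_shape: str       # one of four shapes in the NIBIN version
    event_type: EventType

exhibit = CartridgeCaseDemographics(
    occurrence_date=date(2006, 5, 17),
    caliber="9mm Luger",
    firing_pin_shape="circular",
    event_type=EventType.ADW,
)
```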

4–C.2 Specification of Regions of Interest

IBIS allows technicians to designate regions of interest on an image. Because these regions are circular for the markings left on cartridge casings, the regions of interest are also known as ring limits. For a breech face impression, the region of interest is indicated based on two circles. The computer derives an automatic, “default” placement of the rings, but they can be adjusted directly by the operator. The outer (blue) circle is to be set to the edge of the primer surface of the stamp and the inner (red) circle marks off the firing pin impact region; the image used in comparing breech face marks is based on the doughnut-shaped area between the two circles. Though marks on the cartridge case area may be irregularly shaped—the pit of the firing pin impression and areas where the primer metal has been pushed back out of the firing pin impression—the region of interest rings are strictly circular. Hence, the IBIS operator must make some judgment about exact placement of the circle, assessing the potential for “washout” areas (reflected light off of the jagged edge of the pit of the firing pin impression) to show up in the final image. Operators may also adjust procedures to accommodate specific firing pin types; for example, Glock firearms have a distinctive rectangular firing pin, and therefore technicians place the inner circle so that it circumscribes the four corners of the impression. Figure 4-1(a) shows an IBIS breech face image with the two circular delimiters superimposed.

Once the regions of interest are set for acquiring a breech face, the image is taken using the IBIS standard ring lighting, intended to provide uniform illumination, and the system automatically suggests a lighting intensity “to provide optimum lighting for acquisition.” However, the IBIS training materials note (Forensic Technology WAI, Inc., 2002a:2-18):

In numerous cases the suggested lighting may not appear optimal (for example, with smooth surfaces or uncommon metal primer compositions). In these cases, you will need to manually adjust the light setting with the light scroll bar in order to minimize washout. Eliminating the washed out (white halo) area surrounding the firing pin impression improves correlation accuracy as this area is sometimes a common feature between cartridge cases. This will increase score results on marks of lesser value. Always keep in mind that your goal is to find the lighting intensity that will provide the best contrast with the least washout.

After acquiring the breech face image using the center light, the user has the option of taking a second picture using alternate lighting, a side light located at the 6 o’clock position relative to the mounted cartridge, while holding the cartridge fixed in the same orientation. Figure 4-1(b) illustrates a side light image of a cartridge breech face impression, side by side with the standard center light image, Figure 4-1(a). The side light image is better for seeing some impression of three-dimensional detail, though it necessarily also casts shadows on other parts of the image. If the side light image is acquired, it is filed with the case and remains available for viewing later on (including in the “Multiviewer” interface for viewing multiple exhibits simultaneously, as when reviewing comparison scores). However, the side light image is strictly used as an alternative view by the current IBIS; it is not used in the derivation of a mathematical signature from the image or in the system’s automated comparison with other images.
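The geometry of the two ring limits is straightforward to express in code. The sketch below builds the doughnut-shaped mask between the inner and outer circles; the frame size, center, and radii are illustrative assumptions (an operator would set the circles by eye), and the actual IBIS processing of the masked pixels is proprietary.

```python
import numpy as np

def annular_mask(shape, center, r_inner, r_outer):
    """Boolean mask for the doughnut-shaped area between the inner
    (firing pin) and outer (primer edge) ring limits."""
    rows, cols = np.ogrid[:shape[0], :shape[1]]
    r2 = (rows - center[0]) ** 2 + (cols - center[1]) ** 2
    return (r2 >= r_inner ** 2) & (r2 <= r_outer ** 2)

# Assumed 480 x 480 8-bit greyscale frame; dimensions and radii are
# arbitrary illustrative values.
image = np.zeros((480, 480), dtype=np.uint8)
mask = annular_mask(image.shape, center=(240, 240), r_inner=70, r_outer=200)
breech_face_pixels = image[mask]    # only these pixels enter the comparison
print(mask.sum(), "pixels inside the ring limits")
```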

FIGURE 4-1 IBIS breech face images.
NOTES: The three images are (a) breech face image using the standard ring, center light; (b) breech face image using the side light; and (c) firing pin image using the standard ring, center light, acquired from the same cartridge casing. Although they are difficult to see in this reproduction, circular region-of-interest delimiters are indicated on images (a) and (c). The area between the outer circle and inner circle in (a) defines the breech face impression, and the area inside the single circle in (c) defines the firing pin impression.

For the firing pin impression (for exhibits from centerfire guns), the region of interest is defined by a single circle that the computer attempts to automatically place just inside the top edge of the impression; see Figure 4-1(c) for an example. The intent of imaging the firing pin impression is to focus on the footprint or base of the impression; hence, the region of interest is meant to exclude any drag marks, washout lighting, or primer flowback areas. As with the breech face impression, the user may manually adjust the level of lighting, but the ring light is the only light source option for the firing pin. The IBIS operators’ manual describes “optimal lighting” for the firing pin impression as “eliminat[ing] as many clusters of washed out pixels in the firing pin region as possible, without the region being too dark” (Forensic Technology WAI, Inc., 2002a:2-27).

The ejector mark and the firing pin impression from a rimfire gun (where the firing pin strikes the headstamp area on the rim of the cartridge) differ from the other marks in that the region of interest is free form and the computer does not suggest a default region. Looking at the zoomed image on the computer screen, operators click with the mouse to draw an outline around the mark; they are directed to try to remain about 1 cm from the edge of the impression, as it appears onscreen. In both these cases, the manufacturer’s headstamp may interfere with the toolmark; operators try to eliminate as much of the headstamp as possible from their trace of the mark. Two separate images are made of an ejector mark after the region of interest is defined, one using a side light from 3 o’clock and the other from 6 o’clock. Rimfire impressions that are rectangular (noncircular) in shape are also imaged twice, with the 3 and 6 o’clock side lights, while circular rimfire impressions use the ring light. The added difficulty of acquiring images of ejector marks (free-hand specification of regions of interest and the capture of two images) may explain why many law enforcement agencies do not routinely acquire ejector marks for inclusion in NIBIN.

Bullets require more complicated and time-consuming image acquisition. As described in Chapter 2, the raised areas on the interior of a firearm barrel (lands) leave corresponding marks on bullets, dubbed land engraved areas (LEAs); the LEAs are separated by groove engraved areas (GEAs), and the transition points between the LEAs and GEAs are called shoulders. Though GEAs can pick up striation marks as the bullet moves down the barrel, most of the marks are registered in the LEAs, where contact is greatest. Consequently, IBIS images focus on the LEAs. For each LEA on the perimeter of the bullet, the IBIS operator positions two “anchor lines” based on the image on their computer screen; though the shoulders are useful for helping technicians recognize LEAs, they are not intended to be included in the image, and so the anchor lines are meant to be placed just inside the shoulder boundaries.

The image section between the anchor lines is used for comparison with other images. The IBIS software attempts to automatically place the anchor lines, but they are typically adjusted by the operator and reoriented so as to be parallel with detectable striation marks in the center of the LEA. The process of placing these lines must be repeated for each of the LEAs.

Since the surface of a bullet can be complicated to focus, two focusing options are available using IBIS. The first is digital optical focusing, for which both central and ring lighting are available. Central lighting is most often used in image acquisition; however, ring lighting is available to increase the definition of shoulders and therefore help verify their position. The other option for focusing an image is by aid of lasers. IBIS has two lasers that intercept a bullet at 45 degree angles. These lasers are useful not only for focusing exhibits, but also for positioning the bullet properly relative to the optical axis and for finding the “shoulder edges” of bullets.

Bullets are prone to be damaged or deformed, and image acquisition processes must adapt to these possibilities. IBIS operators typically acquire images of LEAs along a band near (but not immediately at) the base of the bullet. However, if the base of a bullet is too damaged to acquire, technicians can attempt to identify representative marks at the nose of the bullet. Alternatively, if there are cannelures (circumferential grooves) on the bullet, the last cannelure can be considered the base of the bullet. Bullet fragments can also be analyzed; however, each bullet fragment is treated as a separate whole bullet specimen.

4–C.3 Reduction to Mathematical Signature and Processing

At the end of the acquisition process, a signature is generated on the basis of the final acquired images. Two versions of a signature for a particular exhibit are derived, which the IBIS users’ guide describes as “big and small signatures. Big signatures contain a high level of detail, but take up a great deal of memory space and take a longer time to process. Smaller signatures are less detailed but more efficient to use” (Forensic Technology WAI, Inc., 2001:129). These signatures are sent—along with the images, for later on-screen viewing—to a correlation server for processing. As discussed in Chapter 5, in the NIBIN system this means transmittal to one of the three national labs of ATF; the signature, image, and related information are archived at these regional sites to populate the central NIBIN database. It is the processed signatures, and not the images themselves, that are further processed, compared against other entries in the database, and scored based on their similarity. The exact manner by which signatures are extracted and compared with others is considered proprietary information by FTI, the maker of the IBIS platform.
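Because the real signature extraction is proprietary, the following sketch is purely illustrative of the big-versus-small trade-off the users' guide describes: a hypothetical 1-D profile taken from an LEA image, with an optional decimation factor trading detail for storage and processing speed. Nothing here should be read as FTI's actual algorithm.

```python
import numpy as np

def lea_profile_signature(lea_image: np.ndarray, decimate: int = 1) -> np.ndarray:
    """Illustrative stand-in for a 'signature': average the LEA image
    along the striation direction to get a 1-D profile, remove overall
    brightness, and optionally decimate. The decimated profile is the
    analogue of the 'small' signature: less detail, less storage,
    faster to compare."""
    profile = lea_image.mean(axis=0)     # collapse parallel to striations
    profile -= profile.mean()            # discard overall brightness
    return profile[::decimate]

rng = np.random.default_rng(0)
lea = rng.integers(0, 256, size=(64, 512)).astype(float)  # fake LEA image
big_signature = lea_profile_signature(lea)                # 512 samples
small_signature = lea_profile_signature(lea, decimate=8)  # 64 samples
print(big_signature.size, small_signature.size)
```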

The description in Section 4–D of the steps of the scoring and comparison processes derives from published articles and other public documents. A subgroup of committee members discussed the signature generation process with FTI technical staff in March 2005 under a nondisclosure agreement that precludes disclosing in this report any information provided to the committee by FTI that it has designated as proprietary. However, our assessment and recommendations in Section 4–F and Chapter 6 specific to the IBIS platform are informed by the discussions with FTI.

4–C.4 Image and Signature Size

An important consideration in discussing the maintenance of a large database of images, and the querying of that database from multiple remote locations, is the size of the image files. The images collected by IBIS are 256-level greyscale graphics. According to FTI specifications reported by Tulleners (2001:4-3), the raw JPEG-type images of a breech face or a firing pin impression take up 230.4 KB of space. For transmission and archiving, the images are “compressed to a proprietary image [format],” and that compression is approximately 10:1 (i.e., the compressed images take up 21–23 KB). Images are also associated with 1 KB of “textual data” from the demographic data entry.

Although the information dates to early incarnations of the IBIS platform, Tontarski and Thompson (1998:643–644) report that “approximately 1800 [raw graphic file] cartridge casing images can be stored on a [1.2 Gb] DAS optical disk, and approximately 10,000 compressed images and ‘signatures’ can be stored on a [1.2 Gb] SAS optical disk.” This information suggests that the combination of a compressed image and signature took up roughly 120 KB in early IBIS. It is not clear whether this estimate corresponds strictly to a single image and its signature or to the complete set of information associated with an evidence cartridge casing: a breech face image and signature, a firing pin image and signature, a side-light breech face image (optional), and an ejector mark image and signature. Bullet images and signatures are substantially larger due to the acquisition of multiple images (one for each LEA). Tontarski and Thompson (1998:643–644) indicated that “a bullet with 6 LEAs requires about 2.1 Mb of storage space. Approximately 500 to 600 bullets [(raw images)] can be stored on each DAS optical disk. . . . The compressed image is stored on the SAS optical disk (currently up to 6000 JPEG images) and the ‘signature’ is stored on the SAS 1 Gb hard disk (up to 50,000 projectile ‘signatures’ and associated case data).”
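These published figures support some back-of-envelope arithmetic on database size. The sketch below assumes a 480 x 480 pixel frame, which reproduces the reported 230.4 KB raw size at one byte per pixel, though the actual dimensions are not stated; the two-image bundle per cartridge case is likewise a simplifying assumption.

```python
# Back-of-envelope storage arithmetic from the figures above. The frame
# size is an inference: 480 x 480 pixels at 1 byte each reproduces the
# reported 230.4 KB raw image; the true dimensions are not published.
RAW_IMAGE_KB = 480 * 480 / 1000       # 230.4 KB per raw greyscale image
COMPRESSED_KB = RAW_IMAGE_KB / 10     # ~10:1 compression, i.e., ~23 KB
TEXTUAL_KB = 1                        # demographic ("textual") data

# Hypothetical bundle for one cartridge case: breech face + firing pin
# images only (side-light and ejector images omitted).
per_case_kb = 2 * COMPRESSED_KB + TEXTUAL_KB

for n_exhibits in (10_000, 1_000_000):
    total_gb = n_exhibits * per_case_kb / 1e6
    print(f"{n_exhibits:>9,} cartridge cases -> ~{total_gb:.1f} GB compressed")
```

Under these assumptions, a million-entry RBID of cartridge cases would need on the order of 50 GB for compressed images alone, before signatures and bullet exhibits are counted.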

4–D Scoring, Ranking, and Analysis

The heart of the IBIS operation is the process of comparing the signature associated with a reference exhibit with hundreds or thousands of other signatures in a database in order to assess their similarity. FTI refers to this process as “correlation”; as we discuss further in Section 4–F, we believe that the use of the term is problematic because of the well-known statistical definition of the word. Here, and elsewhere in the report, references to source materials may refer to correlation and related constructs (e.g., “correlation servers,” the computer hardware that performs the processing). However, we refer to the process as a scoring process or, more generically, as the comparison process.

4–D.1 Filtering

An important first step in processing IBIS data is filtering: screening the database based on the information entered by IBIS operators at the time of image acquisition in order to reduce the search space. This filtering—or, equivalently, conditioning on prior information—makes use of what FTI terms the demographic data associated with a case or a specific piece of evidence. Most of these filters are automatic or system defined, but some can be set in nondefault ways during manual correlation requests.

One major filter is the specification of the databases against which particular exhibits are to be searched; particularly in the context of NIBIN, this is equivalent to geographic selection. As described in more detail in Chapter 6, NIBIN is structured so that exhibits from a particular agency are, by default, only searched against those agencies that are located in the same “partition” in the NIBIN database; searches against other agencies or wider geographic areas must be manually requested.

Another critical filter is the caliber of the weapon or, more appropriately, the “caliber family.” FTI defines a caliber family as the set of calibers “that could be fired by the same gun. For example, .38 auto ammunition can be fired with a 9mm Makarov pistol. This reflects the interchangeability of bullets and cartridge cases in firearms” (Forensic Technology WAI, Inc., 2002a:3-2), but also the reality that nonstandard ammunition can be successfully fired from a particular weapon. Separate caliber family listings are maintained for cartridge cases and bullets, and the lists have been periodically updated based on input from firearms examiners.

The event type and occurrence date entered as demographic data further narrow the search window. For instance, a reference exhibit coded with any of the crime event codes—HOM, ADW, UNK, and OTH—is compared with exhibits from all other event types, except for TF exhibits entered after the reference exhibit’s occurrence date, because the gun is assumed to be out of circulation.
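A minimal sketch of these filters, under stated assumptions: the caliber-family table is a made-up fragment (the real lists are maintained by FTI and updated with examiner input), and the eligibility function encodes only the event/date rule just described, omitting the 30-day clerical buffer that FTI applies in practice (noted in the text that follows).

```python
from datetime import date

# Hypothetical fragment of a caliber-family table; calibers that "could be
# fired by the same gun" are searched together.
CALIBER_FAMILIES = [
    {"9mm Luger", ".38 auto", "9mm Makarov"},
    {".40 S&W"},
]

def same_caliber_family(a: str, b: str) -> bool:
    return any(a in family and b in family for family in CALIBER_FAMILIES)

CRIME_CODES = {"HOM", "ADW", "UNK", "OTH"}

def candidate_eligible(ref_event: str, ref_date: date,
                       cand_event: str, cand_date: date) -> bool:
    """Event/date rule for a crime-coded reference exhibit: exclude test
    fires (TF) entered after the reference's occurrence date, since those
    guns are assumed out of circulation."""
    if ref_event in CRIME_CODES and cand_event == "TF" and cand_date > ref_date:
        return False
    return True

assert same_caliber_family("9mm Luger", ".38 auto")
assert not candidate_eligible("ADW", date(2006, 1, 5), "TF", date(2006, 3, 1))
assert candidate_eligible("ADW", date(2006, 1, 5), "TF", date(2005, 12, 1))
```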

Similarly, a TF exhibit used as the reference is compared with crime-event exhibits bearing dates before the test-fire date, but test fires are not compared against each other. The IBIS platform distributed in the NIBIN program defines four choices for firing pin shape; these must be manually entered as demographic data and are not derived from the image. IBIS installations in some countries list 12 firing pin choices, and Kennington (1992) suggests that an exhaustive listing of firing pin shapes could include around 22 choices. The number of exhibits left after the filtering is reported as the “sample size” on IBIS score comparison printouts (see Figure 4-2 and Section 4–D.3).

4–D.2 Steps in Scoring and Ranking

For cartridge case evidence, the IBIS “correlation” scoring process is actually better thought of as a multiple-step routine, in which the goal is to rank sample exhibits based on the degree of similarity of their derived signatures to a reference exhibit. First-pass scores are generated separately for each of the basic markings (breech face, firing pin, and ejector mark), using the compressed, small signature associated with an exhibit (see Section 4–C.3). This is described as the “crude” correlation (Beauchamp and Roberge, 2005:6) or “coarse” correlation step (George, 2004a, 2004b). The coarse comparison scores are ranked from highest to lowest, separately for each type of mark.

After the ranks are derived, a threshold is imposed: only the exhibits falling in the top 20 percent of the ranked lists for any of the three markings are retained for further processing. For example, in a filtered dataset of 100 cartridge casings, only between 20 and 60 exhibits form the new, effective sample for further analysis (20 if the same exhibits appear in the top 20 percent of all three lists, 60 if each of the three ranked lists has completely different exhibits in its top 20 percent of entries). The 20 percent threshold was doubtless chosen and fixed for computational efficiency, though a more stringent 10 percent threshold was apparently used in early IBIS (Thompson et al., 1996:196; Thompson, 1998:98; Tontarski and Thompson, 1998:644). Adjusting the threshold level is not impossible but requires intervention from FTI; the study of IBIS performance by George (2004a, 2004b) described in Section 4–E.3 is one of the few instances in which the 20 percent threshold was completely waived (and all exhibits were subject to the more detailed comparison step).

“To accommodate clerical delays,” Forensic Technology WAI, Inc. (2002a:3-2) notes, “a 30-day buffer is added before and after the occurrence date for test fires.”

Presumably, if the reference exhibit does not have an ejector mark image—as is seemingly common practice for some NIBIN installations—the threshold is based only on the breech face and firing pin images. The alternative would let 20 percent of the sorted list of ejector mark scores (all zeroes, and presumably sorted by default on some other datum such as entry date or exhibit number) into the second correlation step.
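The coarse-pass threshold can be sketched directly from this description: rank each mark's scores, keep the top 20 percent of each list, and take the union. The data structures and toy scores below are illustrative only.

```python
import random

def second_pass_candidates(coarse_scores: dict[str, dict[str, float]],
                           fraction: float = 0.20) -> set[str]:
    """Union of the top `fraction` of each mark's ranked list.

    With 100 filtered exhibits this yields between 20 survivors (identical
    top lists for all three marks) and 60 (completely disjoint top lists),
    matching the text's example.
    """
    survivors: set[str] = set()
    for scores in coarse_scores.values():
        ranked = sorted(scores, key=scores.get, reverse=True)
        cutoff = max(1, int(len(ranked) * fraction))
        survivors.update(ranked[:cutoff])
    return survivors

# Toy illustration: 10 exhibits with made-up coarse scores per mark.
random.seed(1)
exhibits = [f"EX{i:03d}" for i in range(10)]
coarse = {mark: {e: random.random() for e in exhibits}
          for mark in ("breech_face", "firing_pin", "ejector")}
print(sorted(second_pass_candidates(coarse)))   # between 2 and 6 survive
```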

FIGURE 4-2 Sample “cover sheet”: Top 10 ranking report from an IBIS comparison.
NOTE: Page is from the committee’s experimental work at the New York State Police Forensic Investigation Center. This particular sheet is from a comparison of a casing retrieved from files and reacquired into the system to find the image already in the database, hence the large top-ranked scores. The operator’s name has been obscured.

The exhibits that remain after the coarse comparison step are then subjected to a finer comparison based on the full, big signature (described above). As with the coarse comparison, scores are computed independently for each mark. The set of scores for the final, thresholded set of exhibits is then transmitted back to the requesting unit (along with the compressed images, for visual comparison).

The process for comparing signatures from bullet evidence does not involve a coarse comparison or threshold step, but it is more complex than the cartridge comparison routine due to the nature of the exhibits. Exhibits are filtered based on general demographic characteristics, particularly the number of LEAs on the bullet. The complexity arises because each of the LEAs on the reference bullet must be compared with all the LEAs on a comparison bullet, and all possible rotations of the two bullets must be considered to try to see for which rotation the two bullets are most likely to be “in phase” (in correct alignment). In a hypothetical comparison of two bullets with three LEAs each, IBIS computes three “phase scores,” one for each of the possible rotations of the bullets relative to each other; the phase score is the sum of the individual LEA-to-LEA scores for a particular rotation. Based on the phase scores, IBIS computes three summary scores of similarity:

• Max Phase, the largest of the individual phase scores;
• Peak Phase, the largest LEA-to-LEA score registered for the rotation that yielded the Max Phase score; and
• Max LEA, the largest LEA-to-LEA score registered for any rotation of the bullets.

A sketch of how these three summary scores follow from a matrix of LEA-to-LEA scores appears below.
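This is a minimal sketch, assuming the pairwise LEA-to-LEA scores are given as a square matrix; how those pairwise scores themselves are computed is proprietary.

```python
import numpy as np

def bullet_summary_scores(lea_scores: np.ndarray):
    """Max Phase, Peak Phase, and Max LEA from an n x n matrix of
    hypothetical LEA-to-LEA similarity scores. Rotation k pairs reference
    LEA i with candidate LEA (i + k) mod n; the phase score for k is the
    sum of those n pairwise scores."""
    n = lea_scores.shape[0]
    phase_sums = [sum(lea_scores[i, (i + k) % n] for i in range(n))
                  for k in range(n)]
    best_k = int(np.argmax(phase_sums))
    max_phase = phase_sums[best_k]                 # Max Phase
    peak_phase = max(lea_scores[i, (i + best_k) % n] for i in range(n))
    max_lea = lea_scores.max()                     # over all rotations
    return max_phase, peak_phase, max_lea

scores = np.array([[9.0, 2.0, 1.0],
                   [1.0, 8.0, 2.0],
                   [2.0, 1.0, 7.0]])
print(bullet_summary_scores(scores))   # (24.0, 9.0, 9.0): in phase at k = 0
```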

4–D.3 Interpreting IBIS Output

The results of IBIS comparison requests, whether automatic or manually requested, appear in tabular form on the screen of the SAS (or the standalone Matchpoint unit). Columns are clickable so that the user can review the top-ranked results by any of the marks. For cartridge cases, the initial screen divides the view between the tabular records and side-by-side images of the exhibits from whatever row (pair of exhibits) is selected. In the side-by-side comparison, the IBIS station essentially emulates the function of a comparison microscope; images can be shifted relative to each other and relative to a center line, directly corresponding to the microscope view, so that striations and patterns can be matched between exhibits.

Users have the option of switching to a “Multiviewer” screen, permitting visual comparison between the reference exhibit and several candidates simultaneously. The Multiviewer screen also permits more than one image per case, so that—even if the results are ranked by breech face score—users may see both the center light and side light breech face images as well as the firing pin image.

Full IBIS DAS installations also include a printer. Users can print results for a single pair (including large displays of both images, along with the relevant demographic data). The basic summary report from a correlation request on cartridge case evidence consists of three tables, listing the top 10 ranked results for each of the breech face, firing pin, and ejector mark/rimfire marks. An example of a cover sheet is shown in Figure 4-2. In this case, only breech face and firing pin marks were acquired, yet the report template still includes a “ranking” by ejector mark or rimfire firing pin (those scores are reported as 0). If desired, users can print a lengthy tabular report, sorted by one of the score columns, listing the scores for all of the exhibits in the filtered and thresholded exhibit set.

The basic questions inherent in working with IBIS comparison scores are what meaning to put on a particular score and how deep in a list of sorted results an analyst should look for possible matches. Aside from the basic guidance that “the higher each score is, the more similar the test and reference exhibits are” (Forensic Technology WAI, Inc., 2001:131), IBIS training materials warn against interpreting the system’s scores. “The scores themselves have no intrinsic value; they are only used to establish a ranking between pairs” of exhibits, and “no absolute good or bad scores can be given for evidence images” due to the inherent variability in toolmark and image evidence (Forensic Technology WAI, Inc., 2001:128, 139). Users are advised to consider gaps in the distribution of scores—large differences between consecutively ranked pairs—for some idea of where to look for possible matches.

However, with no stated justification, the training materials also suggest a “guideline” for analysis that has become a widespread standard among NIBIN partner agencies and other IBIS users (Forensic Technology WAI, Inc., 2002a:3-13):

Whether you notice a gap [in score distribution] or not, compare at least the top 10 positions for the breech face, firing pin and ejector mark scores. Most times, matches are found within these top 10 positions, depending on the acquisition parameters, and quality and quantity of repeatable marks.

Thompson et al. (2002:21) later cited “FTI figures” demonstrating that “a match is found within the top 10 ranked items approximately 97% of the time,” though no further source was given.
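The gap heuristic and the top-10 floor can be combined into a small review-depth rule. The sketch below is one plausible reading of the training materials' advice, not a documented IBIS computation.

```python
def review_depth(sorted_scores: list[float], minimum: int = 10) -> int:
    """Positions to review: everything above the largest gap between
    consecutively ranked scores, but never fewer than the top 10."""
    if len(sorted_scores) < 2:
        return len(sorted_scores)
    gaps = [sorted_scores[i] - sorted_scores[i + 1]
            for i in range(len(sorted_scores) - 1)]
    depth = gaps.index(max(gaps)) + 1     # entries above the biggest drop
    return max(depth, min(minimum, len(sorted_scores)))

# Mirrors the manual's second example: a 19-point drop after rank 7 still
# implies reviewing the top 10.
scores = [99, 95, 90, 86, 84, 80, 78, 59, 55, 52, 50, 48]
print(review_depth(scores))   # 10
```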

The training manual provides hypothetical score examples. In one (Forensic Technology WAI, Inc., 2002a:3-14), breech face scores decline fairly monotonically from a maximum of 99 to 40 in the 18th-ranked position, where the score then drops to 27. In this case, users are advised to “check top 10 only.” In a second example, the most sizable gap in scores is a drop of 19 points between the seventh- and eighth-ranked positions. The manual suggests that “the highest probability of a match is in the top seven positions but, once again, you should compare the top 10 positions for each region of interest” (Forensic Technology WAI, Inc., 2002a:3-13).

The focus on the top 10 “is not an immutable characteristics of IBIS” but rather “a protocol developed from experience in using the system [that is] open to change as the system changes,” note Thompson et al. (2002:21). Indeed, in earlier work with the BULLETPROOF part of what became IBIS, Miller and McLean (1998:22) used the top five as their cutoff, citing their determination from “actual case work using the computer” that “the best possibility of a matching land impression should be found in the top five choices.” The IBIS users’ guide indicates that, “in general, the top five to ten scores in any correlation list are potential matches,” though “your laboratory administrator will decide how many exhibits in each list will be compared.” However, the “top 10” mentality is reinforced by the physical form of IBIS printouts—in the basic “cover sheet” results, only the top 10 scores for each region of interest are listed—but the choice of 10 as the guideline appears to be arbitrary. Reviewing the top 10 results has become a NIBIN program standard, though individual practice varies across police departments; for instance, the New York City Police Department (not affiliated with NIBIN) has made viewing the top 24 pairs its standard for cartridge case comparisons.

If examination of the images on screen suggests particularly promising potential “hits,” a request for the physical evidence can be initiated, so that a firearms examiner can compare the exhibits using the comparison microscope. We discuss the recording of “hits” in the NIBIN program further in Chapter 5.

4–E Uniqueness, Reproducibility, and Permanence of Firearms Marks as Registered by IBIS

As the IBIS technology has matured, its performance has been tested by several firearms examiners and other researchers. Most of the relevant studies are intended to address specific performance issues suggested by the creation of a large-scale RBID, containing many exhibits with common class and possibly subclass characteristics; others have scrutinized specific parts of the IBIS comparison process, such as the 20 percent threshold step.

In this section, we briefly review the major studies of IBIS performance that have been performed to date.

4–E.1  California Feasibility Study

In 2000, Assembly Bill 1717 was enacted into law, directing the California Department of Justice to undertake a study to evaluate the feasibility and utility of current ballistic imaging systems to handle a California RBID. The “technical evaluation” called for by the law was conducted and reported by Tulleners (2001) and was circulated to stakeholders in late 2001 for review and comment. The report drew extensive comments, including lengthy responses from ATF (Thompson et al., 2002) and FTI (2002b). Based on the stakeholder comments, the Department of Justice requested an external independent review, which was secured from De Kinder (2002b). Attorney General Lockyer (2003) issued the department’s report to the legislature in January 2003, packaging together the technical evaluation, the external review, and the comments from ATF and FTI. He concluded (Lockyer, 2003:7) that “it is apparent that existing research is too limited and that further study of current and emerging technologies is needed before creating an RBID in California”; this further research should include alternatives such as microstamping and “would be most comprehensive if conducted at the federal level.” The report expressed optimism about the “potential to develop ballistic imaging into a powerful crime-solving tool,” and suggested that “a national RBID could be an extremely valuable tool for law enforcement in generating leads and solving crimes” (Lockyer, 2003:9).

In conducting the technical evaluation, Tulleners (2001) devised a set of eight “performance tests” or experiments; not all of the tests could be performed with the available resources, but the tests spanned a number of conceptual concerns regarding large-scale RBID performance. As the core data resource for the experiments, the California study made use of a natural opportunity to capture exhibits from a large number of test firings of similar, bought-as-new firearms: proof test firings from a batch of 792 new .40 caliber Smith & Wesson Model 4006 semiautomatic pistols received by the California Highway Patrol (CHP). Tulleners (2001) acknowledged this resource as both a strength and a limitation of the study—a limitation because, projecting from gun sales data, .40 caliber arms would be a small part of a California RBID relative to 9mm arms (which could be as much as 45 percent of the database).

IBIS entry and comparison of exhibits for the California tests were done by FTI at its Montréal headquarters. From the description of the tests, only breech face and firing pin marks were entered into the database.

The 20 percent threshold (described in Section 4–D.2) was apparently left in effect: describing comparisons on a database with 792 entries, Tulleners (2001:C-1) writes that “the system was set up so that it could only rank to position 160,” which is slightly more than 20 percent of the total database size. Spreadsheets of the test results indicate that any case where a breech face rank is listed as “Not in Selection” also has a firing pin rank of “Not in Selection,” and vice versa, suggesting that these are cases in which an expected match did not survive the coarse comparison pass and the 20 percent threshold. (The tests performed in the California evaluation, and the formal responses to the results of those tests, are summarized in the appendix to this chapter, Section 4–G.)

4–E.2  De Kinder et al. Analysis

In the wake of the California feasibility study, the lead author of the California technical evaluation and its independent reviewer collaborated on a follow-up study (De Kinder et al., 2004). This analysis responded to one principal criticism of the California study by including a wider range of ammunition, and it also used weapons of the more common (and commonly used in crime) 9mm caliber. For the purposes of this committee’s work, the De Kinder et al. (2004) study is particularly important because NIST secured access to the original casings from that study; in Chapter 8, we describe work done by NIST on the committee’s behalf to reanalyze some of these casings by two-dimensional photography, as well as original work with three-dimensional surface metrology techniques.

To create the exhibit set, the researchers fired seven cartridges from each of 600 pistols used by the Sacramento and Modesto, California, police departments. All but 46 of the firearms were SIG Sauer P226 pistols; the 46 exceptions were of the SIG Sauer P225, P228, or P229 series, but “the general breech face, firing pin aperture, and extractor configurations are essentially the same” between the different models (De Kinder et al., 2004:208). Of the seven cartridges per pistol:

•  two used Remington-Peters 115 grain FMJ (Remington) cartridges, one of which was entered into an IBIS station to create the test database and the second retained for querying; and
•  one shot was fired with each of five other ammunition types: Winchester 147 grain JHP, Speer 115 grain FMJ, Wolf 115 grain FMJ, Federal 147 grain FMJ, and CCI 115 grain FMJ.

The cartridges to be fired were loaded into a magazine by a supervising criminalist before officers fired the rounds, but it is not known whether the same sequence of ammunition was used in each firing.

It is also not stated whether the pistols were fired as new, or how they may have varied in age or use. In entering the exhibits into IBIS, the operators did not automatically accept the system defaults, and De Kinder et al. (2004:Table 2) list the frequencies with which manual corrections were made (the largest of which was adjusting the lighting for the breech face image, done in 37.7 percent of entries).

Breech face and firing pin images were acquired for all exhibits, and, in analyzing ranks, De Kinder et al. (2004) used the best rank on either of the two marks as the overall rank for an exhibit-to-exhibit match. A limitation of the study is that the IBIS staff entering test queries were instructed to consider and tabulate ranks within the top 30; any ranks higher than 30 were combined into a “More than 30” category. This decision is useful in that it arguably corresponds to a practical limit on the number of comparisons IBIS technicians might scroll through in a routine examination if they look beyond the top 10. However, it does not provide insight into the number of possible matches that may be missed because the comparison fails to pass the coarse comparison and the IBIS-default 20 percent threshold (under which the effective sample size would be somewhat more than 120). Concerns about the 20 percent threshold were the focus of the George (2004a, 2004b) studies, discussed in the next section.

With a quasi-RBID of 600 images, all exhibits using Remington ammunition, De Kinder et al. (2004) performed a “best-case” matching exercise, querying the database using the second Remington casing for 32 randomly selected exhibits. Twenty-three of the 32 sister Remington images were found in the database in the top 10 ranks, and 18 of those matches were ranked number one. Eight of the possible matches ranked higher than 30 (or were eliminated by the 20 percent threshold). De Kinder et al. (2004:210) then drew 32 exhibits from each of the five non-Remington test firings and queried each of those against the database to try to locate the Remington casing from the same gun. All of the brands performed poorly relative to the Remington-to-Remington comparisons; the Federal and Wolf firings fared particularly badly, with 24 (Wolf) and 27 (Federal) of 32 searches ranking “more than 30.” In total, only 21 percent of the comparisons found the sister Remington cartridge in the top 10 ranks.

Other tests performed in the study attempted to demonstrate degradation in ranks with database size and to estimate the time needed to perform a comparison as a database grows. A portion of the study also asked the IBIS operators to indicate, based on the output, which casing or casings they would recommend for manual examination by a firearms examiner, to get some crude sense of the false negatives or false positives that might result from actual querying of an RBID.

De Kinder et al. (2004:214–215) concluded that “the results of our study illustrate that an RBID cannot adequately and efficiently compare specimens, leading us to conclude that such a database is unsuitable for law enforcement work. The current miss rate identified in this study is unacceptable for an RBID.”

4–E.3  George Study

George (2004a, 2004b) conducted two experiments, both of which focused on concerns regarding the IBIS-default coarse comparison pass and 20 percent cutoff for detailed scoring and ranking. George (2004a) suggested a high incidence of cases in which known exhibits from the same firearm, even in a relatively small database, were excluded from matching by the 20 percent threshold. A follow-up study (George, 2004b) makes a critical and unique contribution because, through an arrangement with FTI, the analysis completely waived the coarse comparison and thresholding steps in IBIS processing.

Both George experiments made use of an exhibit set created as a result of the St. Louis County, Missouri, Police Department’s requirement that test fires from every police officer’s duty weapon be maintained on file and its 2003 decision to change the brand of its standard duty ammunition. Hence, more than 500 Smith & Wesson Model 4006 and 4013 .40 caliber pistols each had four consecutive shots fired through them—two using Remington 165 grain Golden Saber Bonded JHP ammunition (the new duty ammunition) and two using Federal 165 grain tactical JHP. After firing, and prior to entry into IBIS, all four casings from a particular weapon were inspected using a comparison microscope “to ensure that a match was possible before being chosen as a candidate for the study” (George, 2004a:286). Only the cartridge cases were imaged, and the images were acquired so as to maximize uniformity: standard procedures for orienting the cartridges were followed, and all the exhibits were prepared and imaged by the same examiner. To be most favorable to IBIS’s default settings, “the lighting was not secondarily adjusted from the automatic setting” suggested by the system. Though breech face and firing pin impressions were entered for the exhibits, the analysis of scores and rankings in George (2004a) uses only the breech face mark.

Exhibits were entered into the database, and other casings from the same weapons used to query it, in several stages; from the narrative in George (2004a), it is difficult to ascertain the exact content of the database at each particular instance when comparisons were run. Still, the basic conclusion reached by George (2004a) was that the default 20 percent threshold did hamper IBIS’s ability to generate matches. In total, 183 comparisons were made between an “evidence” or “blind” exhibit and a database containing a sister image from the same gun using ammunition of a particular type. (In fact, due to the sequence by which the database was populated, there may have been two or three images from the same gun in the database, but each of the comparisons performed had a single “target” exhibit.)

Of these 183 comparisons, 76 found the sister exhibit in the top-ranked position and an additional 13 in ranks 2 through 10. However, the 20 percent threshold excluded 64 known matches (35 percent) from final scoring and ranking, a seemingly high rejection rate. These dropouts due to the 20 percent threshold were concentrated among comparisons for which the ammunition type differed, trying to find a Federal-brand casing using a Remington exhibit and vice versa. Only 8 of 74 cross-ammunition-type comparisons returned ranks in the top 10 positions, while 54 (73 percent) were screened out by the 20 percent threshold. By contrast, 81 of 109 same-ammunition-type comparisons (74.2 percent) were found in the top 10 ranks.

George (2004b:290) extended this work, first by augmenting the exhibit set. Another 100 service weapons were test-fired, and a third ammunition type—Winchester 165 grain full metal jacket (FMJ) target—was added to the firing routine. Six firings were made from each of the 100 additional service weapons, two with each of the three ammunition brands. One of the Federal-brand casings was imaged for each of the 100 guns; 25 of the guns were drawn at random, and the five extra casings for each were entered. In this manner, “approximately 540 firearms have now been used to establish a database of 850 cartridge case exhibits.” As in George (2004a), standard IBIS entry protocols were followed. Five images were taken of each casing—two of the breech face impression (including the optional side light image), one of the firing pin, and two of the ejector mark. However, only the breech face comparison scores were analyzed.

The results of this analysis are summarized in Table 4-1. Comparisons made between exhibits using the same ammunition type were more successful than comparisons across ammunition brands: 56 percent of comparisons using the same ammunition found the desired sister image in the top 10 ranks, compared with 17.7 percent for cross-ammunition comparisons. The effect of ammunition choice on IBIS rankings is made vivid by George’s full listing of the ranks; for a particular firearm, the set of 15 comparisons to find known exhibits from that same gun can produce matches within the top 10 (or top 5), but also ranks as low as 843 or 848 in an 850-element dataset.

Consistent with the earlier study, George (2004b) notes particular concern about the possible effect of the IBIS-default 20 percent threshold. In his analysis, George notes high percentages of cases with rank below 170, which is 20 percent of the database size; as shown in the notes to Table 4-1, the effective thresholded sample size is almost certainly larger than 170, because high-ranking exhibits on any of the three marks (breech face, firing pin, and ejector mark) are retained from the coarse comparison pass, and we use 190 as a tabulation comparison.

TABLE 4-1  Summary Results of George Study of IBIS Cartridge Case Comparison Performance

                                            Rank
Ammunition Brand:
“Evidence” Casing to                                           Below 190/
Sister in Database        #1    #2–10   #11–25   #26–190   Threshold   Total
Federal to
  Federal                 17      3       1         1          3         25
  Winchester               5     23       4        10          8         50
  Remington                0      6       1        13         30         50
Winchester to
  Winchester               8      5       1         6          5         25
  Federal                  3      7       7        20         13         50
  Remington                2      2       1        13         32         50
Remington to
  Remington                3      6       3         9          4         25
  Federal                  1      1       2        11         35         50
  Winchester               1      2       0        20         27         50
Same Ammunition           28     14       5        16         12         75
Different Ammunition      12     41      15        87        145        300
Total                     40     55      20       103        157        375

NOTES: Comparisons were made against a database containing 850 exhibits, so a strict 20 percent threshold would retain only 170 exhibits. However, firing pin and ejector mark images were acquired (even if the resulting scores on those marks were not used in the analysis), so the IBIS 20 percent threshold would include any exhibit in the top 20 percent by any of the three marks. Hence, the effective 20 percent threshold almost certainly involves more than 170 rankings; we use 190 as a rough approximation to the effective thresholded sample size.

SOURCE: Tabulations from a prepublication report made available in electronic form at the 2004 Association of Firearms and Tool Mark Examiners Training Meeting; as printed in George (2004b:Table 1), the table is missing three rows.

In any event, the George data cannot speak directly to the number of cases that would be lost in a default IBIS analysis, because his data represent the entire correlation results using the full, detailed signature associated with image exhibits, not the reduced, coarser signature typically used by IBIS in the thresholding step.

The data tables in George (2004b:295) also include a code (for each of the 375 performed comparisons) indicating a firearms examiner’s quick visual assessment of “match” or “no match” based on the second breech face image, using side light illumination rather than center light. In total, 77 percent of the 375 side-light-to-side-light comparisons displayed “sufficient identifiable features . . . to warrant a microscopic examination.”

George argues that some use of the side light image in the correlation process “may be one way to increase the accuracy of the system,” “given that in the present correlation system, 75% of the known matches [on 375 searches] failed to appear in the top 10 correlated positions.”

4–E.4  Nennstiel and Rahm Studies

Nennstiel and Rahm (2006a, 2006b) authored two studies on the performance of IBIS comparison routines. The first study summarizes, and suggests a common notation for direct comparison of, the major previous studies of IBIS performance (the same studies we describe in this section). We focus here on the second study (Nennstiel and Rahm, 2006b), which reports the results of their own experimental work with IBIS.

The work derives from the experience of the Federal Criminal Police Office (BKA) in Germany, which developed its initial IBIS database in 2000 and has added to it since 2001. The images in the BKA database are subject to a preselection bias, as departmental policy is to enter into IBIS only those exhibits that are deemed “suitable for comparison,” a designation made by inspecting the casings and deciding whether sufficient markings exist so that there is a reasonably high probability that a match could be made by a “normal optical comparison” (Nennstiel and Rahm, 2006b:25). About 77 percent of cartridge cases processed by BKA are deemed suitable for comparison (the rest are labeled either partially suitable or unsuitable and are not input into IBIS), but only 35 percent of bullets are considered suitable. It is also BKA’s policy to enter “a maximum of two bullets and three cartridge cases” recovered as evidence from a crime scene and “one projectile and two cartridge cases (with the most different firearm markings)” from test-fired weapons, and to inspect the top five IBIS score results (Nennstiel and Rahm, 2006b:26).

Nennstiel and Rahm’s analyses add new insight by considering two previously unexplored aspects of IBIS performance. First, as BKA populated its database in 2000, it acquired images not only for evidence in all open cases but also for “all crime links in the collection, known from pre-IBIS conventional microscopic comparison. This was performed to see whether known links would also result in a match with the IBIS” (Nennstiel and Rahm, 2006b:24–25). Specifically, 232 known hits using cartridge cases and 84 using bullets were reassessed, requiring “over 670 correlations with cartridge cases and 180 correlations with bullets” (Nennstiel and Rahm, 2006b:26). They report a success rate of 80.2 percent in finding the match in IBIS using cartridge cases, inspecting results for all three marks down to the fifth-ranked position; the rate increases to 85.8 percent for ranks in the top 10 by any mark (Nennstiel and Rahm, 2006b:29). They also conclude that considering all three marks is the best approach but that, considering each mark separately, the firing pin impression performed best for verifying the connections between cases.
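De Kinder et al.’s best-rank convention and Nennstiel and Rahm’s top-5/top-10 success rates rest on the same bookkeeping: take the best rank a known match achieves on any acquired mark and ask how often that rank falls within a cutoff. The sketch below is our own illustration of that tabulation; the records and numbers are fabricated for the example and do not reproduce either study’s data.

```python
# Each record holds the rank a known match achieved on each mark;
# None stands for "not returned" (e.g., screened out by thresholding).
# These records are fabricated solely to illustrate the bookkeeping.
known_matches = [
    {"breech_face": 1,    "firing_pin": 4,    "ejector": None},
    {"breech_face": 12,   "firing_pin": 2,    "ejector": 40},
    {"breech_face": None, "firing_pin": None, "ejector": None},
    {"breech_face": 28,   "firing_pin": 9,    "ejector": 3},
    {"breech_face": 7,    "firing_pin": 55,   "ejector": None},
]

def best_rank(record):
    """Best (lowest) rank across all marks, or None if never returned."""
    ranks = [r for r in record.values() if r is not None]
    return min(ranks) if ranks else None

def success_rate(matches, cutoff):
    """Fraction of known matches whose best rank is within `cutoff`."""
    hits = sum(1 for m in matches
               if (r := best_rank(m)) is not None and r <= cutoff)
    return hits / len(matches)

for cutoff in (5, 10):
    print(f"top-{cutoff} success rate: {success_rate(known_matches, cutoff):.1%}")
```

In this framing, Nennstiel and Rahm’s finding that widening the inspection window from 5 to 10 buys only a few percentage points is simply the statement that few known matches have a best rank between 6 and 10.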

The second innovation of Nennstiel and Rahm (2006b) is that they also recorded attempts to use IBIS to verify “warm hits,” instances in which police investigators suspected a link to other specific offenses already in the database. (This is in contrast to “cold hits,” where there is no intelligence to suggest a connection between cases other than the similarity of the ballistic images.) For cartridge cases, these “warm hits” proved more amenable to IBIS confirmation than the larger set of previously known crime links described above. A success rate of 93.9 percent in finding the suspected matches in the top five positions on any marking increases by only 0.9 percent when the score lists are examined down to the top 10 positions.

Based on these analyses and other system tests (including known tests of multiple exhibits from the same firearm), Nennstiel and Rahm (2006b:28–29) conclude:

    When operating a collection of evidence ammunition [using IBIS], a success rate p in the area of 75–95% for cartridge case comparison and 50–75% for bullet comparison can be achieved in practice under certain conditions. A consideration of the [score] list elements up to n = 5 or n = 10 appears to be sufficient. Evaluations that go further increase the workload and contribute little to the improvement in the success rate.

4–E.5  FTI Benchmark Evaluation

As the developer and maintainer of IBIS, Forensic Technology WAI, Inc., enjoys unique advantages in testing the system’s performance, including the ability to vary the level of thresholding used in the coarse comparison stage and to study directly the signatures derived from images. More directly, FTI’s position offers great latitude with respect to one key performance variable: because it can tap image data from IBIS installations worldwide, it can assemble larger image datasets than is possible for any particular agency, including large numbers of exhibits within particular caliber groupings. The images that can be assembled in this manner differ from what would be expected in a large-scale RBID—large numbers of exhibits from new guns, highly similar in class and possibly subclass characteristics—but the resulting datasets are arguably the best basis for assessing IBIS performance in the face of sheer sample size.

An FTI “benchmark evaluation” of IBIS performance for large databases proceeded in stages, taking as its base matched pairs of cartridge case exhibits provided by the Allegheny County, Pennsylvania, Coroner’s Office; a sample of images from this base set is shown in Figure 4-3. Each pair had been fired from the same gun, but the set of guns included a variety of manufacturers and makes within each caliber.

The ammunition used in the firings also varied widely, and in some cases the exact ammunition make is unknown; each pair of exhibits did not necessarily use the same ammunition. All images of these reference cases were acquired by FTI. Scores and ranks were generated against the sets of matched pairs themselves, as well as after the pairs were combined with large numbers of completely unrelated exhibits of the same caliber pulled from IBIS sites worldwide. Initial results based on 9mm and .32 Auto pairs were presented by McLean (2004), and the 9mm results were also described by Nennstiel and Rahm (2006a). Results of the tests for other calibers were later summarized by Beauchamp and Roberge (2005).

The basic results of the FTI evaluation are summarized in Table 4-2. Flooding the matched-pair data with “noise” images from other sites did generally degrade the rankings, albeit not linearly. For instance, the nearly fourfold increase in the size of the .45 Auto database caused the chance of finding sister images to fall by 11–15 percent, while the 9mm database was increased to about 65 times its original size yet yielded a comparatively modest 21–28 percent reduction in performance. Comparisons of rimfire firing pin impressions from .22 caliber exhibits were effectively invariant to a tripling of database size. McLean (2004) concludes that the results underscore the importance of entering ejector marks into IBIS, along with the quicker-to-acquire breech face and firing pin impressions.

Though the IBIS analysis considered all of the casings, not only those that were visually reviewed and deemed “suitable for comparison” as in the BKA study, FTI did subsequently have a firearms examiner review each of the 434 9mm matched pairs and grade their ability to be successfully linked by optical examination. In all, 46 percent of the pairs of breech face images were judged “excellent,” as were 54 percent of the firing pin images; 17 percent of breech face pairs and 11 percent of firing pin pairs were deemed “poor” or “no match” (Nennstiel and Rahm, 2006a:Table 6).

Beauchamp and Roberge (2005) extended the benchmark evaluation work, reporting the results of similar IBIS comparisons for two additional calibers. They also derived performance curves for IBIS comparisons, training or testing them on the images from the Allegheny County exhibits. Their work forecasts that, when searching a database of 1,000,000 exhibits, IBIS performance in detecting sister pairs within the top 10 ranks using both breech face and firing pin marks is on the order of 30–35 percent. Based on the smaller set of 9mm exhibits for which ejector marks were also considered, the estimated success at finding a known match in a 1,000,000-exhibit set is about 50 percent when all three marks are considered.
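Beauchamp and Roberge’s actual curve-fitting procedure is not reproduced here, but the mechanics of such an extrapolation are easy to illustrate. The toy model below is entirely our own: it assumes, only for simplicity, that the top-10 success rate is linear in the logarithm of database size, fits that line to the two observed 9mm breech face plus firing pin points from Table 4-2, and projects outward. Its output is not the study’s estimate; indeed, it lands well above the 30–35 percent forecast quoted above, which suggests the published curves fall off faster than log-linearly at large database sizes.

```python
import numpy as np

# Observed 9mm BF+FP success rates from Table 4-2 (percent in top 10).
db_sizes = np.array([868.0, 56_000.0])
success = np.array([84.0, 66.0])

# Assumed functional form (ours alone): success = a + b * log10(size).
b, a = np.polyfit(np.log10(db_sizes), success, 1)

for n in (100_000, 1_000_000):
    estimate = a + b * np.log10(n)
    print(f"{n:>9,} exhibits: ~{estimate:.0f}% (toy log-linear projection)")
```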

[FIGURE 4-3  Sample matched pairs of breech face and firing pin images (columns: Breech Face Images and Firing Pin Images; Firing 1 and Firing 2 for each firearm). Panels: A: Browning firearm, same ammunition in both firings; B: Smith & Wesson firearm, same ammunition type; C: Walther firearm, same ammunition type; D: Walther firearm, same ammunition brand; E: unknown Luger-type firearm, same ammunition brand; F: Smith & Wesson, different ammunition brands. SOURCE: Images from Forensic Technology WAI, Inc., benchmark evaluation dataset of matched pairs of exhibits, shared with the committee.]

TABLE 4-2  Summary Results of Forensic Technology WAI, Inc., Benchmark Evaluation

Caliber (No. of Pairs)    Percentage of Success in Locating Sister Image, by Mark
and Database Size           BF      FP     BF+FP      EM     BF+FP+EM
9mm (434)
  868                       53      74       84       —         —
  56,000                    39      53       66       —         —
9mm (78)
  4,030                     51      56       82       46        94
.32 Auto (500)
  1,000                     35      84       87       —         —
  10,700                    25      72       76       —         —
.45 Auto (474)
  948                       55      57       73       —         —
  3,535                     47      49       65       —         —
.22 (500)
  1,000                     —       —        —        87        —
  3,070                     —       —        —        87        —

NOTES: BF = breech face; FP = firing pin; EM = ejector mark or (for .22 caliber) rimfire firing pin impression. Sources vary on the standard used to define “success” in matching: Beauchamp and Roberge (2005) indicate that “success” refers to finding marks within the top 10 ranks, but Nennstiel and Rahm (2006a) note that comparison scores were reviewed down to 24 ranks.

SOURCES: Data from Beauchamp and Roberge (2005:11); see also McLean (2004) and Nennstiel and Rahm (2006a).

4–E.6  Other Studies

As is discussed further in Chapter 5, the Office of National Drug Control Policy (ONDCP) (1994) requested a benchmark comparison between the DRUGFIRE system and the early-IBIS BULLETPROOF system, in the early days of ballistic imaging and as the potential for overlap became apparent. Though the hardware and software components of IBIS have improved since then, including the addition of BRASSCATCHER for imaging cartridge cases, this early examination is still noteworthy.

As its baseline database, the ONDCP used 150 matched pairs of both bullets and cartridge cases, collected from test fires of 30 weapons in each of five caliber groups (.25 Auto, .38 Auto, 9mm Luger, .38 Special/.357 Magnum, and .45 Auto). These base data were then augmented with images acquired from other firearms, selected at random from exhibits in the existing FBI and ATF datasets and representing 100–500 additional weapons in each of the caliber groups.

Tontarski and Thompson (1998:645) briefly summarized this benchmark evaluation in their overview paper on the new IBIS platform:

    During a series of stress tests, images were acquired outside the norms any trained operator would use. The tests included reducing and flaring the light sources, misplacing anchor lines, tilting the image during acquisition, incorrectly designating the striae angle, partially masking striae detail, and obliterating striae information by sanding a land engraved area. Even when combinations of mistakes were made, the system located the correct matching bullet among the top five candidates 85% of the time (22 out of 26 tests). A number of the tests included correlations where the images were acquired by two different operators.

More recently, smaller studies of IBIS performance have suggested possible improvements. Chan (2000) documented major changes in breech face and firing pin scores produced when a cartridge case is rotated by 90, 180, or 270 degrees from the FTI-suggested orientation. Staff of the Israel National Police have also generated a number of studies, based on experience in operating an IBIS installation since 1998. Argaman et al. (2001) document that department’s policies for IBIS usage, including the entry of two or more cartridge casings when possible and the specification of manual date-limited queries in cases where information is known about the gun (e.g., the date it was known to be stolen). Silverwater and Koffman (2000) compare basic strategies for IBIS usage, including a strict policy to review the top five ranked results regardless of the score distribution and the entry of two cartridge cases when possible. Argaman et al. (2001) describe department policy for periodic reacquisition of cartridge case images—including entries from third or fourth firings, if possible—in order to get a sense of IBIS’s reliability, while Giverts et al. (2002) suggest an “average phase” score for bullet comparisons. Schecter and Giverts (2005) suggest a workaround to improve IBIS performance when comparing Glock-type cartridge cases, for which the ejector mark impression lies within the casing’s headstamp on the edge of the primer surface, not on the outer rim of the casing; the suggested solution is to acquire the image of that region in the same manner as a single LEA on a bullet.

Fewer studies consider the impact of specific ammunition and firearms manufacturing processes on eventual IBIS performance. Hayes et al. (2004) tested whether the presence of a lacquer primer sealant over the entire primer surface (see Section 2–D.2) degrades IBIS scores in comparison with cases in which the sealant is removed.

Sellier and Bellot 9mm rounds—known for a distinctive red lacquer coat over the entire primer surface—were fired through three makes of gun; several of the shells were cleaned with acetone prior to firing to remove the lacquer. Two lacquer-coated and two lacquer-stripped casings fired from each of three different guns were entered into the New York City Police Department’s IBIS, and scores were generated; at the time, the number of 9mm Luger, circular firing pin exhibits (the base set for this comparison) in the New York system was estimated at 5,700 images. Generally, guns known to produce clear characteristic breech face marks performed consistently regardless of the presence of lacquer, which is to say that pairs of lacquer-coated exhibits from the same gun were returned in the top ranks, as were pairs of lacquer-stripped exhibits; guns known to produce fainter breech face marks produced lower-ranked matches, yet still generally in the top 10. However, matching lacquer-coated to lacquer-stripped exhibits from the same gun proved more problematic, apparently failing to clear the coarse correlation and 20 percent threshold steps for guns with a weaker propensity to generate breech face marks (score reported as 0 and rank as “none”; Hayes et al., 2004:Table 1).

The IBIS function for comparing bullet evidence plays a prominent role in a multipart examination of criteria for identifying bullet matches, in particular standards for the number of groups of consecutive matching striations that can be said to define a match (Miller and McLean, 1998; Miller, 2000, 2004; see also Miller, 2001).

The committee’s own experimentation, conducted by NIST under a separate contract with the National Institute of Justice, involved reanalysis of some of the De Kinder et al. (2004) cartridge casings as well as construction of a new 144-exhibit set of test-fired casings, varying ammunition brand and gun manufacturer. These casings were processed using both IBIS and three-dimensional metrology techniques, and were also run through IBIS with the coarse comparison and 20 percent threshold steps waived. We also performed limited IBIS experimentation using the New York CoBIS RBID and the independent IBIS database of the New York City Police Department. We discuss the full details in Chapter 8; in brief summary, our own investigation corroborated the major findings of the predecessor studies described in this chapter.

4–F  Assessment

The committee was charged to offer advice on the options of maintaining the current NIBIN program (limited to crime gun evidence) or enhancing it, and since NIBIN uses IBIS as its technical base, evaluation of one requires evaluation of the other. Yet focusing too closely on an assessment of the current IBIS is also somewhat unfair in light of the charge to our committee to evaluate the feasibility of a national RBID.

As De Kinder et al. (2004:208) note, “currently, no technology has been perfected to deal specifically with very large databases of images of marks made by firearms.” IBIS was developed to deal with smaller, regional “open case files” of images, and it is unreasonable to expect that the full system used to implement a national RBID would follow exactly the same lines as the current IBIS platform. However, an RBID system—perhaps streamlining the image acquisition process, allowing for mass entry of exhibits, and continuing to refine comparison procedures—would likely be based on IBIS, if only to maintain compatibility with NIBIN data.

As noted above, a subgroup of our committee discussed the IBIS comparison algorithm in detail with FTI staff under a confidentiality agreement. It is our judgment that the algorithm is generally quite sound, novel, and appropriate to the task of comparing images of ballistics evidence. For the era in which it was developed, IBIS is a valuable system, fundamentally a vast improvement over relying on either human memory or the posting of Polaroids on the forensic laboratory bulletin board for deriving matches to evidence in open case files. Properly used—as we describe in Chapter 6—we believe that IBIS provides an adequate investigative tool for local and regional searches of ballistics evidence images. However, as we explain in fuller detail in Chapter 8, the review of past studies of IBIS performance and our own experimental work suggest that IBIS does not operate at the precision needed for a national RBID.

In its structure and implementation, the IBIS platform is a computerized version of the comparison microscope. This is beneficial in certain respects, in that it provides a familiar (albeit not exactly identical) interface for firearms examiners to review image data. Yet it is also, fundamentally, a limitation of the technology. Since its origins in the early 1990s, progress in developing the existing IBIS platform for ballistic imaging has been evolutionary rather than revolutionary, in that it has remained anchored to the premise of emulating the functions of a comparison microscope. Direct pairwise comparisons of exhibits remain the heart of the process; IBIS was not designed to perform as a true image “search engine,” indexing and comparing across large sets of images, as would be desirable in a national RBID implementation. In form and function, IBIS is a quick sorting and ranking mechanism: a tool for search, but not for verification.

There is great value in the sorting that is performed with relative ease and speed by IBIS. However, major problems arise when higher expectations are placed on the system than it was designed to accommodate. Users and policy makers bear a large part of the responsibility for “overselling” the system; it is unrealistic to expect “hits” on every database search, as effective use of the system depends as much or more on the timely entry of evidence into the system as on the ability of the system to detect a possible match.

The system is also ill served by expectations, created by portrayals in popular media, of instantaneous and utterly definitive verification of evidence matches; Box 4-2 presents an example.

Overly high expectations and inaccurate portrayals have the unfortunate consequence of fueling the perception of ballistic imaging technology as a test—a source of verification—rather than a search tool. Most recently, this perception arose in litigation in Illinois (People v. Pursley, 341 Ill. App. 3d 230; 2003 Ill. App. LEXIS 784, 2003). In 2000, in light of exonerations due to DNA evidence, the Illinois code was amended to give convicts the right to make a “motion for fingerprint or forensic testing not available at trial regarding actual innocence”—that is, to permit appeals for DNA testing. Invoking this provision, a man convicted in 1993 of first degree murder (and sentenced to life), largely on the basis of firearms identification evidence, “filed a motion . . . seeking an order requiring that his handgun be tested under the Integrated Ballistics Identification System (IBIS).” The appeals court ruled against the convict’s motion for IBIS “testing,” holding that the relevant statute was intended to apply only to fingerprint and DNA testing. Nowhere in the ruling (or, presumably, the motion) is it indicated what a “test under IBIS” might entail, how a comparison score might be interpreted, or against what database the images should be searched. Only once (in summarizing the state’s motion to dismiss the convict’s appeal) is it noted that “IBIS is not a new test but a new system for cataloging for ballistics information” and that “application of the IBIS would not produce new, noncumulative evidence.” Following the Pursley decision, Carso (2007) argued that the Illinois statute should be amended to include “ballistics testing” using IBIS but likewise does not describe what such a test would involve.

Judge Gertner’s ruling in United States v. Green (405 F.Supp. 2d 104; 2005 U.S. Dist. LEXIS 34273), described in Box 3-4, is also of interest because IBIS was used in the course of the investigation. It suggests that some basic concepts of IBIS scope and operation can be misconstrued. Section G of the ruling notes:

    [The sergeant/examiner] also used the Integratable [sic] Ballistic [sic] Identification System (IBIS) in his comparison, although the government represented that it would not offer IBIS results [as testimony]. A national computer database, IBIS allows examiners to identify the most likely matches for the evidence in a given case. IBIS uses a laser measuring device to evaluate shell casings and provides the examiner with a list of possible matches. . . . In fact, the IBIS system has been widely criticized. Its efficacy is limited by the detail with which police departments have scanned old shell casings into the computer and the accuracy of the mathematical algorithms used to compare casings. As with the individual examinations, no evidence was presented about the accuracy of the IBIS matches. . . . In any event, [the sergeant] acknowledged that even if the computer suggests numerous possible matches, he will not bother to check them all. That is, once he decides he has found a match, he will not eliminate all other alternatives by exhausting the IBIS-generated list of potential matches.

BOX 4-2
CSI Ballistic Imaging

Firearms identification concepts and the use of ballistic imaging have periodically been referenced on forensic science-themed television shows. One such example is episode 307 (“Fight Night”) of CSI: Crime Scene Investigation; this particular episode won an Emmy award for best writing. One scene finds a Las Vegas investigator talking with a firearms examiner who is peering intently into the microscope of what—externally—is a complete IBIS RDAS unit. The investigator asks, “Three guns found at the crime scene, none match the bullets recovered from the victim. What does that tell us?”

“Shooter kept his weapon,” the examiner replies.

“Means he likes his gun, and may have used it before,” says the investigator, as some part of the machinery makes a loud whirring noise.

“Which is where the shell case and IBIS come in,” says the examiner cheerfully. “I’ll run it against the national database.” He wheels from the microscope to the keyboard and, off-camera, types a short sequence of characters. “Firing pin impressions and breech face marks—a closer look,” muses the investigator; instantaneously, the system makes a loud shuffling sound and several beeps. The camera now shows the “IBIS” screen, which prominently shows a single image of the entire base of a cartridge, headstamp and all; some text indicating “Halo On” and “Magnification 150X,” among other things, is superimposed over the corner of the image. Beside it is a four-column listing of “Case ID,” “Exhibit Number,” “Site Number,” and “Firing Pin”; the entries are obviously not sorted in descending order by the purported firing pin score (that is, three-digit “scores” are interspersed with two-digit scores). The middle entry (clearly not the highest legible score, albeit close) flashes blue several times as the system beeps; at no point does a second, comparison casing image appear.

“Got us a hit,” the examiner intones, now reading off a new window that has popped up on screen. “Los Angeles County Sheriff’s Department found . . . shell casings from the same gun . . . used in a gang murder two years ago.”

The investigator interjects, “They get a conviction on the suspect?”

“No. Guy beat the rap,” the examiner continues. “Timothy Fontaine, aka ‘Tiny Tim’ . . . member of the Snakebacks . . . current residence unknown.” The “Criminal Records” window that appears on the screen also includes entries for a vehicle license number and the name of an arresting officer; unfortunately, the space clearly reserved for a photo of the person is labeled “NP AVAILABLE.” The investigator says, “I bet I could find where he stays in Vegas,” and the scene ends. The total elapsed time of the scene is 44 seconds.

IBIS developers and proponents also bear responsibility for “overpromising” the system, in at least two crucial and related respects. The first is the pervasive mythology that has come to surround the “top 10” results in an IBIS search.

The current IBIS provides as its default printed report a listing of the 10 highest scores for each type of marking, and IBIS training materials undercut their own guidance to consider gaps and features in the distribution of comparison scores by promoting examination of the top 10 suggested matches. However, the implied physical or cognitive restriction to the top 10 results is not likely to be appropriate for all searches or all database sizes, and the focus on the top 10 results is inadequate for assessing the system’s performance and for understanding the variability of scores by demographic characteristics (e.g., gun make and model). We know of no substantiated rationale for the ad hoc cutoff at rank 10; the resulting assumption that nothing outside the top 10 ranks is valuable places unduly high expectations on the system.

The second basic flaw is the use of the term “correlation” to describe the IBIS comparison process, which imputes to the system an unjustified air of technical exactness. The common, statistical use of the term implies a particular type of relationship and quantifies the strength of that relationship. By comparison, IBIS scores are described by the system’s own training materials as having no intrinsic value, severely limiting the ability to express the strength of similarity between two exhibits and to compare results across different runs of the system. As we suggest in Chapter 6, we believe that the usefulness of IBIS is compromised unless some meaning can be imputed to its “correlation” scores—to make them function more like true statistical correlations.

4–G  Appendix: Summary of Performance Tests in the California Evaluation of a Reference Ballistic Image Database

This appendix describes the tests performed by Tulleners (2001) in response to the California legislature’s directive that the state’s Department of Justice study the feasibility of a reference ballistic image database. We begin by profiling the tests that were actually completed; these summaries extract additional information from spreadsheet printouts that were included as an appendix to Tulleners (2001). We then describe the tests that were planned for the evaluation but could not be completed, and summarize the formal responses to, and the independent assessment of, the California evaluation.

4–G.1  Completed Performance Tests

The Tulleners (2001) technical evaluation was based on the completion of five performance tests.

Test 1—Basic System Correlation

Two cartridges were fired from each of the 792 CHP pistols, one to be entered into a “test” database and the other retained as an “evidence” exhibit. All of these firings used the same Federal brand ammunition. The basic goals of this test were to assess the time required to enter specimens into a database and to test the accuracy of comparison as database size increases.

The first component of this test considered the basic ability of the system to find exhibits for guns known to be in the database. A sample of 50 test cartridges (the same-gun pairs of “evidence” entries already in the database) was drawn, and queries were made against the full database. Twenty-four (48 percent) of these test casings matched to their sister evidence casing as the first-ranked entry on either the breech face or firing pin mark. However, a surprisingly high 19 of the comparisons (38 percent) did not find the sister casing within the top 10 ranked items on either breech face or firing pin,[8] and 9 (18 percent) of these known-match comparisons failed to clear IBIS’s coarse comparison and 20 percent threshold.[9] It does not appear that one mark was superior to the other in generating possible matches: the 31 instances in which the known sister was found in the top 10 by either mark are fairly evenly divided among cases where both marks were in the top 10 (10), only the breech face was in the top 10 (9), and only the firing pin was in the top 10 (12).

A second component of the test selected five of the “evidence” casings used in the first test that had low ranks on one or both markings; these were reacquired by a second IBIS operator and matched against smaller subsets of the data to see whether those changes affected the rankings. In comparisons against the full database, the rankings changed using the images from the second operator, but not grossly so; no very low-ranked exhibits were converted to high ranks, although two of the casings apparently failed to clear the 20 percent threshold in the reacquisition.[10] The entries were also compared against database subsets of size 100, 200, 300, 400, 500, 600, 700, and 792; generally, rankings degraded with the larger sample sizes, though very high-ranked exhibits tended to stay very high (e.g., a first-ranked exhibit on firing pin remained the number one rank at all subset sizes, and an exhibit ranked second in the 100-entry database slipped to rank six in the full 792-exhibit set).[11]

[8] The main text of Tulleners (2001) indicates these figures as being searches for matches in the top 15 ranked items, but it can be verified from the “raw data” spreadsheets in Appendix C of the technical evaluation that the statements hold for the stronger (and more conventional) top 10 filter.

[9] Failure to clear the 20 percent threshold is assumed from the “Not in Selection” entry for both score types (breech face and firing pin) in the technical evaluation spreadsheets.

[10] One of these, labeled E44 in the first test and E152A in the reacquisition, appears to have had a significant difference in the acquisition of the firing pin image. The exhibit was ranked 45 on breech face and 1 on firing pin in the first analysis, but apparently failed to clear the 20 percent threshold and was excluded from listing in the reanalysis (Tulleners, 2001:Appendix C).

[11] A third part of the test timed the comparisons for three selected exhibits against database subsets of different sizes (100, 250, 500, 792). From the results, Tulleners (2001:8-5) concluded that “correlation times are not a significant issue for a large database,” although he assumed a strict linear interpolation in processing times.

Test 2—Cartridges Not in Database

Ten cartridges were fired using the same Federal brand ammunition but from 10 pistols of the same make and model that were not part of the new CHP order. The comparison scores for the best match on both marks were recorded and judged to be consistent with the range of scores registered in Test 1, several at the high end of that range. However, the evaluation accepted FTI’s advice that “a score is only relevant within a particular correlation” and that “the score cannot be used to compare the ranking of two correlations.” The test was found to be inconclusive.

Test 3—Different Ammunition

During the test firing of the CHP pistols, 22 of the pistols were also used to fire rounds from batches of five different ammunition brands: PMC-Eldorado (.40 S&W 180 grain), CORBON (.40 S&W 165 grain), ARMSCOR (.40 S&W 180 grain), Remington (.40 S&W 180 grain), and Winchester (.40 S&W 180 grain). Not all of the ammunition types were fired from each of the guns; 72 cartridges were acquired in total. Each of these casings was then compared with the 792-exhibit set to test the ability of the system to find the Federal-brand test fire from the same gun in the database.

The test found poor results in finding matches to the images from Federal-brand ammunition using images from the other five brands. Sixteen of the 72 comparisons (22 percent) matched to the image from the same gun as the top-ranked result on either the breech face or firing pin impression; in total, 21 of the comparisons (29 percent) had the known sister image occur in the top 10 ranks on either mark. Neither mark was better at generating matches: 13 top-10 matches were found on the breech face mark and 14 on the firing pin. No match was found in the top 10 ranks by either mark in 26 of the comparisons (36 percent), and 25 of the comparisons (35 percent) failed to clear the coarse comparison and 20 percent threshold.
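The repeated phrase “failed to clear the coarse comparison and 20 percent threshold” describes a filtering step whose arithmetic is worth making concrete. The sketch below is our own schematic, not FTI’s implementation: under the assumption that the coarse pass retains any exhibit scoring in the top 20 percent on at least one mark, the surviving set is a union of per-mark sets and can therefore exceed 20 percent of the database, the point made in the notes to Table 4-1.

```python
import random

random.seed(0)

DB_SIZE = 850           # database size, as in George (2004b)
MARKS = ("breech_face", "firing_pin", "ejector")
KEEP_FRACTION = 0.20    # the IBIS-default 20 percent threshold

# Fabricated coarse scores: one random score per mark per exhibit.
coarse = {mark: {i: random.random() for i in range(DB_SIZE)}
          for mark in MARKS}

def surviving_exhibits(coarse_scores, keep_fraction, db_size):
    """Union of the per-mark top-20% sets under the assumed filter."""
    keep_n = int(keep_fraction * db_size)   # 170 of 850
    survivors = set()
    for scores in coarse_scores.values():
        top = sorted(scores, key=scores.get, reverse=True)[:keep_n]
        survivors.update(top)
    return survivors

kept = surviving_exhibits(coarse, KEEP_FRACTION, DB_SIZE)
print(f"kept {len(kept)} of {DB_SIZE} exhibits ({len(kept) / DB_SIZE:.0%}); "
      f"per-mark cutoff was {int(KEEP_FRACTION * DB_SIZE)}")
```

With independent random scores, as here, the union covers nearly half the database; real coarse scores are presumably correlated across marks, so the working figure of 190 survivors for an 850-exhibit database implies heavy, though not total, overlap among the per-mark top-20 percent sets.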

Tabulations from Tulleners (2001:Appendix C) suggest that the ARMSCOR and CORBON ammunition proved particularly difficult to match. Of 14 comparisons of ARMSCOR rounds to the Federal-brand images in the database by breech face, 6 failed to clear the coarse comparison step and 8 ranked lower than 25; 13 CORBON comparisons were attempted, with 8 rejected at the coarse comparison stage and only 1 ranking better than 25 (but outside the top 10). Results by firing pin were similar, with a few more rankings in the 11–25 range and one ARMSCOR round finding its Federal-brand sister as the top-ranked result. The Winchester rounds proved most amenable to matching in the 18 comparisons that were made: 7 found the Federal sister round in the top-ranked slot, with 1 ranking 11–25, 6 ranking below 25, and 4 missing the 20 percent threshold (see also De Kinder’s [2002b:11] analysis of the same spreadsheet).

Test 4—Altered Breech Face

After firing the two Federal-brand cartridges for the “test” and “evidence” sets, the firing pin tip and breech face of one of the CHP pistols were subjected to “minimum file and sandpaper efforts” to attempt to change the firearm’s individualizing marks (Tulleners, 2001:B-5). “This filing alteration took about three minutes using a standard file” (Tulleners, 2001:7-3). A second set of two test fires with Federal ammunition was then performed, one for entry into the database and the other used as an “evidence” query. The two sets of exhibits, before and after alteration, matched to each other well: the pre-alteration casings matched to each other in the top-ranked position on both firing pin and breech face, and the post-alteration casings matched as the top-ranked pairing on firing pin (though the rank was 35 on breech face). However, no match was possible from the pre-alteration to the post-alteration exhibits; in both cases, the technical evaluation’s data appendix lists the matches as “not in selection list,” suggesting that the deliberate alteration prevented the exhibits from clearing the IBIS coarse comparison pass.

Test 7—Breech Face Longevity Study

In a test intended to determine whether a breech face maintains individual marks over repeated firings, an independent laboratory was contracted to perform 600 test fires from each of two .40 caliber pistols; the make of one was described as a Glock type and the other as unknown (Tulleners, 2001:8-11). The Glock-type pistol was fired using CCI brand ammunition, and IMI ammunition was used in the unknown-make pistol. For each pistol, casings in the intervals 1–6, 101–106, 201–206, 301–306, 401–406, 501–506, and 595–600 were retained for analysis.

One casing from each interval was used as a “test” database entry and another as an “evidence” entry. Ultimately, the Glock-type firings turned out to be unusable due to the lack of a larger database for comparison; none of the other CHP weapons have Glock-type firing pins, so they could not be compared with the Glock firings due to IBIS’s demographic filtering. Tulleners (2001:8-11) concluded that there were signs of “definitive ranking degradation,” as the firings from the later intervals tended to rank lower than those from the earlier firings among the IMI cartridges. However, the evaluation suggested that “further tests need to be conducted in this area.”

4–G.2  Incomplete Performance Tests

Tulleners (2001) was unable to carry out tests 5, 6, and 8 in his original slate of experiments. Test 5 was intended to assess IBIS performance using cartridges fired from SIG Sauer firearms, which are known among examiners for having minimal breech face characteristics. An extensive set of SIG Sauer test fires was subsequently used in Tulleners’ joint follow-up study, De Kinder et al. (2004), described in Section 4–E.2. Test 8 was meant to test the system using firearms known to have strong subclass characteristic carry-over, such as some Heckler and Koch and Lorcin firearms (see Section 3–B.1).

Test 6 “would have taken some test-fired cartridge cases from selected weapons, buried one of the cartridge cases in a large database and then observe the correlation on these cartridge cases” (Tulleners, 2001:7-3, 7-4). The California Department of Justice was unable to complete the test as planned, though it arranged for a limited test along the same lines to be conducted by the New York City Police Department (NYPD). The California Criminalistics Institute submitted eight casings each fired from two 9mm SIG Sauer pistols. In each set, two rounds used Remington-Peters ammunition, and the other firings used Winchester, Federal, Hornady Vector, Fiocchi, CCI, and Sellier and Bellot ammunition. One of the Remington rounds was retained as the “evidence” casing, so that for each of the two pistols, seven sister images were mixed into the NYPD’s 9mm database, which then contained 3,673 items. For both pistols, four of the seven sister images were found in the top 15 ranks by either breech face or firing pin, and the second Remington round generally turned up as the top-ranked entry by either mark. The Hornady Vector, CCI, and Sellier and Bellot rounds “seemed to be the most difficult for comparison” (Tulleners, 2001:8-10).

4–G.3  Criticisms and Independent Review

Rebutting the California study on behalf of ATF, Thompson et al. (2002:15, 16) argued most stridently that “all of [the performance test results] are skewed due to the selection of Federal Brand ammunition.” They argued that Federal is not the prescribed “ATF protocol ammunition in any of the calibers of interest, due to the primer surface generally being too hard in comparison to the ammunition being used in handguns.” Instead, they suggested Remington-Peters ammunition as a more suitable medium. In his review, De Kinder (2002b:9–11) rejected this argument, citing research by the Forensic Institute in The Netherlands on IBIS score results using 18 common primers, which suggested that Federal showed medium performance in registering marks. (Unfortunately, the Dutch study did not include Remington ammunition, as it is not common in Europe.) More directly, De Kinder noted that the hardness properties of primers are not well known but that hardness measures for the six types of ammunition used in California’s Test 3 had been independently conducted by the Lawrence Livermore National Laboratory. The Federal brass primers were directly measured to be the least hard of the six (108 ± 5 HV), Remington-Peters’ nickel primers included (157 ± 12 HV).[12]

In its rebuttal to the California study, FTI (2002:5) argued that “the Evaluation has an overly pessimistic view of automated ballistics technology that discredits its conclusions.” On Test 1, FTI (2002:14–15) submits that too great a focus on the 38 percent of possible matches missing from the top 15 ranks unduly discounts the 48 percent that found the correct match in the top rank on one of the marks and the 62 percent that matched within the top 15 on either mark: “These results are sufficient to identify a significant number of cartridge cases that merit manual study and would have produced new cold hits.”

More fundamentally, FTI (2002:13, 14) holds that the IBIS system was held to an unfair standard in the test. A firearms examiner manually compared the cases for FTI and concluded that he could not certify a match between eight of the Test 1 pairs and that “approximately half had markings that were somewhat unfavorable.” As a result, FTI suggested that at least the eight human-identified nonmatches be excluded from the statistics, arguing:

    It is immediately obvious that the performance of an automated examination could not, and should not, be more accurate than a microscope comparison by a firearms examiner. Thus, to the extent that the Evaluation included cartridge cases that had insufficient marks to be identified by a firearms examiner, the results cannot support the hypothesis, and the Evaluation must be without scientific value.

[12] De Kinder also noted that the criticism of Federal ammunition was unusual, given that Federal had been chosen for a similar study by several of the same ATF authors (Thompson et al., 1996).

On these points, De Kinder (2002b:14) concluded that the FTI arguments were an overreach. He countered that the passage quoted above “is the same type of expression as saying at the beginning of the 1990[s] that automated comparison of bullets and cartridge casings is impossible.” He preferred instead the revised statement that “the current scientific knowledge and state-of-the-art technology does not allow one to be more accurate than a microscope comparison by a firearms examiner.” De Kinder held that dropping the believed-“unmatchable” exhibits from analysis is “unacceptable,” particularly given that the study was oriented to studying the feasibility of an RBID: “all data points have to be taken into consideration” because “the goal of [an RBID] is not restricted to those cartridge cases that can be identified by a trained firearm examiner.”

Generally, De Kinder (2002b) indicated approval of the conduct and interpretation of the major performance tests in the California study. He suggested the need for further study in a variety of areas.
