7
Three-Dimensional Measurement and Ballistic Imaging

The striations on the edges of fired bullets and the textures impressed on the primer surface of a cartridge casing are, in their own right, images: representations of physical objects (imperfections and textures in a firearm’s barrel, breech face, and firing pin) depicted in a medium. These physical “images” are also inherently three-dimensional, produced by cutting, scraping, and etching. Part of the tension that has accompanied the use of photography in forensic firearms identification (see Section 3–F) arises from the fact that a flat, two-dimensional representation of tactile, three-dimensional features is necessarily somewhat dissatisfying. Though it could be argued that any of the instantaneous views of bullet or cartridge case evidence through a comparison microscope is a two-dimensional perception, the ability to directly manipulate the exhibits—to alter their rotation and lighting—gives a three-dimensional experience that any single two-dimensional freeze-frame would lack.

The basic objective of any ballistic image database is to collect some accurate representation of cartridge cases and bullets, derived so that entries can be compared and scored for similarity with others. The presence of an electronically coded representation of the physical objects obviates the need for direct access to the physical objects for comparison (though they would have to be directly examined for final confirmation). In theory, then, a three-dimensional model of a cartridge case or bullet—accurately conveying fine differences in depth but still capable of mathematical processing—would be ideally suited to the task.

As advances have continued in the field of surface metrology in recent years, applications in the three-dimensional measurement of ballistics evidence have begun to emerge. As part of our charge to consider technical enhancements to the present National Integrated Ballistic Information Network (NIBIN) system—and to ballistic imaging, generally—consideration of three-dimensional measurement versus two-dimensional photography as the imaging standard was a natural pursuit. As Thompson (2006:10, 12) suggests, fully exploiting the three-dimensional aspects of the toolmarks left by firearms raises new levels of complexity relative to two-dimensional photography. “Striated [three-dimensional] toolmarks would be easy to match if, from the beginning to the end, they always stayed the same,” but they do not. Indeed, even fine striations—colloquially referred to as lines—do have a third dimension, depth, that can be appreciated “by using higher magnification”; ultimately, computer-based systems for analyzing striations will have to contend with the problem of deciding whether the different depths of “lines” convey any special significance. Moreover, Thompson (2006:12) notes:

    The dynamics of a bullet going down the barrel of a firearm, the downward movement of a fired cartridge case against the breech face of a Glock pistol, or the movement of a screwdriver across a door strike plate all leave 3-dimensional toolmarks that can change considerably in a short distance. . . . These features, toolmark angle, ammunition variability, [and] tool/barrel wear are features that an examiner considers during an examination and none of these can be [fully] captured in a [two-dimensional] photograph.

In Chapter 8, we discuss experiments conducted on the committee’s behalf by the National Institute of Standards and Technology (NIST) using a prototype three-dimensional sensor on cartridge cases.
This chapter provides basic background for that discussion, beginning with a discussion of the conceptual differences between two-dimensional and three-dimensional image acquisition technologies (Section 7–A). Previous efforts in three-dimensional measurement of ballistics evidence are described in Section 7–B, along with currently emerging three-dimensional products (7–C).

7–A ACQUISITION TECHNOLOGIES

7–A.1 Two-Dimensional Acquisition

A two-dimensional approach to pattern comparison uses a photographic image of the object as the basic element. In considering the impact of two-dimensional imaging on the comparison process, there are several key factors—all driven by the fact that the image is a projection of light reflected off of three-dimensional objects onto a two-dimensional acquisition plane. These factors generally separate into geometry and photometry.

The basic process of two-dimensional image formation involves several steps. Light rays are emitted from a source (or sources) located at specific geometric spots relative to the object and the sensor. Those rays follow standard optics as they emanate from the source to the object. When each ray strikes the object, it interacts with the surface of the object. There are several effects, depending on the material properties of the object. If the object is purely specular (a mirror), the ray is reflected back into the world at an orientation governed by the local geometry of the surface. More generally, however, the ray interacts with the surface. Some of the energy of the ray will be absorbed by the material, thus diminishing the total energy reflected back into the world. In addition, the microstructure of the surface will typically cause the ray to diffract, meaning that the amount of energy retransmitted off of the surface will vary as a function of the angle of emittance relative to the surface normal at the point. For example, in a purely matte surface, the amount of energy reflected from the surface varies as a cosine law. These effects are generally captured by the photometry of the situation, and techniques such as the bidirectional reflectance distribution function (BRDF) can be used to very accurately capture the reflectance properties of a material. Of course, this works for ideal materials, or materials whose properties can be measured in isolation. In more general settings, one uses approximations to capture the BRDF of a material.

Once the light energy is reflected off of the surface, it obeys standard optics laws, and is captured by a sensor (camera).
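The cosine-law (Lambertian) reflection described above can be sketched numerically. The function below is purely illustrative; its name and vector conventions are our own and not part of any imaging system discussed in this chapter. Reflected intensity falls off with the cosine of the angle between the incoming light direction and the surface normal, clamped at zero when the light comes from behind the surface.

```python
import math

def lambertian_intensity(source_strength, light_dir, surface_normal):
    """Reflected intensity from a purely matte (Lambertian) patch:
    proportional to the cosine of the angle between the incoming light
    direction and the surface normal, clamped at zero for light arriving
    from behind the surface."""
    def unit(v):
        m = math.sqrt(sum(c * c for c in v))
        return tuple(c / m for c in v)

    l, n = unit(light_dir), unit(surface_normal)
    cos_theta = sum(a * b for a, b in zip(l, n))
    return source_strength * max(cos_theta, 0.0)
```

Light striking head-on reflects the full source strength; at a 60-degree incidence angle the reflected intensity is halved, which is why the orientation of a striated patch relative to the source matters so much to the recorded image.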
Here, the geometry of the situation will influence how many photons are captured at a single image element—the distance of the camera from the surface (typically not an issue in close-range imaging), the orientation of the camera’s optics system and acquisition plane relative to the incoming rays, as well as other effects.

In general, one can characterize several factors that influence the amount of light captured at a pixel (picture element) in a standard imaging device:

•  position and strength of the light source;
•  physical extent of the light source (assuming it is not a point source);
•  use of multiple light sources;
•  geometry relating light source positions to the surface material of the object being sensed;
•  geometry of the object itself (see below);
•  material properties of the object (this includes both changes in the material across the object, which will change the amount of light reflected independent of geometry (e.g., dirt or other defects may reduce the light and thus the appearance of a particular pixel), and the manner in which light, independent of the total incoming amount, is reflected from the surface); and
•  geometry of the sensor relative to the object being sensed.

Clearly, the overall orientation of a local patch of an object, both relative to the sources and to the sensor, is a major factor in determining how much light is reflected to the sensor. A naïve analysis, however, assumes that the object is spherical or cylindrical, that is, that all incoming rays strike the object and there is no occlusion. When the object has a more intricate surface, however, the situation gets more complex. This is especially true if there can be self-occlusion, that is, that some light rays may not reach part of the object because they are blocked by other parts of the object—casting self-shadows.

Given all of these factors, one can see that there is a fundamental issue in comparing an image of a probe object against an image of a target object—one needs to ensure that the comparison of intensities in an image, or of some other feature extracted from the intensities, is actually reflecting the shape of the underlying object, and not some other factor. Many of these elements can be controlled. For example, using the same strength of light source, and fixing the position of the light source relative to the object, will keep these factors constant across images. Normalization of the image intensities can also remove effects of these elements. Ensuring that the objects are cleaned in a consistent manner will remove material property changes from the images.

Because of the shapes of bullets and shell casings, and because their surfaces can be highly reflective (and hence high-glare), the geometry of the acquisition setup is very important.
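The intensity normalization mentioned above can be sketched with a minimal zero-mean, unit-variance scheme: a global brightness offset and a lighting-strength scale factor both cancel, so two acquisitions of the same object under different overall illumination become directly comparable. This is a toy illustration under that simple linear-lighting assumption; deployed systems may use more elaborate photometric calibration.

```python
def normalize_intensities(pixels):
    """Zero-mean, unit-variance normalization of a list of pixel
    intensities. A global brightness offset (additive) and a lighting
    strength difference (multiplicative) both cancel, leaving a signal
    that reflects relative variation across the surface."""
    n = len(pixels)
    mean = sum(pixels) / n
    var = sum((p - mean) ** 2 for p in pixels) / n
    std = var ** 0.5
    if std == 0:  # perfectly uniform image: no variation to compare
        return [0.0] * n
    return [(p - mean) / std for p in pixels]
```

For example, an image and a copy of it acquired under twice the source strength plus a constant glare offset normalize to identical values.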
Keeping the orientation of the object the same with respect to the camera across acquisitions is very important. Given that one is primarily measuring striations on the surface of the object, self-shadowing and angle-of-reflectance effects are very critical, in order to ensure that the striations are both visible and have the same effect on intensities in the two images. One way to deal with this issue is to use multiple light sources—effectively, to bathe the object in light from multiple directions. A standard technique is to use a ring of light sources surrounding the camera itself. This tends to reduce self-shadowing effects and reduces the impact of the specular reflection properties of metal objects. An alternative is to use multiple sources but to sequence them—that is, to take multiple images in the same geometry but illuminated from different directions. This can highlight striation patterns in one of the images that might get washed out in a bathing scenario.

Other actions can be taken to reduce image variations not related to surface variations. In addition to controlling the lighting effects, the resolution of the image acquisition device, relative to the size of the object, is important. Since one is typically trying to capture image information about small markings on the object, there is a danger that those markings will get blurred out. Consider the geometry of the situation. While ideally each light ray is reflected from a different infinitesimal patch of the surface, so that surface patches from the interior of a striation will reflect a different ray than a nearby unmarked surface patch, at some point all of those rays are captured by a patch of an acquisition device (leading to a pixel or picture element). If the pixels are small in comparison with the object size, then nearby rays—one from a striation, one from the nearby surface—will be captured by different pixels. However, if the pixels are too large, then these rays may project to and be integrated out by the same pixel. This blurring of the image can be crucial in this setting, so it is important to determine the size of standard striations and to ensure that the camera device is of sufficiently high resolution to capture these changes in surface shape.

7–A.2 Three-Dimensional Acquisition

Since the goal is to compare physical shapes of specimens—the probe object against a stored target object—an alternative is to try to directly measure the three-dimensional shapes. In other words, rather than trying to control or factor out all of the components that affect the image of an object, an alternative is to directly measure the shape. If one can do this, then the comparison can take place on shapes, rather than on image appearances of shapes, and all of the light interaction issues no longer matter.

Three-dimensional surface measurement techniques have developed to include both contact and noncontact methodologies. A contact probe, such as a stylus, can directly measure the three-dimensional position of a point on a surface relative to a fixed coordinate frame.
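The resolution concern raised at the end of Section 7–A.1 (rays from inside a striation and from the adjacent surface being integrated out by a single overly large pixel) can be illustrated with a small numerical sketch. This is a one-dimensional toy model of our own, not any particular sensor: a fine reflectance profile is averaged into pixels of a chosen size.

```python
def image_profile(profile, pixel_size):
    """Average a fine surface-reflectance profile into pixels of the given
    size. Each pixel integrates (here, averages) all the sample rays that
    fall within it, so a narrow striation survives only if the pixel is
    small relative to the striation's width."""
    return [sum(profile[i:i + pixel_size]) / pixel_size
            for i in range(0, len(profile) - pixel_size + 1, pixel_size)]
```

With a bright surface of 100 samples containing one dark striation sample, a one-sample pixel records the striation at full contrast, while a ten-sample pixel dilutes it to a barely distinguishable dip: the blurring the text describes.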
Repeated passes of such a contact probe can be used to more fully reconstruct a three-dimensional shape, and these direct surface measurements can be directly compared against other exemplars. In the particular context of ballistics evidence analysis, however, contact methods are problematic for several reasons. One problem is the size of the object being studied—a bullet or a casing. Most contact probes do not have the level of resolution necessary to build a sufficiently detailed three-dimensional reconstruction. As described more fully in the next section, a more fundamental difficulty is the potential for the evidence bullet or casing to be scratched or otherwise damaged using contact methods, potentially jeopardizing the chain of evidence.

Noncontact methodologies have emerged that do have high resolution, certainly sufficient for the task of working with bullet or casing evidence. These noncontact methodologies include confocal microscopy, interferometry, and laser scanning. Each of these methods can be used to capture the three-dimensional shape of an object without being subject to nonlinear intensity effects due to light reflection properties, or to self-shadowing effects. The main advantage, therefore, is being able to capture and directly compare three-dimensional shape information. However, the use of these highly sensitive methods can incur some disadvantages, chief among them:

•  Cost—such sensors are usually much more expensive than optical camera systems.
•  Speed—such sensors are typically much slower, taking on the order of minutes or tens of minutes to acquire a three-dimensional image, rather than a fraction of a second.
•  Noise—it is important to characterize the noise in the acquired measurements. If the noise is on the order of the depth of the striations, this will render the approach ineffective.

In Chapter 8 we consider the performance of confocal microscopy, which operates on the principle of focusing a point of light on parts of a surface separately and measuring the intensity of returned rays of light, rather than the pure reflectivity approach of illuminating the whole surface at once. In particular, light is concentrated through a pinhole aperture to reach the surface, and reflected rays pass through a second pinhole in order to filter out rays that are not directly from the focal point. A three-dimensional reconstruction can be built by varying the height of the pinhole apparatus, thus creating a series of thin two-dimensional slices from which three-dimensional heights can be derived by considering the vertical level at which the maximum level of light was reflected back from a particular point (Semwogerere and Weeks, 2005). The particular microscope tested in Chapter 8 makes use of a Nipkow disk, a spinning disk consisting of multiple pinholes, in order to collect information more rapidly from a wider lateral surface.
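The height-from-focus principle just described can be sketched as follows: given a stack of intensity slices taken at known focal heights, each pixel’s surface height is taken to be the height whose slice returned the most light (i.e., where that point was in best focus). This is a toy illustration of the principle only, not the processing pipeline of any actual confocal instrument.

```python
def height_map(stack, z_heights):
    """Recover a per-pixel surface height from a confocal image stack.
    stack[k][y][x] is the reflected intensity with the focal plane at
    z_heights[k]; the surface at (y, x) is taken to lie at the height
    whose slice returned the maximum intensity (best focus)."""
    rows, cols = len(stack[0]), len(stack[0][0])
    heights = [[0.0] * cols for _ in range(rows)]
    for y in range(rows):
        for x in range(cols):
            # index of the slice with the strongest return at this pixel
            best_k = max(range(len(stack)), key=lambda k: stack[k][y][x])
            heights[y][x] = z_heights[best_k]
    return heights
```

A three-slice stack over a two-pixel image, with one pixel brightest in the lowest slice and the other brightest in the highest, reconstructs the two different surface heights directly; noise on the order of the striation depth, as noted above, would scramble exactly this argmax step.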
7–B PAST EFFORTS IN THREE-DIMENSIONAL IMAGING OF BALLISTICS EVIDENCE

Seeking a way to reinforce firearms identification with a quantifiable and objective basis, Davis (1958) developed an instrument he called a “striagraph.” The striagraph recorded measurements from a stylus riding around the circumference of a recovered bullet as the bullet was rotated, providing three-dimensional surface measurement of the path around the bullet. The striagraph was never developed commercially, in part because of two key limitations that continue to affect the use of stylus methods for forensic analysis today: the method was not applicable to deformed or fragmented bullets (which are not uncommon at crime scenes), and the direct contact of the stylus could scratch or mark the bullet, corrupting the evidence (Gardner, 1979). A similar stylus method, dubbed the Balid system, was described in a conference presentation and summarized in the second issue of the AFTE Newsletter in 1969. The destructive nature of stylus profiling methods was later demonstrated by Blackwell and Framan (1980), using scanning electron microscopy to illustrate the deformation caused by using a stylus-based “Profilcorder” to trace the circumference of several bullets.

Though stylus methods are infeasible for forensic analysis, methods for profilometry—the generation of one-dimensional vectors of height information—of ballistics evidence were pursued by later researchers. De Kinder et al. (1998) and De Kinder and Bonfanti (1999) analyzed bullet striations using three-dimensional profilometry; their scope used a reflected laser as a sensor and was capable of measuring height differences to 1 μm. De Kinder et al. (1998:299) performed preliminary analysis on 9mm Para bullets (including bullets from unfired rounds), as well as those fired through a Fabrique Nationale High Power pistol and recovered using either a water tank or cotton wool. They concluded by noting that “we hope to reduce [the disadvantage of lengthy data capture times] by setting up a procedure to extract a feature vector. This will probably no longer necessitate us to record the whole surface of a bullet, but only a few circumferences to obtain a representative data set of the surface topology.”

De Kinder and Bonfanti (1999) extended this work, taking 151 scans (0.05 mm apart) beginning approximately 1 mm from the end of the bullet, thus giving a set of profiles along a 7.5 mm patch. (The first 34 scans were later found to be relatively noninformative and were dropped from analysis.)
Each circumference measurement was taken with an overlap to account for striations split by the initial starting point. Data capture time was 4–5 hours per bullet, “which will be reduced by optimising the definition of the feature vector” (De Kinder and Bonfanti, 1999:87). To compare bullets, they constructed a correlation matrix consisting of the correlations between feature vectors for each of the land impressions (the bullets they studied had six land engraved areas). They took the trace of the resulting matrix as a summary measure for that specific alignment between the bullets; the six traces that arise from reordering the matrix for different land impression alignments were collected. They then compared the maximum value (the presumptive best match) to the average of the traces for noncorresponding alignments. So, for one case involving two bullets from the same gun, they found that the “sum of the correlation coefficients for corresponding striation marks” was 64 percent “higher than the average value for non-corresponding match.” In the sole case where they compared bullets from two different guns, the same factor came to 11 percent. They concluded that, of six cases where two bullets from the same gun were compared, “a well founded positive answer can be provided for about one in four cases, while for the two other comparisons, no clear answers can be given” (De Kinder and Bonfanti, 1999:92). In correlating the series acquired at different heights from the bullet base, they found that, “contrary to our expectations, optimal results (correlation coefficients larger than 80 percent) were not obtained for the scans closest to the back of the bullet, but for lines 80 to 100, corresponding to a distance to the base of about 2mm” (De Kinder and Bonfanti, 1999:89–90).

Also focused on the problem of analyzing bullet striations, Bachrach (2002) developed SciClops to acquire profiles of bullet striations. This platform used confocal microscopy to derive a linear, topographic trace around the circumference of a bullet. This work would ultimately be extended to include analysis of a rich set of test-fired bullets, using gun barrels from nine different manufacturers and including more than 200 firings through each (the barrels were cleaned at one point in the firing, to determine the effect of that action on observed striations). The research suggested a three-way gradation in terms of the propensity of manufactured barrels to leave detectable and reproducible marks. A middle range of barrels and manufacturers worked best for toolmark deposition. At one extreme were relatively cheap firearms barrels whose less precise manufacturing standards added randomness to the observed markings and precluded easy matching; at the other were extremely high-end barrels that were so finely polished and machined as to render toolmarks too subtle to readily distinguish.

Banno et al. (2004) acquired images from bullets (two from a Tanfoglio GT27 automatic pistol and another from a Browning model 1910 7.65mm) using a Lasertec HD100D-A confocal microscope.
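The correlation-matrix scoring scheme of De Kinder and Bonfanti described above can be sketched as follows. We assume six feature vectors per bullet, one per land impression; the function names and the use of a plain Pearson correlation are our own simplifications of their method. Under each cyclic alignment of the two bullets, the trace of the correlation matrix is the sum of correlations between corresponding land impressions; the best alignment’s trace is then compared against the average of the noncorresponding traces.

```python
def pearson(u, v):
    """Pearson correlation coefficient between two feature vectors."""
    n = len(u)
    mu, mv = sum(u) / n, sum(v) / n
    cov = sum((a - mu) * (b - mv) for a, b in zip(u, v))
    su = sum((a - mu) ** 2 for a in u) ** 0.5
    sv = sum((b - mv) ** 2 for b in v) ** 0.5
    return cov / (su * sv)

def alignment_scores(lands_a, lands_b):
    """For each cyclic rotation of bullet B's land impressions, return the
    trace of the correlation matrix: the sum of correlations between
    corresponding land impressions of A and B under that alignment."""
    k = len(lands_a)
    return [sum(pearson(lands_a[i], lands_b[(i + shift) % k])
                for i in range(k))
            for shift in range(k)]
```

The maximum of the returned scores is the presumptive best match alignment; comparing it to the mean of the remaining scores gives the kind of “64 percent higher” summary factor quoted above.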
This microscope is capable of measuring a 3.2mm × 3.2mm patch with 450 × 450 pixels, with 0.02 μm height resolution. Images of land engraved areas were generated by connecting a 4 × 4 set of separate patches. Software aligned the different three-dimensional renderings, and similarity was assessed by differencing the aligned images and visualizing the results, shaded to indicate whether differences were within a 0.015mm tolerance. Images for the bullets fired from the same Tanfoglio pistol showed generally strong similarity, with higher differences generated when comparing features from the Tanfoglio and Browning test fires. Banno et al. (2004:240) illustrate but do not extensively analyze use of this measurement for other surfaces, including cartridge case markings. “This algorithm did well” in comparing firing pin impressions for two cartridges from the same weapon; though there is difference in texture along the wall of the firing pin impression, the hollows of the interior of the marks overlap almost exactly.

Zographos et al. (1997) and Evans et al. (2004) advanced the “Linescan” system, a revised methodology for obtaining a composite two-dimensional image around the edge of a cylindrical shape, such as a cartridge case or bullet. The system acquires images in a small window while the object is turned, resulting in a continuous imaging process rather than a stitch-together of related images.

7–C EMERGING PLATFORMS FOR THREE-DIMENSIONAL IMAGING OF BALLISTICS EVIDENCE

In the past few years, Forensic Technology WAI, Inc. (FTI) has developed a bullet-only three-dimensional imaging system, dubbed BulletTRAX-3D.1 This new system “acquires two-dimensional and three-dimensional data in digital form from the entire bearing surface of a fired bullet to obtain its digital ‘signature’, specifically, a map of its surface topography” for a band around the bullet (Dillon, 2005:5). This differs from the standard Integrated Ballistics Identification System (IBIS) entry, which requires operators to specify and image the separate land engraved areas. Graphically, this image data can be rendered onscreen in layers and, notably, as a planar surface that can be rotated and lit (altering both direction and type of simulated lighting) to see striations in relief. A software module also attempts to detect and display bands of consecutive matching striations, an emerging standard for quantifying bullet comparisons (see Section 3–B.3). Like its two-dimensional counterpart in IBIS, the comparison algorithm utilized by BulletTRAX-3D is proprietary information.2 As such, it is unknown how it differs from the standard two-dimensional IBIS in its comparison routines. However, a reading of Roberge and Beauchamp’s (2006) analysis, described below, suggests that the types of scores returned by BulletTRAX-3D are similar to those returned by IBIS.
Roberge and Beauchamp (2006) report success in using FTI’s BulletTRAX-3D platform in a complicated test of bullet matching, making use of a set of 10 consecutively manufactured Hi-Point barrels. These button-rifled barrels are known to create major problems for direct visual comparison (see Section 2–D.1). Four bullets were fired through each barrel, and these were grouped into pairs; the objective was to match one group of 10 pairs (labeled 1–10) to the second (labeled A–K; an 11th pair with a different number of land impressions was inserted in this group). Roberge and Beauchamp (2006) exploited the pairwise nature of the test samples to create a training set of known matches; this gave them a sense of optimal “Max Phase” scores (see Chapter 4) to use as a decision rule and assign matches. Following the training phase, the testing was performed stage-wise—performing a set of comparisons, applying decision rules to pick out matches, removing those elements from the dataset, and repeating—until all assignments were made.

Though a caption in Dillon (2005:10) touted BulletTRAX-3D (and its companion MatchPoint Plus display stations) as “the latest configuration of IBIS”—suggesting a replacement of IBIS—the system was originally positioned as a counterpart to IBIS. However, FTI has recently indicated a shift of its product line to focus on three-dimensional platforms, repositioning the two-dimensional system currently deployed as the base for NIBIN as the “IBIS Heritage” branch (see Box 4-1). Promotional materials for the three-dimensional systems emphasize that they are backward-compatible with the older two-dimensional systems; photographs are taken during the two-dimensional acquisition process and are offered as a layer that can be viewed onscreen in the three-dimensional system, so that photographs can presumably be subjected to the existing two-dimensional comparison process. It is unknown what changes have been made to account for three-dimensional measurement information in generating comparison scores in these new systems.

1 The exact rendering of the name of the system varies. The promotional brochure for the system uses a logo that depicts the “3D” part of the name in superscript—BULLETTRAX3D—but describes the system in text as BULLETTRAX-3D. However, Dillon (2005) and Roberge and Beauchamp (2006) used mixed case, calling it BulletTRAX-3D.

2 Dillon (2005:15) makes the remarkable statement that “the search algorithms employed . . . are proprietary in nature and not of direct interest to the firearms examiner.” Dillon suggests that “the examiner is less concerned with the search algorithms and much more concerned with the bottom line represented by the system’s list of high probability associations with other cases,” though how one can be confident in the “high probability” of suggested associations without any understanding of the algorithm’s process is not specified.
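The stage-wise procedure Roberge and Beauchamp describe (perform comparisons, apply a decision rule to pick out matches, remove the matched exhibits, and repeat) can be sketched as a greedy loop over a similarity matrix. This is a toy version of our own; their actual decision rules on “Max Phase” scores are more involved, and the score values below are invented.

```python
def stagewise_match(score, threshold):
    """Greedy stage-wise assignment: repeatedly accept the highest
    remaining comparison score that clears the decision threshold,
    remove both matched exhibits from the pool, and repeat until no
    comparison qualifies. score[i][j] is the similarity between probe i
    and candidate j; returns a {probe: candidate} mapping."""
    open_probes = set(range(len(score)))
    open_cands = set(range(len(score[0])))
    matches = {}
    while open_probes and open_cands:
        best, best_val = None, threshold
        for i in open_probes:
            for j in open_cands:
                if score[i][j] >= best_val:
                    best, best_val = (i, j), score[i][j]
        if best is None:  # nothing left clears the threshold
            break
        i, j = best
        matches[i] = j
        open_probes.remove(i)
        open_cands.remove(j)
    return matches
```

Removing matched exhibits at each stage matters: in the example below, probe 1’s highest raw score points at candidate 0, but candidate 0 is claimed by the stronger probe-0 match first, so probe 1 settles on its own correct pairing.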
