
IDs -- Not That Easy: Questions About Nationwide Identity Systems (2002)

Chapter: 3 Technological Challenges

Suggested Citation: "3 Technological Challenges." National Research Council. 2002. IDs -- Not That Easy: Questions About Nationwide Identity Systems. Washington, DC: The National Academies Press. doi: 10.17226/10346.


3 Technological Challenges

Though the aim of this short report is merely to point out some of the essential policy questions that would be raised by the introduction of a nationwide identity system, the committee believes that the technological and implementation challenges raised—even without a precise characterization of such a system's goals or subsequent policies—are enormous and that they warrant significant and serious analysis. This need becomes clear when an ID is understood as an element of a much larger system that includes technical, material, and human elements. At a minimum,

• Cards and card readers (if used for validation) would need to be designed, fabricated, distributed, and updated or otherwise maintained or replaced.
• A corresponding (backend) database would need to be established, maintained, and protected.
• Procedures for checking the authenticity of IDs and for verifying the presenter (with or without specialized equipment) would need to be established, promulgated, practiced, and audited.[1]
• Means to discover, report, verify, and authoritatively correct mistakes would need to be put in place.
• A variety of security measures would need to be factored into all aspects of the system to be sure that it meets its objectives and is not vulnerable to things such as fraud or denial-of-service abuses that can result in privacy violations.

[1] Association of an identity card with its holder has to be verified before the identity information it contains can be relied upon (otherwise, stealing the card would permit the theft of the cardholder's identity).

Fraud (and security in general) is a significant concern in any system, even the most technologically sophisticated.[2] The nationwide scale of such a system would require knowing that all aspects of the system are scalable—a daunting problem for lesser systems.[3] In any case, the challenges of building robust and trustworthy information systems—they are extensive and well-documented[4]—are accompanied by the even greater challenge of making the systems resistant to attacks by well-funded adversaries.

Architectural issues include the degree of centralization of the underlying databases as well as the location and cost of data storage, computation, and communication, which can all be done at different places.[5] For example, how would authorized entities obtain the records they wanted, under what circumstances, and with what degree of authorization? Would there be daily or weekly downloads of selected records to more permanent storage media? Would a real-time network feed be required (perhaps similar to those used in real-time credit authorization systems)? Would it be possible to secure such a feed sufficiently?[6]

Choices among architectural options, as well as other options, would depend on the functional goal(s) of the system. Architecture influences scalability, cost, and usability/human factors. It also interacts with procedure: decisions must be made about who would be in charge of issuing, reissuing, renewing, revoking, and administering the cards, along with maintaining, updating, and granting access to the database.

[2] A large breach of security with French banking cards is causing a significant upgrade of the infrastructure in France (<http://parodie.com/english/smart card.htm>). In the United States, satellite-signal theft by smart card fraud is so extensive that it is now the focus of a government sting operation. See Ross Anderson's work on cryptography and security, much of which is available at <http://www.cl.cam.ac.uk/users/rja14/>.
[3] CSTB's 2000 report Making IT Better underscores the profound challenges associated with large-scale systems.
[4] See the CSTB reports Computers at Risk (1990), Trust in Cyberspace (1999), Making IT Better (2000), and Embedded, Everywhere: A Research Agenda for Networked Systems of Embedded Computers (2001).
[5] A general rule is that the lower the cost of accessing an online database and the larger the likelihood of doing so, the less sophisticated the card needs to be.
[6] Such security might require a very large new network that would have to be connected inside the firewalls of the institutions and organizations using the system. Securing such a network is extremely difficult; experience suggests that maintaining that security would be very challenging.
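To make the question of a real-time network feed concrete, here is a minimal sketch of what a point-of-verification client could look like if an online check were chosen, in the spirit of real-time credit authorization. Everything in it is an assumption for illustration: the endpoint URL, the request fields, and the response format are invented and are not part of any actual proposal.

```python
import json
import urllib.request

# Hypothetical verification endpoint; no such service actually exists.
VERIFY_URL = "https://id-backend.example.gov/verify"

def verify_online(id_number: str, checkpoint: str, timeout_s: float = 2.0) -> str:
    """Ask the backend whether the presented ID is valid.
    Returns 'valid', 'invalid', or 'unavailable' (network failure or timeout)."""
    payload = json.dumps({"id_number": id_number, "checkpoint": checkpoint}).encode()
    request = urllib.request.Request(VERIFY_URL, data=payload,
                                     headers={"Content-Type": "application/json"})
    try:
        with urllib.request.urlopen(request, timeout=timeout_s) as response:
            return json.load(response).get("status", "invalid")
    except OSError:
        # Covers connection errors and timeouts: the checkpoint needs a
        # defined fallback (e.g., manual inspection) when the feed is down.
        return "unavailable"

print(verify_online("A1234567", checkpoint="airport-BOS"))  # likely 'unavailable' here
```

The explicit "unavailable" outcome reflects the availability and fallback concerns taken up later in this chapter: an online architecture makes every checkpoint dependent on the network being both reachable and trustworthy.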

A further concern is the need for graceful recovery from failure as well as substitute mechanisms when the system is compromised or not adequately responsive at the time verification of an identity is needed. All of these factors influence cost, as well as effectiveness.

Cost needs to be analyzed completely, on a life-cycle basis and with attention to numerous trade-offs. Even if software and hardware costs are minimized, experience with lesser systems—from SSNs to state drivers' licenses to military identification systems—shows that there will be significant ongoing administrative costs for training, issuing cards, verification, maintenance (keeping whatever information is associated with an individual and his or her ID up to date), and detection and investigation of counterfeiting.[7] In particular, the costs—and technological and administrative complexity—of assuring the integrity and security of an identity infrastructure are likely to be large. They would depend in part on whether technology for automated checking of an ID—as opposed to a visual check used today with SSNs or drivers' licenses—is required, which in turn depends on the choice of ID technology (see Box 3.1).

For example, in response to legislation enacted in August 1996,[8] the Social Security Administration (SSA) conducted an analysis and produced a report on options for enhancing the Social Security card.[9] Citing a number of key business and technology assumptions that appeared valid at the time of the study (1997), SSA estimated that issuing enhanced cards might have a life-cycle cost of $5.2 billion to $10.5 billion, depending on the technology developed and deployed. These estimates included assumptions about the need for reissuing cards, issuing new cards, and maintaining the systems in order to store data related to the cards and keep that data up to date. The study did not assume that each SSN and its related card would relate to just one individual, because SSA estimated that at the time, approximately 10 million of the 269 million valid SSNs were duplicates (that is, two or more persons had been given the same SSN). There was a variety of reasons for such duplication, including error on the part of SSA and malfeasance on the part of some individuals.

[7] There are numerous ways in which fraudulent ("novelty") identification documents can be obtained. A simple Web search on "fake id" provides links to many possible suppliers.
[8] Section 111 of P.L. 104-193, "Personal Responsibility and Work Opportunity Reconciliation Act of 1996" (Welfare Reform), and section 657 of P.L. 104-208, Division C, "Illegal Immigration Reform and Immigrant Responsibility Act of 1996" (Immigration Reform).
[9] See Report to Congress on Options for Enhancing the Social Security Card, Social Security Administration, Publication No. 12-002, September 1997. Available at <http://www.ssa.gov/history/reports/ssnreport.html>.
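The SSA figures above are the output of exactly the kind of life-cycle model any proposal would need. The fragment below shows the shape of such a back-of-envelope calculation; every parameter is an invented placeholder, not a figure from the SSA study or from this report.

```python
# All parameters are hypothetical placeholders for illustration only.
population          = 280_000_000   # cards to issue initially
cost_per_card       = 8.00          # production plus initial issuance, per card
reissue_rate        = 0.10          # fraction of cards replaced per year (lost, expired, changed data)
admin_cost_per_card = 1.50          # yearly record maintenance, help desk, fraud investigation
system_years        = 10            # life-cycle horizon

initial_issuance = population * cost_per_card
annual_running   = population * (reissue_rate * cost_per_card + admin_cost_per_card)
life_cycle_cost  = initial_issuance + system_years * annual_running

print(f"Initial issuance:        ${initial_issuance / 1e9:.1f}B")
print(f"Annual running cost:     ${annual_running / 1e9:.1f}B")
print(f"{system_years}-year life-cycle cost: ${life_cycle_cost / 1e9:.1f}B")
```

With these placeholder numbers the ongoing administrative costs dominate the initial issuance cost, which is consistent with the chapter's emphasis on significant ongoing administrative burdens rather than hardware alone.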

As with the design of any system, decisions about trade-offs would need to be made in advance. The security, efficiency, and effectiveness options chosen would depend on the goals and policies (see Chapter 2) and the planned uses of the system. For example, a "trusted traveler" system whose sole function was to authenticate individuals who had been previously certified as "trusted" in the particular context of travel might place more emphasis on efficiency in travel-related queues and on eliminating false positives than on protecting the fact that a particular person has been certified as trusted (or untrusted). A secure driver's license system, in which the license is used as an ID for many activities beyond driving on a public roadway, might trade ease of replacing a lost license against the rigorous authentication of individuals who request a replacement. In making decisions about trade-offs, understanding the potential threats and risks will be a large component of assessing the security requirements of a system.

BINDING PERSONS TO IDENTITIES

A practical issue that would arise in a card-based identity system is that of relating cards and identities to individuals: How would the issuing authorities create this binding? Most of the systems (both hypothetical and actual) alluded to in this report employ what is known as two-factor authentication, requiring the holders to present more information than the card itself (perhaps a face that matches the picture, a PIN, or a thumbprint) to verify that they are the legitimate holders.

If someone has a valid card, how would anyone know that it belongs to him or her? A picture on the front of the card would not be sufficient if very high assurance is sought.[10] If the card makes use of a magnetic stripe, it would be easy to copy the stored information to a new card with a different picture. If the card is a memory card or smart card, duplication, while a little more difficult, would still be possible. If biometric information[11] is used, it could be stored on the card, and a "live capture" of the biometric could be carried out when an individual presents the card; the captured data would then be compared with the data stored on the card.

[10] The inability of human inspectors to reliably match faces to cards was demonstrated in Pike, Kemp, and Brace, "Psychology of Human Face Recognition," IEEE Conference on Visual Biometrics, London, March 2, 2001.
[11] There are a number of biometrics that might be used; for the purposes of this discussion, assume an iris scan or fingerprint.
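To make the two-factor idea concrete, the sketch below shows how a reader might check both the card's stored data and a second factor supplied by the holder (here a PIN). It is only an illustration under assumed conventions: the record layout, the PIN-hashing scheme, and the helper names are invented, and a real deployment might instead use a biometric or a cryptographic challenge as the second factor.

```python
import hashlib
import hmac
import os

def hash_pin(pin: str, salt: bytes) -> bytes:
    # Derive a salted hash of the PIN so the card never stores the PIN itself.
    return hashlib.pbkdf2_hmac("sha256", pin.encode(), salt, 100_000)

def verify_holder(card_record: dict, presented_pin: str) -> bool:
    """Two-factor check: something the holder has (the card record)
    plus something the holder knows (a PIN). Hypothetical record layout."""
    candidate = hash_pin(presented_pin, card_record["pin_salt"])
    # Constant-time comparison to avoid leaking information via timing.
    return hmac.compare_digest(card_record["pin_hash"], candidate)

# Illustrative enrollment followed by two verification attempts.
salt = os.urandom(16)
card = {"id_number": "A1234567", "pin_salt": salt, "pin_hash": hash_pin("4821", salt)}
print(verify_holder(card, "4821"))   # True: correct second factor
print(verify_holder(card, "9999"))   # False: card alone is not enough
```

The essential property is that possession of the card alone does not suffice; the verifier requires corroborating evidence bound to the enrolled holder.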

BOX 3.1 Cards and Their Requirements

The presumed goal of a counterfeit-resistant, long-lasting, easily replaceable ID presents difficult technical challenges. With respect to the ID itself, assuming that it is a physical artifact such as a card, a number of questions need to be answered.[1] Form factors—the size, shape, and substance of the card—would likely play a part both in acceptance on the part of the citizenry and in the card's resistance to counterfeiting. The more difficult challenges pertain to the aspects of cards that are determined by the kind of technology used.

One could use a relatively simple card, like a credit card or driver's license. Each individual in the system would have a card with some information printed on it about the holder and perhaps a picture. There might be a unique number on the card, and the information in a nationwide database would be indexed by that number. The card itself might contain a magnetic stripe along with embossed and printed data. As with a driver's license or passport, access to this database (for reading data out or putting data in) would presumably be limited,[2] as it would be under the proposal by the American Association of Motor Vehicle Administrators to create nationwide standards for driver's licenses.

On the other hand, the counterfeiting of magnetic stripe cards is a trivial undertaking.[3] More important, the ease with which the information contained in the magnetic stripe can be duplicated means that a counterfeiter can produce a clone card and/or retransmit the data in other transactions as if they came from a legitimate card. All of this implies serious security and privacy vulnerabilities, and there is no verifiable connection (by means of biometrics, for example) between the holder of the card and the person to whom the card was issued. Hence, using such credentials as a basis for issuing new cards (and, ergo, identities) would compromise the accuracy of some of the identification data, inasmuch as the credentials depend on attestations by the individual or even third parties.[4]

[1] The Department of Defense is now deploying a smart card that it refers to as a common access card (CAC) as an authentication device and for other purposes. The card combines a magnetic stripe, bar code, a photo ID, and smart card technology. DOD's experiences may well prove instructive when considering a nationwide system. However, the privacy concerns of military employees are likely to be different from those of average citizens, making an exact analogue unlikely. In addition, the CAC will be deployed for a population that is more than an order of magnitude smaller than the U.S. population, which is more diverse in many dimensions than the military and currently less subject than the military to sanctions for failure to comply with the identification system's requirements.
[2] On November 2, 2001, the Washington Post reported that the American Association of Motor Vehicle Administrators was working on a plan to link all driver databases and to strengthen the security and functionality of current driver's licenses and state identification cards. See <http://www.washingtonpost.com/wp-dyn/articles/A32717-2001Nov2.html>. See also <http://www.aamva.org/standards/stdAAMVADLIstandard2000.asp> for a description of AAMVA's standard, which aims to provide a uniform means for identifying holders of driver's licenses throughout North America.
[3] See, as just two of many examples, "Skim Artists Can Swipe Your Credit," at <http://www.techtv.com/cybercrime/internetfraud/story/0,23008,2583624,00.html> and "Newly Discovered Bug Skims Credit Card Data," at <http://www.newsfactor.com/perl/story/11494.html>.
[4] Note that the existing identification infrastructure (including the system of birth and death records) in the United States often depends on the presentation of credentials and is highly decentralized. The lack of common national standards generates skepticism about the quality of the data.

Another possibility is a memory card (or storage card), which would hold more information and be more expensive than the magnetic-stripe cards of the previous example.[5] These cards contain memory as well as some security logic to prevent unauthorized reading or tampering with their data. The information contained on them could be digitally signed (that is, a number would be associated with that information that is dependent on a secret known only to the signer as well as on the data itself) to prevent easy counterfeiting. The correspondence between the user and the card (along with the information on the card and in the database) could be ascertained through biometric authentication, which would be undertaken using special equipment—such as a reader for fingerprints or iris scans—in addition to presentation of the card.

An additional possibility is to use smart card technology that permits computation (such as digital signatures and encryption) to take place on the card itself. Though successful attacks have taken place, these cards are even harder to counterfeit than memory cards. They might have a name, photo, number, and biometric data, all of which could be cryptographically signed. The data would be backed up in a database to enable checking when reissuing a card and checking for duplicates when the card is first issued. A card of this sort could engage in a real-time, cryptographic exchange with an online system to verify a user's identity—possibly without exposing details of that identity to the organization performing the data capture—for example, an airline or a retail establishment. (A minimal sketch of signing and verifying card data in this fashion follows this box.)

As an example of a card-based system using biometrics, consider the Connecticut Department of Social Services, which issues cards to aid welfare recipients.[6] Fingerprints of each applicant are taken and compared with the fingerprints of all applicants previously enrolled. Under the assumption that people are not modifying their fingerprints (and assuming no matching errors), this can prevent a single user from registering under multiple identities within the system. The card is printed with the fingerprints encoded in a two-dimensional optical bar code on the front of the card. At point-of-service applications, the user presents a fingerprint that is compared with that encoded on the card. This prevents multiple users from making use of a single identity. Other biometric technologies, such as iris recognition, might be useful in this application as well. However, no biometric technology is completely invulnerable to attacks by sophisticated adversaries.[7],[8]

[5] One example is the INSPASS and the data stored on it, coupled with a hand-geometry reader at point of entry to verify identity. Another example is a German identification card, die Karte, which uses two separate smart card chips and contains 22 separate mechanisms for card validation/antifraud technology.
[6] For a discussion of the costs associated with identification cards and fingerprints in social service applications, see "A Review of Five Cost/Benefit Studies of Fingerprinting in Social Service Applications," Roger Salstrom, Burton Dean, and James Wayman, available at <http://www.dss.state.ct.us/digital/news22/bhsug22.htm>.
[7] T. van der Putte and J. Keuning, "Biometrical Fingerprint Recognition: Don't Let Your Fingers Get Burned," Proceedings of IFIP TC8/WG8.8 Fourth Working Conference on Smart Card Research and Advanced Applications, Kluwer Academic Publishers, September 2000, pp. 289-303. Also see T. Matsumoto et al., "Impact of Artificial 'Gummy' Fingers on Fingerprint Systems," Proceedings of the SPIE, vol. 4677 (January 2002), and D. Maio, D. Maltoni, J. Wayman, and A. Jain, "FVC2000: Fingerprint Verification Competition 2000," Proceedings of the 15th International Conference on Pattern Recognition, Barcelona, September 2000, available online at <http://bias.csr.unibo.it/FVC2000/>.
[8] D. Willis and M. Lee, "Six Biometric Devices Point the Finger at Security," Network Computing, June 1, 1998.
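As a concrete illustration of the digitally signed card data described in Box 3.1, the sketch below signs a card record with a private key held by an issuing authority and lets a reader verify it offline with the corresponding public key. The tooling is assumed, not prescribed by the report: it uses the open-source Python cryptography package and an Ed25519 key pair, and the record fields are invented.

```python
import json
from cryptography.hazmat.primitives.asymmetric import ed25519
from cryptography.exceptions import InvalidSignature

# Hypothetical issuing authority: generates a signing key pair once and
# distributes only the public key to card readers.
issuer_key = ed25519.Ed25519PrivateKey.generate()
issuer_public = issuer_key.public_key()

def sign_record(record: dict) -> bytes:
    """Serialize the card record deterministically and sign it."""
    payload = json.dumps(record, sort_keys=True).encode()
    return issuer_key.sign(payload)

def verify_record(record: dict, signature: bytes) -> bool:
    """A reader checks that the record was not altered since issuance."""
    payload = json.dumps(record, sort_keys=True).encode()
    try:
        issuer_public.verify(signature, payload)
        return True
    except InvalidSignature:
        return False

card = {"id": "A1234567", "name": "J. Doe", "biometric_hash": "placeholder-digest"}
sig = sign_record(card)
print(verify_record(card, sig))                    # True: record intact
print(verify_record({**card, "name": "X"}, sig))   # False: tampered record
```

In a deployed system the signing key would live in tamper-resistant hardware and the public key would be distributed to readers, for example through the public key infrastructure discussed later in this chapter; the point here is only that a reader can detect alteration of a card's data without querying a database.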

Depending on what kinds of cryptographic protections are used, such a card-plus-biometric scheme could be susceptible to forgery as well—for example, someone might recreate the card with his or her own biometric information in combination with another person's identity information. Another scenario might be to have the person present a biometric to a controlled scanner and present the card that contains reference information. Both pieces of information are then validated in combination against a backend server. However, this creates a requirement for high availability (that is, the system should be usable essentially all of the time) and a dependence on reliable, secure network and communications infrastructures.

In principle, a card coupled with biometrics and the appropriate infrastructure for reading and verifying biometric data may offer the greatest confidence with respect to linking persons and their cards. But getting biometrics technology right (including control of the risks of compromise) and widely distributed is not easy.[12],[13] There are additional issues associated with the use of biometrics, such as some popular resistance.[14] Note that biometrics allows for cardless system options: A database-only system based solely on biometrics eliminates the risk of card loss or theft, but real-time database accessibility then becomes a major consideration. In addition, compromise of the database is an even greater concern than in card-based systems, where the cards can be used to provide a check against corrupted data in the database. Further, a cardless system implies that anyone wishing to use the system (even for activities needing only moderate to low levels of security) would have to invest in the equipment needed to access the infrastructure in real time.

[12] "Advice on the Selection of Biometric Products: Issue 1.0," (U.K.) Communication Electronic Security Group, November 23, 2001, available at <http://www.cesg.gov.uk/technology/biometrics>.
[13] J.D.M. Ashbourn, Biometrics: Advanced Identity Verification: The Complete Guide, Springer, London, 2000.
[14] For further information, see the recent RAND report Army Biometric Applications: Identifying and Addressing Sociocultural Concerns, 2001. In addition, an accuracy issue arises with biometrics because it uses what are known as probabilistic measures of similarity. No two images of the same biometric pattern (even fingerprints) from the same person are exactly alike. Consequently, biometrics is based on pattern-matching techniques that return sufficiently close measures of similarity. With enough (or not enough) information about the application environment and user population, it is possible to convert those measures into probabilities of a match or nonmatch. Thus, incorrect decisions occur randomly with a probability that can be measured.
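Because biometric matching is probabilistic (see note [14] above), a reader never tests for exact equality; it computes a similarity or distance score and compares it with a tunable threshold. The following toy sketch uses a Hamming-distance comparison of fixed-length binary templates; real systems use far more elaborate feature extraction and matching, and the threshold value shown is arbitrary.

```python
def hamming_distance(a: bytes, b: bytes) -> int:
    """Count differing bits between two equal-length binary templates."""
    return sum(bin(x ^ y).count("1") for x, y in zip(a, b))

def matches(stored_template: bytes, live_template: bytes,
            threshold_fraction: float = 0.25) -> bool:
    """Accept if the fraction of differing bits is below the threshold.
    Lowering the threshold reduces false accepts but raises false rejects."""
    total_bits = len(stored_template) * 8
    return hamming_distance(stored_template, live_template) / total_bits < threshold_fraction

# Toy templates: the live capture differs slightly from the enrolled one.
enrolled = bytes([0b10110010, 0b01100111, 0b11110000, 0b00001111])
live_ok  = bytes([0b10110010, 0b01100110, 0b11110000, 0b00001111])  # 1 bit differs
impostor = bytes([0b01001101, 0b10011000, 0b00001111, 0b11110000])  # mostly differs

print(matches(enrolled, live_ok))    # True  (1 of 32 bits differs)
print(matches(enrolled, impostor))   # False (most bits differ)
```

Where to set the threshold is precisely the false-acceptance versus false-rejection trade-off taken up next.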

Cryptographic protection and digital signatures, in combination with offline verification of the signature and a properly deployed public key infrastructure (PKI),[15] could provide a measure of protection for the information associated with IDs and guard against misuse. But for any technology, some degree of imperfection will exist. Therefore, it is necessary to decide on thresholds for false rejection rates (false negatives) and false acceptance rates (false positives), not only for when the ID is used but also at the time of issuance, reissuance, and renewal. Policy decisions—perhaps with corresponding legal backing—need to be made about what happens in the event of a false negative or false positive. Creation of exception-handling procedures for dealing with incorrect decisions opens up additional vulnerabilities for the system, as impostors might claim to have been falsely rejected and request handling as an exception.

BACKEND SYSTEMS

Once methods are in place to satisfactorily link persons to IDs, the requirements and goals of the system should drive decision making about associated databases. The databases' principal features are likely to include an ability to search based on an ID number or other unique identifier, various ID attributes, and possibly biometric data. Depending on whether tracking and prediction are requirements of the system, significant logging, auditing, and data mining capabilities would be needed as well.

Key issues related to this part of the system stem from both structural and procedural decisions. If the database needs to be readily accessible from remote locations (which is likely), it would almost certainly need to be replicated. This, in combination with its perceived (and actual) value and the fact that more people over a more widespread area would be likely to have authorized access to the system, makes it even more vulnerable to break-ins: by physically accessing one of the sites, by finding some communications-based vulnerability, or by bribing or corrupting someone with access to the system. Moreover, if verification of identity required an online database query at airports, a handful of "accidents" at key places around the country (such as wires being cut at critical points in a way that appears accidental) could cripple civil aviation and any other commerce that required identity verification (for example, purchase of guns or certain chemicals).

[15] The committee's final report will examine PKI and other authentication technologies in detail.
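Choosing the false-rejection and false-acceptance thresholds discussed above is usually done empirically: given sample match scores from genuine holders and from impostors, one can tabulate the error rates that each candidate threshold would produce. The sketch below does this for made-up score lists; the numbers are purely illustrative and not measurements from any real biometric system.

```python
def error_rates(genuine_scores, impostor_scores, threshold):
    """Scores are similarities in [0, 1]; accept when score >= threshold.
    Returns (false_rejection_rate, false_acceptance_rate)."""
    frr = sum(s < threshold for s in genuine_scores) / len(genuine_scores)
    far = sum(s >= threshold for s in impostor_scores) / len(impostor_scores)
    return frr, far

# Hypothetical similarity scores from trial enrollments.
genuine  = [0.91, 0.88, 0.95, 0.79, 0.84, 0.93, 0.72, 0.89]
impostor = [0.31, 0.44, 0.52, 0.28, 0.61, 0.37, 0.48, 0.75]

for t in (0.5, 0.7, 0.8):
    frr, far = error_rates(genuine, impostor, t)
    print(f"threshold={t:.1f}  FRR={frr:.2f}  FAR={far:.2f}")
# Raising the threshold lowers FAR (fewer impostors accepted)
# but raises FRR (more legitimate holders rejected).
```

Where on that curve to operate, and what recourse a falsely rejected person has, are exactly the policy decisions the report flags.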

Note that availability would be a key aspect of any online component of a nationwide identity system. While the desire for cost savings might lead to such a backend system being accessible via the public Internet (as opposed to a dedicated network), this would expose the system to yet more attacks, both direct and indirect, on shared infrastructure, such as the routing systems and hardware, the domain name system, or shared bandwidth. As noted previously, it has proven extremely difficult to secure systems that utilize the Internet; a nationwide identity system would likewise need to be widely accessible and would inevitably be the target of malicious attacks as well as subject to unintentional or incidental damage. Failure modes of the system would have to be very carefully studied, and backup plans and procedures would have to be designed and tested for all critical systems that depend on use of the nationwide identity system.

A further complication would result if it were decided that different users should be granted different levels of access to the database, whether for aggregated data or information about individuals. This raises query capability, access control, and security issues. Related to the size of the user base (that is, those who use the identity system to make some sort of determination about an individual) is the question of whether the same security measures need to hold for each user. For example, if the system were used broadly in the private sector, a clerk at a liquor store might be relatively less concerned about detecting counterfeit cards than would be an intelligence or law-enforcement agent granting access to national security-related sites or information. In addition, the clerk would need less information (for example, age of individual is greater than 21) verified through the system than would the agent.

It is a significant challenge to develop an infrastructure that would allow multiple kinds of queries, differing constraints on queries (based on who was making them), restrictions on the data displayed to what was needed for the particular transaction or interaction, and varying thresholds for security based on the requirements of the user. Determining the scope of use and the breadth of the user population in advance would dictate which functionalities are needed.

A further challenge resulting from a wide variety of users and uses is data integrity. Different users (even if the system were used only by agencies within the government) would undoubtedly have different perceptions of how critical the accuracy of the data is. Therefore, to maintain the quality of the data, controls over who could input data, and with what degree of specificity and security, must also be a factor in the design of the system.
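One way to support the differing needs sketched above is to answer narrow, role-specific questions rather than return full records: the liquor-store clerk learns only whether the holder is over 21, while an authorized agent can see more. The sketch below is a hypothetical illustration of that idea; the roles, field names, and policy table are invented for the example.

```python
from datetime import date

# Hypothetical policy: which attributes (or derived answers) each role may see.
ROLE_POLICY = {
    "retail_clerk": {"over_21"},
    "law_enforcement": {"over_21", "name", "id_number", "photo_ref"},
}

def answer_query(record: dict, role: str, requested: set) -> dict:
    """Return only the fields the role is allowed to see, and derive a
    yes/no answer instead of exposing raw data where possible."""
    allowed = ROLE_POLICY.get(role, set()) & requested
    response = {}
    for field in allowed:
        if field == "over_21":
            # Approximate age check for illustration only.
            age_years = (date.today() - record["birth_date"]).days // 365
            response["over_21"] = age_years >= 21
        else:
            response[field] = record[field]
    return response

record = {"id_number": "A1234567", "name": "J. Doe",
          "birth_date": date(1980, 5, 17), "photo_ref": "img-0042"}

print(answer_query(record, "retail_clerk", {"over_21", "name"}))
# Only the over-21 answer is returned; the name is withheld from this role.
print(answer_query(record, "law_enforcement", {"name", "id_number"}))
# Both requested fields are returned for the more privileged role.
```

Restricting what each query can reveal is one form of the "restrictions on the data displayed" the report calls for, and it also limits what a careless or compromised user can leak.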

Another necessary component of system and data integrity is auditing capability. Keeping track of who has accessed what parts of the system and which data would be necessary for reasons of technology (to track down errors and bugs, for example) and liability.[16]

Procedurally, such a large system would require many people to be authorized to maintain and administer it. Even if perfect technological security were achievable, there would still be the security risk of compromised insiders, given the very large numbers of people needed to maintain and administer the system.[17],[18] The human factor would also be an issue with regard to data entry and possible errors in the database. This is well known among statisticians, and various technical and procedural steps can be taken to offset risks of inaccuracy. In general, therefore, correction mechanisms would need to be created; however, these mechanisms provide additional opportunity for fraud. Given the uses to which such a system is expected to be put, however, and potential impacts on individuals' reputations and freedom to function as social and economic actors, mechanisms that allow individuals to know what is in the database and to contest and/or correct alleged inaccuracies would be desirable and politically essential (and, if run by the federal government, legally required). While such mechanisms can be found in credit-reporting and medical databases,[19] the law-enforcement and national-security frameworks that are motivating proposals for a nationwide identity system pose unique accessibility and disclosure challenges.

Another concern is that depending solely on feedback from participants to correct inaccuracies would catch only a fraction of the errors. People may tend to notice and report only those errors that interfere with something they are attempting to accomplish. An incorrectly entered birth date, for example, may not be noticed or corrected for decades and may only come to light when the person applies for, say, Medicare. An accumulation of latent errors is inevitable and leads to at least two problems: (1) by the time the error is discovered it may be hard to locate the information needed to verify the claim of error, and (2) the act of making the correction may interfere with or delay some action that should be allowed by the system. Creating a workable nationwide identity system that can compensate in effective ways for these inevitabilities is clearly a nontrivial task.

[16] Indeed, major federal agencies such as the Internal Revenue Service have run into problems with tracking and controlling access to information. For a discussion of this as it relates to privacy, see Peter P. Swire, "Financial Privacy and the Theory of High-Tech Government Surveillance," Washington University Law Quarterly 177(2):461-512 (1999).
[17] CSTB held a planning meeting on the topic of the insider threat in late 2000. For more information, see <http://www.cstb.org/web/whitepaper_insiderthreat>.
[18] The President's Commission on Critical Infrastructure Protection at <http://www.ciao.gov/PCCIP/PCCIP_Report.pdf> discusses cyberthreats, including the insider threat. Fortune has examined the cost of insider attacks online at <www.fortune.com/sitelets/sections/fortune/tech/2001_01esecurity2.html>.
[19] See, for example, Computer Science and Telecommunications Board, For the Record: Protecting Electronic Health Information, National Academy Press, Washington, D.C., 1997.
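Returning to the auditing capability mentioned at the start of this subsection: one common approach is to route every database access through a wrapper that appends the requester, the record touched, the stated purpose, and a timestamp to an append-only log, with each entry chained to the previous one so later tampering is detectable. The sketch below is a minimal, hypothetical illustration of that idea, not a design taken from the report.

```python
import hashlib
import json
import time

audit_log = []  # append-only; each entry chains to the previous one

def log_access(requester: str, record_id: str, purpose: str) -> None:
    """Append a hash-chained audit entry so later tampering is detectable."""
    prev_hash = audit_log[-1]["entry_hash"] if audit_log else ""
    entry = {"time": time.time(), "requester": requester,
             "record_id": record_id, "purpose": purpose, "prev_hash": prev_hash}
    entry["entry_hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()).hexdigest()
    audit_log.append(entry)

def lookup(database: dict, record_id: str, requester: str, purpose: str):
    """Every read goes through this wrapper, so no access escapes the log."""
    log_access(requester, record_id, purpose)
    return database.get(record_id)

db = {"A1234567": {"name": "J. Doe"}}
lookup(db, "A1234567", requester="agent-17", purpose="airport check-in")
print(len(audit_log), audit_log[0]["requester"])   # 1 agent-17
```

Such a log serves both the debugging and the liability purposes noted above, and it gives some visibility into insider misuse, since administrators' own accesses are recorded as well.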

DATA CORRELATION AND PRIVACY

A key question about a nationwide identity system database is whether it would be designed to foster consolidation of other (especially federal) databases—or whether that might happen as a side effect. Either way, proponents note that this would make information sharing among intelligence and law-enforcement agencies easier,[20],[21] although the committee believes that it could also carry significant risks.

A centralized, nationwide identity system essentially offers adversaries a single point of failure and presents an attractive target for identity theft and fraud. The more valuable the information in the database and the credentials associated with an identity, the more they become a target for subversion. Unauthorized access might be sought by terrorists, stalkers, abusive ex-spouses, blackmailers, or organized crime. Furthermore, to the extent that important activities become dependent on the system, the system becomes an attractive target for denial-of-service attacks. Implementing a secure and reliable nationwide identity system that is resistant to credential theft or loss,[22] fraud, and attack is a significant technological challenge, with ancillary procedural challenges.

Related to consolidation, information correlation is facilitated by systems in which one individual has exactly one identity. This has both negative and positive implications. Such a system is useful for predicting or detecting socially detrimental activities, because it avoids the uncertainty and confusion that may arise from multiple identities (notwithstanding that multiple identities can serve useful and socially desirable purposes, as described previously).

[20] A forthcoming CSTB report will explore issues on critical information infrastructure protection and the law, including a preliminary analysis of the issue of information sharing between the public and private sectors. For more information, see <http://www.cstb.org/web/project_cip>.
[21] See, for example, Larry Ellison's October 8, 2001, article in the Wall Street Journal, "Digital IDs Can Help Prevent Terrorism," and Cara Garretson's December 2001 article in CIO, "Government Info Sharing Key to Fighting Terrorism," at <http://www.cio.com/government/edit/122001_share.html>.
[22] Loss of ID cards presents its own challenges to the system; if all of the individuals with lost IDs were to become immediately "suspect" in the system, intolerable backlogs and/or overload could result.

Credit card companies, for example, can conduct behavior-pattern analysis for fraud detection.[23] Similar technologies might be used to detect behavior indicative of impending criminal or terrorist activities, although this raises concerns about profiling. On the negative side, such analysis also enables invasions of personal privacy. The extent to which this occurs would depend heavily on the circumstances under which an individual can be compelled to present an ID, what information is retained, and which activities are tracked within the system (a topic explored above). Indeed, detecting a problem might only be possible in some instances through broad analysis. This would necessitate examining the behavior of many people who do not pose a risk—most human behavior involves law-abiding citizens pursuing constitutionally protected activities—in order to identify the few who do.[24]

[23] Credit card companies make these correlations using both standard statistical methods and neural networks.
[24] For a discussion of some of the effects and implications of ubiquitous surveillance cameras, see the October 7, 2001, article by Jeffrey Rosen, "A Watchful State," New York Times Magazine.
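As a toy illustration of the kind of behavior-pattern analysis mentioned above, and with no claim about how card issuers or agencies actually implement it, the sketch below flags an event whose value deviates sharply from an individual's historical pattern using a simple z-score. Real systems combine many features and, as note [23] indicates, use statistical models or neural networks.

```python
from statistics import mean, stdev

def is_anomalous(history, new_value: float, z_threshold: float = 3.0) -> bool:
    """Flag a new observation that lies more than z_threshold standard
    deviations from the mean of the individual's past observations."""
    if len(history) < 2:
        return False  # not enough history to judge
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return new_value != mu
    return abs(new_value - mu) / sigma > z_threshold

# Hypothetical daily spending history for one cardholder.
history = [42.0, 55.5, 38.2, 61.0, 47.3, 52.8, 44.1]
print(is_anomalous(history, 58.0))    # False: within the usual range
print(is_anomalous(history, 900.0))   # True: flagged for review
```

The privacy concern in the text follows directly: producing even this crude score requires retaining and examining a history of an individual's activity, most of which belongs to people who pose no risk at all.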


