6 Category 3—Promoting Deployment

The goal of the requirements in Category 3 (Promoting Deployment) is to ensure that the technologies and procedures in Categories 1 and 2 of the committee's illustrative research agenda are actually used to promote and enhance security. This broad category includes technologies that facilitate ease of use, by both end users and system implementers; incentives that promote the use of security technologies in the relevant contexts; and the removal of barriers that impede the use of security technologies.

6.1 USABLE SECURITY

It is axiomatic that security functionality that is turned off, disabled, bypassed, or not deployed by users serves no protective function. The same is true for security practices or procedures that are promulgated but not followed in practice. (This section uses the term "security" in its broadest sense, to include both technology and practices and procedures.) Yet, even in an age of increasing cyberthreat, security features are often turned off and security practices are often not followed. Today, security is often too complex for individuals and enterprise organizations to manage effectively or to use conveniently. Security is hard for users, administrators, and developers to understand; it is clumsy and awkward to use; it obstructs all of these parties in getting real work done; and it does not scale easily to large numbers of users or devices to be protected. Thus, many cybersecurity measures are circumvented by the very users they are intended to protect, not because these users are lazy but because they are well motivated and trying to do their jobs.



When security gets in the way, users switch it off and work around it, designers avoid strong security, and administrators make mistakes in using it.

It is true that in the design of any computer system, there are inevitable trade-offs among various system characteristics: better or less costly administration, trustworthiness or security, ease of use, and so on. Because the intent of security is to make a system completely unusable to an unauthorized party but completely usable to an authorized one, there are inherent trade-offs between security and convenience or ease of access.

One element of usable security is better education. That is, administrators and developers—and even end users—would benefit from greater attention to security in their information technology (IT) education, so that the concepts of and the need for security are familiar to them in actual working environments (Box 6.1). In addition, some aspects of security are necessarily left for users to decide (e.g., who should have access to some resource), and users must know enough to make such decisions sensibly.

The trade-off between security and usability need not be as stark as many people believe, however, and there is no a priori reason why a system designed to be highly secure against unauthorized access cannot also be user-friendly. An example case in which security and usability have enhanced each other in a noncybersecurity context is that of modern hotel room keys. Key cards are lighter and more versatile than the old metal keys were. They are easier for the guests to use (except when the magnetic strip is accidentally erased), and the system provides the hotels with useful security information, such as who visited the room and whether the door was left ajar. Modern car keys are arguably more secure and more convenient as well.

The committee believes that efforts to increase security and usability can proceed simultaneously for a long time, even if they may collide at some point after attempts at better design or better engineering have been exhausted. Many of the usability problems of today have occurred because designers have simply given up too soon, before serious efforts have been made to reconcile the tension. All too often, the existence of undeniable tensions between security and access is used as an excuse for not addressing usability problems in security.

One part of the problem is that the interfaces are often designed by programmers who are familiar with the technology and often have a level of literacy (both absolute and technical) well above that of the average end user. The result is interfaces that are generally obvious and well understood by the programmers but not by the end users. Few programmers even have awareness of interface issues, and fewer still have useful training and background in this subfield.

BOX 6.1  Fluency with Information Technology (and Cybersecurity)

A report entitled Being Fluent with Information Technology, published several years ago by the National Research Council (NRC), sought to identify what everyone—every user—ought to know about information technology.[1] Written in 1999, that report mentioned security issues in passing as one subtopic within the general area of information systems. Subsequently, Lawrence Snyder, chair of the NRC Committee on Information Technology Literacy responsible for the 1999 report, wrote Fluency with Information Technology: Skills, Concepts, and Capabilities.[2] The University of Washington course for 2006 based on this book (http://www.cs.washington.edu/education/courses/100/06wi/labs/lab11/lab11.html) addresses security issues in greater detail by setting forth the following objectives for the security unit:

- Learn to create strong passwords
- Set up junk e-mail filtering
- Use Windows Update to keep your system up to date
- Update McAfee VirusScan so that you can detect viruses
- Use Windows Defender to locate and remove spyware

Another NRC report, ICT Fluency and High Schools: A Workshop Summary,[3] released in 2006, suggested that security issues were one possible update to the fluency framework described in the 1999 NRC report. Taken together, these reports indicate that in the 8 years since Being Fluent with Information Technology was released, issues related to cybersecurity have begun to become important even to the most basic IT education efforts.

[1] National Research Council. 1999. Being Fluent with Information Technology. National Academy Press, Washington, D.C.
[2] Lawrence Snyder. 2002. Fluency with Information Technology: Skills, Concepts, and Capabilities. Addison-Wesley, Lebanon, Ind.
[3] National Research Council. 2006. ICT [Information and Communications Technology] Fluency and High Schools: A Workshop Summary. The National Academies Press, Washington, D.C.

For example, security understandings are often based on physical-world metaphors, such as locking doors and obscuring sensitive information. These metaphors have some utility, and yet considerable education is needed to teach users the limitations of the metaphors. (Consider that in a world of powerful search tools [e.g., Google's desktop and Spotlight on Mac computers], it is not realistic for those in possession of sensitive information to rely on "trusting other people not to look for sensitive information" or "burying information in sub-sub-sub-sub directories," whereas in the absence of such tools, such actions might well have considerable protective value.)

The difficulty of overcoming such limitations suggests that it is simply unrealistic to expect that security should depend primarily on security education and training. In addition, the extra training and education for security simply do not match the market, with the predictable result that users don't spend much time learning about security. Users want to be able to buy and use IT without any additional training. Vendors want to sell to customers without extra barriers. Couple these realities with the projection that the Internet user population will double in the next decade, with hundreds of millions of new users, and it is clear that we cannot depend on extra education and training to improve security significantly.

If user education is not the answer to security, the only other possibility is to develop more usable security mechanisms and approaches. As a starting point, consider the following example. Individuals in a company may need to share files with one another. When these persons are in different work units, collaboration is often a hassle. Using today's security mechanisms, it is likely that these people would have to go through an extended multistep process to designate file directories that they want to share with one another—managing access-control lists, giving specific permissions, and so on. Depending on the level of inconvenience entailed, these individuals may simply elect to e-mail their files to one another, thus circumventing entirely the difficulties of in-house collaboration—but also making their files vulnerable to all of the security issues associated with the open Internet. It would be far preferable to have mechanisms in place that aggregate and automatically perform low-level security actions under an abstraction that allows each user to designate another person as a collaborator on a given project and have the system select the relevant files to make available to that person and to no others. (A toy sketch of what such an abstraction might look like appears below.) Usable security would thus reduce both the cognitive load imposed on an authorized user who must navigate security and the "hassle factor," decreasing the likelihood that users would simply bypass security measures or never implement them in the first place. Such issues go far beyond the notion of "wizards," which all too often simply mask an underlying complexity that is inherently difficult to understand.

System administrators are also an important focal point for usable security. Because system administrators address low-level system issues much more often than end users do, they are usually more knowledgeable about security matters and are usually the ones to whom end users turn when security issues arise. But many users (e.g., those in small businesses) must perform their own system administration—a point suggesting that remote security administration, provided as a service, has an important role to play while more usable security mechanisms are not widely deployed.
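
As a rough illustration of the kind of abstraction described in the file-sharing example above, the following sketch shows how a single "add a collaborator" action might be translated into the many low-level permission grants that users would otherwise perform by hand. It is a hypothetical sketch only; the project structure, the in-memory access-control lists, and the function names are illustrative assumptions, not a description of any existing product.

    # Hypothetical sketch: one user-visible action ("add a collaborator to a
    # project") is mapped onto the low-level permission grants that would
    # otherwise have to be managed by hand. The project/file layout and the
    # permission model are illustrative assumptions only.

    from dataclasses import dataclass, field

    @dataclass
    class Project:
        name: str
        files: list                                 # paths that belong to this project
        acl: dict = field(default_factory=dict)     # path -> set of authorized users

    def add_collaborator(project: Project, user: str) -> None:
        """Grant access to every file in the project, and to nothing else."""
        for path in project.files:
            project.acl.setdefault(path, set()).add(user)

    def remove_collaborator(project: Project, user: str) -> None:
        """Revoke access everywhere in one step, so nothing is left behind."""
        for users in project.acl.values():
            users.discard(user)

    p = Project("budget-2007", ["/shares/budget/plan.xls", "/shares/budget/notes.txt"])
    add_collaborator(p, "jsmith")    # one step for the user...
    print(p.acl)                     # ...several underlying permission grants

The point is not this particular mechanism but the reduction in user-visible steps; the same interface could equally drive access-control lists, group memberships, or rights-management metadata underneath.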

In addition, the fact that system administrators are more knowledgeable than end users about low-level security issues does not mean that they do not find administering those issues to be a burden. For example, system administrators rather than vendors must make decisions about access control—who should have what privileges on a system—simply because the vendor does not and cannot know to whom any particular user is willing to grant access to a resource. However, this fact does not mean that it should be difficult to specify an access-control list. Many computer security problems result from a mismatch between a security policy and the way that the policy is or is not implemented, and system administrators would benefit greatly from automated tools that would indicate how their systems are actually configured and whether an actual configuration is consistent with their security policy. For example, administrators need to be able to set appropriate levels of privilege for different users, but they also need to be able to generate lists of all users with a given level of privilege. Some tools and products offer some capability for comparing installed configurations with defined security policies, but more work needs to be done on tools that enable security policies to be described more clearly, less ambiguously, and more easily. Such tools are needed, for example, when security policies change often. (A toy sketch of such a consistency check appears below.)

A related though separate point is the extent to which new systems and networks can or should include ideas that involve significant changes from current practice. Though end users are the limiting case of this issue (e.g., "How can you deploy systems that require the habits of 200 million Internet users to change and a whole industry to support them?"), the issue of requiring significant change is also relevant to system administrators, who are fewer in number but may well be as resistant to change as end users are.

In some cases, issues may arise that fundamentally require end users to alter their ways of doing business. Consider the question of whether the end user should or should not make a personal choice about whether or not to trust a certificate authority. One line of argument suggests that such a question is too important to handle automatically. If so, users may indeed be required to change their habits and learn about certificate authorities. But the countering line of argument is that systems that require users to make such decisions will never be deployed on a large scale, regardless of their technical merits, and there is ample evidence that most users are not going to make sensible choices about trusting certificate authorities. One way of addressing such differences is to develop technology that by default shields users from having to make such choices but nevertheless provides those who wish to do so with the ability to make their own choices.
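
The kind of automated support for administrators mentioned above might look, in greatly simplified form, like the sketch below, which compares an installed configuration against a stated policy and also answers the question "which users hold a given level of privilege?" The policy format, privilege names, and example data are invented for illustration.

    # Hypothetical sketch of a policy-versus-configuration check. "policy" states
    # the maximum privilege each role may hold; "config" records what is actually
    # installed. Both formats are illustrative assumptions.

    LEVELS = {"none": 0, "read": 1, "write": 2, "admin": 3}

    policy = {"intern": "read", "engineer": "write", "sysadmin": "admin"}

    config = {                        # user -> (role, privilege actually granted)
        "alice": ("engineer", "write"),
        "bob":   ("intern", "admin"),     # exceeds what policy allows
        "carol": ("sysadmin", "admin"),
    }

    def violations(policy, config):
        """Report users whose installed privilege exceeds what policy allows."""
        out = []
        for user, (role, granted) in config.items():
            allowed = policy.get(role, "none")
            if LEVELS[granted] > LEVELS[allowed]:
                out.append((user, role, granted, allowed))
        return out

    def users_with_privilege(config, level):
        """List all users holding at least the given privilege level."""
        return [u for u, (_, granted) in config.items()
                if LEVELS[granted] >= LEVELS[level]]

    print(violations(policy, config))             # [('bob', 'intern', 'admin', 'read')]
    print(users_with_privilege(config, "admin"))  # ['bob', 'carol']

Real tools must of course handle much richer policies, inheritance of privileges, and policies that change frequently, which is exactly why the research called for above is needed.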

The quest for usable security has social and organizational dimensions as well as technological and psychological ones. Researchers have found that the development of usable security requires deep insight into the human-interaction dimensions of the application for which security is being developed and of the alignment of technical protocols for security and of the social/organizational protocols that surround such security. Only with such insight is it possible to design and develop security functionality that does not interfere with what legitimate workers must do in the ordinary course of their regular work. (That is, such functionality would not depend on taking explicit steps related only to security and nothing else.) For example:

- Individuals generally have multiple cyber-identities. For example, a person may have a dozen different log-in names to different systems, each of which demands its own password to access. Different identities often mean that the associated roles differ, for example, by machine, by user identities, by privilege, and so on. It is hard enough to remember different log-in names, which may be necessitated because the user's preferred log-in name is already in use (the log-in name JohnSmith is almost certainly already in use in most large-scale systems, and any given John Smith may use JohnSmithAmex or JohnSmithCitibank or JohnSmithPhone as his log-in name, depending on the system he needs to access). But what about passwords? In order to minimize the cognitive load on the user, he or she will often use the same password for every site—and in particular will not tailor the strength of the password to the importance or the sensitivity of the site. Alternatively, users may plead for "single-sign-on" capability. Being required to present authentication credentials only once is certainly simpler for the user but is risky when different levels of trust or security are involved.

- Individuals usually don't know what they don't know. A common approach to security is to hide objects from people who do not have explicit authorization to access them, and to make these objects visible to people who do have explicit authorization. From a business process standpoint, there is an important category that this approach to security does not recognize—individuals who should have explicit authorization for access but do not. Authorization is granted in one of two ways: the individual receives authority unbidden (e.g., a new hire is automatically granted access to his or her unit's server), and/or the individual requests authorization from some authority who then decides whether or not to grant that authority. The latter approach, most common when some ad hoc collaborative arrangement is made, presumes that the individual knows enough to request access. But if the necessary objects are hidden from the individual, how does he or she know that it is necessary to request access to those specific objects? Such issues often arise in an environment dealing with classified information in which, because of secrecy and compartmentalization, one party does not know what information another party has.

- Individuals function in a social and organizational context, and processes for determining access rights are inherently social and organizational. When necessary accesses are blocked in the name of security, individuals must often expend considerable effort in untangling the web of confusion that is the ultimate cause of the denial of access. Individuals with less aggressive personalities, or newly hired individuals who do not want to "make trouble," may well be more reluctant to take such action—with the result that security policies and practices have kept employees from doing their work.

Addressing these social and organizational dimensions of security requires asking questions of a different sort than those that technologists usually ask. Technologists usually seek to develop solutions or applications that generalize across organizational settings—and users must adapt to the requirements of the technology. Focusing on the social and organizational dimension implies developing understandings of what end users are trying to accomplish, with whom, and in what settings. What is the organization trying to achieve? What are the day-to-day security practices of effective employees? What are the greatest security threats? What information must be protected? What workplace practices are functional and effective and should be preserved in security redesigns? These understandings then support and may even drive design innovation at the network, infrastructure, and applications interface levels.

A social and organizational understanding of security is based on posing questions at several distinct levels. In an organization, senior management determines security policy and establishes the nature and scope of its security concerns. But management also shapes a much larger social context that includes matters such as expectations for cooperative work, the nature of relationships between subordinates and superiors, and relationships between support and business units. At the same time, individuals and groups in the organization must interpret management-determined security concerns and implement management-determined policy—and these individuals and groups generally have considerable latitude in doing so.

Most importantly, individuals and groups must get their primary work done, and security is by definition peripheral to doing the primary work of the organization. Thus, because there is often conflict, or at least tension, between security and getting work done, workers must make judgments about what risks are worth taking in order to get their work done and how to bypass security measures if that is necessary in order to do so. It is against this backdrop that the technology infrastructure must be assessed.

At the applications and task levels, it is important to understand how data-sharing practices are managed and what interorganizational and intraorganizational information flows must be in place for people to work effectively with others. A key dimension of data-sharing practices is access privileges—how are they determined, and how is knowledge of these privileges promulgated? (This includes, of course, knowing of the privileges themselves as well as their settings.) Technology development so assessed implies not only good technology, but extensive tools that facilitate organizational customization and that help end users identify what needs to be communicated and to whom.

6.2 EXPLOITATION OF PREVIOUS WORK

There is a long history of advances in cybersecurity research that are not reflected in today's practice and products. In many cases, the failure to adopt such advances is explained at least in part by a mismatch between market demands and the products making use of such research. For example, secure architectures often resulted in systems that were too slow, too costly, too late, and/or too hard to use. Nevertheless, the committee believes that some security innovations from the past are worth renewed attention today, in light of a new underlying technological substrate with which to implement these innovations and a realization that inattention to nontechnical factors may have contributed to their nonuse (Section 3.4.1.4). These previous innovations include, but are not limited to, the following:

- Virtual machine architectures that enable strict partitions and suitable isolation among different users, as discussed in Section 4.1.2.3 (Process Isolation);
- Multilevel security and multilevel integrity that enable the simultaneous processing of information with different classification levels;
- Capability architectures, which, with the exception of the AS/400 and System 38 (now the IBM iSeries), have not traditionally been successful but could prove to have valuable lessons to teach; and
- Software engineering practices and programming tools for developing secure and reliable systems.

"Old" but unadopted innovations that solved real cybersecurity problems are often plausible as points of departure for new research that addresses these same problems. As an example of early work that may be relevant today, consider what might be called a "small-process/message-passing" model for computation, in which a user's work is performed using multiple loci of control (generally called threads), which communicate with one another by means of signals and messages. Exemplified by Unix, this model has a demonstrated ability to optimize machine resources, especially processor utilization; while one thread may be blocked waiting on, say, disk access, other threads can be performing useful tasks.

The small-process/message-passing model does, however, have some disadvantages for security. A secure machine must map some set of external attributes, such as user identity, role, and/or clearance, into the internal workings of the machine and use those attributes to enforce limits on access to resources or invocation of services. The internal data structures used to enforce these limits are often called the "security state." The security state of a small-process/message-passing structure is diffuse, dynamic, and spread throughout a large number of processes. Furthermore, its relationship to the hardware is tenuous. It is therefore hard to analyze and verify.

An alternative structure is the "large-process" model of computation, an example of which was Multics. In the large-process model, the work being done for a user is tied to a single locus of control, and the security state is mostly embodied in a hardware-enforced structure. This model relies on multiplexing between users to gain efficiency (as opposed to the small-process model, which multiplexes between threads working for a single user) and is efficient only when large numbers of users are sharing a single body of hardware, such as a server. From a security perspective, the advantage of the large-process structure is that the security features of the system are easier to understand, analyze, and verify.

Because hardware resources are increasingly inexpensive, efficient use of hardware is no longer as important as it once was. Designs based on the need to use hardware efficiently have also had undesirable security consequences, and with the dropping cost of hardware, it may make sense to revisit some of those designs in certain circumstances. For example, the multicore processor (discussed briefly in Section 4.1.2.1) holds some promise for mitigating the performance penalties of the large-process model and for permitting the security and verification advantages to be exploited in certain contexts. Although a small-process/message-passing model is sensible for distributed computing (e.g., for Web services) in which administrative control is decentralized, the large-process model makes sense in applications such as a public utility or central server in which security requirements are under centralized administrative control.
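
The contrast between the two models can be made slightly more concrete with a toy sketch (illustrative only; real systems of either kind are far more involved). In the small-process/message-passing style shown here, one user's work is handled by several threads that coordinate through a queue, and each locus of control performs its own access check, which is part of what makes the overall security state diffuse and hard to analyze.

    # Toy illustration of the small-process/message-passing style. One user's
    # requests are handled by several worker threads that communicate through a
    # queue; each worker performs its own access check, so the "security state"
    # is spread across every thread. All details are illustrative assumptions.

    import threading, queue

    tasks = queue.Queue()

    def required_clearance(resource):
        return 2 if resource.startswith("secret/") else 1

    def worker(worker_id):
        while True:
            msg = tasks.get()
            if msg is None:                      # shutdown signal
                break
            user, clearance, resource = msg
            # The access decision is made here, in whichever thread happens to
            # receive the message, rather than at a single, central locus.
            verdict = "granted" if clearance >= required_clearance(resource) else "denied"
            print(f"worker {worker_id}: {user} -> {resource}: {verdict}")
            tasks.task_done()

    threads = [threading.Thread(target=worker, args=(i,)) for i in range(3)]
    for t in threads:
        t.start()
    tasks.put(("jsmith", 1, "public/report.txt"))
    tasks.put(("jsmith", 1, "secret/payroll.db"))
    tasks.join()
    for _ in threads:
        tasks.put(None)
    for t in threads:
        t.join()

In a large-process design, by contrast, the same checks would be made at a single locus of control, with the security state largely embodied in a hardware-enforced structure, which is what makes that state easier to analyze and verify.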

6.3 CYBERSECURITY METRICS

Cybersecurity is a quality that has long resisted—and continues to resist—precise numerical classification. Today, there are few good ways to determine the efficacy or operational utility of any given security measure. Thus, individuals and companies are unable to make rational decisions about whether or not they have "done enough" with respect to cybersecurity. In the absence of good cybersecurity metrics, it is largely impossible to quantify cost-benefit trade-offs in implementing security features. Even worse, it is very difficult if not impossible to determine whether System A is more secure than System B. Good metrics would also be one element supporting a more robust insurance market in cybersecurity founded on sound actuarial principles and knowledge.[1]

One view of security is that it is a binary and negative property—secure is simply defined as the opposite of being insecure. Under this "absolutist" model, it is easy to demonstrate the insecurity of a system via an effective attack, but demonstrating security requires proving that no effective attack exists. An additional complicating factor is that once an attacker finds a vulnerability, it must be assumed that such knowledge will propagate rapidly, thus enabling previously stymied attackers to launch successful attacks. There are some limited domains in which this approach has been applied successfully, such as the proof of privacy in Shannon's foundational work on perfect ciphers[2] and the proof of safety properties guaranteed by the type systems of many modern programming languages. But on the whole, only relatively small programs, let alone systems of any complexity, can be evaluated to such a standard in their entirety.

If security is binary, then a system with any vulnerability is insecure—and metrics are not needed to indicate that one system is "more" secure than another. But this absolutist view has both theoretical and practical difficulties. One theoretical difficulty is that the difference between a secure and a vulnerable software artifact can be as small as one bit, and it is hard to imagine a process sensitive enough to determine whether the artifact is or is not secure.

[1] It is also helpful to distinguish between a metric (which measures some quantity or phenomenon in a reasonably repeatable way) and risk assessment (which generally involves an aggregation of metrics according to a model that provides some degree of predictive power). For example, in the financial industry, risk assessment depends on a number of metrics relevant to a person's financial history (e.g., income, debt, number of years in the same residence, and so on).
[2] Claude Shannon, "Communication Theory of Secrecy Systems," Bell System Technical Journal, 28: 656-715, October 1949.

A practical difficulty is that static-code analysis—predicting the behavior of code without actually executing it—remains a daunting challenge in the general case. One aspect of the difficulty is that of determining whether the code in question behaves in accordance with its specifications. Usually the domain of formal proofs of correctness, this approach presumes that the specifications themselves are correct—but in fact vulnerabilities are sometimes traced to incorrect or inappropriate specifications. Moreover, the size of systems amenable to such formal proofs remains small compared to the size of many systems in use today. A second aspect of this difficulty is in finding functionality that should not be present according to the specifications (as discussed in Section 4.1.3.1).

Outside the absolutist model, security is inherently a synthetic property—it no longer reflects some innate quality of the system, but rather how well a given system with a given set of security policies (Section 6.5) can resist the activities of a given adversary. Thus, the security of a system can be as much a property of the adversary being considered as it is of the system's construction itself. That is, measuring the security of a system must be qualified by asking: Against what kind of threat? Under what circumstances? For what purpose? And under what security policy?

In this context, the term "metric" is not binary. It must be, at the very least, ordinal, so that metrics can be used to rank-order a system along some security-relevant dimension. In addition, the term "metric" assumes that one or more outcomes of interest can be measured in an unambiguous way—that one can recognize a good outcome or a bad outcome when it occurs. Furthermore, it assumes that an improvement in the metric actually corresponds to an improvement in outcome. Yet another complicating factor is that an adversary may offer unforeseen threats whose impact on the system cannot be anticipated or measured in advance. While the absolutist model—which depends a great deal on formal proof—presumes that all security properties can be specified a priori, in practice it is common that a system's security requirements are not understood until well after its deployment (if even then!). Moreover, if a threat (or even a benign event) is unforeseen, a response tailored to that threat (or event) cannot be specified (although a highly general response, such as "Abort," may be possible, and certain other responses may be known to be categorically undesirable).

For example, the cryptography community has had some success in formalizing the security of its systems. Proving cryptographic security calls for defining an abstract model of an adversary and then using reductions to prove that key security properties have equivalent computational
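
Returning to the earlier point that a useful security metric must be at least ordinal and must be qualified by threat, circumstance, and policy, the toy sketch below shows what such a rank-ordering might look like. The threat classes, weights, and resistance scores are invented purely for illustration and have no empirical standing.

    # Illustrative only: a toy ordinal "security score" that rank-orders systems
    # by assumed resistance to named threat classes under a stated policy. The
    # threat classes, weights, and scores are invented for this sketch.

    THREAT_WEIGHTS = {"remote_exploit": 0.5, "insider": 0.3, "physical": 0.2}

    systems = {
        "System A": {"remote_exploit": 3, "insider": 2, "physical": 4},
        "System B": {"remote_exploit": 4, "insider": 1, "physical": 2},
    }

    def score(resistance):
        """Weighted resistance on an arbitrary 0-5 scale; useful only for ranking."""
        return sum(THREAT_WEIGHTS[t] * r for t, r in resistance.items())

    for name in sorted(systems, key=lambda n: score(systems[n]), reverse=True):
        print(name, round(score(systems[name]), 2))

Such a number supports statements of the form "System A appears more resistant than System B against these threats under this policy," but it says nothing about unforeseen threats, which is precisely the limitation discussed above.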

6.4.4.3 Private-Sector Mechanisms to Incentivize Behavioral Change

Private-sector mechanisms to incentivize organizations and individuals to improve their cybersecurity postures do not entail the difficulties of promulgating government regulation, and a number of attempts in the private sector have been made for this purpose. Research is needed to understand how these attempts have fared, to understand how they could be improved if they have not worked well, and to understand how they could be more widely promulgated and their scope extended if they have.

6.4.4.3.1 Insurance

Historically, the insurance industry has played a key role in many markets as an agent for creating incentives for good practices (e.g., in health care and in fire and auto safety). Thus, the possibility arises that it might be able to play a similar role in incentivizing better cybersecurity.

Consumers (individuals and organizations) buy insurance so as to protect themselves against loss. Strictly speaking, insurance does not itself protect against loss—it provides compensation to the holder of an insurance policy in the event that the consumer suffers a loss. Insurance companies sell those policies to consumers and profit to the extent that policyholders do not file claims. Thus, it is in the insurance company's interest to reduce the likelihood that the policyholder suffers a loss. Moreover, the insurance company will charge a higher premium if it judges that the policyholder is likely to suffer a loss.

Particularizing this reasoning to the cybersecurity context, consumers will buy a policy to insure themselves against the possibility of a successful exploitation by some adversary. The insurance company will charge a higher premium if it judges that the policyholder's cybersecurity posture is weak and a lower premium if the posture is strong. This gives the user a financial incentive to strengthen his or her posture. Users would pay for poor cybersecurity practices and insecure IT products with higher premiums, and so the differential pricing of business disaster-recovery insurance based in part on quality/assurance/security would bring market pressure to bear in this area. Indeed, cyber-insurance has frequently been proposed as a market-based mechanism for overcoming security market failure,[61] and the importance of an insurance industry role in promoting cybersecurity was recently noted at the 2005 Rueschlikon Conference on Information Policy.[62]

[61] See, for instance, Lawrence A. Gordon, Martin P. Loeb, and Tashfeen Sohail, "A Framework for Using Insurance for Cyber-Risk Management," Communications of the ACM, 46(3): 81-85, 2003; Jay P. Kesan, Ruperto P. Majuca, and William J. Yurcik, "The Economic Case for Cyberinsurance," Workshop on the Economics of Information Security, Cambridge, Mass., 2005; William Yurcik and David Doss, "Cyberinsurance: A Market Solution to the Internet Security Market Failure," Workshop on Economics and Information Security, Berkeley, Calif., 2002.
[62] Kenneth Cukier, "Ensuring (and Insuring?) Critical Information Infrastructure Protection," A Report of the 2005 Rueschlikon Conference on Information Policy, Switzerland, June 16-18, 2005.
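
To make the pricing mechanism concrete, a toy premium calculation might look like the following. The figures and the simple expected-loss formula are hypothetical; real underwriting would rest on actuarial data and on a far richer assessment of the policyholder's posture.

    # Hypothetical sketch of posture-sensitive premium pricing. The breach
    # probabilities, loss figure, and loading factor are invented numbers.

    def annual_premium(expected_loss, breach_probability, loading=1.25):
        """Premium = expected annual loss from breaches, times a loading factor."""
        return expected_loss * breach_probability * loading

    weak_posture   = annual_premium(expected_loss=2_000_000, breach_probability=0.10)
    strong_posture = annual_premium(expected_loss=2_000_000, breach_probability=0.02)

    print(f"weak posture:   ${weak_posture:,.0f}")    # $250,000
    print(f"strong posture: ${strong_posture:,.0f}")  # $50,000

The gap between the two premiums is the financial incentive described above; whether such a gap can be set reliably depends on the posture-assessment and correlated-loss problems discussed next.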

Of course, how such a market actually works depends on the specifics of how premiums are set and how a policyholder's cybersecurity posture can be assessed. (For example, one possible method for setting premiums for the cybersecurity insurance of a large firm might be based in part on the results of an independently conducted red team attack.) Furthermore, there are a number of other factors that stand in the way of establishing a viable cyber-insurance market: the highly correlated nature of losses from outbreaks (e.g., from viruses) in a largely homogeneous monoculture environment, the difficulty in substantiating claims, the intangible nature of losses and assets, and unclear legal grounds.[63]

6.4.4.3.2 The Credit Card Industry

A prime target of cybercriminals is personal information such as credit card numbers, Social Security numbers, and other consumer information. Because many participants in the credit card industry (e.g., banks and merchants) obtain such information in the course of their routine business activities, these participants are likely to be targeted by cybercriminals seeking such information. To reduce the likelihood of success of such criminal activities, the credit card industry has established the Payment Card Industry (PCI) Data Security Standard, which establishes a set of requirements for enhancing payment account data security.[64] These requirements include the following:

- Install and maintain a firewall configuration to protect cardholder data.
- Do not use vendor-supplied defaults for system passwords and other security parameters.
- Protect stored cardholder data.
- Encrypt transmission of cardholder data across open, public networks.
- Use and regularly update antivirus software.
- Develop and maintain secure systems and applications.
- Restrict access to cardholder data by business need to know.
- Assign a unique identifier to each person with computer access.
- Restrict physical access to cardholder data.
- Track and monitor all access to network resources and cardholder data.
- Regularly test security systems and processes.
- Maintain a policy that addresses information security.

[63] Rainer Böhme, "Vulnerability Markets: What Is the Economic Value of a Zero-Day Exploit?," Proceedings of 22C3, Berlin, Germany, December 27-30, 2005, p. 4.
[64] An extended description of these requirements can be found at http://usa.visa.com/download/business/accepting_visa/ops_risk_management/cisp_PCI_Data_Security_Standard.pdf.

Organizations (e.g., merchants) that handle credit cards must conform to this standard and follow certain leveled requirements for testing and reporting. Compliance with these standards is enforced by the banks, which have the authority to penalize organizations both for noncompliance and for data disclosures caused by noncompliance.

6.4.4.3.3 Standards-Setting Processes

For certain specialized applications, compliance with appropriate security standards is almost a sine qua non for their success. For example, for electronic voting applications, security standards are clearly necessary, and indeed the National Institute of Standards and Technology has developed security standards—or more precisely, voluntary guidelines—for electronic voting systems. (These guidelines are voluntary in the sense that federal law does not require that electronic voting systems conform to them—but many states do have such requirements.)

In a broader context, the International Organization for Standardization (ISO) standards process is intended to develop standards that specify requirements for various products, services, processes, materials, and systems and for good managerial and organizational practice. Many firms find value in compliance with an ISO standard and seek a public acknowledgment of such compliance (that is, seek certification) in order to improve their competitive position in the marketplace. In the cybersecurity domain, the ISO (and its partner organization, the International Electrotechnical Commission [IEC]) has developed ISO/IEC 17799:2005, which is a code of practice for information security management that establishes guidelines and general principles for initiating, implementing, maintaining, and improving information security management in an organization. ISO/IEC 17799:2005 contains best practices of control objectives and controls in certain areas of information security management, including security policy; organization of information security; information systems acquisition, development, and maintenance; and information security incident management. Although ISO/IEC 17799:2005 is not a certification standard, the complementary specification standard ISO/IEC 27001 addresses information security management system requirements and can be used for certification.[65]

[65] See http://www.iso.org/iso/en/CatalogueDetailPage.CatalogueDetail?CSNUMBER=39612.

As for the putative value of ISO/IEC 17799:2005, the convener of the working group that developed ISO/IEC 17799:2005 argued that "users of this standard can also demonstrate to business partners, customers and suppliers that they are fit enough and secure enough to do business with, providing the chance for them to turn their investment in information security into business-enabling opportunities."[66]

6.4.4.4 Nonregulatory Public-Sector Mechanisms

A variety of nonregulatory public-sector mechanisms are available to promote greater attention to and action on cybersecurity, including the following:

- Government procurement. The federal government is a large consumer of information technology goods and services, a fact that provides some leverage in its interactions with technology vendors. Such leverage could be used to encourage vendors to provide the government with IT systems that are more secure (e.g., with security defaults turned on rather than off). With such systems thus available, vendors might be able to offer them to other customers as well.
- Government cybersecurity practices. The government is an important player in information technology. Thus, the federal government itself might seek to improve its own cybersecurity practices and offer itself as an example for the rest of the nation.
- Tax policy. A variety of tax incentives might be offered to stimulate greater investment in cybersecurity.
- Public recognition. Public recognition often provides "bragging rights" for a firm that translate into competitive advantages; cybersecurity could be a candidate area for such recognition. One possible model for such recognition is the Malcolm Baldrige National Quality Award, given to firms judged to be outstanding in a number of important business quality areas. The award was established to mark a standard of excellence that would help U.S. organizations achieve world-class quality.

[66] See http://www.iso.org/iso/en/commcentre/pressreleases/archives/2005/Ref963.html.

The desirability and feasibility of these mechanisms and others are topics warranting investigation and research.

6.4.4.5 Direct Regulation (and Penalties)

Still another approach to changing business cases is the direct regulation of technology and users—legally enforceable mandates requiring that certain technologies must contain certain functionality or that certain users must behave in certain ways. This is an extreme form of changing the business cases: comply or face a penalty. The regulatory approach has been taken in certain sectors of the economy: financial services, health care, utilities such as electricity and gas, and transportation are among the obvious examples of sectors or industries that are subject to ongoing regulation.

For many products in common use today, vendors are required by law to comply with various safety standards—seat belts in cars are an obvious example. But there are few mandatory standards relating to cybersecurity for IT products. Indeed, in many cases the contracts and terms of service that bind users to IT vendors often oblige the users to waive any rights with respect to the provision of security; this is especially true when the user is an individual retail consumer. In such situations, the buyer in essence assumes all security risks inherent in the use of the IT product or service in question. (Note here the contrast to the guarantees made by many credit card companies—the Fair Credit Reporting Act sets a ceiling of $50 on the financial liability of a credit card holder for an unauthorized transaction, provided that proper notifications have been given, and many credit card issuers have contractually waived such liability entirely if the loss results from an online transaction. These assurances have had an important impact on consumer willingness to engage in electronic commerce.)

Such contracts notwithstanding, direct regulation might call for all regulated institutions to adopt certain kinds of standards relating to cybersecurity "best practices" regarding the services they provide to consumers or their own internal practices. For example, in an attempt to increase security for customers, the Federal Financial Institutions Examination Council (FFIEC) has directed covered financial institutions to implement two-factor authentication for customers using online banking.[67]

[67] Two-factor authentication refers to the use of two independent factors to authenticate one's identity. An authentication factor could be something that one knows (e.g., a password), something that one has (e.g., a hardware token), or something that one is (e.g., a fingerprint). So, one two-factor authentication scheme calls for a user to insert a smart card into a reader and then to enter a password; neither one alone provides sufficient authentication, but the combination is supposed to do so.
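
The two-factor scheme described in footnote 67 can be illustrated with a minimal sketch. It is hypothetical: a real deployment would use salted and iterated password hashing, a standardized one-time-password algorithm for the hardware token, and protections against replay and phishing.

    # Minimal sketch of two-factor verification: something the user knows (a
    # password) plus something the user has (a token that derives a one-time
    # code from a shared secret). All details are illustrative assumptions.

    import hashlib, hmac, time

    def password_ok(supplied, stored_hash):
        return hashlib.sha256(supplied.encode()).hexdigest() == stored_hash

    def token_code(secret, t=None, step=30):
        """Six-digit code derived from the shared secret and the current time step."""
        counter = int((t if t is not None else time.time()) // step)
        digest = hmac.new(secret, str(counter).encode(), hashlib.sha256).hexdigest()
        return f"{int(digest, 16) % 1_000_000:06d}"

    def authenticate(password, code, stored_hash, secret, t=None):
        # Both factors must check out; neither one alone is sufficient.
        return password_ok(password, stored_hash) and code == token_code(secret, t)

    secret = b"per-user-shared-secret"
    stored = hashlib.sha256(b"correct horse").hexdigest()
    now = time.time()
    print(authenticate("correct horse", token_code(secret, now), stored, secret, now))   # True
    print(authenticate("wrong password", token_code(secret, now), stored, secret, now))  # False

The point of the sketch is simply that the two checks are independent, so compromising the password alone (or the token alone) does not suffice.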

Another "best practice" might be the use of tiger teams (red teams) to test an organization's security on an ongoing basis. (The committee is not endorsing either of these items as a best practice—they are provided as illustrations only of possible best practices.)

However, regulation is difficult to get right under the best of circumstances, as a good balance of flexibility and inflexibility must be found. Regulation so flexible that organizations need not change their practices at all is not particularly effective in driving change, and regulation so inflexible that compliance would require organizations to change in ways that materially harm their core capabilities will meet with enormous resistance and will likely be ignored in practice or not adopted at all. Several factors would make it especially difficult to determine satisfactory regulations for cybersecurity:[68]

- Attack vectors are numerous and new ones continue to emerge, meaning that regulations based on addressing specific ills would necessarily provide only partial solutions.
- Costs of implementation would be highly variable and dependent on a number of factors beyond the control of the regulated party.
- Risks vary greatly from system to system.
- There is wide variation in the technical and financial ability of firms to support security measures.

In addition, certain regulatory mechanisms have been used for publicly traded companies to ensure that important information is flowing to investors and that these companies follow certain accounting practices in their finances. For example, publicly traded companies must issue annual reports on a U.S. Securities and Exchange Commission (SEC) Form 10-K; these documents provide a comprehensive overview of the company's business and financial condition and include audited financial statements. In addition, publicly traded companies must issue annual reports to shareholders, providing financial data, results of continuing operations, market segment information, new product plans, subsidiary activities, and research and development activities on future programs. Audits of company finances must be undertaken by independent accounting firms and must follow generally accepted accounting practices. Intrusive auditing and reporting practices have some precedent in certain sectors that are already heavily regulated by federal and state authorities—these sectors include finance, energy, telecommunications, and transportation.

Research is needed to investigate the feasibility of using these mechanisms, possibly in a modified form, for collecting information on security breaches and developing a picture of a company's cybersecurity posture. As an illustration of the value of regulation, consider that in 2002, California passed the first state law to require public disclosure of any breach in the security of certain personal information.

[68] Alfredo Garcia and Barry Horowitz, "The Potential for Underinvestment in Internet Security: Implications for Regulatory Policy," Journal of Regulatory Economics, Vol. 31, No. 1, February 2007; available at http://ssrn.com/abstract=889071.

A number of states followed suit, and the California law is widely credited with drawing public attention to the problem of identity theft and its relationship to breaches in the security of personal information. An empirical study by Gordon et al. found that the Sarbanes-Oxley Act of 2002 (P.L. No. 107-204, 116 Stat. 745) had a positive impact on the voluntary disclosure of information security activities by corporations, a finding providing strong indirect evidence that the passage of this act has led to an increase in the focus of corporations on information security activities.[69] But such regulatory-driven focus is not without cost and may have unintended consequences, including decreased competition, distortions in cybersecurity investments and internal controls, and lost productivity from increased risk aversion.[70] Thus, research is needed to better understand the trade-offs involved in implementing information-disclosure regulations.

What might be included under such a rubric? One possibility is that a publicly traded company might be required to disclose all cybersecurity breaches in a year above a certain level of severity—a breach could be defined by recovery costs exceeding a certain dollar threshold. As part of its audit of the firm's books, an accounting firm could be required to assess company records on such matters. A metric such as the number of such breaches divided by the company's revenues would help to normalize the severity of the cybersecurity problem for the company's size. Another possibility is that a publicly traded company might be required to test its cybersecurity posture against a red team, with a sanitized report of the test's outcome or an independent assessment of the test's results included in the firm's SEC Form 10-K report. With more information about a firm's track record and cybersecurity posture on the public record, consumers and investors would be able to take such information into account in making buying and investment decisions, and a firm would have incentives to improve in the ways reflected in such information. (These possibilities should not be construed as policy recommendations of the committee, but rather as some topics among others that are worth researching for feasibility and desirability.)

[69] Lawrence A. Gordon, Martin P. Loeb, William Lucyshyn, and Tashfeen Sohail, "Impact of Sarbanes-Oxley Act on Information Security Activities," Journal of Accounting and Public Policy, 25(5): 503-530, 2006.
[70] Anindya Ghose and Uday Rajan, "The Economic Impact of Regulatory Information Disclosure on Information Security Investments, Competition, and Social Welfare," 2006 Workshop on Economics of Information Security, Cambridge, England, March 2006.
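
The normalization idea mentioned above involves nothing more than the following arithmetic; the firms and figures are invented purely to show the computation.

    # Illustrative arithmetic only: severe breaches per billion dollars of revenue.

    firms = {                  # firm -> (severe breaches reported, annual revenue in $)
        "Firm X": (4, 2_000_000_000),
        "Firm Y": (4, 40_000_000_000),
    }

    for name, (breaches, revenue) in firms.items():
        rate = breaches / (revenue / 1_000_000_000)     # breaches per $1 billion of revenue
        print(f"{name}: {rate:.2f} severe breaches per $1B revenue")
        # Firm X: 2.00, Firm Y: 0.10 -- the same raw count, very different pictures.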

6.4.4.6 Use of Liability

Liability is based on the notion of holding vendors and/or system operators financially responsible, through the courts, for harms that result from cybersecurity breaches. According to this theory, vendors and operators, knowing that they could be held liable for cybersecurity breaches that result from product design or system operation, would be forced to make greater efforts than they do today to reduce the likelihood of such breaches. Courts in the legal system would also define obligations that users have regarding security.

Some analysts (often from the academic sector or from industries that already experience considerable government regulation) argue that the nation's cybersecurity posture will improve only if liability forces users and/or vendors to increase the attention they pay to security matters. Opponents argue that the threat of liability would stifle technological innovation, potentially compromise trade secrets, and reduce the competitiveness of products subject to such forces. Moreover, they argue that there are no reasonable objective metrics against which products or operations can be held responsible, especially in an environment in which cybersecurity breaches can result from factors that are not under the control of a vendor or an operator.

An intermediate position confines explicit liability to a limited domain. In this view, regulation or liability or some other extrinsic driver can help to bootstrap a more market-driven approach. Believers in this view assert that new metrics, lampposts, criteria, and so on can be integrated with established processes for engineering or acceptance evaluation. Updating the Common Criteria or the Federal Information Security Management Act (FISMA) to include these mandated elements would enable the injection of the new ideas into the marketplace, and their demonstrated value and utility may persuade others not subject to regulation or liability to adopt them anyway.

All of these views on liability were present within the committee, and the committee did not attempt to reconcile them. But it found value in separating the issue into three components. The first is the putative effectiveness of an approach based on liability or direct regulation in strengthening the nation's cybersecurity posture. The second is the character of the actual link between regulation or liability and technological innovation and trade secret protection. The third is the public policy choice about any trade-offs that such a link might imply.

Regarding the first and the second, the committee found mostly a set of assertions but exceedingly little analytical work. Advocates of regulation or liability to strengthen cybersecurity have not made the case that any regulatory apparatus or case law on liability can move quickly enough as new threats and vulnerabilities emerge, while critics of regulation or liability have not addressed the claim that regulation and liability have a proven record of improving security in other fields, nor have they yet convincingly shown why the information technology field is different.

Nor is there a body of research that either proves or disproves an inverse link between regulation or liability and innovation or trade secret protection. Substantial research on this point would help to inform the public debate over regulation by identifying the strengths and weaknesses of regulation or liability for cybersecurity and the points (if any) at which a reconciliation of the tensions is in fact not possible. Regarding the third, and presuming the existence of irreconcilable tensions, it then becomes a public policy choice about how much and what kind of innovation must be traded off in order to obtain greater cybersecurity.

6.5 SECURITY POLICIES

With the increasing sophistication and wide reach of computer systems, many organizations are now approaching computer security using more proactive and methodical strategies than in the past. Central to many of these strategies are formal, high-level policies designed to address an organization's overall effort for keeping its computers, systems, IT resources, and users secure. While access control is a large component of most security policies, the policies themselves go far beyond merely controlling who has access to what data. Indeed, as Guel points out, security policies communicate a consensus on what is appropriate behavior for a given system or organization.[71]

Basically, developing a security policy requires making many decisions about such things as which people and which resources to trust, and how much and when to trust them. The policy development process comprises a number of distinct considerations:[72]

- Developing requirements involves the often-difficult process of determining just how much security attention to pay to a given set of data, resources, or users. Human resources information, for example, or critical proprietary data about a company's product, might require significantly stronger protections than, say, general information documents on an organization's intranet. A biological research facility might wish to encrypt genomic databases that contain sequence information of pandemic viruses, allowing access only to vetted requestors.
- Setting a policy entails translating security requirements into a formal document or statement setting the bounds of permissible and impermissible behavior and establishing clear lines of accountability.
- Implementing a policy can be accomplished using any of a range of technical mechanisms (e.g., a firewall or setting a user's system permissions) or procedural mechanisms (e.g., requiring users to change passwords on a monthly basis, reviewing access-control lists periodically).
- Assessing the effectiveness of mechanisms for implementing a policy and assessing the effectiveness of a policy in meeting the original set of requirements are ongoing activities.

[71] Michele D. Guel, "A Short Primer for Developing Security Policies," SANS Institute, 2002; available at http://www.sans.org/resources/policies/Policy_Primer.pdf.
[72] More perspective on developing security policies can be found in Matt Bishop, "What Is Computer Security?," IEEE Security and Privacy, 1(1): 67-69, 2003.

Organizations often choose to create a number of distinct policies (or subpolicies) to address specific contexts. For example, most organizations now provide employees with acceptable-use policies that specify what types of behavior are permissible with company computer equipment and network access. Other prevalent policies include wireless network, remote access, and data-backup policies. Having multiple security policies allows organizations to focus specific attention on important contexts (for example, consider the efficiency of having an organization-wide password policy), although harmonizing multiple policies across an organization can often be a challenge.

Determining just how to set one's security policy is a critical and often difficult process for organizations. After all, long before any security policy is ever drafted, an organization must first get a good sense of its security landscape—for example, what things need what level of protection, which users require what level of access to what different resources, and so on. However, at the beginning of such a process, many organizations may not even know what questions need to be asked to begin developing a coherent policy or what options are open to them for addressing a given concern. One major open issue and area for research, therefore, is how to assist with this early, though all-important, stage of developing requirements and setting a security policy, as well as how to assist in evaluating existing policies.[73]

One approach to the problem of establishing appropriate policies in large organizations is the use of role-based access control, a practice that determines the security policy appropriate for the roles in an organization rather than for individuals (a role can be established for a class of individuals, such as doctors in a hospital, or for a class of devices, such as all wireless devices). However, since individuals may have multiple roles, reconciling conflicting privileges can be problematic.

[73] One interesting framework for developing and assessing security policies can be found in Jackie Rees, Subhajyoti Bandyopadhyay, and Eugene H. Spafford, "PFIRES: A Policy Framework for Information Security," Communications of the ACM, 46(7): 101-106, 2003.
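
A minimal sketch of the role-based approach, including the multiple-role conflict just mentioned, might look like the following. The role names, permissions, and the "an explicit deny overrides any allow" reconciliation rule are illustrative assumptions; other reconciliation rules are equally plausible, and choosing among them is itself a policy decision.

    # Hypothetical sketch of role-based access control: policy is written against
    # roles, and users acquire permissions only through the roles they hold. The
    # conflict rule (deny wins over allow) is one possible choice among several.

    ROLE_POLICY = {
        "doctor":          {("patient_record", "read"): "allow",
                            ("patient_record", "write"): "allow"},
        "billing_clerk":   {("patient_record", "read"): "allow",
                            ("patient_record", "write"): "deny"},
        "wireless_device": {("internal_network", "connect"): "deny"},
    }

    USER_ROLES = {
        "dr_lee": ["doctor", "billing_clerk"],   # multiple roles -> possible conflict
        "kiosk7": ["wireless_device"],
    }

    def decide(user, resource, action):
        """Combine the decisions of all of a user's roles; deny wins over allow."""
        decisions = [ROLE_POLICY[role].get((resource, action))
                     for role in USER_ROLES.get(user, [])]
        if "deny" in decisions:
            return "deny"
        if "allow" in decisions:
            return "allow"
        return "deny"                            # default-deny for anything unspecified

    print(decide("dr_lee", "patient_record", "write"))      # deny: the clerk role conflicts
    print(decide("dr_lee", "patient_record", "read"))       # allow
    print(decide("kiosk7", "internal_network", "connect"))  # deny

Under a different reconciliation rule (say, "any allow suffices"), dr_lee could write to patient records, which is one small illustration of why reconciling privileges across roles is problematic.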

Other major open issues and research areas include the enforcement of security policies (as discussed in Section 6.1) and the determination of how effective a given security policy is in regulating desirable and undesirable behavior. These two areas (that is, enforcement and auditability) have been made more significant in recent years by an evolving regulatory framework that has placed new compliance responsibilities on organizations (e.g., Sarbanes-Oxley Act of 2002 [P.L. No. 107-204, 116 Stat. 745]; Gramm-Leach-Bliley Act [15 U.S.C., Subchapter I, Sec. 6801-6809, Disclosure of Nonpublic Personal Information]; the Health Insurance Portability and Accountability Act (HIPAA) of 1996; and so on). Another open question in this space involves the effectiveness of using outsourced firms to audit security policies.

Additional areas for research include ways to simulate the effects and feasibility of security policies; how to keep policies aligned with organizational goals (especially in multipolicy environments); methods for automating security policies or making them usable by machines; how to apply and manage security policies with respect to evolving technology such as distributed systems, handheld devices, electronic services (or Web services), and so on; and ways to reconcile the security policies of different organizations that might decide to communicate or share information or resources.