Toward a Safer and More Secure Cyberspace (2007)

6
Category 3—Promoting Deployment

The goal of the requirements in Category 3—Promoting Deployment—is to ensure that the technologies and procedures in Categories 1 and 2 of the committee’s illustrative research agenda are actually used to promote and enhance security. This broad category includes technologies that facilitate ease of use, by both end users and system implementers; incentives that promote the use of security technologies in the relevant contexts; and the removal of barriers that impede the use of security technologies.

6.1
USABLE SECURITY

It is axiomatic that security functionality that is turned off, disabled, bypassed, or not deployed by users serves no protective function. The same is true for security practices or procedures that are promulgated but not followed in practice. (This section uses the term “security” in its broadest sense to include both technology and practices and procedures.) Yet, even in an age of increasing cyberthreat, security features are often turned off and security practices are often not followed. Today, security is often too complex for individuals and enterprise organizations to manage effectively or to use conveniently. Security is hard for users, administrators, and developers to understand; it is clumsy and awkward to use; it obstructs all of these parties in getting real work done; and it does not scale easily to large numbers of users or devices to be protected. Thus, many cybersecurity measures are circumvented by the users they are intended to protect, not because these users are lazy but because they are well motivated and trying to do their jobs. When security gets in the way, users switch it off and work around it, designers avoid strong security, and administrators make mistakes in using it.

It is true that in the design of any computer system, there are inevitable trade-offs among various system characteristics: better or less costly administration, trustworthiness or security, ease of use, and so on. Because the intent of security is to make a system completely unusable to an unauthorized party but completely usable to an authorized one, there are inherent trade-offs between security and convenience or ease of access.

One element of usable security is better education. That is, administrators and developers—and even end users—would benefit from greater attention to security in their information technology (IT) education, so that the concepts of and the need for security are familiar to them in actual working environments (Box 6.1). In addition, some aspects of security are necessarily left for users to decide (e.g., who should have access to some resource), and users must know enough to make such decisions sensibly.

The trade-off between security and usability need not be as stark as many people believe, however, and there is no a priori reason why a system designed to be highly secure against unauthorized access cannot also be user-friendly. An example case in which security and usability have enhanced each other in a noncybersecurity context is that of modern hotel room keys. Key cards are lighter and more versatile than the old metal keys were. They are easier for the guests to use (except when the magnetic strip is accidentally erased), and the system provides the hotels with useful security information, such as who visited the room and whether the door was left ajar. Modern car keys are arguably more secure and more convenient as well.

The committee believes that efforts to increase security and usability can proceed simultaneously for a long time, even if they may collide at some point after attempts at better design or better engineering have been exhausted. Many of the usability problems of today have occurred because designers have simply given up too soon, before serious efforts have been made to reconcile the tension. All too often, the existence of undeniable tensions between security and access is used as an excuse for not addressing usability problems in security.

One part of the problem is that the interfaces are often designed by programmers who are familiar with the technology and often have a level of literacy (both absolute and technical) well above that of the average end user. The result is interfaces that are generally obvious to and well understood by the programmers but not by the end users. Few programmers even have an awareness of interface issues, and fewer still have useful training and background in this subfield.


BOX 6.1

Fluency with Information Technology (and Cybersecurity)

A report entitled Being Fluent with Information Technology, published several years ago by the National Research Council (NRC), sought to identify what everyone—every user—ought to know about information technology.1 Written in 1999, that report mentioned security issues only in passing, as one subtopic within the general area of information systems. Subsequently, Lawrence Snyder, chair of the NRC Committee on Information Technology Literacy responsible for the 1999 report, wrote Fluency with Information Technology: Skills, Concepts, and Capabilities.2 The University of Washington course for 2006 based on this book (http://www.cs.washington.edu/education/courses/100/06wi/labs/lab11/lab11.html) addresses security issues in greater detail by setting forth the following objectives for the security unit:

  • Learn to create strong passwords

  • Set up junk e-mail filtering

  • Use Windows Update to keep your system up to date

  • Update McAfee VirusScan so that you can detect viruses

  • Use Windows Defender to locate and remove spyware

Another NRC report, ICT Fluency and High Schools: A Workshop Summary,3 released in 2006, suggested that security issues were one possible update to the fluency framework described in the 1999 NRC report.

Taken together, these reports indicate that in the 8 years since Being Fluent with Information Technology was released, issues related to cybersecurity have begun to become important even to the most basic IT education efforts.

  

1. National Research Council. 1999. Being Fluent with Information Technology. National Academy Press, Washington, D.C.

2. Lawrence Snyder. 2002. Fluency with Information Technology: Skills, Concepts, and Capabilities. Addison-Wesley, Lebanon, Ind.

3. National Research Council. 2006. ICT [Information and Communications Technology] Fluency and High Schools: A Workshop Summary. The National Academies Press, Washington, D.C.

For example, security understandings are often based on physical-world metaphors, such as locking doors and obscuring sensitive information. These metaphors have some utility, and yet considerable education is needed to teach users the limitations of the metaphors. (Consider that in a world of powerful search tools [e.g., Google Desktop and Spotlight on Mac computers], it is not realistic for those in possession of sensitive information to rely on “trusting other people not to look for sensitive information” or “burying information in sub-sub-sub-sub directories,” whereas in the absence of such tools, such actions might well have considerable protective value.) The difficulty of overcoming such limitations suggests that it is simply unrealistic to expect that security should depend primarily on security education and training.

In addition, the extra training and education for security simply do not match the market, with the predictable result that users don’t spend much time learning about security. Users want to be able to buy and use IT without any additional training. Vendors want to sell to customers without extra barriers. Couple these realities with the projection that the Internet user population will double in the next decade, with hundreds of millions of new users, and it is clear that we cannot depend on extra education and training to improve security significantly.

If user education is not the answer to security, the only other possibility is to develop more usable security mechanisms and approaches. As a starting point, consider the following example. Individuals in a company may need to share files with one another. When these persons are in different work units, collaboration is often a hassle. Using today’s security mechanisms, it is likely that these people would have to go through an extended multistep process to designate file directories that they want to share with one another—managing access-control lists, giving specific permissions, and so on. Depending on the level of inconvenience entailed, these individuals may simply elect to e-mail their files to one another, thus circumventing entirely the difficulties of in-house collaboration—but also making their files vulnerable to all of the security issues associated with the open Internet. It would be far preferable to have mechanisms in place that aggregate and automatically perform low-level security actions under an abstraction that allows each user to designate another person as a collaborator on a given project and have the system select the relevant files to make available to that person and to no others.
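A minimal sketch of such an abstraction is shown below. The project structure, the SharingService class, and the grant_read operation are all hypothetical names invented for this illustration; they are not drawn from the report or from any particular product. The point is simply that one high-level action ("add a collaborator") can stand in for many low-level access-control edits.

```python
from __future__ import annotations
from dataclasses import dataclass, field

@dataclass
class Project:
    """A project groups the files relevant to one collaboration."""
    name: str
    files: set = field(default_factory=set)
    members: set = field(default_factory=set)

class SharingService:
    """Hypothetical layer that turns "add a collaborator" into low-level ACL edits."""

    def __init__(self):
        self.acls = {}                      # path -> set of users allowed to read

    def grant_read(self, path, user):
        # One low-level operation; without the abstraction, a user would have to
        # repeat this (or its GUI equivalent) for every file and directory involved.
        self.acls.setdefault(path, set()).add(user)

    def add_collaborator(self, project, user):
        """One high-level action: share exactly the project's files, nothing else."""
        project.members.add(user)
        for path in project.files:
            self.grant_read(path, user)

# Usage: the user names a collaborator once; the service handles the ACL details.
svc = SharingService()
audit = Project("audit-2007", files={"/shares/audit/plan.doc", "/shares/audit/findings.xls"})
svc.add_collaborator(audit, "jsmith")
print(svc.acls)
```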

Usable security would thus reduce both the cognitive load imposed on an authorized user and the “hassle factor,” decreasing the likelihood that users would simply bypass security measures or never implement them in the first place. Such issues go far beyond the notion of “wizards,” which all too often simply mask an underlying complexity that is inherently difficult to understand.

System administrators are also an important focal point for usable security. Because system administrators address low-level system issues much more often than end users do, they are usually more knowledgeable about security matters and are usually the ones to whom end users turn when security issues arise. But many users (e.g., those in small businesses) must perform their own system administration—a point suggesting that remote security administration, provided as a service, has an important role to play until more usable security mechanisms are widely deployed.

In addition, the fact that system administrators are more knowledgeable than end users about low-level security issues does not mean that they do not find administering those issues to be a burden. For example, system administrators rather than vendors must make decisions about access control—who should have what privileges on a system—simply because the vendor does not and cannot know to whom any particular user is willing to grant access to a resource. However, this fact does not mean that it should be difficult to specify an access-control list.

Many computer security problems result from a mismatch between a security policy and the way that the policy is or is not implemented, and system administrators would benefit greatly from automated tools that would indicate how their systems are actually configured and whether an actual configuration is consistent with their security policy. For example, administrators need to be able to set appropriate levels of privilege for different users, but they also need to be able to generate lists of all users with a given level of privilege. Some tools and products offer some capability for comparing installed configurations with defined security policies, but more work needs to be done on tools that enable security policies to be described more clearly, less ambiguously, and more easily. Such tools are needed, for example, when security policies change often.
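The kind of check described above can be illustrated with a minimal sketch: compare the privileges actually configured on a system against a declared policy, and list every user who holds a given privilege. The policy format, the user names, and the privileges are invented for this example and carry no particular significance.

```python
policy = {            # user -> privileges the security policy allows
    "alice": {"read", "write"},
    "bob":   {"read"},
}
configured = {        # user -> privileges actually set on the system
    "alice": {"read", "write"},
    "bob":   {"read", "admin"},      # drift: bob should not be an admin
    "carol": {"read"},               # drift: carol is not in the policy at all
}

def violations(policy, configured):
    """Yield (user, extra privileges) pairs where the configuration exceeds the policy."""
    for user, privs in configured.items():
        extra = privs - policy.get(user, set())
        if extra:
            yield user, extra

def users_with(privilege, configured):
    """List all users currently holding a given privilege."""
    return sorted(u for u, p in configured.items() if privilege in p)

print(list(violations(policy, configured)))   # [('bob', {'admin'}), ('carol', {'read'})]
print(users_with("admin", configured))        # ['bob']
```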

A related though separate point is the extent to which new systems and networks can or should include ideas that involve significant changes from current practice. Though end users are the limiting case of this issue (e.g., “How can you deploy systems that require the habits of 200 million Internet users to change and a whole industry to support them?”), the issue of requiring significant change is also relevant to system administrators, who are fewer in number but may well be as resistant to change as end users are.

In some cases, issues may arise that fundamentally require end users to alter their ways of doing business. Consider the question of whether the end user should or should not make a personal choice about whether to trust a certificate authority. One line of argument suggests that such a question is too important to handle automatically. If so, users may indeed be required to change their habits and learn about certificate authorities. But the countering line of argument is that systems requiring users to make such decisions will never be deployed on a large scale, regardless of their technical merits, and there is ample evidence that most users will not make sensible choices about trusting certificate authorities. One way of addressing such differences is to develop technology that by default shields users from having to make such choices but nevertheless provides those who wish to do so with the ability to make their own choices.

The quest for usable security has social and organizational dimensions as well as technological and psychological ones. Researchers have found that the development of usable security requires deep insight into the human-interaction dimensions of the application for which security is being developed and into the alignment between technical security protocols and the social and organizational protocols that surround them. Only with such insight is it possible to design and develop security functionality that does not interfere with what legitimate workers must do in the ordinary course of their regular work. (That is, such functionality would not depend on taking explicit steps related only to security and nothing else.) For example:

  • Individuals generally have multiple cyber-identities. For example, a person may have a dozen different log-in names to different systems, each of which demands its own password to access. Different identities often mean that the associated roles differ, for example, by machine, by user identities, by privilege, and so on. It is hard enough to remember different log-in names, which may be necessitated because the user’s preferred log-in name is already in use (the log-in name JohnSmith is almost certainly already in use in most large-scale systems, and any given John Smith may use JohnSmithAmex or JohnSmithCitibank or JohnSmithPhone as his log-in name, depending on the system he needs to access). But what about passwords? In order to minimize the cognitive load on the user, he or she will often use the same password for every site—and in particular will not tailor the strength of the password to the importance or the sensitivity of the site. Alternatively, users may plead for “single-sign-on” capability. Being required to present authentication credentials only once is certainly simpler for the user but is risky when different levels of trust or security are involved.

  • Individuals usually don’t know what they don’t know. A common approach to security is to hide objects from people who do not have explicit authorization to access them, and to make these objects visible to people who do. From a business-process standpoint, there is an important category that this approach to security does not recognize—individuals who should have explicit authorization for access but do not. Authorization is granted in one of two ways: the individual receives authority unbidden (e.g., a new hire is automatically granted access to his or her unit’s server), or the individual requests authorization from some authority, who then decides whether or not to grant it. The latter approach, most common when some ad hoc collaborative arrangement is made, presumes that the individual knows enough to request access. But if the necessary objects are hidden from the individual, how does he or she know that it is necessary to request access to those specific objects? Such issues often arise in environments dealing with classified information in which, because of secrecy and compartmentalization, one party does not know what information another party has.

  • Individuals function in a social and organizational context, and processes for determining access rights are inherently social and organizational. When necessary accesses are blocked in the name of security, individuals must often expend considerable effort in untangling the web of confusion that is the ultimate cause of the denial of access. Individuals with less aggressive personalities, or newly hired individuals who do not want to “make trouble,” may well be more reluctant to take such action—with the result that security policies and practices have kept employees from doing their work.

Addressing these social and organizational dimensions of security requires asking questions of a different sort than those that technologists usually ask. Technologists usually seek to develop solutions or applications that generalize across organizational settings—and users must adapt to the requirements of the technology. Focusing on the social and organizational dimension implies developing understandings of what end users are trying to accomplish, with whom, and in what settings. What is the organization trying to achieve? What are the day-to-day security practices of effective employees? What are the greatest security threats? What information must be protected? What workplace practices are functional and effective and should be preserved in security redesigns? These understandings then support and may even drive design innovation at the network, infrastructure, and applications interface levels.

A social and organizational understanding of security is based on posing questions at several distinct levels. In an organization, senior management determines security policy and establishes the nature and scope of its security concerns. But management also shapes a much larger social context that includes matters such as expectations for cooperative work, the nature of relationships between subordinates and superiors, and relationships between support and business units. At the same time, individuals and groups in the organization must interpret management-determined security concerns and implement management-determined policy—and these individuals and groups generally have considerable latitude in doing so. Most importantly, individuals and groups must get their primary work done, and security is by definition peripheral to the primary work of the organization. Thus, because there is often conflict, or at least tension, between security and getting work done, workers must make judgments about what risks are worth taking in order to get their work done and about how to bypass security measures if that is necessary to do so.

It is against this backdrop that the technology infrastructure must be assessed. At the applications and task levels, it is important to understand how data-sharing practices are managed and what interorganizational and intraorganizational information flows must be in place for people to work effectively with others. A key dimension of data-sharing practices is access privileges—how are they determined, and how is knowledge of these privileges promulgated? (This includes, of course, knowledge of the privileges themselves as well as of their settings.) Technology development assessed in this way implies not only good technology but also extensive tools that facilitate organizational customization and that help end users identify what needs to be communicated and to whom.

6.2
EXPLOITATION OF PREVIOUS WORK

There is a long history of advances in cybersecurity research that are not reflected in today’s practice and products. In many cases, the failure to adopt such advances is explained at least in part by a mismatch between market demands and the products making use of such research. For example, secure architectures often resulted in systems that were too slow, too costly, too late, and/or too hard to use. Nevertheless, the committee believes that some security innovations from the past are worth renewed attention today in light of a new underlying technological substrate with which to implement these innovations and a realization that inattention to nontechnical factors may have contributed to their nonuse (Section 3.4.1.4). These previous innovations include, but are not limited to, the following:

  • Virtual machine architectures that enable strict partitions and suitable isolation among different users, as discussed in Section 4.1.2.3 (Process Isolation);

  • Multilevel security and multilevel integrity that enable the simultaneous processing of information with different classification levels;

  • Capability architectures, which (with the exception of the System/38 and the AS/400, now the IBM iSeries) have not traditionally been successful but could prove to have valuable lessons to teach; and

  • Software engineering practices and programming tools for developing secure and reliable systems.

“Old” but unadopted innovations that solved real cybersecurity problems are often plausible as points of departure for new research that addresses these same problems. As an example of early work that may be relevant today, consider what might be called a “small-process/message-passing” model for computation, in which a user’s work is performed using multiple loci of control (generally called threads), which communicate with one another by means of signals and messages. Exemplified by Unix, this model has a demonstrated ability to optimize machine resources, especially processor utilization; while one thread may be blocked waiting on, say, disk access, other threads can be performing useful tasks.

The small-process/message-passing model does, however, have some disadvantages for security. A secure machine must map some set of external attributes, such as user identity, role, and/or clearance, into the internal workings of the machine and use those attributes to enforce limits on access to resources or invocation of services. The internal data structures used to enforce these limits are often called the “security state.” The security state of a small-process/message-passing structure is diffuse, dynamic, and spread throughout a large number of processes. Furthermore, its relationship to the hardware is tenuous. It is therefore hard to analyze and verify.

An alternative structure is the “large-process” model of computation, an example of which was Multics. In the large-process model, the work being done for a user is tied to a single locus of control, and the security state is mostly embodied in a hardware-enforced structure. This model relies on multiplexing between users to gain efficiency (as opposed to the small-process model, which multiplexes between threads working for a single user) and is efficient only when large numbers of users are sharing a single body of hardware, such as a server. From a security perspective, the advantage of the large-process structure is that the security features of the system are easier to understand, analyze, and verify.
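The structural difference can be caricatured in a short sketch. The classes, attributes, and checks below are invented purely for illustration and do not model Unix, Multics, or any real operating system; the contrast is only in where the security state lives and is checked.

```python
# Small-process / message-passing model: each thread of control carries its own
# copy of the user's security attributes, so the security state is diffuse.
class Worker:
    def __init__(self, user, clearance):
        self.user, self.clearance = user, clearance      # per-thread security state
    def handle(self, message):
        if message["label"] <= self.clearance:           # each worker checks locally
            return f"{self.user} processed {message['body']}"
        return "denied"

# Large-process model: a single reference monitor holds one security state for
# all access decisions, which is easier to inspect, analyze, and verify.
class ReferenceMonitor:
    def __init__(self):
        self.clearances = {}                              # centralized security state
    def register(self, user, clearance):
        self.clearances[user] = clearance
    def check(self, user, label):
        return label <= self.clearances.get(user, 0)

workers = [Worker("alice", 2), Worker("alice", 2)]        # state duplicated per worker
print(workers[0].handle({"label": 3, "body": "payroll"}))  # denied, decided locally
monitor = ReferenceMonitor()
monitor.register("alice", 2)
print(monitor.check("alice", 1))                           # True; one place to audit
```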

Because hardware resources are increasingly inexpensive, efficient use of hardware is no longer as important as it once was. Designs based on the need to use hardware efficiently have also had undesirable security consequences, and with the dropping cost of hardware, it may make sense to revisit some of those designs in certain circumstances. For example, the multicore processor (discussed briefly in Section 4.1.2.1) holds some promise for mitigating the performance penalties of the large-process model and for permitting its security and verification advantages to be exploited in certain contexts. Although a small-process/message-passing model is sensible for distributed computing (e.g., for Web services) in which administrative control is decentralized, the large-process model makes sense in applications such as a public utility or central server in which security requirements are under centralized administrative control.

6.3
CYBERSECURITY METRICS

Cybersecurity is a quality that has long resisted—and continues to resist—precise numerical classification. Today, there are few good ways to determine the efficacy or operational utility of any given security measure. Thus, individuals and companies are unable to make rational decisions about whether or not they have “done enough” with respect to cybersecurity. In the absence of good cybersecurity metrics, it is largely impossible to quantify cost-benefit trade-offs in implementing security features. Even worse, it is very difficult if not impossible to determine if System A is more secure than System B. Good metrics would also be one element supporting a more robust insurance market in cybersecurity founded on sound actuarial principles and knowledge.1

One view of security is that it is a binary and negative property—secure is simply defined as the opposite of being insecure. Under this “absolutist” model, it is easy to demonstrate the insecurity of a system via an effective attack, but demonstrating security requires proving that no effective attack exists. An additional complicating factor is that once an attacker finds a vulnerability, it must be assumed that such knowledge will propagate rapidly, thus enabling previously stymied attackers to launch successful attacks.

There are some limited domains in which this approach has been applied successfully, such as the proof of perfect secrecy in Shannon’s foundational work on ciphers2 and the proofs of safety properties guaranteed by the type systems of many modern programming languages. But on the whole, only relatively small programs, and certainly not systems of any complexity, can be evaluated to such a standard in their entirety.
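Shannon’s notion of perfect secrecy is one of the few security properties with a crisp formal definition; the statement below is the standard textbook formulation rather than anything quoted from the report.

```latex
% Perfect secrecy (Shannon, 1949): the ciphertext reveals nothing about the message.
% For every message m and every ciphertext c with \Pr[C=c] > 0,
\[
  \Pr[M = m \mid C = c] \;=\; \Pr[M = m],
  \qquad\text{equivalently}\qquad
  H(M \mid C) = H(M).
\]
% The one-time pad achieves this, but only with a truly random key at least as long
% as the message, which is why the result does not carry over to practical systems.
```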

If security is binary, then a system with any vulnerability is insecure—and metrics are not needed to indicate that one system is “more” secure than another. But this absolutist view has both theoretical and practical difficulties. One theoretical difficulty is that the difference between a secure and a vulnerable software artifact can be as small as one bit, and it is hard to imagine a measurement process sensitive enough to determine whether the artifact is or is not secure.

1. It is also helpful to distinguish between a metric (which measures some quantity or phenomenon in a reasonably repeatable way) and risk assessment (which generally involves an aggregation of metrics according to a model that provides some degree of predictive power). For example, in the financial industry, risk assessment depends on a number of metrics relevant to a person’s financial history (e.g., income, debt, number of years in the same residence, and so on).

2. Claude Shannon, “Communication Theory of Secrecy Systems,” Bell System Technical Journal, 28: 656-715, October 1949.

A practical difficulty is that static-code analysis—predicting the behavior of code without actually executing it—remains a daunting challenge in the general case. One aspect of the difficulty is that of determining if the code in question behaves in accordance with its specifications. Usually the domain of formal proofs of correctness, this approach presumes that the specifications themselves are correct—but in fact vulnerabilities are sometimes traced to incorrect or inappropriate specifications. Moreover, the size of systems amenable to such formal proofs remains small compared to the size of many systems in use today. A second aspect of this difficulty is in finding functionality that should not be present according to the specifications (as discussed in Section 4.1.3.1).

Outside the absolutist model, security is inherently a synthetic property—it no longer reflects some innate quality of the system, but rather how well a given system with a given set of security policies (Section 6.5) can resist the activities of a given adversary. Thus, the security of a system can be as much a property of the adversary being considered as it is of the system’s construction itself. That is, measuring the security of a system must be qualified by asking, Against what kind of threat? Under what circumstances? For what purpose? and Under what security policy?

In this context, a metric is not binary. It must be, at the very least, ordinal, so that it can be used to rank-order systems along some security-relevant dimension. In addition, the notion of a metric assumes that one or more outcomes of interest can be measured in an unambiguous way—that one can recognize a good outcome or a bad outcome when it occurs. Furthermore, it assumes that an improvement in the metric actually corresponds to an improvement in outcome.

Yet another complicating factor is that an adversary may offer unforeseen threats whose impact on the system cannot be anticipated or measured in advance. While the absolutist model—which depends a great deal on formal proof—presumes that all security properties can be specified a priori, in practice it is common that a system’s security requirements are not understood until well after its deployment (if even then!). Moreover, if a threat (or even a benign event) is unforeseen, a response tailored to that threat (or event) cannot be specified (although a highly general response, such as “Abort,” may be possible, and certain other responses may be known to be categorically undesirable).

For example, the cryptography community has had some success in formalizing the security of its systems. Proving cryptographic security calls for defining an abstract model of an adversary and then using reductions to prove that key security properties have computational hardness equivalent to that of certain well-known difficult problems. Thus, the strength of a cipher can be parameterized as a function of the adversary’s qualitative capabilities (e.g., the ability to inject known plaintext messages into the channel) and quantitative capabilities (e.g., the ability to perform N computations in time M). However, outside this rarified environment, real attackers bypass these limitations simply by working outside the model’s assumptions (e.g., side-channel attacks, protocol engineering interactions, and so on). And sometimes cryptographic primitives can fail, invalidating the model’s assumptions, as illustrated by recently discovered problems in the SHA-1 hash algorithm.3
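The parameterization just described is usually made concrete along the following lines; this is the standard “concrete security” formulation from the cryptographic literature, not a formula given in the report.

```latex
% A scheme is (t, q, \epsilon)-secure against a class of attacks if every adversary A
% that runs in time at most t and makes at most q queries (e.g., injects q chosen
% plaintexts into the channel) succeeds with advantage at most \epsilon:
\[
  \mathrm{Adv}(A) \;=\;
  \Bigl|\Pr[A \text{ wins the attack game}] - \tfrac{1}{2}\Bigr|
  \;\le\; \epsilon
  \quad\text{for all } A \text{ with } \mathrm{time}(A) \le t,\ \mathrm{queries}(A) \le q.
\]
% A reduction then shows that any adversary violating this bound could be converted
% into an algorithm for a problem believed to be computationally hard.
```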

Finally, the security of a system tends to be tightly coupled with the particulars of its configuration, which suggests that security can be a highly fragile property. The same software system may be considerably more secure under the care of one administrator than under the care of another.

These challenges suggest that the search for an overall cybersecurity metric—one that would be applicable to all systems and in all environments—is a largely fruitless quest. Rather, cybersecurity must be conceptualized in multidimensional terms, and metrics for cybersecurity must, for example, take into account the nature of the threat and how a system is operated in practice. Users and researchers thus must be clear about the limitations of a given metric (e.g., the metric only applies under the following set of assumptions) and/or create tests that anticipate various classes of adversaries.

Nevertheless, we have strong intuitions that some systems are in fact more secure than others. While security may always be too complex to submit to a precise analysis, it seems likely that even imperfect approaches may provide useful insights for evaluating current and future systems, provided that the necessary qualifiers are taken into account.

To date, most attempts to define security metrics have fallen into one of several broad categories. The first category is operational metrics. This approach, typified by the Security Metrics Guide for Information Technology Systems from the National Institute of Standards and Technology,4 focuses on measurements of the behavior of an IT organization. Thus, at the highest level of abstraction, one might measure the fraction of systems that have certain security controls in place, the number of systems operators with security accreditations, the number of organizational components with incident-response plans, and so on. Enterprise IT executives might also track outcome data (e.g., number of viruses detected inside the organization, number of intrusions from the outside) and control data (number of machines with antivirus software, number of services exposed through the firewall, and so on). Operational metrics can be valuable for tracking overall compliance with a security policy and trends in well-established problem classes, but they seem unlikely to be useful in providing finer-granularity insight into software security.

3. Xiaoyun Wang, Yiqun Lisa Yin, and Hongbo Yu, “Finding Collisions in the Full SHA-1,” Advances in Cryptology—Crypto’05; available at http://www.infosec.sdu.edu.cn/paper/sha1-crypto-auth-new-2-yao.pdf.

4. See http://csrc.nist.gov/publications/nistpubs/800-55/sp800-55.pdf.

Related to operational metrics are what might be called process metrics; these indicate the extent to which an organization follows some best practice or practices. An example of a process metric is the Capability Maturity Model (CMM), which is intended to measure the quality of an organization’s software development processes. In the CMM, organizations are measured from Level 1 (corresponding to a development process that is ad hoc and chaotic) to Level 5 (corresponding to a development process that is repeatable, well-defined, and institutionalized; managed with quantifiable objectives and minimal variation in performing tasks; and optimized to produce continuous process improvement).5 Process metrics must be correlated with outcome metrics in order to be regarded as successful, and the extent of such correlation is an open question today.

A second broad category of metrics is that of product evaluations. This approach focuses on a third-party evaluation process for products rather than for organizations. These evaluation processes are typically structured around certifications of product security that place a product’s security in a categorical ranking based on its passing certain process benchmarks. For example, the Common Criteria specify distinct Evaluation Assurance Levels, which require successfully passing a variety of test regimes ranging from functional system testing to formal design verification. Typically these certifications are based on some combination of software process measures (e.g., what design practices were used in the design of the software) and testing (e.g., validating that unacceptable test inputs are not accepted).

The strongest ratings may require a formal analysis of the security of a system’s design. However, there are real limits to such metrics for the security field. First, they are largely disconnected from the software artifact itself and can make few statements about the weaknesses of a particular implementation. Second, certification levels are sufficiently coarse that most products can be successfully evaluated only within the same narrow range. Finally, certification is human-intensive and thus can be very expensive and slow. Under the regime of the Orange Book, many software artifacts were no longer marketed or supported by their vendors by the time certification had been completed. Under the current Common Criteria regime, many small companies cannot afford to get their products certified, thus creating a potential bias that may inhibit a fully open market in secure and security products.

5. The original CMM for software is no longer supported by the Software Engineering Institute. In 2000, the SW-CMM was upgraded to CMMI® (Capability Maturity Model Integration).

A third category of metrics is post hoc, or outcome, metrics. This is the most data-rich category of security metrics because it is driven by post hoc analysis and characterization of discovered security vulnerabilities or active attacks. Examples of outcome metrics include the following:

  • The rate (number per unit time) of successful penetration attempts of a system when a given cybersecurity action is in place. In this example, a lower value is better (assuming that the threat environment remained the same) but is meaningful only for this particular cybersecurity measure.

  • The fraction of known vulnerabilities that a given cybersecurity measure eliminates or mitigates (Cowan’s relative vulnerability metric).6 In this example, a larger fraction is better, subject to the same qualifiers. (Note that any metric involving the tracking of vulnerabilities over time requires a list of standardized names for vulnerabilities and other information security exposures. Developing and maintaining such a list is the purpose of the MITRE Common Vulnerabilities and Exposures effort, which has enabled longitudinal vulnerability analyses and reduced confusion when communicating about particular problems.) The CERT Coordination Center (CERT/CC) also maintains vulnerability lists that provide a common vocabulary, data for classification, and so on.7

  • The time that it takes for a particular kind of worm (e.g., a scanner that chooses a target at random once it has been implanted) to infect a certain fraction of the vulnerable population of Internet sites. Defenses against this kind of worm can then be characterized in terms of their effect on this time (longer times would indicate defenses of greater effectiveness). Staniford et al. present a model of Internet worms that parameterizes worm outbreaks in terms of their spreading rate.8 Following the model of Moore et al.,9 defenses can then be evaluated quantitatively as the fraction of susceptible hosts that are protected over a given period of time for a given deployment. (A simple simulation in this spirit is sketched after this list.) In general, this approach is well suited only for evaluating the relative strength of security technologies, and then only when attacks can be abstracted and homogenized.

6. Crispin Cowan, “Relative Vulnerability: An Empirical Assurance Metric,” presentation at the Workshop on Measuring Assurance in Cyberspace, 44th IFIP Working Group, June 2003; available at http://www2.laas.fr/IFIPWG/Workshops&Meetings/44/.

7. For more information on the CERT Coordination Center, see http://www.cert.org/certcc.html.

  • The financial impact of security penetrations when losses are incurred. Firms cannot make reasonable investment decisions unless they understand the implicit and explicit impact of their security investment decisions. This is a challenging task, but currently the only data available are anecdotal, making the decision to invest difficult to evaluate and compare with other security/nonsecurity investment options.
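The worm-oriented metric in the list above lends itself to simple simulation. The sketch below is a toy model in the spirit of, but far simpler than, the Staniford et al. and Moore et al. analyses cited above: a random-scanning worm spreads over a flat address space, and the metric is the number of time steps needed to reach a given fraction of the vulnerable hosts, with and without a defense that protects some fraction of them. All parameters are invented for illustration.

```python
import random

def time_to_infect(address_space=100_000, vulnerable=10_000, scans_per_step=5,
                   target_fraction=0.5, protected_fraction=0.0, seed=1):
    """Steps until a random-scanning worm reaches target_fraction of the
    unprotected vulnerable hosts."""
    random.seed(seed)
    hosts = range(vulnerable)                     # vulnerable hosts occupy the low addresses
    protected = set(random.sample(hosts, int(protected_fraction * vulnerable)))
    susceptible = vulnerable - len(protected)
    infected = {next(h for h in hosts if h not in protected)}   # one initial infection
    step = 0
    while len(infected) < target_fraction * susceptible:
        step += 1
        for _ in range(len(infected) * scans_per_step):   # each infected host scans
            target = random.randrange(address_space)      # a random address
            if target < vulnerable and target not in protected:
                infected.add(target)                      # unprotected hit: new infection
    return step

print("baseline:", time_to_infect(), "steps")
print("30% of hosts protected:", time_to_infect(protected_fraction=0.3), "steps")
```

A defense that protects a larger fraction of hosts lengthens the time to reach the target fraction, which is the quantitative effect the metric is meant to capture.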

Software vulnerabilities are widely reported on public mailing lists and archived in both public and private databases (the National Vulnerability Database is one well-known collection). Each vulnerability is typically tagged with the particular systems impacted and the source of the vulnerability. Attack data are typically gathered from intrusion-detection system logs and from honeypot systems designed to detect new attacks (Symantec’s DeepSight and DShield.org are well-known examples of attack-monitoring systems).

Such data can be used in a number of ways:

  • Relative assessments based on counts. Different systems or versions of systems may be compared on the basis of the number of vulnerabilities or attacks they experienced. This is one of the most problematic uses of post hoc data, since it presumes that the vulnerability-discovery process and the target-selection process are random and uniform. In fact, neither is likely to be true. Particular systems are likely to be targeted more than others owing to popularity (i.e., because the system provides a wider base to attack), owing to familiarity (i.e., there are fewer people with knowledge of unusual systems), or owing to the particular goals of the attacker (i.e., its intended victim makes extensive use of a particular system). Similarly, vulnerability discovery is driven by the same motives as those of attackers, as well as by an additional bias from third-party security assessment companies that actively search for new vulnerabilities to enhance their offerings and marketing to potential customers.

8. Stuart Staniford et al., “The Top Speed of Flash Worms,” presented at the ACM Workshop on Rapid Malcode (WORM), October 29, 2004, Washington, D.C.; available at www.icir.org/vern/papers/topspeed-worm04.pdf.

9. David Moore, Colleen Shannon, and k claffy, “Code Red: A Case Study on the Spread and Victims of an Internet Worm,” pp. 273-284 in Proceedings of the 2nd ACM SIGCOMM Workshop on Internet Measurement, ACM Press, New York, 2002.

  • Vulnerability origin studies. Rescorla first synchronized vulnerability data with particular software versions to analyze the time origin of vulnerabilities in popular open-source operating systems and their “lifetime” distribution.10 Ozment and Schechter provided a more detailed analysis showing that, at least for the OpenBSD system, most newly discovered vulnerabilities are not in “new” code and have existed for long periods of time.11 Moreover, they attempt to use reliability growth models to infer changes in the rate of new vulnerabilities being introduced and in the rate of overall vulnerabilities being discovered. While these techniques are necessarily limited (they are inherently “right-censored,” since the future is unknown), they suggest a mechanism to identify real trends. This is a nascent area, and there is little doubt that it could be extended to the analysis of particular subsystems, changes in software process, and so on.

  • Defense evaluation studies. Cowan has argued for using future vulnerability data as a mechanism for evaluating defense approaches.12 His “relative vulnerability” metric would thus provide a means for comparing different hardening approaches, based on the fraction of subsequent vulnerabilities that were blocked. While this approach cannot predict the impact of completely new attacks, it seems well posed to measure the breadth of defenses intended to address particular classes of vulnerabilities. At the same time, there is a natural symbiosis between attacker and defender, and thus popular defenses will be more likely to induce the creation of attacks that work around them.

  • Reactivity. Moore et al. first used attack data to infer the rate at which administrators patched systems that were vulnerable to the Code Red v2 worm.13 Rescorla used a more sophisticated version of this analysis to examine patching behavior for vulnerabilities in popular implementations of the Secure Sockets Layer (SSL) protocol. Finally, Beattie et al. used patch update data to extrapolate an optimal time to patch (for the purpose of maximizing availability) based on honeypot measures of attack incidence.14 In general, the effects of software maintenance on security are understudied (indeed, Rescorla argues that patches can harm security15), and yet considerable empirical data are available on this topic.

10. E. Rescorla, “Is Finding Security Holes a Good Idea?,” presentation at the Workshop on Economics and Information Security 2004, May 2004; available at http://www.dtc.umn.edu/weis2004/rescorla.pdf.

11. Andy Ozment and Stuart E. Schechter, “Milk or Wine: Does Software Security Improve with Age?,” USENIX Security 2006, 2006; available at http://www.eecs.harvard.edu/~stuart/papers/usenix06.pdf.

12. Crispin Cowan, “Relative Vulnerability: An Empirical Assurance Metric,” presentation at the Workshop on Measuring Assurance in Cyberspace, 44th IFIP Working Group, June 2003; available at http://www2.laas.fr/IFIPWG/Workshops&Meetings/44/.

13. David Moore, Colleen Shannon, and k claffy, “Code Red: A Case Study on the Spread and Victims of an Internet Worm,” pp. 273-284 in Proceedings of the 2nd ACM SIGCOMM Workshop on Internet Measurement, ACM Press, New York, 2002.

  • Threat assessments. Different vulnerabilities engender different risks. In particular, some vulnerabilities are easier to exploit than others, some have more significant consequences, some transition more quickly into attacks in the wild, and some persist for longer periods of time. Today, threat assessments are largely performed on an ad hoc basis, but there is reason to hope that at least some of this activity could be automated and made more objective.

Finally, there are predictive metrics that measure something intrinsic about a given information technology artifact and that are intended to provide an a priori indication of how secure a system is before it is deployed. An example is vulnerability testing/checking metrics that have emerged from recent work enabling the automated detection of classes of security vulnerabilities in software. As opposed to manual penetration testing, automated methods are by design tester-independent and repeatable. Static-analysis approaches include the detection techniques of Wagner et al. for buffer overflows and format string vulnerabilities16 and the automated analysis and model checking of whole operating system kernels of Engler et al.17 While these techniques are neither complete nor accurate (they produce false positives), they are able to consume large software systems and identify potential security vulnerabilities. Some systems, exemplified by Ganapathy et al., are even able to analyze binary programs, discover new vulnerabilities, and identify precise test cases (i.e., exploits).18 Dynamic testing, via fuzz testers, can manipulate both input and environment to test corner cases known to be a source of security vulnerabilities in the past.

14. Steve Beattie et al., “Timing the Application of Security Patches for Optimal Uptime,” USENIX LISA, 2002; available at http://www.homeport.org/~adam/time-to-patch-usenix-lisa02.pdf.

15. E. Rescorla, “Is Finding Security Holes a Good Idea?,” presentation at the Workshop on Economics and Information Security 2004, May 2004; available at http://www.dtc.umn.edu/weis2004/rescorla.pdf.

16. David Wagner, Jeffrey S. Foster, Eric A. Brewer, and Alexander Aiken, “A First Step Towards Automated Detection of Buffer Overrun Vulnerabilities,” Network and Distributed System Security 2000; available at http://www.cs.berkeley.edu/~daw/papers/overruns-ndss00.ps.

17. Dawson Engler, David Yu Chen, Seth Hallem, Andy Chou, and Benjamin Chelf, “Bugs as Deviant Behavior: A General Approach to Inferring Errors in Systems Code,” in Proceedings of the Eighteenth ACM Symposium on Operating Systems Principles, 2001; available at http://www.stanford.edu/~engler/deviant-sosp-01.pdf.

The most sophisticated of these systems can also triage their own output and determine which vulnerabilities are most likely to be exploitable. Using such tools, one can compare this aspect of security across different versions of a software system and evaluate trends in how new detectable vulnerabilities emerge. However, if successful, these tools will become less useful over time as they are introduced into the normal quality-assurance process and the vulnerabilities that they detect are weeded out before deployment.
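The dynamic (fuzz) testing mentioned above can be illustrated with a toy example. The parser under test and the fuzzing loop below are invented for this sketch and are far simpler than production fuzz-testing tools; the point is only that random, repeatable input generation surfaces crashes that a fixed test suite may miss.

```python
import random, string

def parse_record(data: str) -> dict:
    """Toy parser with latent bugs: empty input and non-numeric versions crash it."""
    if data[0] == "#":                              # comment line; fails on empty input
        return {}
    name, version, comment = data.split(";", 2)     # fails without two semicolons
    return {"name": name, "version": int(version), "comment": comment}

def fuzz(target, trials=10_000, max_len=40, seed=0):
    """Feed random inputs to target; record one example input per crash type."""
    random.seed(seed)
    crashes = {}
    for _ in range(trials):
        data = "".join(random.choice(string.printable)
                       for _ in range(random.randrange(max_len)))
        try:
            target(data)
        except Exception as exc:                    # any unhandled exception is a finding
            crashes.setdefault(type(exc).__name__, data)
    return crashes

for kind, example in fuzz(parse_record).items():
    print(kind, "triggered by", repr(example))
```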

A similar methodology is possible for detecting confidentiality violations, using static information flow analysis and dynamic taint checking; however, this particular approach has not been explored as a security metric per se (although Garfinkel uses one such technique to demonstrate the presence of information leakage in a commodity operating system19).

Another type of predictive metric addresses the attackability of a system. Howard and LeBlanc developed the notion of an attack surface,20 which is defined in terms of the externally visible and accessible system resources that can be used to mount an attack on the system, weighted according to the potential damage that could be caused by any given exploitation of a vulnerability. Larger attack surfaces indicate a larger extent of potential vulnerability, and vulnerabilities in a system can be reduced by reducing the attack surface. Attack surface measures potential rather than actual aggregate vulnerability. The presumption, supported in part with post hoc data, is that smaller attack surfaces are likely to host fewer exploitable vulnerabilities and will be easier to secure. While Howard and LeBlanc measure the number of potential “attack vectors” in a given system and configuration, Manadhata and Wing have formalized “attack surface” without reference to Howard and LeBlanc’s attack vectors.21 The attack-surface metric appears to have promise, but as yet it remains largely a manual enterprise.22
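A crude, purely illustrative version of such a count is sketched below. The resource categories and weights are hypothetical choices made for this example and should not be read as Howard and LeBlanc’s or Manadhata and Wing’s actual definitions.

```python
# Count externally reachable resources, weighted by a rough notion of how much
# damage their compromise could cause. Categories and weights are hypothetical.
WEIGHTS = {"open_port": 1.0, "service_as_root": 3.0, "service_unprivileged": 1.5,
           "enabled_default_account": 2.0, "world_writable_share": 2.5}

def attack_surface(config: dict) -> float:
    """Sum of weighted counts of externally visible resources in a configuration."""
    return sum(WEIGHTS[kind] * count for kind, count in config.items())

before = {"open_port": 12, "service_as_root": 3, "service_unprivileged": 4,
          "enabled_default_account": 2, "world_writable_share": 1}
after  = {"open_port": 4, "service_as_root": 1, "service_unprivileged": 4,
          "enabled_default_account": 0, "world_writable_share": 0}   # hardened

print("before hardening:", attack_surface(before))
print("after hardening: ", attack_surface(after))
```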

18. Vinod Ganapathy et al., “Automatic Discovery of API-Level Exploits,” in Proceedings of the 27th International Conference on Software Engineering, St. Louis, Mo., pp. 312-321, 2005.

19. Simson L. Garfinkel, Information Leakage and Computer Forensics, Center for Research on Computation and Society, Harvard University, February 17, 2006.

20. Michael Howard and David LeBlanc, Writing Secure Code, Second Edition, Microsoft Press, Seattle, Wash., 2002.

21. P. Manadhata and J.M. Wing, An Attack Surface Metric, CMU-CS-05-155, Technical Report, Pittsburgh, Pa., July 2005.

22. Manadhata and Wing also have made progress on a more semi-automated process for analyzing source code. See P.K. Manadhata, J.M. Wing, M.A. Flynn, and M.A. McQueen, “Measuring the Attack Surfaces of Two FTP Daemons,” Quality of Protection Workshop, Alexandria, Va., October 30, 2006.


Research to further develop the types of metrics described above is needed. Outcome metrics would have high utility for characterizing the impact of some cybersecurity measures, whether technical or procedural. Because predictive metrics seek to characterize artifacts themselves, they would facilitate comparative assessments among different software options and configurations. Generalizing across these different types of metrics, the committee believes that some of the most promising lines of research involve the simultaneous use of different combinations of metrics.

For example, an automated analysis of attack-surface metrics might be designed so that the resulting data could direct vulnerability testing, or post hoc metrics might be used to create quantitatively driven threat assessments. In addition, it would be enormously valuable if metrics useful for understanding security behavior and phenomena in detail could be composed into metrics relevant to aspects of overall system behavior. Today, little is known about how to combine metrics of detailed behavior into metrics of larger scope, and research will be needed to advance this goal. Finally, metrics ought to be subject to a continuing validation process in which various metrics are assessed against incidents as they become known, in order to determine what such metrics might predict about the character of such incidents.

A note of caution is also appropriate in the search for cybersecurity metrics. Researchers have sought good metrics for many years, and though many benefits would flow from the invention of good metrics, the challenge in this cybersecurity research area is particularly great, and some very new ideas will be needed if cybersecurity metricians are to make more progress.

6.4
THE ECONOMICS OF CYBERSECURITY

This section provides an economic perspective on why cybersecurity is hard and on why (if at all) there is underinvestment in cybersecurity.23 Determining the right amount to spend on information security activities in total is linked to efficiently allocating such resources to specific organizational IT activities. For example, organizations need to determine how much to spend on hardware, software, staffing, and personnel training.

23. Ross Anderson, “Why Information Security Is Hard—An Economic Perspective,” Proceedings of the 17th Annual Computer Security Applications Conference, IEEE Computer Society, New Orleans, La., 2001, pp. 358-365.


The committee believes that insight into many problems of cybersecurity can be gained by exploiting the perspective of economics: network externalities, asymmetric information, moral hazard, adverse selection, liability dumping, risk dumping, regulatory frameworks, and the tragedy of the commons.24 As this list implies, the economic barriers to improving cybersecurity are extensive and nontrivial. These factors can produce market failure—a less-than-optimal allocation of resources—and, taken together, they create a complex and interrelated set of perverse incentives for economic actors. They go a long way toward explaining why, beyond any technical solutions, the provision of cybersecurity is and will remain a hard problem to ameliorate—one requiring research and policy solutions that go beyond funding technology research.

In contrast to the large body of technical research on cybersecurity, research related to the economics of cybersecurity is still nascent.25 However, a small but growing body of literature is beginning to provide insights into the necessary elements of the economic analysis essential for addressing policy aspects of cybersecurity. For example, Alderson and Soo Hoo note that most of the public policy initiatives to address the safety and security of the U.S. national information infrastructure have ignored the stakeholder incentives to adopt and to spur the development of security technologies and processes. They suggest that continuing insecurities in cyberspace are in large part the direct result of a public policy failure to recognize and address those incentives and the technological, economic, social, and legal factors underlying them, and argue that the deployment of a more secure cyber infrastructure could be accelerated by careful consideration of stakeholder incentives.26 Solutions that emerge from such research are likely to be subtle and partial, requiring the cooperation and coordination of technology researchers, engineers, economists, lawyers, and policy makers. Any combination of solutions needs to incorporate a fundamental principle of economic analysis: assign responsibility to parties in proportion to their capabilities for managing the risk.27

24

Ross Anderson, “Why Information Security Is Hard—An Economic Perspective,” Proceedings of the 17th Annual Computer Security Applications Conference, IEEE Computer Society, New Orleans, La., 2001, pp. 358-365.

25

Lawrence A. Gordon and Martin P. Loeb, “Budgeting Process for Information Security Expenditures,” Communications of the ACM, January 2006, Vol. 49, No. 1, p. 121.

26

David Alderson and Kevin Soo Hoo, “The Role of Economic Incentives in Securing Cyberspace”; available at http://iis-db.stanford.edu/pubs/20765/alderson-soo_hoo-CISAC-rpt_1.pdf.

27

Hal Varian, “Managing Online Security Risks,” Economic Science Column, New York Times, June 1, 2000; Ross Anderson and Tyler Moore, “The Economics of Information Security,” Science, 314(5799): 610-613, October 27, 2006.

6.4.1
Conflicting Interests and Incentives Among the Actors in Cybersecurity

There are a number of different actors whose decisions affect the cybersecurity posture of the nation and various entities within the nation: technology vendors, technology service providers, consumers, firms, law enforcement, the intelligence community, attackers, and governments (both as technology users and as guardians of the larger social good). Each of these actors gets plenty of blame for being the “problem”: if technology vendors would just properly engineer their products, if end users would just use the technology available to them and learn safe behavior, if companies would just invest more in cybersecurity or take it more seriously, if law enforcement would just pursue attackers more aggressively, if policy makers would just do a better job of regulation or legislation, if attackers could just be deterred from launching attacks….

There is some truth to such statements, and yet merely to state them does not advance the cause of better understanding and solutions. In particular, knowing why various actors behave as they do is the first step toward changing their behavior. Indeed, one could easily argue that from an economic perspective, each of these actors is behaving largely as might be anticipated on the basis of their interests and incentives and that the reasons underlying their behavior are perfectly reasonable from an economic standpoint, despite the negative impacts on cybersecurity.28

Consider first the incentives of the attacker. Partly because the incentive structure of the attacker is undesirable from a societal perspective and partly because there is clear moral high ground in going after the bad guy, most regulatory and legislative activity has thus far focused on changing the incentive structure of the attacker to make it more dangerous to conduct an attack.29 For example, laws have been passed criminalizing certain kinds of activity and increasing the penalties for such activity. Rewards have been offered for information leading to the arrest and conviction of cyberattackers. On the other hand, jurisdictional issues and the anonymity offered by the intrinsically international nature of cyberspace have served to prevent or at least to greatly impede and increase the cost of identifying and prosecuting cyberattackers. In other words, in practice,

28

See, for instance, Hal Varian, “Managing Online Security Risks,” Economic Science Column, New York Times, June 1, 2000; Alfredo Garcia and Barry Horowitz, “The Potential for Underinvestment in Internet Security: Implications for Regulatory Policy,” Journal of Regulatory Economics, 31(1): 37-55, February 2007, available at http://ssrn.com/abstract=889071; Tyler Moore, “The Economics of Digital Forensics,” presented at the Fifth Annual Workshop on the Economics of Information Security, June 26-28, 2006, Cambridge, England; Ross Anderson and Tyler Moore, “The Economics of Information Security,” Science, 314(5799): 610-613, October 27, 2006.

29

Douglas A. Barnes, “Deworming the Internet,” Texas Law Review, 83: 279-329, 2004.


the disincentives for an attacker are minimal, since the likelihood of punishment for an attack is quite low.

The attacker’s incentives are part of a larger underground economy. Broadly speaking, the actors in this economy are those selling attack services (e.g., use of a botnet, stolen credit card numbers); those with the direct malevolent intent paying to use those services (e.g., those who wish to conduct a denial-of-service attack on a site for extortion purposes, those who wish to commit actual fraud); and the victims of the resulting cyberattacks (e.g., the operators of the Web site being attacked, those whose credit card numbers are used for fraudulent purposes [or the banks that absorb the fraudulent charges]).

The existence of this economy makes manifest a decoupling between adversarial or criminal intent and the expertise needed to follow through on that intent, thus expanding enormously the universe of possible malefactors. In other words, attack services (e.g., botnets as described in Box 2.3 in Chapter 2) can be regarded as an economic commodity. For example, if someone needs a botnet for some purpose, that party can obtain the use of a botnet in the appropriate market.

Insight into the underground cyber-economy of attackers potentially yields pressure points on which to focus security efforts. For example, the sellers of attack services must publicize the availability of their services in an appropriate marketplace, and it may be possible to target the sellers themselves. It may also be possible to interfere with the operation of the marketplace itself, by shutting down the various marketplace venues or by poisoning them so that buyers and sellers cannot trust each other.

In addition, many of the constraints on digital forensics practices, essential to law enforcement, are due to conflicting incentives of technology vendors, service providers, consumers, and law enforcement.30 For example, technology vendors have economic incentives to differentiate their products by making them proprietary—but in a regime in which there are many proprietary products on the market, law enforcement officials must have at the ready a range of forensic tools that together can operate on a wide range of products embedding multiple standards.

Technology vendors have significant financial incentives to gain a first-mover or a first-to-market advantage. They are driven by important features of the information technology market: the number of other people using a product, the high fixed costs and low marginal costs, and the cost to customers of switching to another product (i.e., lock-in).31

30

Tyler Moore, “The Economics of Digital Forensics,” Fifth Annual Workshop on the Economics of Information Security, University of Cambridge, England, June 26-28, 2006.

31

Carl Shapiro and Hal R. Varian, Information Rules: A Strategic Guide to the Network Economy, Harvard Business School Press, Boston, Mass., 1998.


These network effects have significant consequences for engineering an information system for security.32 Time-to-market—a key dimension of competitiveness in the industry—is adversely affected when vendors must pay attention to functionality or system characteristics that customers do not demand and that are therefore, from the vendor’s perspective, superfluous. The logic of getting to market quickly thus runs counter to adding security features, which add complexity, time, and cost in design and testing while being hard for customers to value. In addition, there is often an operational tension between security and other functionality that customers do demand explicitly, such as ease of use, interoperability, and backward compatibility—consider, for example, security measures that make it difficult or cumbersome to respond quickly in an emergency situation.

Information technology purchasers (whether individuals or firms) largely make product choices based on features, ease of use, performance, and dominance in a market,33 although in recent years the criteria for product selection have broadened to include security to some extent in some business domains. But even to the extent that consumers do consider security, an information asymmetry makes it difficult or impossible for them to distinguish products that are secure from ones that are not. This leads to the “market for lemons” problem described by Akerlof34—buyers are unwilling to pay for something (in this case security) that they cannot measure, leading vendors to avoid extra costs that they cannot recover.

Evaluation systems, such as the Common Criteria, have been attempts to remedy this problem. The Common Criteria and the European Information Technology Security Evaluation Criteria (ITSEC) require evaluations to be paid for by the vendor seeking evaluation. This introduces a perverse incentive: vendors can shop around for an evaluation contractor with whom a “sweetheart deal” can be negotiated, leading to the potential for suspect certifications.35 Certification systems may even have the perverse effect of encouraging those most motivated to transfer liability,

32

Ross Anderson, “Why Information Security Is Hard—An Economic Perspective,” Proceedings of the 17th Annual Computer Security Applications Conference, IEEE Computer Society, 2001, p. 359.

33

Ross Anderson and Tyler Moore, “The Economics of Information Security,” Science, 314 (5799): 610-613, October 27, 2006.

34

George A. Akerlof, “The Market for ‘Lemons’: Quality, Uncertainty and the Market Mechanism,” Quarterly Journal of Economics, 84: 488-500, 1970.

35

Ross Anderson, “Why Information Security Is Hard—An Economic Perspective,” Proceedings of the 17th Annual Computer Security Applications Conference, IEEE Computer Society, 2001, pp. 358-365. Note that while the meaning of such a certification from a technical perspective may also be suspect, it is beside the point made here.


meet “due diligence” requirements, and take advantage of naïve customers to seek certification. Mechanisms such as online “trust” certifications, meant to help users determine the safety of their online activities, appear to result in adverse selection that undermines that safety: untrustworthy sites are significantly more likely than trustworthy ones to seek certification.36

End users improve their own cybersecurity postures when they act to protect their systems, for instance by maintaining antivirus software. But if the tasks required to protect a system are complex or costly and the user’s own risk of a security compromise is minimal, that user has little motivation to spend time or money preventing others from using the system for nefarious purposes (e.g., as part of a botnet). For example, universities with relatively unprotected networks were used to attack major commercial Web sites but bore only a small share of the cost (as a nuisance in lost performance).37 While “concentrated-benefit” users, such as large commercial Web sites, may suffer serious loss, the harm to ordinary users is diffuse and outweighed by the costs required to take mitigating action.38 These cases can be recognized as instances of the classic “tragedy of the commons” problem.39
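A small numerical illustration of this commons logic follows; all figures are invented for exposition and are not drawn from the report or from any empirical study:

```python
# Hypothetical figures illustrating the "tragedy of the commons" logic in the
# text: each user's private benefit from securing a machine is smaller than the
# cost of securing it, even though the aggregate (largely external) harm far
# exceeds the aggregate cost of protection.

num_users = 100_000
cost_to_secure_per_user = 50.0           # time/money for patching, antivirus, etc.
private_expected_loss_per_user = 5.0     # nuisance cost if the machine is conscripted
external_harm_per_compromised_machine = 500.0  # harm imposed on third-party victims

# Rational individual: securing costs more than it privately saves.
print(cost_to_secure_per_user > private_expected_loss_per_user)   # True -> do not secure

# Society as a whole: total harm dwarfs the total cost of securing every machine.
total_cost_of_protection = num_users * cost_to_secure_per_user
total_expected_harm = num_users * (private_expected_loss_per_user
                                   + external_harm_per_compromised_machine)
print(total_expected_harm > total_cost_of_protection)             # True -> underinvestment
```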

Furthermore, from the standpoint of operators, the benefits of successful security can be seen only in events that do not happen, so it is easy to regard resources devoted to security as “wasted.” The issue of spending money on insurance premiums is similar, but for the conventional losses against which insurance usually protects, there are at least reasonable risk metrics that make quantitative decisions about insurance spending possible.

Research is needed to accurately characterize the scope and nature of the incentives of these various actors. In addition, understanding the relationships among these actors—that is, the market—is key to finding ways to intervene in the market in order to shape the behavior of its actors.

6.4.2
Risk Assessment in Cybersecurity

Even if the incentive structures for the various actors could be changed, issues of how much to invest in security and what to invest in

36

Benjamin Edelman, “Adverse Selection in Online ‘Trust’ Certifications,” Harvard University, Cambridge, Mass., 2006, draft working paper, available at http://www.benedelman.org/publications/advsel-trust-draft.pdf.

37

Hal Varian, “Managing Online Security Risks,” Economic Science Column, New York Times, June 1, 2000.

38

Douglas A. Barnes, “Deworming the Internet,” Texas Law Review, 83: 279-329, 2004.

39

Ross Anderson, “Why Information Security Is Hard—An Economic Perspective,” Proceedings of the 17th Annual Computer Security Applications Conference, IEEE Computer Society, 2001, pp. 358-365.


would remain. Addressing these issues requires the ability to make sound investments in cybersecurity based on reasonable assessments of the risks.

Individuals and companies do spend large amounts on security. Roughly $100 billion is spent annually on IT security worldwide.40 But there are few ways to know how much is enough, and indeed technology solutions can create a false sense of security.41 Some models for determining appropriate levels of investment at the firm level have been developed,42 but budgeting for IT security is often driven by such things as the past year’s budget, best industry practices, and a list of must-do items rather than by sound economic principles. While cost-benefit approaches appear to be useful for determining appropriate levels of investment, they are predicated on an ability to estimate benefits, which in turn requires understanding the risk profile.43 Firms also excessively discount future costs (see the discussion of behavioral economics later in this section) and costs borne by others (Section 6.4.3), and to the extent that they optimize their operations and investments at all, they do so on a narrow and short-term basis.

Rational investment in cybersecurity depends on being able to assess the risks of cyberattack and the benefits of countermeasures taken to defend against such attacks. Section 6.3 addresses the difficulties in assessing the benefits of cybersecurity measures. But assessing risks is also a difficult challenge, especially in a risk environment inhabited at least partly by low-probability, high-impact events. Attempts to construct a business case for cybersecurity often founder because of the unavailability of actuarial data that might help predict in quantitative terms the likelihood of a specific type of attack; moreover, as discussed below, attacks can change on a short timescale, which reduces the utility of such data. In general, the data that are available are not specific enough to drive organizational change, since victims of

40

Kenneth Cukier, “Protecting Our Future: Shaping Public-Private Cooperation to Secure Critical Information Infrastructures,” The Rueschlikon Conference Report of a Roundtable of Experts and Policy Makers, Washington, D.C., May 2006, p. 12.

41

Kenneth Cukier, “Protecting Our Future: Shaping Public-Private Cooperation to Secure Critical Information Infrastructures,” The Rueschlikon Conference Report of a Roundtable of Experts and Policy Makers, Washington, D.C., May 2006, p. 12.

42

See for instance, Lawrence A. Gordon and Martin P. Loeb, “The Economics of Information Security Investment,” ACM Transactions on Information and Systems Security, 5(4, November): 438-457, 2002; Soumyo D. Moitra and Suresh L. Konda, The Survivability of Network Systems: An Empirical Analysis, CMU/SEI-2000-TR-021, ESC-TR-2000-021, December 2000, available at http://www.cert.org/archive/pdf/00tr021.pdf.

43

Lawrence A. Gordon and Martin P. Loeb, “Budgeting Process for Information Security Expenditures,” Communications of the ACM, 49(1, January): 121, 2006. See also, Kenneth Cukier, “Ensuring (and Insuring?) Critical Information Infrastructure Protection,” A Report of the 2005 Rueschlikon Conference on Information Policy, Switzerland, June 16-18, 2005, p. 7.


various attacks are usually quite reluctant to share information about them, concerned about drawing public attention to limitations or deficiencies in their security posture and/or being placed at a subsequent competitive disadvantage in the marketplace.

A major impediment in data collection is the reluctance on the part of owners and operators of IT to collect and share the data necessary for companies to know their risk or for the insurance industry to create a viable market.44 Indeed, firms have good reasons to avoid disclosing breaches. While economic consequences vary, firms can suffer significant costs.45 Potential negative impacts from public disclosures of information security breaches include lost market value and competitive disadvantage. That is, if one company releases information about an incident and other companies do not release information about their own incidents, the releasing company may well be disadvantaged by its candor in the marketplace as its competitors call attention to its failings. Firms also fear legal liability and government fines. Indeed, Gordon et al. argue that, absent appropriate economic incentives, it is in a firm’s self-interest to renege on previously agreed-on arrangements to share cybersecurity-related information, even though information sharing among a group of firms lowers the cost of each firm’s attaining any given level of information security and thus yields potential benefits both for individual firms and for society at large.46

Thus, one research topic suggested by the above discussion is the development of incentives that would promote greater information sharing. Possible incentives that warrant research include public subsidies to information-sharing firms, scaled according to the level of information sharing that takes place; government-subsidized insurance; and other forms of government regulation. Research would entail an examination of how such incentives should be constructed and evaluated and how to prevent the creation of perverse economic incentives that actually discourage information sharing and/or better cybersecurity.
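The free-riding problem identified by Gordon et al. can be illustrated with a toy two-firm payoff table; the payoffs and the size of the hypothetical subsidy below are invented purely to show the structure of the incentive problem, not to model any real firms:

```python
# Toy two-firm payoff table for the information-sharing problem described by
# Gordon et al. All payoffs (higher is better) and the subsidy threshold are
# invented to show the structure of the incentive, not to model real firms.

payoffs = {
    # (firm_A_action, firm_B_action): (payoff_to_A, payoff_to_B)
    ("share",  "share"):  (8, 8),
    ("share",  "renege"): (2, 10),   # A bears disclosure risk, B free rides
    ("renege", "share"):  (10, 2),
    ("renege", "renege"): (4, 4),
}

ACTIONS = ("share", "renege")

def best_response_for_A(b_action: str) -> str:
    """Firm A's best action given what firm B does."""
    return max(ACTIONS, key=lambda a: payoffs[(a, b_action)][0])

# Whatever B does, A's best response is to renege (and symmetrically for B),
# so mutual sharing is not self-enforcing without an outside incentive.
print(best_response_for_A("share"))    # renege
print(best_response_for_A("renege"))   # renege

# In this toy table, a per-firm sharing subsidy greater than 2 would make
# "share" a best response in both cases -- the kind of incentive design the
# research described above would need to construct and evaluate.
```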

44

Kenneth Cukier, “Ensuring (and Insuring?) Critical Information Infrastructure Protection,” A Report of the 2005 Rueschlikon Conference on Information Policy, Switzerland, June 16-18, 2005, p. 22.

45

Katherine Campbell, Lawrence A. Gordon, Martin P. Loeb, and Lei Zhou, “The Economic Cost of Publicly Announced Information Security Breaches: Empirical Evidence from the Stock Market,” Journal of Computer Security, 11: 431-448, 2003. This paper examines just one element of potential costs—stock market valuation.

46

L.A. Gordon, M.P. Loeb, and W. Lucyshyn, “Sharing Information on Computer Systems Security: An Economic Analysis,” Journal of Accounting and Public Policy, 22(6): 461-485, 2003.


An example of a quantitative cost-benefit analysis was offered by Wei et al. in 2001.47 Wei and his colleagues developed a methodology, built a model based on cost factors associated with various intrusion categories, and applied the model to investigating the costs and benefits of deploying and using a cooperative intrusion-detection system known as Hummer. The model addressed questions such as “What is the cost of not detecting an intrusion?” and “What does it cost to detect an intrusion?” To address the all-important question of likelihood, Wei et al. used empirical data relating to the frequency with which different categories of intrusion occurred in order to calculate the annual loss expectancy (ALE) (that is, an attack’s damage multiplied by its empirically estimated frequency in 365 days of system operation). If a security mechanism prevents a certain kind of attack with probability p, the loss thereby avoided is p times ALE. The net benefit is calculated by subtracting security investment from the sum of all avoided losses over the operational lifetime of the security mechanism installed.
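A minimal sketch of the ALE arithmetic described above is shown below; the attack categories, damage figures, frequencies, and prevention probabilities are hypothetical, and only the form of the calculation follows the Wei et al. approach:

```python
# A minimal sketch of the ALE-style calculation described above. The attack
# categories, damage figures, annual frequencies, and prevention probabilities
# are hypothetical; only the arithmetic mirrors the approach of Wei et al.

attacks = [
    # (name, damage_per_incident, expected_incidents_per_year, p_prevented)
    ("denial_of_service",  40_000.0, 2.0, 0.60),
    ("account_compromise", 15_000.0, 5.0, 0.80),
]

security_investment = 60_000.0   # cost of deploying and operating the mechanism
lifetime_years = 3               # operational lifetime of the mechanism

avoided_losses = 0.0
for name, damage, frequency, p_prevented in attacks:
    ale = damage * frequency                      # annual loss expectancy
    avoided_losses += p_prevented * ale * lifetime_years

net_benefit = avoided_losses - security_investment
print(f"avoided losses: {avoided_losses:,.0f}   net benefit: {net_benefit:,.0f}")
```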

Another reason for the difficulty of risk assessment is that the “likelihood” of a particular attack is a reactive quantity. For example, imagine that the historical record shows that a certain type of attack (Attack A) has accounted for 50 percent of the attacks against a particular operating system recorded in the past year, while another type of attack (Attack B) accounted for only 10 percent of the attacks. Now, imagine that resources have been made available to develop a defense against Attack A and that now such a defense is available and is being deployed. This deployment will have two results—incidents of Attack A will almost certainly be reduced (because adversaries will not waste their time conducting ineffective attacks), and incidents of Attack B will increase, perhaps absolutely or perhaps only relative to the frequency of Attack A. (It is also likely that attacks of still another type, Attack C, will emerge, and attacks of this type will never before have been seen. Indeed, one might well argue that the ability to create attacks of a type never before seen is part of the definition of a skilled attacker.)

More generally, decision makers have few ways to understand and quantitatively characterize the space of possible attacks and the evolution of a threat. Since the space of possible attacks is so large, sampling that space is an essential element of tractability. But what are the rules that should govern such sampling? At what level of granularity should attacks be characterized? Thus, an important research area is to find an approach

47

Huaqiang Wei, Deb Frincke, Olivia Carter, and Chris Ritter, “Cost-Benefit Analysis for Network Intrusion Detection Systems,” CSI 28th Annual Computer Security Conference, October 29-31, 2001, Washington, D.C.; available at www.csds.uidaho.edu/deb/costbenefit.pdf.


to the calculus of decision making in cyberspace that does not depend as heavily on actuarial data as do current methods. In addition, this research area would seek to develop more usable characterizations of attacks.

Behavioral economics suggests research on topics in which human psychological limitations and complications are operative48 and, consequently, on how actual human behavior in economic matters deviates, often substantially, from that of the rational actor postulated in neoclassical economic theory. In particular, Tversky and Kahneman have described a mental process known as the availability heuristic, in which individuals assess the magnitude of the risk associated with some harmful event according to whether they can bring examples of harm readily to mind.49 If people can easily think of such examples, their assessment of risk increases (e.g., their judgments about the event’s likelihood go up).

In the non-cyber domain, Slovic found that people are much more likely to buy insurance for natural disasters if they can recall such disasters in their personal histories.50 Indeed, policy makers are not immune to the availability heuristic—a great deal of experience in national responses to catastrophic events suggests that such events do much more to force policy makers to pay attention to problems than all the reports in the world.

Applying the availability heuristic to cybersecurity would suggest that if users cannot see a direct and significant impact on themselves from a cybersecurity problem, their awareness and concern about cybersecurity will be relatively low. The converse would also be true: in the aftermath of a “digital Pearl Harbor,” public attention to cybersecurity would rise dramatically. Consider, for example, the security of air transport before and after September 11, 2001 (9/11). Prior to the 9/11 attacks, many reports had drawn attention to the weaknesses in flight security—but few changes had been made. After the attacks, airport security was dramatically increased, but in ways that many analysts argue provide only a few genuine enhancements in actual security. Similarly, in the cybersecurity domain, a very important and relevant research question is how research results and best practices in cybersecurity should be disseminated in an

48

Sendhil Mullainathan and Richard H. Thaler, “Behavioral Economics,” International Encyclopedia of the Social and Behavioral Sciences; available at www.iies.su.se/nobel/papers/Encyclopedia%202.0.pdf.

49

See, for example, A. Tversky and D. Kahneman, “Judgment Under Uncertainty: Heuristics and Biases,” pp. 3-22 in D. Kahneman, P. Slovic, and A. Tversky (eds.), Judgment Under Uncertainty: Heuristics and Biases, Cambridge University Press, Cambridge and New York, 2002.

50

P. Slovic, The Perception of Risk, Earthscan Publications Ltd., London and Sterling, Va., 2000.


atmosphere of sudden enthusiasm that would be inevitable after a digital Pearl Harbor.

Gordon et al. go even further, suggesting that a reactive approach toward the deployment of measures to strengthen cybersecurity beyond some basic minimum may be consistent with an entirely rational (non-behavioral) economic perspective.51 The essence of the argument is that, given a fixed amount to spend on cybersecurity measures, it may make sense to hold a portion of the budget in reserve and wait for a security breach to occur before spending the reserve. By deferring the decision on spending the reserve, managers may obtain a clearer picture of whether or not such spending is warranted. In a wait-and-see scenario, actual losses do occur if and when a breach occurs, but the magnitude of those losses may be lower than the expected benefits of waiting, and so on balance it may well pay to wait.

For any given company, the implications of this model depend on the specifics regarding the costs of security breaches, the costs of various cybersecurity measures to be put into place, the likelihood that specific security breaches will occur, and the magnitude of the budget available. Thus, one research theme associated with this perspective would be the development of tools and analytical techniques that would enable reasonable and defensible estimates of all of these various parameters in any given instance.
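As a purely illustrative sketch of the wait-and-see argument (the probabilities and dollar amounts are invented, and the decision model is deliberately simplified), consider the following comparison of expected costs:

```python
# Hypothetical illustration of the wait-and-see argument: with a fixed budget,
# deferring discretionary security spending until a breach reveals whether it
# is needed can have a lower expected cost than spending everything up front.
# The probabilities and dollar figures are invented, and the model is crude.

p_breach = 0.3                    # chance the relevant class of breach occurs
breach_loss = 200_000.0           # loss if the breach occurs and is unmitigated
mitigation_cost = 80_000.0        # cost of the discretionary security measure
residual_loss_if_mitigated = 20_000.0

# Spend now: pay for the measure whether or not the breach materializes.
expected_cost_spend_now = mitigation_cost + p_breach * residual_loss_if_mitigated

# Wait and see: absorb the first loss if it occurs, then buy the measure only
# once the breach has shown that it is warranted.
expected_cost_wait = p_breach * (breach_loss + mitigation_cost)

print(f"expected cost, spend now:    {expected_cost_spend_now:,.0f}")
print(f"expected cost, wait-and-see: {expected_cost_wait:,.0f}")
```

With these particular (invented) numbers the deferred strategy has the lower expected cost, but small changes to the breach probability or loss would reverse the comparison, which is exactly why the parameter-estimation tools described above matter.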

6.4.3
The Nature and Extent of Market Failure (If Any) in Cybersecurity

As noted in Section 6.4.1, the various actors in the cybersecurity domain may well be acting just as a rational-actor economic model might predict. In this view, users, vendors, customers, and so on are concerned with security at a level commensurate with the risk they perceive: although cybersecurity problems occur, users of information technology learn to adjust their behavior, expectations, and economic models to take into account these problems, and business decisions are being made appropriately for the level of threat that currently exists. In this view, there is no market failure, and allowing the free market in cybersecurity to work its will is the preferred course of action.

To the extent that decision makers do take cybersecurity into account, the natural inclination—indeed, fiscal responsibility—of organizational decision makers is to take only those measures that mitigate the

51

L.A. Gordon, M.P. Loeb, and W. Lucyshyn, “Information Security Expenditures and Real Options: A Wait-and-See Approach,” Computer Security Journal, 19(2): 1-7, 2003.


security problem for their own organizations rather than for society as a whole. (For example, businesses are not required to consider the downside impact of compromising customer privacy—such impact results in costs to the customers rather than to the business.) That is, they must make a business case for their security investments, and any investment in security beyond that—by definition—cannot be justified on business grounds. Thus, beyond a certain level, society rather than the company will benefit, and so security at or beyond that level is a public good in which individual organizations have little incentive to invest.

In short, incentives for deploying a level of security higher than what today’s business cases will bear are nearly nonexistent. Accordingly, if the nation’s cybersecurity posture is to be improved beyond the level to which today’s market will drive it, the market calculus that motivates organizations to pay attention to cybersecurity must be altered in some way, and these organizations’ business cases for security must change.

It is a different—and researchable—question whether the national cybersecurity posture resulting from the investment decisions of many individual firms acting in their own self-interest is adequate from a societal perspective. This question becomes especially interesting if data and information become available to support business cases for greater cybersecurity investments by individual firms.

6.4.4
Changing Business Cases and Altering the Market Calculus

The business case for undertaking any action is based on a comparison of incremental costs and benefits. Thus, the likelihood of undertaking an action increases if the costs of undertaking it are lower and/or if the benefits of taking it are higher. In the cybersecurity domain, for example, efforts to develop and promote usable security (Section 6.1) can fairly be regarded both as efforts to lower costs (with security measures, many of the benefits come in the form of cost avoidance) and as efforts to reduce disincentives to deploying security functionality. In general, the central element of the economic research agenda for cybersecurity is to identify actions that lower barriers and eliminate disincentives; that create incentives to boost the economic benefits that flow from attention to cybersecurity; and that penalize a lack of attention to cybersecurity or actions that cause harm in cyberspace.

The discussions below focus on two complementary approaches to changing business cases—approaches for increasing the flow of relevant information to cybersecurity decision makers and approaches for incentivizing actual change in the behavior of those decision makers.

6.4.4.1
Letting Current Threat Trends Take Their Course

One approach to increasing the flow of information to decision makers is to wait for the threat environment to change. In this approach, individual organizations monitor their cybersecurity environment and alter their approaches to cybersecurity as changes in their environment occur (e.g., as certain kinds of threats manifest themselves in the future). That is, as the threat changes, so too will customer behavior and vendor business cases. Indeed, recent announcements and activities of a number of software vendors indicate that markets have been changing in directions that call for more robust cybersecurity functionality.

Nevertheless, from a public policy perspective, this approach leaves open the possibility of cyberattacks with consequences that ripple and reverberate far beyond individual organizations and affect important societal functions. The reason is that current cybersecurity efforts respond to the current perception of risk, which is driven by the most visible threats of today. History and intelligence information suggest that vastly more sophisticated threats against a wider variety of targets are likely to be in the offing, but that these threats will present little overt evidence to motivate further defensive action on the part of most private organizations and individuals.

Moreover, this approach presumes that organizations can respond to changes in the threat on the necessary timescale. Because new causes of death emerge relatively infrequently, life insurance companies can adjust their actuarial models and develop new rate structures when new threats emerge. But it is not at all clear that the cyberthreat environment will change slowly; indeed, considerable evidence exists that it can change quickly.

6.4.4.2
Use of Existing Market Mechanisms to Improve the Flow of Information

Rational investment in security depends on the availability of accurate information about vulnerabilities, and a number of market mechanisms have been developed (though not all have been deployed) to increase the availability of such information.52 That availability depends on two factors: the identification of vulnerabilities, and the sharing of information about vulnerabilities once they are identified.

52

The discussion of this section is based largely on Rainer Böhme, “Vulnerability Markets: What Is the Economic Value of a Zero-Day Exploit?,” Proceedings of 22C3, Berlin, Germany, December 27-30, 2005.


One market mechanism that has been used to identify vulnerabilities is the bug challenge or bug bounty.53 Bug challenges and bounties are offered by producers who pay a monetary reward to anyone who finds and reports a security problem. To be effective, the reward must be greater than the amount that the identifier might realize by exploiting the vulnerability or selling it elsewhere. However, the underlying market mechanism suffers from a number of imperfections, particularly in the ability of pricing signals to work efficiently, that make it impractical on a large scale.54 (See Box 6.2.)
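The participation condition just described can be stated as a simple inequality; the following sketch uses invented numbers to illustrate it:

```python
# Invented numbers illustrating the participation condition for a bug bounty:
# the reward must exceed what the finder expects to realize (net of the risks
# of dealing in an underground market) by selling or exploiting the flaw.

black_market_price = 20_000.0
p_actually_getting_paid = 0.6      # discount for counterparty and legal risk
expected_underground_value = p_actually_getting_paid * black_market_price

bounty_offered = 15_000.0
print(bounty_offered > expected_underground_value)   # True -> reporting is rational
```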

Bug auctions based on vendor participation have also been considered.55 They are similar in concept to bug challenges, although they rest on a different theoretical framework. Bug auctions could, of course, be held independently of vendors, but such auctions essentially act as blackmail of vendors and honest users while providing no useful information about security when no vulnerability is for sale.56

Market mechanisms for sharing vulnerability information have also been developed. For example, vulnerability information brokers act as intermediaries among benign identifiers of vulnerabilities, users, and vendors.57 Because it provides a mechanism for reporting vulnerability information, the U.S. Computer Emergency Response Team (CERT) acts as a vulnerability broker, although it does not profit from reporting vulnerabilities. Some firms have monetized this process by buying information about vulnerabilities and creating business models that offer their customers the advantage of advance knowledge about vulnerabilities.58 However, these market-based mechanisms for vulnerability disclosure carry incentives for manipulation (by leaking information) and have been shown to underperform CERT-like mechanisms.59

53

See for instance, the Mozilla Security Bug Bounty Program, at http://www.mozilla.org/security/bug-bounty.html.

54

Rainer Böhme, “Vulnerability Markets: What Is the Economic Value of a Zero-Day Exploit?,” Proceedings of 22C3, Berlin, Germany, December 27-30, 2005, p. 2.

55

Andy Ozment, “Bug Auctions: Vulnerability Markets Reconsidered,” Workshop of Economics and Information Security, Minneapolis, Minn., 2004.

56

Rainer Böhme, “Vulnerability Markets: What Is the Economic Value of a Zero-Day Exploit?,” Proceedings of 22C3, Berlin, Germany, December 27-30, 2005, p. 2. See footnote 2 therein.

57

Karthik Kannan and Rahul Telang, “Market for Software Vulnerabilities? Think Again,” Management Science, 51(5, May): 726-740, 2005.

58

See for instance, iDefense Quarterly Challenge, at http://labs.idefense.com/vcp/challenge.php#more_q4+2006%3A+%2410%2C000+vulnerability+challenge.

59

Karthik Kannan and Rahul Telang, “Market for Software Vulnerabilities? Think Again,” Management Science, 51(5, May): 726-740, 2005; Ross Anderson and Tyler Moore, “The Economics of Information Security,” Science 314(5799): 610-613, 2006.


BOX 6.2

Bug Bounties and Whistle-Blowers

The bug bounty—paying for information about systems problems—stands in marked contrast to the more common practice of discouraging or dissuading whistle-blowers (defined in this context as one who launches an attack without malicious intent), especially those from outside the organization that would be responsible for fixing those problems. Yet the putative intent of the whistle-blower and the bug bounty hunter is the same—to bring information about system vulnerabilities to the attention of responsible management. (This presumes that the whistle-blower’s actions have not resulted in the public release of an attack’s actual methodology or other information that would allow someone else with genuine malicious intent to launch such an attack.) Whether prosecution or reward is the correct response to such an individual has long been the subject of debate in the information technology community.

Consider, for example, the story of Robert Morris, Jr., the creator of the first Internet worm in 1988. Morris released a self-replicating, self-propagating program onto the Internet. This program—a worm—replicated itself much faster than Morris had expected, with the result that computers at many sites, including universities, military sites, and medical research facilities, were affected. He was subsequently convicted of violating Section 2(d) of the Computer Fraud and Abuse Act of 1986, 18 U.S.C. §1030(a)(5)(A) (1988), which punishes anyone who intentionally accesses without authorization a category of computers known as “[f]ederal interest computers” and damages or prevents authorized use of information in such computers, causing the loss of $1,000 or more. However, at the time, a number of commentators argued for leniency in Morris’s sentencing on grounds that he had not anticipated the results of his experiment, and further that his actions had brought an important vulnerability into wide public view and thus he had provided a valuable public service. It is not known if these arguments swayed the sentencing court, but Morris’s sentence did not reflect the maximum penalty that he could have received.

Those who put on public demonstrations of system vulnerabilities have often said that they did so only after they informed responsible management of their findings and management failed to take remedial action on a sufficiently rapid timescale. Thus, they argue, public pressure informed and generated by such demonstrations is the only way to force management to address the problems identified. However, these individuals are usually (though not always) outsiders to the responsible organization, and in particular they do not have responsibility for overall management.

Inside the organization, management may well have evaluated the information provided by the demonstration and judged its operational significance to be less important than is alleged by the demonstrators. That is, responsible management is likely to have (at least in principle) more information about the relevant operational context, and to have decided that the vulnerability is not worth fixing (especially because all attempts at fixing vulnerabilities run the risk of introducing additional problems).

A further concern is the fear of setting bad precedents. Imagine that an individual launches a cyberattack against some organization and causes damage. When caught, the person asserts that his or her intent was to test the defenses of the organization and so he or she deserves a reward for revealing vulnerabilities rather than prosecution. If the individual could cite precedents for such an argument, his or her own defense case would be much stronger.

One of the most significant differences between the bug bounty and the unauthorized public demonstration of system vulnerability is that in the case of the former, the party paying the bounty—usually the vendor—has demonstrated a receptiveness to receiving the information. But whether other, more controversial mechanisms have value in conveying such information is an open and researchable question.

Another as-yet untried mechanism for sharing information is based on derivative contracts, by which an underwriter issues a pair of contracts: Contract A pays its owner $100 if on a specific date there exists a certain well-specified vulnerability X for a certain system. The complementary Contract B pays $100 if on that date X does not exist. These contracts are then sold on the open market. The issuer of these contracts breaks even, by assumption. If the system in question is regarded as highly secure by market participants, then the trading price for Contract A will drop—it is unlikely that X will be found on that date, and so only speculators betting against the odds will buy Contract A (and will likely lose their [small] investment). By contrast, the trading price for Contract B will remain near $100, so investors playing the odds will profit only minimally but with high probability. The trading prices of Contracts A and B thus reflect the probability of occurrence of the underlying event at any time.60 The derivatives approach requires a trusted third party. This approach shares with insurance underwriters the need to pay upon the occurrence of a breach in order to hedge the risk to which they are exposed.

60

Rainer Böhme, “Vulnerability Markets: What Is the Economic Value of a Zero-Day Exploit?,” Proceedings of 22C3, Berlin, Germany, December 27-30, 2005, p. 3, available at http://events.ccc.de/congress/2005/fahrplan/attachments/542-Boehme2005_22C3_VulnerabilityMarkets.pdf. The concept of the value of derivative contracts reflecting the market’s judgment about the security of a system is taken from Lawrence A. Gordon, Martin P. Loeb, and Tashfeen Sohail, “A Framework for Using Insurance for Cyber-Risk Management,” Communications of the ACM, 46(3): 81-85, 2003.
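The way the paired contracts encode a market estimate of the vulnerability’s existence can be illustrated with a brief sketch; the traded prices below are hypothetical, and the reading of price as probability is a deliberate simplification:

```python
# Hypothetical prices illustrating how the paired contracts encode the market's
# estimate that vulnerability X will exist on the settlement date. With a $100
# payout, Contract A trades near 100 * p and Contract B near 100 * (1 - p),
# where p is the (risk-neutral) probability that X exists on that date.

PAYOUT = 100.0

def implied_probability(price_of_contract_a: float) -> float:
    """Probability of X implied by the traded price of Contract A."""
    return price_of_contract_a / PAYOUT

# Suppose Contract A trades at $12 and Contract B at $88 for a system the
# market regards as fairly secure.
price_a, price_b = 12.0, 88.0
assert abs((price_a + price_b) - PAYOUT) < 1e-9   # the pair sums to the payout
print(f"market-implied probability that X exists: {implied_probability(price_a):.2f}")
```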

6.4.4.3
Private-Sector Mechanisms to Incentivize Behavioral Change

Private-sector mechanisms to incentivize organizations and individuals to improve their cybersecurity postures do not entail the difficulties of promulgating government regulation, and a number of such attempts have been made in the private sector. Research is needed to understand how these attempts have fared, how they could be improved if they have not worked well, and how they could be more widely promulgated and extended in scope if they have.

6.4.4.3.1
Insurance

Historically, the insurance industry has played a key role in many markets as an agent for creating incentives for good practices (e.g., in health care and in fire and auto safety). Thus, the possibility arises that it might be able to play a similar role in incentivizing better cybersecurity.

Consumers (individuals and organizations) buy insurance so as to protect themselves against loss. Strictly speaking, insurance does not itself protect against loss—it provides compensation to the holder of an insurance policy in the event that the consumer suffers a loss. Insurance companies sell those policies to consumers and profit to the extent that policyholders do not file claims. Thus, it is in the insurance company’s interest to reduce the likelihood that the policyholder suffers a loss. Moreover, the insurance company will charge a higher premium if it judges that the policyholder is likely to suffer a loss.

Particularizing this reasoning to the cybersecurity context, consumers will buy a policy to insure themselves against the possibility of a successful exploitation by some adversary. The insurance company will charge a higher premium if it judges that the policyholder’s cybersecurity posture is weak and a lower premium if the posture is strong. This gives the user a financial incentive to strengthen his or her posture. Users would pay for poor cybersecurity practices and insecure IT products with higher premiums, and so the differential pricing of business disaster-recovery insurance based in part on quality/assurance/security would bring market pressure to bear in this area. Indeed, cyber-insurance has frequently been proposed as a market-based mechanism for overcoming security market failure,61 and the importance of an insurance industry role in promoting

61

See, for instance, Lawrence A. Gordon, Martin P. Loeb, and Tashfeen Sohail, “A Framework for Using Insurance for Cyber-Risk Management,” Communications of the ACM, 46(3): 81-85, 2003; Jay P. Kesan, Ruperto P. Majuca, and William J. Yurcik, “The Economic Case for Cyberinsurance,” Workshop on the Economics of Information Security, Cambridge, Mass., 2005; William Yurcik and David Doss, “Cyberinsurance: A Market Solution to the Internet Security Market Failure,” Workshop on Economics and Information Security, Berkeley, Calif., 2002.


cybersecurity was recently noted at the 2005 Rueschlikon Conference on Information Policy.62

Of course, how such a market actually works depends on the specifics of how premiums are set and how a policyholder’s cybersecurity posture can be assessed. (For example, one possible method for setting premiums for the cybersecurity insurance of a large firm might be based in part on the results of an independently conducted red team attack.) Furthermore, a number of other factors stand in the way of establishing a viable cyber-insurance market: the highly correlated nature of losses from outbreaks (e.g., from viruses) in a largely homogeneous monoculture environment, the difficulty of substantiating claims, the intangible nature of losses and assets, and unclear legal grounds.63
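One way to see how differential premiums could reward a stronger posture is the following toy calculation; the loss figure, loading factor, and the mapping from a hypothetical red-team-derived posture score to a breach probability are all assumptions made for illustration:

```python
# Toy premium calculation showing how differential pricing could reward a
# stronger security posture. The loss figure, loading factor, and the mapping
# from a hypothetical red-team-derived posture score to a breach probability
# are all assumptions made for illustration, not an actual underwriting model.

expected_loss_if_breached = 1_000_000.0
loading_factor = 1.25    # insurer's margin over expected claims

def breach_probability(posture_score: float) -> float:
    """Map a posture score in [0, 1] to an assumed annual breach probability."""
    return 0.10 * (1.0 - 0.8 * posture_score)

def annual_premium(posture_score: float) -> float:
    return loading_factor * breach_probability(posture_score) * expected_loss_if_breached

for score in (0.2, 0.5, 0.9):
    print(f"posture score {score:.1f} -> annual premium {annual_premium(score):,.0f}")
```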

6.4.4.3.2
The Credit Card Industry

A prime target of cybercriminals is personal information such as credit card numbers, Social Security numbers, and other consumer information. Because many participants in the credit card industry (e.g., banks and merchants) obtain such information in the course of their routine business activities, these participants are likely to be targeted by cybercriminals seeking such information. To reduce the likelihood of success of such criminal activities, the credit card industry has established the Payment Card Industry (PCI) Data Security Standard, which establishes a set of requirements for enhancing payment account data security.64 These requirements include the following:

  1. Install and maintain a firewall configuration to protect cardholder data.

  2. Do not use vendor-supplied defaults for system passwords and other security parameters.

  3. Protect stored cardholder data.

  4. Encrypt transmission of cardholder data across open, public networks.


62

Kenneth Cukier, “Ensuring (and Insuring?) Critical Information Infrastructure Protection,” A Report of the 2005 Rueschlikon Conference on Information Policy, Switzerland, June 16-18, 2005.

63

Rainer Böhme, “Vulnerability Markets: What Is the Economic Value of a Zero-Day Exploit?,” Proceedings of 22C3, Berlin, Germany, December 27-30, 2005, p. 4.

64

An extended description of these requirements can be found at http://usa.visa.com/download/business/accepting_visa/ops_risk_management/cisp_PCI_Data_Security_Standard.pdf.

  5. Use and regularly update antivirus software.

  6. Develop and maintain secure systems and applications.

  7. Restrict access to cardholder data by business need to know.

  8. Assign a unique identifier to each person with computer access.

  9. Restrict physical access to cardholder data.

  10. Track and monitor all access to network resources and cardholder data.

  11. Regularly test security systems and processes.

  12. Maintain a policy that addresses information security.

Organizations (e.g., merchants) that handle credit cards must conform to this standard and follow certain leveled requirements for testing and reporting. Compliance is enforced by the banks, which have the authority to penalize organizations both for noncompliance itself and for data disclosures caused by noncompliance.

6.4.4.3.3
Standards-Setting Processes

For certain specialized applications, compliance with appropriate security standards is almost a sine qua non for success. For example, for electronic voting applications, security standards are clearly necessary, and indeed the National Institute of Standards and Technology has developed security standards—or more precisely, voluntary guidelines—for electronic voting systems. (These guidelines are voluntary in the sense that federal law does not require that electronic voting systems conform to them—but many states do have such requirements.)

In a broader context, the International Organization for Standardization (ISO) standards process is intended to develop standards that specify requirements for various products, services, processes, materials, and systems and for good managerial and organizational practice. Many firms find value in compliance with an ISO standard and seek a public acknowledgment of such compliance (that is, seek certification) in order to improve their competitive position in the marketplace.

In the cybersecurity domain, the ISO (and its partner organization, the International Electrotechnical Commission [IEC]) has developed ISO/IEC 17799:2005, which is a code of practice for information security management that establishes guidelines and general principles for initiating, implementing, maintaining, and improving information security management in an organization. ISO/IEC 17799:2005 contains best practices of control objectives and controls in certain areas of information security management, including security policy; organization of information security; information systems acquisition, development, and maintenance; and information security incident management. Although ISO/IEC


17799:2005 is not a certification standard, the complementary specification standard ISO/IEC 27001 addresses information security management system requirements and can be used for certification.65

As for the putative value of ISO/IEC 17799:2005, the convener of the working group that developed ISO/IEC 17799:2005 argued that “users of this standard can also demonstrate to business partners, customers and suppliers that they are fit enough and secure enough to do business with, providing the chance for them to turn their investment in information security into business-enabling opportunities.”66

6.4.4.4
Nonregulatory Public-Sector Mechanisms

A variety of nonregulatory public-sector mechanisms are available to promote greater attention to and action on cybersecurity, including the following:

  • Government procurement. The federal government is a large consumer of information technology goods and services, a fact that provides some leverage in its interactions with technology vendors. Such leverage could be used to encourage vendors to provide the government with IT systems that are more secure (e.g., with security defaults turned on rather than off). With such systems thus available, vendors might be able to offer them to other customers as well.

  • Government cybersecurity practices. The government is an important player in information technology. Thus, the federal government itself might seek to improve its own cybersecurity practices and offer itself as an example for the rest of the nation.

  • Tax policy. A variety of tax incentives might be offered to stimulate greater investment in cybersecurity.

  • Public recognition. Public recognition often provides “bragging rights” for a firm that translate into competitive advantages; cybersecurity could be a candidate area for such recognition. One possible model for such recognition is the Malcolm Baldrige National Quality Award, given to firms judged to be outstanding in a number of important business quality areas. The award was established to mark a standard of excellence that would help U.S. organizations achieve world-class quality.

The desirability and feasibility of these mechanisms and others are topics warranting investigation and research.

6.4.4.5
Direct Regulation (and Penalties)

Still another approach to changing business cases is the direct regulation of technology and users—legally enforceable mandates requiring that certain technologies contain certain functionality or that certain users behave in certain ways. This is an extreme form of changing the business cases—that is, comply or face a penalty. The regulatory approach has been taken in certain sectors of the economy: financial services, health care, utilities such as electricity and gas, and transportation are among the obvious examples of sectors or industries that are subject to ongoing regulation.

For many products in common use today, vendors are required by law to comply with various safety standards—seat belts in cars are an obvious example. But there are few mandatory standards relating to cybersecurity for IT products. Indeed, in many cases the contracts and terms of service that bind users to IT vendors oblige the users to waive any rights with respect to the provision of security; this is especially true when the user is an individual retail consumer. In such situations, the buyer in essence assumes all security risks inherent in the use of the IT product or service in question. (Note here the contrast to the guarantees made by many credit card companies—the Truth in Lending Act sets a ceiling of $50 on the financial liability of a credit card holder for an unauthorized transaction, provided that proper notification has been given, and many credit card issuers have contractually waived such liability entirely if the loss results from an online transaction. These assurances have had an important impact on consumer willingness to engage in electronic commerce.)

Such contracts notwithstanding, direct regulation might call for all regulated institutions to adopt certain kinds of standards relating to cybersecurity “best practices” regarding the services they provide to consumers or their own internal practices. For example, in an attempt to increase security for customers, the Federal Financial Institutions Examination Council (FFIEC) has directed covered financial institutions to implement two-factor authentication for customers using online banking.67

67. Two-factor authentication refers to the use of two independent factors to authenticate one’s identity. An authentication factor could be something that one knows (e.g., a password), something that one has (e.g., a hardware token), or something that one is (e.g., a fingerprint). So, one two-factor authentication scheme calls for a user to insert a smart card into a reader and then to enter a password; neither one alone provides sufficient authentication, but the combination is supposed to do so.
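
As a concrete illustration of the scheme described in the footnote, the following minimal Python sketch requires both a possession factor and a knowledge factor before granting access; either factor alone is insufficient. The names and stand-in checks are hypothetical and do not represent any FFIEC-mandated design.

```python
import hashlib
import hmac

# Stand-in records; a real deployment would verify a hardware token or smart card
# cryptographically and use a salted, slow password hash rather than plain SHA-256.
REGISTERED_TOKEN_IDS = {"alice": "smartcard-0042"}
PASSWORD_HASHES = {"alice": hashlib.sha256(b"correct horse").hexdigest()}

def has_valid_token(user, presented_token_id):
    """Possession factor: something the user has (e.g., a smart card)."""
    expected = REGISTERED_TOKEN_IDS.get(user, "")
    return hmac.compare_digest(expected, presented_token_id)

def knows_password(user, password):
    """Knowledge factor: something the user knows."""
    expected = PASSWORD_HASHES.get(user, "")
    return hmac.compare_digest(expected, hashlib.sha256(password.encode()).hexdigest())

def authenticate(user, presented_token_id, password):
    # Both independent factors must succeed; neither alone is sufficient.
    return has_valid_token(user, presented_token_id) and knows_password(user, password)

if __name__ == "__main__":
    print(authenticate("alice", "smartcard-0042", "correct horse"))  # True
    print(authenticate("alice", "smartcard-0042", "wrong password"))  # False: one factor fails
```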

Another “best practice” might be the use of tiger teams (red teams) to test an organization’s security on an ongoing basis. (The committee is not endorsing either of these items as a best practice—they are provided as illustrations only of possible best practices.)

However, regulation is difficult to get right under the best of circumstances, as a good balance of flexibility and inflexibility must be found. Regulation so flexible that organizations need not change their practices at all is not particularly effective in driving change, and regulation so inflexible that compliance would require organizations to change in ways that materially harm their core capabilities will meet with enormous resistance and will likely be ignored in practice or not adopted at all.

Several factors would make it especially difficult to determine satisfactory regulations for cybersecurity.68 Attack vectors are numerous and new ones continue to emerge, meaning that regulations based on addressing specific ills would necessarily provide only partial solutions. Costs of implementation would be highly variable and dependent on a number of factors beyond the control of the regulated party. Risks vary greatly from system to system. There is wide variation in the technical and financial ability of firms to support security measures.

In addition, certain regulatory mechanisms have been used for publicly traded companies to ensure that important information is flowing to investors and that these companies follow certain accounting practices in their finances. For example, publicly traded companies must issue annual reports on a U.S. Securities and Exchange Commission (SEC) Form 10-K; these documents provide a comprehensive overview of the company’s business and financial condition and include audited financial statements. In addition, publicly traded companies must issue annual reports to shareholders, providing financial data, results of continuing operations, market segment information, new product plans, subsidiary activities, and research and development activities on future programs. Audits of company finances must be undertaken by independent accounting firms and must follow generally accepted accounting practices. Intrusive auditing and reporting practices have some precedent in certain sectors that are already heavily regulated by federal and state authorities—these sectors include finance, energy, telecommunications, and transportation.

Research is needed to investigate the feasibility of using these mechanisms, possibly in a modified form, for collecting information on security breaches and developing a picture of a company’s cybersecurity posture.

68. Alfredo Garcia and Barry Horowitz, “The Potential for Underinvestment in Internet Security: Implications for Regulatory Policy,” Journal of Regulatory Economics, Vol. 31, No. 1, February 2007; available at http://ssrn.com/abstract=889071.

As an illustration of the value of regulation, consider that in 2002, California passed the first state law to require public disclosure of any breach in the security of certain personal information. A number of states followed suit, and the California law is widely credited with drawing public attention to the problem of identity theft and its relationship to breaches in the security of personal information. An empirical study by Gordon et al. found that the Sarbanes-Oxley Act of 2002 (P.L. No. 107-204, 116 Stat. 745) had a positive impact on the voluntary disclosure of information security activities by corporations, a finding providing strong indirect evidence that the passage of this act has led to an increase in the focus of corporations on information security activities.69 But such regulatory-driven focus is not without cost and may have unintended consequences, including decreased competition, distortions in cybersecurity investments and internal controls, and lost productivity from increased risk aversion.70 Thus, research is needed to better understand the trade-offs involved in implementing information-disclosure regulations.

What might be included under such a rubric? One possibility is that a publicly traded company might be required to disclose all cybersecurity breaches in a year above a certain level of severity—a breach could be defined by recovery costs exceeding a certain dollar threshold. As part of its audit of the firm’s books, an accounting firm could be required to assess company records on such matters. A metric such as the number of such breaches divided by the company’s revenues would help to normalize the severity of the cybersecurity problem for the company’s size. Another possibility is that a publicly traded company might be required to test its cybersecurity posture against a red team, with a sanitized report of the test’s outcome or an independent assessment of the test’s results included in the firm’s SEC Form 10-K report. With more information about a firm’s track record and cybersecurity posture on the public record, consumers and investors would be able to take such information into account in making buying and investment decisions, and a firm would have incentives to improve in the ways reflected in such information. (These possibilities should not be construed as policy recommendations of the committee, but rather as some topics among others that are worth researching for feasibility and desirability.)
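
As a simple illustration of the normalization idea, the sketch below counts breaches whose recovery cost exceeds a severity threshold and divides by revenue, so that firms of different sizes can be compared on a roughly equal footing. All figures and thresholds are purely illustrative.

```python
# Hypothetical disclosure data: recovery cost (in dollars) of each breach in a year.
breach_recovery_costs = [12_000, 450_000, 3_200, 75_000]
annual_revenue = 2_500_000_000          # $2.5 billion (illustrative)
severity_threshold = 50_000             # only breaches above this count as "reportable"

reportable = [cost for cost in breach_recovery_costs if cost > severity_threshold]

# Breaches per billion dollars of revenue: a size-normalized severity metric.
breaches_per_billion_revenue = len(reportable) / (annual_revenue / 1_000_000_000)

print(f"Reportable breaches: {len(reportable)}")
print(f"Normalized metric: {breaches_per_billion_revenue:.2f} breaches per $1B of revenue")
```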

69. Lawrence A. Gordon, Martin P. Loeb, William Lucyshyn, and Tashfeen Sohail, “Impact of Sarbanes-Oxley Act on Information Security Activities,” Journal of Accounting and Public Policy, 25(5): 503-530, 2006.

70. Anindya Ghose and Uday Rajan, “The Economic Impact of Regulatory Information Disclosure on Information Security Investments, Competition, and Social Welfare,” 2006 Workshop on Economics of Information Security, Cambridge, England, March 2006.

6.4.4.6
Use of Liability

Liability is based on the notion of holding vendors and/or system operators financially responsible through the courts for harms that result from cybersecurity breaches. According to this theory, vendors and operators, knowing that they could be held liable for cybersecurity breaches that result from product design or system operation, would be forced to make greater efforts than they do today to reduce the likelihood of such breaches. Courts in the legal system would also define obligations that users have regarding security.

Some analysts (often from the academic sector or from industries that already experience considerable government regulation) argue that the nation’s cybersecurity posture will improve only if liability forces users and/or vendors to increase the attention they pay to security matters. Opponents argue that the threat of liability would stifle technological innovation, potentially compromise trade secrets, and reduce the competitiveness of products subject to such forces. Moreover, they argue that there are no reasonable objective metrics against which products or operations can be held accountable, especially in an environment in which cybersecurity breaches can result from factors that are not under the control of a vendor or an operator.

An intermediate position confines explicit liability to a limited domain. In this view, regulation or liability or some other extrinsic driver can help to bootstrap a more market-driven approach. Believers in this view assert that new metrics, lampposts, criteria, and so on can be integrated with established processes for engineering or acceptance evaluation. Updating the Common Criteria or the Federal Information Security Management Act (FISMA) to include these mandated elements would enable the injection of the new ideas into the marketplace, and their demonstrated value and utility may persuade others not subject to regulation or liability to adopt them anyway.

All of these views on liability were present within the committee, and the committee did not attempt to reconcile them. But it found value in separating the issue into three components. The first is the putative effectiveness of an approach based on liability or direct regulation in strengthening the nation’s cybersecurity posture. The second is the character of the actual link between regulation or liability and technological innovation and trade secret protection. The third is the public policy choice about any trade-offs that such a link might imply.

Regarding the first and the second, the committee found mostly a set of assertions but exceedingly little analytical work. Advocates of regulation or liability to strengthen cybersecurity have not made the case that any regulatory apparatus or case law on liability can move quickly enough as new threats and vulnerabilities emerge, while critics of regulation or liability have not addressed the claim that regulation and liability have a proven record of improving security in other fields, nor have they yet convincingly shown why the information technology field is different. Nor is there a body of research that either proves or disproves an inverse link between regulation or liability and innovation or trade secret protection. Substantial research on this point would help to inform the public debate over regulation by identifying the strengths and weaknesses of regulation or liability for cybersecurity and the points (if any) at which a reconciliation of the tensions is in fact not possible. Regarding the third, and presuming the existence of irreconcilable tensions, it then becomes a public policy choice about how much and what kind of innovation must be traded off in order to obtain greater cybersecurity.

6.5
SECURITY POLICIES

With the increasing sophistication and wide reach of computer systems, many organizations are now approaching computer security using more proactive and methodical strategies than in the past. Central to many of these strategies are formal, high-level policies designed to address an organization’s overall effort for keeping its computers, systems, IT resources, and users secure. While access control is a large component of most security policies, the policies themselves go far beyond merely controlling who has access to what data. Indeed, as Guel points out, security policies communicate a consensus on what is appropriate behavior for a given system or organization.71

Basically, developing a security policy requires making many decisions about such things as which people and which resources to trust and how much and when to trust them. The policy development process comprises a number of distinct considerations:72

  • Developing requirements involves the often-difficult process of determining just how much security attention to pay to a given set of data, resources, or users. Human resources information, for example, or critical proprietary data about a company’s product, might require significantly stronger protections than, say, general information documents on an organization’s intranet. A biological research facility might wish to encrypt genomic databases that contain sequence information of pandemic viruses, allowing access only to vetted requestors.

71. Michele D. Guel, “A Short Primer for Developing Security Policies,” SANS Institute, 2002; available at http://www.sans.org/resources/policies/Policy_Primer.pdf.

72. More perspective on developing security policies can be found in Matt Bishop, “What Is Computer Security?” IEEE Security and Privacy, 1(1): 67-69, 2003.

  • Setting a policy entails translating security requirements into a formal document or statement setting the bounds of permissible and impermissible behavior and establishing clear lines of accountability.

  • Implementing a policy can be accomplished using any of a range of technical mechanisms (e.g., a firewall or setting a user’s system permissions) or procedural mechanisms (e.g., requiring users to change passwords on a monthly basis, reviewing access-control lists periodically); a minimal sketch of one such automated check appears after this list.

  • Assessing the effectiveness of mechanisms for implementing a policy and assessing the effectiveness of a policy in meeting the original set of requirements are ongoing activities.
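
As flagged in the implementation item above, the following minimal Python sketch shows how a single procedural clause ("users must change passwords at least every 30 days") could be both implemented (forcing a reset) and assessed (listing noncompliant accounts). The account records, threshold, and function names are hypothetical.

```python
from datetime import date, timedelta

MAX_PASSWORD_AGE = timedelta(days=30)   # the policy clause, expressed as data

# Hypothetical account records: last password-change date per user.
last_changed = {
    "alice": date(2007, 3, 1),
    "bob": date(2007, 1, 15),
}

def noncompliant_accounts(records, today):
    """Assessment step: which accounts violate the password-age clause?"""
    return [user for user, changed in records.items()
            if today - changed > MAX_PASSWORD_AGE]

def enforce(records, today):
    """Implementation step: for example, force a password reset for violators."""
    for user in noncompliant_accounts(records, today):
        print(f"Flagging {user} for mandatory password reset")

enforce(last_changed, today=date(2007, 3, 20))   # flags bob but not alice
```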

Organizations often choose to create a number of distinct policies (or subpolicies) to address specific contexts. For example, most organizations now provide employees with acceptable-use policies that specify what types of behavior are permissible with company computer equipment and network access. Other prevalent policies include wireless network, remote access, and data-backup policies. Having multiple security policies allows organizations to focus specific attention on important contexts (for example, consider the efficiency of having an organization-wide password policy), although harmonizing multiple policies across an organization can often be a challenge.

Determining just how to set one’s security policy is a critical and often difficult process for organizations. After all, long before any security policy is ever drafted, an organization must first get a good sense for its security landscape—for example, what things need what level of protection, which users require what level of access to what different resources, and so on. However, in the beginning of such a process, many organizations may not even know what questions need to be asked to begin developing a coherent policy or what options are open to them for addressing a given concern. One major open issue and area for research, therefore, is how to assist with this early, though all-important, stage of developing requirements and setting a security policy, as well as how to assist in evaluating existing policies.73

73. One interesting framework for developing and assessing security policies can be found in Jackie Rees, Subhajyoti Bandyopadhyay, and Eugene H. Spafford, “PFIRES: A Policy Framework for Information Security,” Communications of the ACM, 46(7): 101-106, 2003.

One approach to the problem of establishing appropriate policies in large organizations is the use of role-based access control, a practice that determines the security policy appropriate for the roles in an organization rather than the individuals (a role can be established for a class of individuals, such as doctors in a hospital, or for a class of devices, such as all wireless devices). However, since individuals may have multiple roles, reconciling conflicting privileges can be problematic.
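
A minimal sketch of role-based access control appears below; the roles, permissions, and conflict rule are all hypothetical. It illustrates both the basic idea (permissions attach to roles, and users acquire them by holding roles) and the reconciliation problem that arises when one person holds roles with conflicting privileges.

```python
# Permissions attach to roles, not to individuals.
ROLE_PERMISSIONS = {
    "doctor": {"read_chart", "write_prescription"},
    "billing_clerk": {"read_billing", "read_chart"},
    "auditor": {"read_billing"},
}

# Hypothetical separation-of-duties constraint: prescribing and billing access
# should not be combined in a single person.
CONFLICTING = {frozenset({"write_prescription", "read_billing"})}

def permissions_for(roles):
    """Union of the permissions granted by each role the user holds."""
    granted = set()
    for role in roles:
        granted |= ROLE_PERMISSIONS.get(role, set())
    return granted

def violates_separation_of_duties(roles):
    granted = permissions_for(roles)
    return any(pair <= granted for pair in CONFLICTING)

user_roles = {"doctor", "billing_clerk"}          # one person, multiple roles
print(permissions_for(user_roles))                # combined privileges
print(violates_separation_of_duties(user_roles))  # True: conflict must be reconciled
```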

Other major open issues and research areas include the enforcement of security policies (as discussed in Section 6.1) and the determination of how effective a given security policy is in regulating desirable and undesirable behavior. These two areas (that is, enforcement and auditability) have been made more significant in recent years by an evolving regulatory framework that has placed new compliance responsibilities on organizations (e.g., Sarbanes-Oxley Act of 2002 [P.L. No. 107-204, 116 Stat. 745]; Gramm-Leach-Bliley Act [15 U.S.C., Subchapter I, Sec. 6801-6809, Disclosure of Nonpublic Personal Information]; the Health Insurance Portability and Accountability Act (HIPAA) of 1996; and so on). Another open question in this space involves the effectiveness of using outsourced firms to audit security policies.

Additional areas for research include ways to simulate the effects and feasibility of security policies; how to keep policies aligned with organizational goals (especially in multipolicy environments); methods for automating security policies or making them usable by machines; how to apply and manage security policies with respect to evolving technology such as distributed systems, handheld devices, electronic services (or Web services), and so on; and ways to reconcile security policies of different organizations that might decide to communicate or share information or resources.
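
As one illustration of the last research area mentioned above (reconciling the policies of organizations that wish to share information or resources), the sketch below computes the most restrictive combination of two organizations' sharing policies, a common if simplistic starting point for reconciliation. The policy attributes and values are hypothetical.

```python
# Each organization's sharing policy, expressed as simple machine-readable attributes.
org_a_policy = {
    "allowed_protocols": {"https", "sftp"},
    "max_data_classification": 2,          # 0 = public ... 3 = restricted
    "requires_encryption_at_rest": True,
}
org_b_policy = {
    "allowed_protocols": {"https", "ftp"},
    "max_data_classification": 3,
    "requires_encryption_at_rest": False,
}

def reconcile(a, b):
    """Most-restrictive combination of two sharing policies."""
    return {
        "allowed_protocols": a["allowed_protocols"] & b["allowed_protocols"],
        "max_data_classification": min(a["max_data_classification"],
                                       b["max_data_classification"]),
        "requires_encryption_at_rest": (a["requires_encryption_at_rest"]
                                        or b["requires_encryption_at_rest"]),
    }

print(reconcile(org_a_policy, org_b_policy))
# {'allowed_protocols': {'https'}, 'max_data_classification': 2,
#  'requires_encryption_at_rest': True}
```

Real reconciliation would, of course, have to handle far richer policy languages and organizational constraints; the point of the sketch is only that machine-readable policies make such combination operations tractable to automate.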
