4
Reinventing Security

INTRODUCTION

Increasing the immunity of a networked information system (NIS) to hostile attacks is a broad concern, encompassing authentication, access control, integrity, confidentiality, and availability. Any solution will almost certainly be based on a combination of system mechanisms in addition to physical and personnel controls.1 The focus of this chapter is these system mechanisms in particular: what exists, what works, and what is needed. In addition, an examination of the largely disappointing results from more than two decades of work based on what might be called the "theory of security" invites a new approach to viewing security for NISs, one based on a "theory of insecurity"; that, too, is discussed.

1 Personnel security is intrinsic in any NIS, since some set of individuals must be trusted to some extent with regard to their authorized interactions with the system. For example, people manage system operation, configure external system interfaces, and ultimately initiate authentication of (other) users of a system. In a similar vein, some amount of physical security is required for all systems, to thwart theft or destruction of data or equipment. The physical and personnel security controls imposed on a system are usually a function of the environment in which the system operates. Individuals who have access to systems processing classified information typically undergo extensive background investigations and may even require a polygraph examination. In contrast, most employers perform much less stringent screening for their information technology staff. Similarly, the level of physical security afforded to the NISs that support stock markets like the NYSE and AMEX is greater than that of a typical commercial system. Although physical and personnel controls are essential elements of system security, they are largely outside the scope of this study.

The choice of system security mechanisms employed in building an NIS should, in theory, be a function of the environment, taking into account the security requirements and the perceived threat. In practice, NISs are constructed with commercial off-the-shelf (COTS) components. What security mechanisms are available is thus dictated by the builders of these COTS components. Moreover, because most COTS components are intended for constructing a range of systems, their security mechanisms usually are not tailored to specific needs. Instead, they reflect perceptions by a product-marketing organization about the requirements of a fairly broad market segment.2

2 Some COTS products do allow a system integrator or site administrator to select from among several options for security facilities, thereby providing some opportunity for customization. For example, one may be able to choose among the use of passwords, challenge-response technology, or Kerberos for authentication. But the fact remains that COTS components limit the mechanisms available to the security architect.

The task faced by the NIS security architect, then, is determining (1) how best to make use of the given generic security mechanisms and (2) how to augment those mechanisms to achieve an acceptable level of security. The NIS security architect's task is all the more difficult because COTS products embody vulnerabilities, but few of the products are subjected by their builders to forms of analysis that might reveal these vulnerabilities. Thus, the NIS security architect will generally be unaware of the residual vulnerabilities lurking in a system's components.

This chapter's focus on security technology should not be misconstrued: an overwhelming majority of security vulnerabilities are caused by "buggy" code. At least a third of the Computer Emergency Response Team (CERT) advisories since 1997, for example, concern inadequately checked input leading to character string overflows, a problem peculiar to the C programming language's handling of character strings. Moreover, fewer than 15 percent of all CERT advisories described problems that could have been fixed or avoided by proper use of cryptography. Avoiding design and implementation errors in software (the subject of Chapter 3) is an essential part of the security landscape.
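To make the character-string overflow problem concrete, here is a minimal sketch in C; the function and buffer names are invented for illustration and are not taken from any CERT advisory. The unchecked version copies input of arbitrary length into a fixed-size buffer, overwriting adjacent memory when the input is too long; the checked version bounds the copy and terminates the string explicitly.

    #include <stdio.h>
    #include <string.h>

    /* Unchecked: if 'input' holds more than 15 characters plus the
       terminating NUL, strcpy writes past the end of 'name' and
       corrupts whatever follows it in memory. */
    void greet_unchecked(const char *input) {
        char name[16];
        strcpy(name, input);                      /* no length check */
        printf("hello, %s\n", name);
    }

    /* Checked: the copy is bounded by the buffer size, and the result
       is explicitly NUL-terminated. */
    void greet_checked(const char *input) {
        char name[16];
        strncpy(name, input, sizeof(name) - 1);
        name[sizeof(name) - 1] = '\0';
        printf("hello, %s\n", name);
    }

    int main(void) {
        greet_checked("an input string much longer than sixteen characters");
        return 0;
    }

Inadequately checked input of this kind is what lets an attacker place chosen data, and sometimes chosen code, beyond the end of the buffer.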

Evolution of Security Needs and Mechanisms

In early computing systems, physical controls were an effective means of protecting data and software from unauthorized access, because these systems were physically isolated and single-user. The advent of multiprogramming and time-sharing invited sharing of programs and data among an often closed community of users. It also created a need for mechanisms to control this sharing and to prevent actions by one user from interfering with those of another or with the operating system itself. As computers were connected to networks, sharing became even more important and access control problems grew more complex. The move to distributed systems (e.g., client-server computing and the advent of widespread Internet connectivity) exacerbated these problems while providing ready, remote access not only for users but also for attackers from anywhere in the world. Closed user communities are still relevant in some instances, but more flexible sharing among members of very dynamic groups has become common. See Box 4.1 for a discussion of threats from within and outside user communities.

The evolution of computing and communication capabilities has been accompanied by an evolution in security requirements and increased demands on security mechanisms. Computing and communication features and applications have outpaced the ability to secure them. Requirements for confidentiality, authentication, integrity, and access control have become more nuanced; the ability to meet these requirements and enforce suitable security policies has not kept up. The result: successful attacks against NISs are common, and evidence suggests that many go undetected (U.S. GAO, 1996). The increasing use of extensible systems and foreign or mobile code (e.g., Java "applets" and ActiveX modules delivered via networks) further complicates the task of implementing NIS security.

Of growing concern with regard to controlling critical infrastructures are denial-of-service attacks, which compromise availability. Such an attack may target large numbers of users, preventing them from using a networked information system; may target individuals, destroying their ability to access data; or may target a computing system, preventing it from accomplishing an assigned job. Only recently have denial-of-service attacks become a focus of serious countermeasure development. Clearly, these attacks should be of great concern to NIS security architects.

ACCESS CONTROL POLICIES

It is common to describe access controls in terms of the policies that they support and to judge the effectiveness of access control mechanisms relative to their support for those policies. This might leave the impression that access control policies derive from first principles, but that would be only partly true. Access control policies merely model in cyberspace notions of authorization that exist in the physical world. In cyberspace, however, programs acting on behalf of users (or acting autonomously), and not the users themselves, are what interact with data and access other system objects. This can be a source of difficulty, since actions by users are the concern but actions by programs are what the policy governs.

The evolution of access control policies and access control mechanisms has attempted, first, to keep pace with the new modes of resource sharing supported in each subsequent generation of systems and, second, to repel a growing list of attacks to which the systems are subjected. The second driving force is easily overlooked, but it is crucial. Access controls can enforce the principle of least privilege,3 and in this fashion they prevent and contain attacks. Before suggesting directions for the future, it is instructive to examine the two basic types of access control policies that have dominated computer security work for more than two and a half decades: discretionary access control and mandatory access control.

3 The principle of least privilege holds that programs and users should operate using the least set of privileges necessary to complete the job.

Discretionary access control policies allow subjects, which model users or processes, to specify for objects what operations other subjects are permitted to perform. Most of the access control mechanisms implemented and deployed enforce discretionary access control policies. Individual users or groups of users (or computers) are identified with subjects; computers, networks, files, or processes are associated with objects. For example, read and write permissions might be associated with file system objects (i.e., files); some subjects (i.e., users) might have read access to a given file while other subjects do not. Discretionary access control would seem to mimic physical-world policies of authorization, but there are subtleties. For instance, transitive sharing of data involving intermediary users or processes can subvert the intent of discretionary access control policies by allowing a subject to learn the contents of an object (albeit indirectly) even though the policy forbids (direct) access to that object by the subject.

Mandatory access control policies also define permitted accesses to objects for subjects, but now only security administrators, rather than individual users, specify what accesses are permitted.4 Mandatory access control policies typically are formulated for objects that have been labeled, and the policies typically are intended to regulate information flow from one object to another. The best-known example of mandatory access controls arises in connection with controlling the flow of data according to military classifications. Here, data are assigned classification labels (e.g., "top secret" and "unclassified") and subjects are assigned clearances; simple rules dictate the clearance needed by a subject to access data that have been assigned a given label.

4 In fact, there exist mandatory access control policies under which user processes do have some control over permissions. One example is a policy in which a user process can irrevocably shed certain permissions.
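The "simple rules" mentioned above can be made concrete with a small sketch. The C fragment below, with invented names, encodes only the read-direction rule: a subject may read an object when the subject's clearance dominates (is at least as high as) the object's classification. Real mandatory policies add compartments and a corresponding write-direction rule, which this sketch omits.

    #include <stdio.h>

    /* Ordered classification labels; a larger value is more sensitive. */
    enum label { UNCLASSIFIED = 0, CONFIDENTIAL = 1, SECRET = 2, TOP_SECRET = 3 };

    /* "No read up": reading is permitted only when the subject's
       clearance dominates the object's classification. */
    int may_read(enum label clearance, enum label classification) {
        return clearance >= classification;
    }

    int main(void) {
        printf("%d\n", may_read(SECRET, CONFIDENTIAL));   /* 1: permitted */
        printf("%d\n", may_read(SECRET, TOP_SECRET));     /* 0: refused   */
        return 0;
    }

The point of the mandatory flavor is not the check itself, which is trivial, but that the labels and the rule are set by security administrators and cannot be changed by the users whose accesses they govern.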

Mandatory access controls can prevent Trojan horse attacks; discretionary access controls cannot. A Trojan horse is a program that exploits the authorization of the user executing it for another user's malicious purposes, such as copying information into an area accessible by a user not entitled to access that information. Mandatory controls block such attacks by limiting the access of all programs, including the Trojan horse, in a manner that cannot be circumvented by users. Discretionary access controls are inherently vulnerable to Trojan horse attacks because software executing on behalf of a user inherits that user's privileges without restriction (Boebert and Kain, 1996).

Shortcomings of Formal Policy Models

Despite receiving the lion's share of attention from researchers and actual support in deployed system security mechanisms, discretionary and mandatory access control cannot express many security policies of practical interest. Discretionary and mandatory access control focus on protecting information from unauthorized access. They cannot model the effects of certain malicious or erroneous software, nor do they completely address availability of system resources and services (i.e., protection against denial-of-service attacks). And they are defined in an access control model, defined by the Trusted Computer System Evaluation Criteria (U.S. DOD, 1985), that has only limited expressive power, rendering the model unsuitable for talking about certain application-dependent access controls.

The access control model defined by the Trusted Computer System Evaluation Criteria, henceforth called the DOD access control model, presupposes that an organization's policies are static and have precise and succinct characterizations. This supposition is questionable. Organizations' security policies usually change with perceived organizational needs and with perceived threat. Even the Department of Defense's policy, the inspiration for the best-known form of mandatory access control (Bell and La Padula, 1973), has numerous exceptions to handle special circumstances (Commission on Protecting and Reducing Government Secrecy, 1997). For example, senior political or military officials can downgrade classified information for diplomatic or operational reasons. But the common form of mandatory access control does not allow nonsensitive objects to be derived from sensitive sources, because the DOD access control model does not associate content with objects, nor does it (or can any model) formalize when declassifying information is safe.5

5 This also means that the underlying mathematical model is unable to capture the most basic operation of cryptography, in which sensitive data become nonsensitive when enciphered.

Policies involving application-specific information also cannot be handled, since such information is not part of the DOD access control model.6

6 It should be noted that a formal access control model of a complex application has been defined, and the corresponding implementation subjected to extensive assurance activity. The exercise explored many issues in the construction of such models and is worth study. See Landwehr et al. (1984) for details.

At least two policy models that have been proposed do take into account the application involved. The Clark/Wilson model (Clark and Wilson, 1987) sets forth rules for maintaining the integrity of data in a commercial environment. It is significant that this model contains elements of the outside world, such as a requirement to check internal data (e.g., inventories) against the physical objects being tabulated. The "Chinese Wall" model (Brewer and Nash, 1989) expresses rules for separating different organizational activities for conformance with legal and regulatory strictures in the financial world.

Still, from the outset, there has been a gap between organizational policy and the 1970s view of computing embodied by the DOD access control model: users remotely accessing a shared, central facility through low-functionality ("dumb") terminal equipment. And, as computing technology has advanced, the gap has widened. It is significant that, in a glossary of computer security, Brinkley and Schell (1995) use a passive database (a library) as the example and include the important passage:

    . . . the mapping between our two 'worlds':
    1. The world independent of computers, of people attempting to access information on paper.
    2. The world of computers, with objects that are repositories for information and subjects that act as surrogates for users in the attempt to access information in objects.

Processes, for example, are complex, ephemeral entities without clear boundaries, especially in the distributed and multithreaded systems of today. A modern computing network comprises independent computers that are loosely linked to each other and to complexes of servers. And modern programs are likely to have their own access controls, independent of what is provided by the underlying operating system and the DOD access control model. An access control model that does not capture this aspect of computing systems is fatally flawed.

Subsystems more and more resemble operating systems, and they should be treated as such. To be sure, a subsystem cannot exceed permissions granted to it by an underlying operating system.

And even though the resources that a subsystem protects are the user's own, that protection serves an important function. Moreover, even if the access control model did capture the policies of subsystems, there still remains the problem of composing those policies with all the other policies that are being enforced. Such composition is difficult, especially when policies are in conflict with one another, as all too often is the case.

The object abstraction in the DOD access control model also can be a source of difficulty. Real objects seldom have uniform security levels, despite what is implied by the DOD access control model. Consider a mailbox with multiple messages. Each message may have field-dependent security levels (a sensitive or nonsensitive message body, a sensitive or nonsensitive address list, and so on), and there may be multiple messages in the mailbox. What is the level of the mailbox as a whole? (One conventional labeling rule is sketched at the end of this subsection.) The alternative is to split messages so that individual fields are in individual objects, but that leads to a formulation that could be expensive to implement with fidelity.

The all-or-nothing nature of the DOD access control model also detracts from its utility. Designers who implement the model are forced either to err on the side of being restrictive, in which case the resulting system may be unusable, or to invent escapes, in which case knowing that a system adheres to the model has limited practical significance. In the battle between security and usability, usability loses. Moreover, since the DOD access control model does not account for contemporary defensive measures, such as virus scans, approaches to executable content control, or firewalls, the system architect who is bound by the model has no incentive to use these technologies. Deploying them makes no progress toward establishing that the system is consistent with the model and, in addition, transforms the model into an incomplete characterization of the system's defensive measures (thereby again limiting the model's practical utility).

Evidence that DOD has recognized some of the problems inherent in building systems that enforce the DOD access control model appears in the new DOD Goal Security Architecture (DGSA; see Box 4.2). DGSA does not legislate that only the DOD access control model be used; instead it supports a broad set of security policies that go far beyond the traditional information-flow policies. DGSA also does not discourage DOD end users from employing the latest in object-based, distributed systems, networks, and so on, while instituting rich access control, integrity, and availability policies. However, DGSA offers no insights about how to achieve an appropriate level of assurance that these policies are correctly implemented (despite upping the stakes significantly regarding what security functionality must be supported). Thus it remains to be seen whether the DGSA effort will spur significant progress in system security.
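One conventional answer to the mailbox question above, offered here as an illustration rather than as the model's own prescription, is to label a container with the least upper bound of the labels of everything it contains; for totally ordered labels that is simply the maximum. The C sketch below uses invented field names.

    #include <stdio.h>

    enum label { UNCLASSIFIED = 0, CONFIDENTIAL = 1, SECRET = 2, TOP_SECRET = 3 };

    struct message {
        enum label body_label;          /* sensitivity of the message body */
        enum label address_list_label;  /* sensitivity of the address list */
    };

    static enum label max_label(enum label a, enum label b) {
        return a > b ? a : b;
    }

    /* Label the mailbox with the least upper bound (here, the maximum)
       of the labels of every field of every message it holds. */
    enum label mailbox_label(const struct message *msgs, int n) {
        enum label level = UNCLASSIFIED;
        for (int i = 0; i < n; i++) {
            level = max_label(level, msgs[i].body_label);
            level = max_label(level, msgs[i].address_list_label);
        }
        return level;
    }

    int main(void) {
        struct message box[2] = {
            { UNCLASSIFIED, UNCLASSIFIED },
            { UNCLASSIFIED, SECRET }        /* one sensitive address list */
        };
        printf("mailbox level: %d\n", mailbox_label(box, 2));   /* 2, i.e., SECRET */
        return 0;
    }

A single sensitive field anywhere drives the label of the whole mailbox up, which is precisely the overclassification pressure that makes a uniform-level object abstraction awkward.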

A New Approach

One can view the ultimate goal as the building of systems that resist attack. Attackers exploit subtle flaws and side effects in security mechanisms and, more typically, exploit interactions between mechanisms.

Testing can expose such previously hidden aspects of system behavior, but no amount of testing can demonstrate the absence of all exploitable flaws or side effects.

An alternative to finding flaws in a system is to demonstrate directly that the system is secure by showing the correspondence between the system and some model that embodies the security properties of concern. One problem (system security) is thus reduced to another, presumably simpler, one (model security). Sound in theory, success in this endeavor requires the following:

1. Models that formalize the security policies of concern.
2. Practical methods for demonstrating a correspondence between a system and a formal model.

But the arguments given earlier suggest that suitable formal models for NIS security policies, which invariably include stipulations about availability and application semantics, do not today exist and would be difficult to develop. Moreover, establishing a correspondence between a system and a formal model has proved impractical, even for systems built specifically with the construction of that correspondence in mind and for which analysts have complete knowledge of and access to internals. Establishing the correspondence is thus not a very realistic prospect for COTS components, which are not built with such verification activities in mind and, generally, do not offer the necessary access to internals.

Experience has taught that systems, and in particular complex systems like NISs, can be secure, but only up to a point. There will always be residual vulnerabilities, always a degree of insecurity. The question one should ask is not whether a system is secure, but how secure that system is relative to some perceived threat. Yet this question is almost never asked. Instead, notions of absolute security, based on correspondence to formal models, have been the concern. Perhaps it is time to contemplate alternatives to the "absolute security" philosophy. Consider an alternative view, which might be summarized in three "axioms":

1. Insecurity exists.
2. Insecurity cannot be destroyed.
3. Insecurity can be moved around.

With this view, the object of security engineering would be to identify insecurities and move them to less exposed and less vulnerable parts of a system. Military cryptosystems that employ symmetric-key cryptography illustrate the approach. Radio transmissions are subject to interception, so they are enciphered. This encryption does not destroy the insecu

2. Authenticating the author or provider of foreign code has not proved, and likely will not prove, effective for enforcing security. Users are unwilling and/or unable to use the source of a piece of foreign code as a basis for denying or allowing execution. Revocation of certificates is necessary should a provider be compromised, but revocation is currently not supported by the Internet, which limits the scale over which the approach can be deployed.

3. Confining foreign code using an interpreter that provides a rich access control model has potential, provided programmers and users have a means to correctly assess and configure suitable sets of access rights.

Fine-grained Access Control and Application Security

Enforcing access control in accordance with the principle of least privilege is an extremely effective defense against a large variety of attacks, including many that could be conveyed using foreign code or application programs. Support for fine-grained access control (FGAC) facilitates this defense by allowing a user or system administrator to confine the accesses made by each individual software module. Each module is granted access to precisely the set of resources it needs to get the job done. Thus, a module that is advertised as offering a mortgage calculator function (with keyboard input of loan amount, interest, and duration) could be prevented from accessing the file system or network, and a spelling checker module could be granted read access to a dictionary and to the text files the user explicitly asks to have checked, but not to other files. (A small sketch of such a configuration appears below.)

Operating systems usually do provide some sort of access control mechanism, but invariably the controls are too coarse and concern only certain resources.25 FGAC is not supported. For example, access to large segments of memory is what is controlled, whereas it is access to small regions that is needed. And virtually no facilities are provided for controlling access to abstractions implemented above the level of the operating system, including accesses that might be sensitive to the state of the resource being controlled and/or the state of the module requesting the access.26

25 The notable exception is domain and type enforcement (DTE)-based operating systems (Boebert and Kain, 1996), which are employed in certain limited contexts. In these systems, processes are grouped into domains and are labeled accordingly. All system objects are also given labels, which define their types. A central table then specifies the kinds of accesses each domain can have to each type and to each other domain. The approach, although flexible, is tedious to specify and use. To address this difficulty, extensions are proposed in Badger et al. (1996).

26 A limited form of FGAC is available for Java programs running under the JDK 1.2 security architecture, but state-sensitive access decisions are not (easily) supported there, and the technology is limited to programs written in that single programming language.
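A minimal sketch, in C with invented module and resource names, of the kind of per-module configuration just described: the policy grants the spelling checker read access to a dictionary and nothing else, grants the mortgage calculator nothing at all, and refuses every unlisted access by default. A real FGAC mechanism would also have to handle grants that arise dynamically, such as the text files the user explicitly asks to have checked.

    #include <stdio.h>
    #include <string.h>

    enum access { ACCESS_NONE = 0, ACCESS_READ = 1, ACCESS_WRITE = 2 };

    /* One FGAC rule: a module, a resource it may touch, and how. */
    struct fgac_rule {
        const char *module;
        const char *resource;
        enum access allowed;
    };

    /* Policy in the spirit of the examples in the text: the spelling
       checker gets read access to the dictionary only; the mortgage
       calculator is granted no file or network access at all. */
    static const struct fgac_rule policy[] = {
        { "spell-checker", "/usr/share/dict/words", ACCESS_READ },
    };

    int fgac_allows(const char *module, const char *resource, enum access want) {
        for (size_t i = 0; i < sizeof(policy) / sizeof(policy[0]); i++) {
            if (strcmp(policy[i].module, module) == 0 &&
                strcmp(policy[i].resource, resource) == 0 &&
                (policy[i].allowed & want) == want)
                return 1;
        }
        return 0;   /* default deny: anything not granted is refused */
    }

    int main(void) {
        printf("%d\n", fgac_allows("spell-checker", "/usr/share/dict/words", ACCESS_READ));  /* 1 */
        printf("%d\n", fgac_allows("mortgage-calculator", "/etc/passwd", ACCESS_READ));      /* 0 */
        return 0;
    }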

Mechanisms for managing FGAC solve only part of the problem, though. Once FGAC support is in place, users and system managers must configure access controls for all the resources and all the modules. Being too liberal in setting permissions could allow an attack to succeed; being too conservative could cause legitimate computations to incur security violations. Experience with users confronting the range of security configuration controls available for compartmented mode workstations, which deal with both discretionary (identity-based, user-directed) and mandatory (rule-based, administratively directed) access policies, suggests that setting all the permissions for FGAC could be daunting. The problem is only exacerbated by the all-too-frequent mismatch between application-level security policies, which involve application-level abstractions, and the low-level objects and permissions constituting an FGAC configuration.

FGAC is important, but there is more to application security than access control. The lack of sound protected execution environments for processes limits what applications can do to protect themselves against users and against other applications. The fundamental insecurity of most deployed operating systems further undermines efforts to develop trustworthy applications: even when users are offered applications with apparent security functionality, they must question any claimed security. For example, Web browsers now incorporate cryptographic mechanisms to protect against wiretapping attacks. However, the keys used are (optionally) protected by being encrypted with a user-selected password and stored in a file system managed by an (insecure) operating system. Thus, an attacker who can gain unauthorized access to the computer (as a result of an operating system flaw) has two obvious options for undermining the cryptographic security employed by the browser:

- Steal the file with the keys and attack it using password searching, or
- Plant a Trojan horse to steal the key file when it is decrypted by the user and then e-mail the plaintext keys back to the attacker.

For some applications, security properties best enforced using cryptographic means are important.27 For example, security for e-mail entails preventing unauthorized release of message contents, sender authentication, message integrity, and perhaps nonrepudiation with proof of submission and/or receipt. And because implementing cryptographic protocols is subtle, a number of efforts are under way to free application developers from this task.

27 Note, however, that neither cryptography nor any other application-level mechanism will provide protection in the face of operating system vulnerabilities.

The IETF has developed a series of specifications for making simplified, cryptographically protected (stream or message) communications available using the generic security services application programming interface (GSSAPI). Intel's multilayered CDSA API aims to provide an integrated framework for cryptography, key and certificate management, and related services. CDSA has been submitted to the Open Software Foundation for adoption as a standard, and it has the backing of several major operating system vendors.

More generally, the applications programmer must either build suitable mechanisms or harness existing mechanisms when enforcing any particular application's security policy. There will always be many more applications than operating systems, applications will arise and evolve much faster, and applications will be developed by a much wider range of vendors. These facts of life were understood by the early advocates of secure operating system technology, and they are even truer today, owing to the increasing homogeneity of the operating system marketplace and the advent of mobile code. Thus, it is easy to see why government research and development on computer security in the past focused on securing operating systems. Yet these efforts have been largely unsuccessful in the marketplace.

Moreover, modern applications tend to involve security policies defined in terms of application-level abstractions rather than operating system ones. Thus, while there remains a need for security mechanisms in an operating system, it seems clear that enforcing security increasingly will be a responsibility shared between the operating system and the application. Research is needed to understand how the responsibilities might best be partitioned, what operating system mechanisms are suitable for assisting in application-level security implementation, and how best to specify and implement security policies within applications.

Findings

1. Operating system implementations of FGAC would help support the construction of systems that obey the principle of least privilege. That, in turn, could be an effective defense against a variety of attacks that might be delivered using foreign code or application programs.

2. Access control features in commercially successful operating systems are not adequate for supporting FGAC. Thus, new mechanisms with minimum performance impact are required.

3. Unless the management of FGAC is shown to be feasible and attractive for individual users and system administrators, mechanisms to support FGAC will not be usable in practice.

4. Enforcing application-level security is likely to be a shared responsibility between the application and security mechanisms that are provided by lower levels of a system. Little is known about how to partition this responsibility or about what mechanisms are best implemented at the various levels of a system.

5. The assurance limitations associated with providing application-layer security while employing a COTS operating system that offers minimum assurance need to be better understood.

Language-based Security: Software Fault Isolation and Proof-carrying Code

Virtually all operating system and hardware-implemented enforcement of security policies has, until recently, involved monitoring system execution (Box 4.5). Actions whose execution would violate the security policy being enforced are intercepted and aborted; all other actions are executed normally. But another approach to security policy enforcement is also plausible: execute only those programs that cannot violate the security policies of interest.

- By modifying a program before execution commences, it may be possible to add checks and prevent program behavior that would violate the security policy being enforced.
- By analyzing a program before execution commences, it may be possible to prove that no program behavior will violate the security policy being enforced.

Both schemes depend on analysis techniques developed by programming language researchers. And both require incorporating program analysis or some other form of automated deduction into the trusted computing base.

The idea of program rewriting to enforce security was first proposed in connection with memory safety, a security policy stipulating that memory accesses (reads, writes, and jumps) are confined to specified regions of memory. The naive approach, adding a test and conditional jump before each machine language instruction that reads, writes, or jumps to memory, can slow execution significantly enough to be impractical. Software fault isolation (SFI) (Wahbe et al., 1993) does not add tests. Instead, instructions and addresses are modified (by "and-ing" and "or-ing" masks) so that they do not reference memory outside the specified regions. The behavior of programs that never attempt illegal memory accesses is unaffected by the modifications; programs that would have violated memory safety end up accessing legal addresses instead.
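A minimal sketch of the and-ing/or-ing idea, with an invented segment layout: instead of testing an address and branching, the inserted operations force every address into the fault domain's own segment, so a reference that would have strayed outside the segment is redirected to a legal address rather than performed.

    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Invented layout: each fault domain owns one 2^20-byte segment,
       identified by the upper bits of the address. */
    #define OFFSET_MASK   ((UINT32_C(1) << 20) - 1)    /* keep the low 20 bits  */
    #define SEGMENT_BASE  UINT32_C(0x7ab00000)         /* this domain's segment */

    /* What the rewriter arranges to compute before every store or jump:
       AND clears the segment bits, OR forces in this domain's segment.
       There is no comparison and no branch. */
    static inline uint32_t sfi_mask(uint32_t addr) {
        return (addr & OFFSET_MASK) | SEGMENT_BASE;
    }

    int main(void) {
        uint32_t inside  = SEGMENT_BASE + UINT32_C(0x1234);  /* legal: unchanged    */
        uint32_t outside = UINT32_C(0x00401234);             /* illegal: redirected */
        printf("%08" PRIx32 " -> %08" PRIx32 "\n", inside,  sfi_mask(inside));
        printf("%08" PRIx32 " -> %08" PRIx32 "\n", outside, sfi_mask(outside));
        return 0;
    }

This is why a program that never strays is unaffected while a misbehaving one ends up reading or writing within its own segment; the published scheme applies the masks in the rewritten machine code itself and takes additional care with jumps, which this sketch glosses over.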

Note that the use of program modification to enforce security policies is not limited to memory safety; any security policy that can be enforced by monitoring execution can be enforced using a generalization of SFI (Schneider, 1998).

With proof-carrying code (PCC) (Necula, 1997), a program is executed only if an accompanying formal, machine-checkable proof establishes that the security policies of interest will not be violated. The approach works especially well for programs written in strongly typed programming languages, because proof generation can then be a side effect of compilation. Of course, the feasibility of automatic proof generation depends on exactly what security policy is being enforced. (Proof checking, which is done before executing a program, is, by definition, automatable. But it can be computationally intensive.28) Initial versions of PCC focused on ensuring that programs do not violate memory safety or attempt operations that violate type declarations. In reality, however, the approach is limited only by the availability of proof-generation and proof-checking methods, and richer security policies can certainly be handled.

28 Specifically, proof checking for existing versions of proof-carrying code can be polynomial in the size of the input. Proofs, in practice, are linear in the size of the program, although in theory they can be exponential in the size of the program.

SFI and PCC are in their infancy. So far, each has been tried only on relatively small examples and only on a few kinds of security policies. Each presumes that an entire system will be subject to analysis, whereas, in reality, COTS products may not be available in a form that enables such processing. And, finally, each is limited by available technology for program analysis, a field that is still moving ahead. In short, there is a great deal of research to be done before the practicality and limits of these approaches can be assessed. Some of that research involves questions about programming language semantics and automated deduction; other research involves trying the approaches in realistic settings so that any impediments to deployment can be identified.

SFI and PCC might well represent the vanguard of a new approach to the enforcement of some security policies, an approach in which programming language technology is leveraged to obtain mechanisms that are more efficient and better suited to the higher-level abstractions that characterize application-level security. Most programming today is done in high-level typed languages, and good use might be made of the structural and type information that high-level languages provide. Moreover, certain security policies, like information-flow restrictions, cannot be enforced by monitoring execution but can be enforced by analyzing entire program texts prior to execution. Any security policy that can be enforced by a secure operating system or by the use of hardware memory protection can be effected by SFI or PCC (Schneider, 1998).

Findings

1. Software fault isolation (SFI) and proof-carrying code (PCC) are promising new approaches to enforcing security policies.

2. A variety of opportunities may exist to leverage programming language research in implementing system security.

DENIAL OF SERVICE

Access control has traditionally been the focus of security mechanisms designed to prevent or contain attacks. But for computing systems that control infrastructures, defending against denial-of-service attacks, attacks that deny or degrade the services a system offers to its clients, is also quite important. Probably of greatest concern are attacks against system-wide services (network switching resources and servers supporting many users), as disruption here can have the widest impact.

Whenever finite-capacity resources or servers are being shared, the potential exists for some clients to monopolize use so that progress by others is degraded or denied. In early time-sharing systems, the operating system had to prevent a user's runaway program from entirely consuming one or another resource (usually processor cycles), thereby denying service to other users. The solutions invariably involved the following:

- Mechanisms that allowed executing programs to be preempted, with control returned to the operating system; and
- Scheduling algorithms to arbitrate fairly among competing service and resource requests.

Such solutions work if requests can be issued only by agents that are under the control of the operating system. That control allows the operating system to limit load by blocking the agents making unreasonable demands. Also implicit in such solutions is the assumption that, in the long run, demand will not outstrip supply.29

29 For example, in early time-sharing systems, a user was not permitted to log on if there was insufficient memory or processing capacity to accommodate the increased load.
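The two ingredients just listed, preemption and fair arbitration, presume that the system can name each requester and bound its share. The sketch below, in C with an invented quota and table size, shows arbitration in its simplest form: each identified client is charged against a per-window request quota and refused once the quota is exhausted. It is offered only to make the time-sharing-era assumption explicit; as discussed next, an NIS attacker who can forge identities or enlist many clients defeats exactly this kind of accounting.

    #include <stdio.h>
    #include <string.h>

    #define MAX_CLIENTS        8
    #define REQUESTS_PER_SLOT  5     /* invented per-client quota per time window */

    struct client_quota {
        char name[32];
        int  used;                   /* requests consumed in the current window */
    };

    static struct client_quota table[MAX_CLIENTS];

    /* Charge one request against 'name'; refuse once its quota for the
       current window is exhausted.  A periodic timer (not shown) would
       reset 'used' at the start of each window. */
    int admit_request(const char *name) {
        for (int i = 0; i < MAX_CLIENTS; i++) {
            if (table[i].name[0] == '\0')
                snprintf(table[i].name, sizeof(table[i].name), "%s", name);
            if (strcmp(table[i].name, name) == 0) {
                if (table[i].used >= REQUESTS_PER_SLOT)
                    return 0;                 /* over quota: refuse */
                table[i].used++;
                return 1;                     /* within quota: serve */
            }
        }
        return 0;                             /* table full: refuse */
    }

    int main(void) {
        for (int i = 0; i < 7; i++)
            printf("greedy client request %d: %s\n", i + 1,
                   admit_request("greedy") ? "served" : "refused");
        printf("other client request: %s\n",
               admit_request("other") ? "served" : "refused");
        return 0;
    }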

Defending against denial-of-service attacks in an NIS is not as simple. First, in such systems there is no single trusted entity that can control the agents making requests. Individual servers might ignore specific client requests that seem unreasonable or that would degrade or deny service to others, but servers cannot slow or terminate the clients making those requests. And because the cost of checking whether a request is reasonable itself consumes resources (e.g., buffer space to store the request and processing time to analyze it), a denial-of-service attack can succeed even if servers are able to detect and discard attacker requests. Such an attack, based on the lack of source address verification and the connectionless nature of the User Datagram Protocol (UDP), is the basis of CERT Advisory CA-96.01.

There is also a second difficulty with adopting the time-sharing solution for preventing denial-of-service attacks in an NIS. The difficulty derives from the implicit assumptions that accompany any statistical approach to sharing fixed-capacity resources. In a large, highly interconnected system like an NIS, no client accesses many services, although most clients are able to access most of the services. Server capacity is chosen accordingly, and scheduling algorithms are used to allocate service among contending clients. But scheduling algorithms are conditioned on assumptions about offered workload, and that means that an attacker, by violating those assumptions and altering the character of the offered workload, can subvert the scheduling algorithm. For example, an attacker might wage a denial-of-service attack simply by causing a large number of clients to make seemingly reasonable requests. On the Internet, such a coordinated attack is not difficult to launch, because PCs and many other Internet hosts run operating systems that are easy to subvert and because the Web and foreign code provide a vehicle for causing attack code to be downloaded onto the hosts.

Not all denial-of-service attacks involve saturating servers or resources, though. It suffices simply to inactivate a subsystem on which the operation of the system depends. Causing such a critical subsystem to crash is one obvious means. But there are also more subtle means of preventing a subsystem from responding to service requests. As discussed in Chapter 2, by contaminating the Internet's Domain Name Service (DNS) caches, an attacker can inactivate packet routing and divert traffic from its intended destination. And, in storage systems where updates can be "rolled back" in response to error conditions, it may be possible for an attacker's request to create an error condition that causes a predecessor's updates to be rolled back (without that predecessor's knowledge of the lost update), effectively denying service (Gligor, 1984).

Findings

1. No mechanisms or systematic design methods exist for defending against denial-of-service attacks, yet defending against such attacks is important for ensuring availability in an NIS.

2. The ad hoc countermeasures that have been successful in securing time-sharing systems from denial-of-service attacks seem to be intrinsically unsuitable for use in an NIS.

REFERENCES

Abadi, Martin, and Roger Needham. 1994. Prudent Engineering Practice for Cryptographic Protocols. Palo Alto, CA: Digital Equipment Corporation, Systems Research Center, June.

Badger, L., Daniel F. Sterne, David L. Sherman, and Kenneth M. Walker. 1996. A Domain and Type Enforcement UNIX Prototype. Vol. 9, UNIX Computing Systems. Glenwood, MD: Trusted Information Systems Inc.

Bell, D.E., and Leonard J. La Padula. 1973. Secure Computer Systems: Mathematical Foundations and Model. MTR 2547, Vol. 2. Bedford, MA: MITRE, November.

Bellovin, Steven M., and M. Merritt. 1992. "Encrypted Key Exchange: Password-based Protocols Secure Against Dictionary Attacks," pp. 72-84 in Proceedings of the IEEE Symposium on Security and Privacy. Los Alamitos, CA: IEEE Computer Society Press.

Boebert, W. Earl, and Richard Y. Kain. 1996. "A Further Note on the Confinement Problem," pp. 198-203 in Proceedings of the IEEE 1996 International Carnahan Conference on Security Technology. New York: IEEE Computer Society.

Bolt, Beranek, and Newman (BBN). 1978. "Appendix H: Interfacing a Host to a Private Line Interface," Specification for the Interconnection of a Host and an IMP. BBN Report 1822. Cambridge, MA: BBN, May.

Brewer, D., and M. Nash. 1989. "The Chinese Wall Security Policy," pp. 206-214 in Proceedings of the IEEE Symposium on Security and Privacy. Los Alamitos, CA: IEEE Computer Society Press.

Brinkley, D.L., and R.R. Schell. 1995. "Concepts and Terminology for Computer Security," in Information Security, M.D. Abrams, S. Jajodia, and H.J. Podell, eds. Los Alamitos, CA: IEEE Computer Society Press.

Chapman, D. Brent, and Elizabeth D. Zwicky. 1995. Internet Security: Building Internet Firewalls. Newton, MA: O'Reilly and Associates.

Cheswick, William R., and Steven M. Bellovin. 1994. Firewalls and Internet Security. Reading, MA: Addison-Wesley.

Clark, D.D., and D.R. Wilson. 1987. "A Comparison of Commercial and Military Computer Security Policies," pp. 184-194 in Proceedings of the IEEE Symposium on Security and Privacy. Los Alamitos, CA: IEEE Computer Society Press.

Commission on Protecting and Reducing Government Secrecy, Daniel Patrick Moynihan, chairman. 1997. Secrecy: Report of the Commission on Protecting and Reducing Government Secrecy. 103rd Congress (pursuant to Public Law 236), Washington, DC, March 3.

Computer Science and Telecommunications Board (CSTB), National Research Council. 1996. Cryptography's Role in Securing the Information Society, Kenneth W. Dam and Herbert S. Lin, eds. Washington, DC: National Academy Press.

Defense Information Systems Agency (DISA). 1996. The Department of Defense Goal Security Architecture (DGSA). Version 3.0. 8 vols. Vol. 6, Technical Architecture Framework for Information Management. Arlington, VA: DISA.

Diffie, Whitfield, and Martin E. Hellman. 1976. "New Directions in Cryptography," IEEE Transactions on Information Theory, 22(6):644-654.

Feustel, E., and T. Mayfield. 1998. "The DGSA: Unmet Information Security Challenges for Operating Systems Designers," Operating Systems Review, 32(1):3-22.

Gligor, Virgil D. 1984. "A Note on Denial-of-Service in Operating Systems," IEEE Transactions on Software Engineering, 10(3):320-324.

Gong, Li, M.A. Lomas, R.M. Needham, and J.H. Saltzer. 1993. "Protecting Poorly Chosen Secrets from Guessing Attacks," IEEE Journal on Selected Areas in Communications, 11(5):648-656.

Gong, Li, Marianne Mueller, Hemma Prafullchandra, and Roland Schemers. 1997. "Going Beyond the Sandbox: An Overview of the New Security Architecture in the Java Development Kit 1.2," pp. 103-112 in Proceedings of the USENIX Symposium on Internet Technologies and Systems, Monterey, California. Berkeley, CA: USENIX Association.

Haller, Neil M. 1994. The S/Key One-time Password System. Morristown, NJ: Bellcore.

Joncheray, Laurent. 1995. "A Simple Active Attack Against TCP," in Proceedings of the 5th USENIX UNIX Security Symposium, Salt Lake City, Utah. Berkeley, CA: USENIX Association.

Kain, Richard Y., and Carl W. Landwehr. 1986. "On Access Checking in Capability-based Systems," pp. 95-101 in Proceedings of the IEEE Symposium on Security and Privacy. Los Alamitos, CA: IEEE Computer Society Press.

Kent, Stephen T. 1997. "How Many Certification Authorities Are Enough?," Computing-related Security Research Requirements Workshop III, U.S. Department of Energy, March.

Kneece, Jack. 1986. Family Treason. New York: Stein and Day.

Kohnfelder, Loren M. 1978. "Toward a Practical Public-Key Cryptosystem," B.S. thesis, Department of Electrical Engineering, Massachusetts Institute of Technology, Cambridge, MA.

Landwehr, Carl E., Constance L. Heitmeyer, and John McLean. 1984. "A Security Model for Military Message Systems," ACM Transactions on Computer Systems, 2(3):198-222.

Lewis, Peter H. 1998. "Threat to Corporate Computers Often the Enemy Within," New York Times, March 2, p. 1.

Menezes, Alfred J., Paul C. Van Oorschot, and Scott A. Vanstone. 1996. Handbook of Applied Cryptography. CRC Press Series on Discrete Mathematics and Its Applications. Boca Raton, FL: CRC Press, October.

Necula, George C. 1997. "Proof-Carrying Code," pp. 106-119 in Proceedings of the 24th Symposium on Principles of Programming Languages. New York: ACM Press.

Neuman, B. Clifford, and Theodore Ts'o. 1994. "Kerberos: An Authentication Service for Computer Networks," IEEE Communications Magazine, 32(9):33-38. Available online at .

Schneider, Fred B. 1998. Enforceable Security Policies. Technical Report TR98-1664, Computer Science Department, Cornell University, Ithaca, NY. Available online at .

Schneier, Bruce. 1996. Applied Cryptography. 2nd Ed. New York: John Wiley & Sons.

Schwartz, John. 1997. "Case of the Intel 'Hacker,' Victim of His Own Access," Washington Post, September 15, p. F17.

Sterling, Bruce. 1992. The Hacker Crackdown: Law and Disorder on the Electronic Frontier. New York: Bantam Books.

Stoll, Clifford. 1989. The Cuckoo's Egg. New York: Doubleday.

U.S. Department of Defense (DOD). 1985. Trusted Computer System Evaluation Criteria. Department of Defense 5200.28-STD, the "Orange Book." Ft. Meade, MD: National Computer Security Center, December.

U.S. General Accounting Office (GAO). 1996. Information Security: Computer Attacks at Department of Defense Pose Increasing Risks: A Report to Congressional Requesters. Washington, DC: U.S. GAO, May.

Van Eng, Ray. 1997. "ActiveX Used to Steal Money Online," World Internet News Digest (W.I.N.D.), February 14. Available online at .

Wahbe, Robert, Steven Lucco, Thomas E. Anderson, and Susan L. Graham. 1993. "Efficient Software-based Fault Isolation," pp. 203-216 in Proceedings of the 14th ACM Symposium on Operating Systems Principles. New York: ACM Press.

War Room Research LLC. 1996. 1996 Information Systems Security Survey. Baltimore, MD: War Room Research LLC, November 21.

Weissman, Clark. 1995. "Penetration Testing," in Information Security, M.D. Abrams, S. Jajodia, and H.J. Podell, eds. Los Alamitos, CA: IEEE Computer Society Press.