Computers at Risk: Safe Computing in the Information Age

3  Technology to Achieve Secure Computer Systems

This chapter provides a reasonably complete survey of the technology needed to protect information and other resources controlled by computer systems, and it discusses how such technology can be used to make systems secure. It explains the essential technical ideas, gives the major properties of relevant techniques currently known, and tells why they are important. It suggests developments that may occur in the next few years and provides some of the rationale for the research agenda set forth in Chapter 8. Appendix B of this report discusses in more detail several topics that are either fundamental to computer security technology or of special current interest—including how some important things (such as passwords) work and why they do not work perfectly.

This discussion of the technology of computer security addresses two major concerns: What do we mean by security? How do we get security, and how do we know when we have it? The first involves specification of security and the services that computer systems provide to support security. The second involves implementation of security, and in particular the means of establishing confidence that a system will actually provide the security the specifications promise.

Each topic is discussed according to its importance for the overall goal of providing computer security, and not according to how much work has already been done on that topic. This chapter discusses many of the concepts introduced in Chapter 2, but in more detail. It examines the technical process of relating computer mechanisms to higher-level controls and policies, a process
that requires the development of abstract security models and supporting mechanisms. Although careful analysis of the kind carried out in this chapter may seem tedious, it is a necessary prerequisite to ensuring the security of something as complicated as a computer system. Ensuring security, like protecting the environment, requires a holistic approach; it is not enough to focus on the problem that caused trouble last month, because as soon as that difficulty is resolved, another will arise.

SPECIFICATION VS. IMPLEMENTATION

The distinction between what a system does and how it does it, between specification and implementation, is basic to the design and analysis of computer systems. A specification for a system is the meeting point between the customer and the builder. It says what the system is supposed to do. This is important to the builder, who must ensure that what the system actually does matches what it is supposed to do. It is equally important to the customer, who must be confident that what the system is supposed to do matches what he wants. It is especially critical to know exactly and completely how a system is supposed to support requirements for security, because any mistake can be exploited by a malicious adversary.

Specifications can be written at many levels of detail and with many degrees of formality. Broad and informal specifications of security are called security policies1 (see Chapter 2), examples of which include the following:

1. "Confidentiality: Information shall be disclosed only to people authorized to receive it."
2. "Integrity: Data shall be modified only according to established procedures and at the direction of properly authorized people."

It is possible to separate from the whole the part of a specification that is relevant to security. Usually a whole specification encompasses much more than the security-relevant part.
For example, a whole specification usually says a good deal about price and performance. In systems for which confidentiality and integrity are the primary goals of security policies, performance is not relevant to security, because a system can provide confidentiality and integrity regardless of how well or badly it performs. But for systems for which availability and integrity are paramount, performance specifications may be relevant to security. Since security is the focus of this discussion, "specification" as used here should be understood to describe only what is relevant to security.

A secure system is one that meets the particular specifications meant to ensure security. Since many different specifications are possible,
there cannot be any absolute notion of a secure system. An example from a related field clarifies this point. We say that an action is legal if it meets the requirements of the law. Since different jurisdictions can have different sets of laws, there cannot be any absolute notion of a legal action; what is legal under the laws of Britain may be illegal in the United States.

A system that is believed to be secure is called trusted. Of course, a trusted system must be trusted for something; in the context of this report it is trusted to meet security specifications. In some other context such a system might be trusted to control a shuttle launch or to retrieve all the 1988 court opinions dealing with civil rights.

Policies express a general intent. Of course, they can be more detailed than the very general ones given as examples above; for instance, the following is a refinement of the first policy: "Salary confidentiality: Individual salary information shall be disclosed only to the employee, his superiors, and authorized personnel people." But whether general or specific, policies contain terms that are not precisely defined, and so it is not possible to tell with absolute certainty whether a system satisfies a policy. Furthermore, policies specify the behavior of people and of the physical environment as well as the behavior of machines, so that it is not possible for a computer system alone to satisfy them.
Technology for security addresses these problems by providing methods for the following:

- Integrating a computer system into a larger system, comprising people and a physical environment as well as computers, that meets its security policies;
- Giving a precise specification, called a security model, for the security-relevant behavior of the computer system;
- Building, with components that provide and use security services, a system that meets the specifications; and
- Establishing confidence, or assurance, that a system actually does meet its specifications.

This is a tall order that at the moment can be only partially filled. The first two actions are discussed in the section below titled "Specification," the last two in the following section titled "Implementation." Services are discussed in both sections to explain both the functions being provided and how they are implemented.

SPECIFICATION: POLICIES, MODELS, AND SERVICES

This section deals with the specification of security. It is based on the taxonomy of security policies given in Chapter 2. There are only a few highly developed security policies, and research is needed to
develop additional policies (see Chapter 8), especially in the areas of integrity and availability. Each of the highly developed policies has a corresponding (formal) security model, which is a precise specification of how a computer system should behave as part of a larger system that implements a policy. Implementing a security model requires mechanisms that provide particular security services. A small number of fundamental mechanisms have been identified that seem adequate to implement most of the highly developed security policies currently in use.

The simple example of a traffic light illustrates the concepts of policy and model; in this example, safety plays the role of security. The light is part of a system that includes roads, cars, and drivers. The safety policy for the complete system is that two cars should not collide. This is refined into a policy that traffic must not move in two conflicting directions through an intersection at the same time. This policy is translated into a safety model for the traffic light itself (which plays a role analogous to that of a computer system within a complete system): two green lights may never appear in conflicting traffic patterns simultaneously. This is a simple specification.

Observe that the complete specification for a traffic light is much more complex; it provides for the ability to set the duration of the various cycles, to synchronize the light with other traffic lights, to display different combinations of arrows, and so forth. None of these details, however, is critical to the safety of the system, because they do not bear directly on whether or not cars will collide. Observe also that for the whole system to meet its safety policy, the light must be visible to the drivers, and they must understand and obey its rules.
If the light remains red in all directions it will meet its specification, but the drivers will lose patience and start to ignore it, so that the entire system may not support a policy of ensuring safety. An ordinary library affords a more complete example (see Appendix B of this report) that illustrates several aspects of computer system security in a context that does not involve computers.

Policies

A security policy is an informal specification of the rules by which people are given access to a system to read and change information and to use resources. Policies naturally fall into a few major categories:

- Confidentiality: controlling who gets to read information;
- Integrity: assuring that information and programs are changed only in a specified and authorized manner; and
- Availability: assuring that authorized users have continued access to information and resources.

Two orthogonal categories can be added:

- Resource control: controlling who has access to computing, storage, or communication resources (exclusive of data); and
- Accountability: knowing who has had access to information or resources.

Chapter 2 describes these categories in detail and discusses how an organization that uses computers can formulate a security policy by drawing elements from all these categories. The discussion below summarizes this material and supplements it with some technical details.

Security policies for computer systems generally reflect long-standing policies for the security of systems that do not involve computers. In the case of national security these are embodied in the information classification and personnel clearance system; for commercial computing they come from established accounting and management control practices.

From a technical viewpoint, the most fully developed policies are those that have been developed to ensure confidentiality. They reflect the concerns of the national security community and are derived from Department of Defense (DOD) Directive 5000.1, the basic directive for protecting classified information.2

The DOD computer security policy is based on security levels. Given two levels, one may be lower than the other, or the two may not be comparable. The basic principle is that information can never be allowed to leak to a lower level, or even to a level that is not comparable. In particular, a program that has "read access" to data at a higher level cannot simultaneously have "write access" to lower-level data. This is a rigid policy motivated by a lack of trust in application programs.
In contrast, a person can make an unclassified telephone call even though he may have classified documents on his desk, because he is trusted not to read the documents over the telephone. There is no strong basis for placing similar trust in an arbitrary computer program.

A security level or compartment consists of an access level (either top secret, secret, confidential, or unclassified) and a set of categories (e.g., Critical Nuclear Weapon Design Information (CNWDI), North Atlantic Treaty Organization (NATO), and so on). The access levels are ordered (top secret, highest; unclassified, lowest). The categories, which have unique access and protection requirements, are not ordered, but sets of categories are ordered by inclusion: one set is lower than another if every category in the first is included in the second. One
security level is lower than another, different level if it has an equal or lower access level and an equal or lower set of categories. Thus [confidential; NATO] is lower than both [confidential; CNWDI, NATO] and [secret; NATO]. Given two levels, it is possible that neither is lower than the other. Thus [secret; CNWDI] and [confidential; NATO] are not comparable.

Every piece of information has a security level (often called its label). Normally information is not permitted to flow downward: information at one level can be derived only from information at equal or lower levels, never from information that is at a higher level or is not comparable. If information is computed from several inputs, it has a level that is at least as high as any of the inputs. This rule ensures that if information is stored in a system, anything computed from it will have an equal or higher level. Thus the classification never decreases.

The DOD computer security policy specifies that a person is cleared to a particular security level and can see information only at that, or a lower, level. Since anything seen can be derived only from other information categorized as being at that level or lower, the result is that what a person sees can depend only on information in the system at his level or lower. This policy is mandatory: except for certain carefully controlled downgrading or declassification procedures, neither users nor programs in the system can break the rules or change the security levels. As Chapter 2 explains, both this and other confidentiality policies can also be applied in other settings.

Integrity policies have not been studied as carefully as confidentiality policies, even though some sort of integrity policy governs the operation of every commercial data-processing system. Work in this area (Clark and Wilson, 1987) lags work on confidentiality by about 15 years.
Nonetheless, interest is growing in workable integrity policies and corresponding mechanisms, especially since such mechanisms provide a sound basis for limiting the damage caused by viruses, self-replicating software that can carry hidden instructions to alter or destroy data.

The most highly developed policies to support integrity reflect the concerns of the accounting and auditing community for preventing fraud. The essential notions are individual accountability, auditability, separation of duty, and standard procedures. Another kind of integrity policy is derived from the information-flow policy for confidentiality applied in reverse, so that information can be derived only from other information of the same or a higher integrity level (Biba, 1975). This particular policy is extremely restrictive and thus has not been applied in practice.

Policies categorized under accountability have usually been formulated
as part of confidentiality or integrity policies; accountability has not received independent attention. In addition, very little work has been done on security policies related to availability. Absent this work, the focus has been on the practical aspects of contingency planning and recoverability.

Models

To engineer a computer system that can be used as part of a larger system that implements a security policy, and to decide unambiguously whether such a computer system meets its specification, an informal, broadly stated policy must be translated into a precise model. A model differs from a policy in two ways:

- It describes the desired behavior of a computer system's mechanisms, not that of the larger system that includes people.
- It is precisely stated in formal language that resolves the ambiguities of English and makes it possible, at least in principle, to give a mathematical proof that a system satisfies the model.

Two models are in wide use. One, based on the DOD computer security policy, is the flow model; it supports a certain kind of confidentiality policy. The other, based on the familiar idea of stationing a guard at an entrance, is the access control model; it supports a variety of confidentiality, integrity, and accountability policies. There are no models that support availability policies.

Flow Model

The flow model is derived from the DOD computer security policy described above. In this model (Denning, 1976) each piece of data in the system visible to a user or an application program is held in a container called an object. Each object has an associated security level. An object's level indicates the security level of the data it contains. Data in one object is allowed to affect another object only if the source object's level is lower than or equal to the destination object's level. All the data within a single object have the same level and hence can be manipulated freely.
The flow model ensures that information at a given security level flows only to an equal or higher level. Data is not the same as information; for example, an encrypted message contains data, but it conveys no information unless one knows the encryption key or can break the encryption system. Unfortunately, data is all the computer can understand. By preventing an object at one level from being
affected in any way by data that is not at an equal or lower level, the flow model ensures that information can flow only to an equal or higher level inside the computer system. It does this very conservatively and thus forbids many actions that would not in fact cause any information to flow improperly.

A more complicated version of the flow model (which is actually the basis of the rules in the Orange Book) separates objects into active subjects that can initiate operations and passive objects that simply contain data, such as a file, a piece of paper, or a display screen. Data can flow only between an object and a subject; flow from object to subject is called a read operation, and flow from subject to object is called a write operation. Now the rules are that a subject can only read from an object at an equal or lower level, and can only write to an object at an equal or higher level.

Not all possible flows in a system look like read and write operations. Because the system is sharing resources among objects at different levels, it is possible for information to flow on what are known as covert channels (Lampson, 1973; IEEE, 1990a). For example, a high-level subject that learns that a surprise attack is scheduled for next week might be able to send a little information to a low-level subject by using up all the disk space. When the low-level subject finds itself unable to write a file, it has learned about the attack (or at least received a hint). To fully realize the intended purpose of a flow model, it is necessary to identify and attempt to close all the covert channels, although total avoidance of covert channels is generally impossible owing to the need to share resources.

To fit this model of a computer system into the real world, it is necessary to account for people. A person is cleared to some level of permitted access.
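The read and write rules of the subject/object version of the flow model can be stated as a short executable sketch. This is an illustration, not the report's formulation; for brevity it represents levels as plain integers rather than full level-and-category pairs, so "lower than or equal" is just integer comparison.

```python
# A sketch of the flow-model rules for subjects and objects, with
# integer security levels (0 = lowest) standing in for full labels.
# A subject may read only from an equal-or-lower object and write only
# to an equal-or-higher object, so information never flows downward.

def may_read(subject_level, object_level):
    return object_level <= subject_level   # "read down" only

def may_write(subject_level, object_level):
    return subject_level <= object_level   # "write up" only

SECRET, UNCLASSIFIED = 2, 0
assert may_read(SECRET, UNCLASSIFIED)       # a secret subject reads unclassified data
assert not may_read(UNCLASSIFIED, SECRET)   # ...but not the reverse
assert may_write(UNCLASSIFIED, SECRET)      # writing upward is permitted
assert not may_write(SECRET, UNCLASSIFIED)  # leaking downward is not
```

Note what the sketch cannot capture: covert channels such as the exhausted-disk example operate outside these read and write checks, which is why they must be sought out separately.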
When he identifies himself to the system as a user present at some terminal, he can set the terminal's level to any equal or lower level. This ensures that the user will never see information at a higher level than his clearance allows. If the user sets the terminal level lower than the level of his clearance, he is trusted not to take high-level information out of his head and introduce it into the system.

Although not logically required, the flow model policy has generally been viewed as mandatory; neither users nor programs in a system can break the flow rule or change levels. No real system can strictly follow this rule, since procedures are always needed for declassifying data, allocating resources, and introducing new users, for example. The access control model is used for these purposes, among others.

Access Control Model

The access control model is based on the idea of stationing a guard
in front of a valuable resource to control who has access to it. This model organizes the system into

- Objects: entities that respond to operations by changing their state, providing information about their state, or both;
- Subjects: active objects that can perform operations on objects; and
- Operations: the way that subjects interact with objects.

The objects are the resources being protected; an object might be a document, a terminal, or a rocket. A set of rules specifies, for each object and each subject, what operations that subject is allowed to perform on that object. A reference monitor acts as the guard to ensure that the rules are followed (Lampson, 1985). An example of a set of access rules follows:

  Subject        Operation            Object
  Smith          Read file            "1990 pay raises"
  White          Send "Hello"         Terminal 23
  Process 1274   Rewind               Tape unit 7
  Black          Fire three rounds    Bow gun
  Jones          Pay invoice 432567   Account Q34

There are many ways to express the access rules. The two most popular are to attach to each subject a list of the objects it can access (a capability list), or to attach to each object a list of the subjects that can access it (an access control list). Each list also identifies the operations that are allowed. Most systems use some combination of these approaches.

Usually the access rules do not mention each operation separately. Instead they define a smaller number of "rights" (often called permissions)—for example, read, write, and search—and grant some set of rights to each (subject, object) pair. Each operation in turn requires some set of rights. In this way a number of different operations, all requiring the right to read, can read information from an object. For example, if the object is a text file, the right to read may be required for such operations as reading a line, counting the number of words, and listing all the misspelled words.
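An access control list with rights can be sketched as follows. The subjects, objects, rights, and operation names below are illustrative assumptions (loosely echoing the example rules above), and the check is a deliberately simplified stand-in for a real reference monitor.

```python
# A sketch of access control lists with rights: each object carries a
# mapping from subjects to the rights they hold on it, and each
# operation requires some set of rights. All names are illustrative.

ACL = {
    "1990 pay raises": {"Smith": {"read"}},
    "Terminal 23":     {"White": {"write"}},
}

# Several different operations can all require the same right:
REQUIRED_RIGHTS = {
    "read line":    {"read"},
    "count words":  {"read"},
    "send message": {"write"},
}

def allowed(subject, operation, obj):
    """The reference-monitor check: does the subject hold the rights
    that this operation requires on this object?"""
    granted = ACL.get(obj, {}).get(subject, set())
    return REQUIRED_RIGHTS[operation] <= granted

assert allowed("Smith", "read line", "1990 pay raises")
assert allowed("Smith", "count words", "1990 pay raises")  # same right, different operation
assert allowed("White", "send message", "Terminal 23")
assert not allowed("White", "read line", "1990 pay raises")  # not on the ACL
```

A capability-list organization would invert the table, attaching to each subject the objects and rights it holds; the check itself would be the same.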
One operation that can be done on an object is to change which subjects can access the object. There are many ways to exercise this control, depending on the particular policy in force. When a discretionary policy applies, for each object an "owner" or principal is identified who can decide without any restrictions who can do what to the object. When a mandatory policy applies, the owner can make these
decisions only within certain limits. For example, a mandatory flow policy allows only a security officer to change the security level of an object, and the flow model rules limit access. The principal controlling the object can usually apply further limits at his discretion.

The access control model leaves open what the subjects are. Most commonly, subjects are users, and any active entity in the system is treated as acting on behalf of some user. In some systems a program can be a subject in its own right. This adds a great deal of flexibility, because the program can implement new objects using existing ones to which it has access. Such a program is called a protected subsystem; it runs as a subject different from the principal invoking it, usually one that can access more objects. The security services used to support creation of protected subsystems also may be used to confine suspected Trojan horses or viruses, thus limiting the potential for damage from such programs. This can be done by running a suspect program as a subject that is different from the principal invoking it, in this case a subject that can access fewer objects. Unfortunately, such facilities have not been available in most operating systems.

The access control model can be used to realize both secrecy and integrity policies: the former by controlling read operations, and the latter by controlling write operations and other operations that change state. This model supports accountability, using the simple notion that every time an operation is invoked, the identity of the subject and the object as well as the operation should be recorded in an audit trail that can later be examined. Difficulties in making practical use of such information may arise owing to the large size of an audit trail.

Services

Basic security services are used to build systems satisfying the policies discussed above.
Directly supporting the access control model, which in turn can be used to support nearly all the policies discussed, these services are as follows:

- Authentication: determining who is responsible for a given request or statement,3 whether it is "The loan rate is 10.3 percent," or "Read file 'Memo to Mike,'" or "Launch the rocket."
- Authorization: determining who is trusted for a given purpose, whether it is establishing a loan rate, reading a file, or launching a rocket.
- Auditing: recording each operation that is invoked along with the identity of the subject and object, and later examining these records.

Given these services, it is easy to implement the access control
model. Whenever an operation is invoked, the reference monitor uses authentication to find out who is requesting the operation and then uses authorization to find out whether the requester is trusted for that operation. If so, the reference monitor allows the operation to proceed; otherwise, it cancels the operation. In either case, it uses auditing to record the event.

Authentication

To answer the question, Who is responsible for this statement?, it is necessary to know what sort of entities can be responsible for statements; we call these entities principals. It is also necessary to have a way of naming the principals that is consistent between authentication and authorization, so that the result of authenticating a statement is meaningful for authorization.

A principal is a (human) user or a (computer) system. A user is a person, but a system requires some explanation. A system comprises hardware (e.g., a computer) and perhaps software (e.g., an operating system). A system can depend on another system; for example, a user-query process depends on a database management system, which depends on an operating system, which depends on a computer. As part of authenticating a system, it may be necessary to verify that the systems it depends on are trusted.

In order to express trust in a principal (e.g., to specify who can launch the rocket), one must be able to give the principal a name. The name must be independent of any information (such as passwords or encryption keys) that may change without any change in the principal itself. Also, it must be meaningful, both when access is granted and later when the trust being granted is reviewed to see whether that trust is still warranted. A naming system must be:

- Complete: every principal has a name; it is difficult or impossible to express trust in a nameless principal.
- Unambiguous: the same name does not refer to two different principals; otherwise it is impossible to know who is being trusted.
- Secure: it is easy to tell which other principals must be trusted in order to authenticate a statement from a named principal.

In a large system, naming must be decentralized to be manageable. Furthermore, it is neither possible nor wise to rely on a single principal that is trusted by every part of the system. Since systems as well as users can be principals, systems as well as users must be able to have names. One way to organize a decentralized naming system is as a hierarchy,
[…]

many bases to cover, and because every base is critical to assurance, there are bound to be mistakes. Hence two other important aspects of assurance are redundant checks like the security perimeters discussed below, and methods, such as audit trails and backup databases, for recovering from failures.

The main components of a TCB are discussed below in the sections headed "Computing" and "Communications." This division reflects the fact that a modern distributed system is made up of computers that can be analyzed individually but that must communicate with each other quite differently from the way each communicates internally.

Computing

The computing part of the TCB includes the application programs, the operating system that they depend on, and the hardware (processing and storage) that both depend on.

Hardware

Since software consists of instructions that must be executed by hardware, the hardware must be part of the TCB. The hardware is depended on to isolate the TCB from the untrusted parts of the system. To do this, it suffices for the hardware to provide for a "user state" in which a program can access only the ordinary computing instructions and restricted portions of the memory, as well as a "supervisor state" in which a program can access every part of the hardware. Most contemporary computers above the level of personal computers incorporate these facilities. There is no strict requirement for fancier hardware features, although they may improve performance in some architectures.

The only essential, then, is to have simple hardware that is trustworthy. For most purposes the ordinary care that competent engineers take to make the hardware work is good enough.
It is possible to get higher assurance by using formal methods to design and verify the hardware; this has been done in several projects, of which the VIPER verified microprocessor chip (for a detailed description see Appendix B) is an example (Cullyer, 1989). There is a mechanically checked proof to show that the VIPER chip's gate-level design implements its specification. VIPER pays the usual price for high assurance: it is several times slower than ordinary microprocessors built at the same time.

Another approach to using hardware to support high assurance is to provide a separate, simple processor with specialized software to implement the basic access control services. If this hardware controls
the computer's memory access mechanism and forces all input/output data to be encrypted, that is enough to keep the rest of the hardware and software out of the TCB. (This requires that components upstream of the security hardware do not share information across security classes.) This approach has been pursued in the LOCK project, which is described in detail in Appendix B.

Unlike the other components of a computing system, hardware is physical and has physical interactions with the environment. For instance, someone can open a cabinet containing a computer and replace one of the circuit boards. If this is done with malicious intent, obviously all bets are off about the security of the computer. It follows that physical security of the hardware must be assured.

There are less obvious physical threats. In particular, computer hardware involves changing electric and magnetic fields, and it therefore generates electromagnetic radiation (often called emanations)5 as a byproduct of normal operation. Because this radiation can be a way for information to be disclosed, ensuring confidentiality may require that it be controlled. Similarly, radiation from the environment can affect the hardware.

Operating System

The job of an operating system is to share the hardware among application programs and to provide generic security services so that most applications do not need to be part of the TCB. This layering of security services is useful because it keeps the TCB small, since there is only one operating system for many applications. Within the operating system itself the idea of layering or partitioning can be used to divide the operating system into a kernel that is part of the TCB and into other components that are not (Gasser, 1988). How to do this is well known.
The operating system provides an authorization service by controlling subjects' (processes) accesses to objects (files and communication devices such as terminals). The operating system can enforce various security models for these objects, which may be enough to satisfy the security policy. In particular it can enforce a flow model, which is sufficient for the DOD confidentiality policy, as long as it is able to keep track of security levels at the coarse granularity of whole files. To enforce an integrity policy like the purchasing system policy described above, there must be some trusted applications to handle functions like approving orders. The operating system must be able to treat these applications as principals, so that they can access objects that the untrusted applications running on behalf of the same user cannot access. Such applications are protected subsystems.
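The flow model mentioned above, applied at the coarse granularity of whole files, amounts to two simple comparisons of security levels. The following sketch is not from the report; the level names and functions are hypothetical, and a real reference monitor would mediate every access inside the TCB rather than expose such checks directly.

```python
# Illustrative sketch (not from the report): a flow-model check in the
# style of the DOD confidentiality policy, with security levels tracked
# at whole-file granularity. All names here are hypothetical.

LEVELS = {"unclassified": 0, "confidential": 1, "secret": 2, "top secret": 3}

def can_read(subject_level: str, object_level: str) -> bool:
    """A subject may read only objects at or below its own level."""
    return LEVELS[subject_level] >= LEVELS[object_level]

def can_write(subject_level: str, object_level: str) -> bool:
    """A subject may write only objects at or above its own level,
    so that information cannot flow downward."""
    return LEVELS[subject_level] <= LEVELS[object_level]

# A process at "secret" can read a "confidential" file but cannot
# write it, since that could leak secret data to a lower level.
assert can_read("secret", "confidential")
assert not can_write("secret", "confidential")
```

A trusted application acting as a protected subsystem would run outside these rules, which is precisely why the operating system must be able to treat it as a distinct principal.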
Applications and the Problem of Malicious Code

Ideally applications should not be part of the TCB, since they are numerous, are often large and complicated, and tend to come from a variety of sources that are difficult to police. Unfortunately, attempts to build applications, such as electronic mail or databases that can handle multiple levels of classified information, on top of an operating system that enforces flow have had limited success. It is necessary to use a different operating system object for information at each security level, and often these objects are large and expensive. And to implement an integrity policy, it is always necessary to trust some application code. Again, it seems best to apply the kernel method, putting the code that must be trusted into separate components that are protected subsystems. The operating system must support this approach (Honeywell, 1985–1988).

In most systems any application program running on behalf of a user has full access to everything the user can access. This is considered acceptable on the assumption that the program, although it may not be trusted to always do the right thing, is unlikely to do an intolerable amount of damage. But suppose that the program does not just do the wrong thing, but is actively malicious? Such a program, which appears to do something useful but has hidden within it the ability to cause serious damage, is called a Trojan horse. When a Trojan horse runs, it can do a great deal of damage: delete files, corrupt data, send a message with the user's secrets to another machine, disrupt the operation of the host, waste machine resources, and so forth. There are many places to hide a Trojan horse: the operating system, an executable program, a shell command file, or a macro in a spreadsheet or word-processing program are only a few of the possibilities.
Moreover, a compiler or other program development tool that contains a Trojan horse can insert secondary Trojan horses into the programs it generates. The danger is even greater if the Trojan horse can also make copies of itself. Such a program is called a virus. Because it can spread quickly in a computer network or by copying disks, a virus can be a serious threat ("Viruses," in Appendix B, gives more details and describes countermeasures). Several viruses have infected thousands of machines.

Communications

Methods for dealing with communications and security for distributed systems are less well developed than those for stand-alone centralized systems; distributed systems are both newer and more complex. There
is no consensus about methods to provide security for distributed systems, but a TCB for a distributed system can be built out of suitable trusted elements running on the various machines that the system comprises. The committee believes that distributed systems are now well enough understood that this approach to securing them should become recognized as effective and appropriate.

A TCB for communications has two important aspects: secure channels for facilitating communication among the various parts of a system, and security perimeters for restricting communication between one part of a system and the rest.

Secure Channels

The access control model describes the working of a system in terms of requests for operations from a subject to an object and corresponding responses, whether the system is a single computer or a distributed system. It is useful to explore secure communication separately from the preceding discussions of computers, subjects, and objects, so as to better delineate the fundamental concerns that underlie secure channels in a broad range of computing contexts.

A channel is a path by which two or more principals communicate. A secure channel may be a physically protected path (e.g., a physical wire, a disk drive and associated disk, or memory protected by hardware and an operating system) or a logical path secured by encryption. A channel need not operate in real time: a message sent on a channel may be read much later, for instance, if it is stored on a disk.
A secure channel provides integrity (a receiver can know who originally created a message it receives and that the message is intact, i.e., unmodified), confidentiality (a sender can know who can read a message it sends), or both.6 The process of finding out who can send or receive on a secure channel is called authenticating the channel; once a channel has been authenticated, statements and requests arriving on it are also authenticated.

Typically the secure channels between subjects and objects inside a computer are physically protected: the wires in the computer are assumed to be secure, and the operating system protects the paths by which programs communicate with each other, using the methods described above for implementing TCBs. This is one aspect of a broader point: every component of a physically protected channel is part of the TCB and must meet a security specification. If a wire connects two computers, it may be difficult to secure physically, especially if the computers are in different buildings.
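To make the integrity property concrete, here is a minimal sketch (not from the report) in which the possible senders and receivers share a secret key and attach a message authentication code to each message. The helper names are hypothetical, and a real channel would also need replay protection and a way to distribute the key.

```python
import hashlib
import hmac
import os

# Sketch of integrity on a shared-key channel: the sender appends a
# message authentication code (MAC), and the receiver recomputes and
# compares it before trusting the message.

key = os.urandom(32)  # known only to the possible senders and receivers

def send(message: bytes) -> bytes:
    """Attach a 32-byte SHA-256 HMAC tag to the message."""
    tag = hmac.new(key, message, hashlib.sha256).digest()
    return message + tag

def receive(packet: bytes) -> bytes:
    """Verify the tag; reject any message that was modified in transit."""
    message, tag = packet[:-32], packet[-32:]
    expected = hmac.new(key, message, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("message failed integrity check")
    return message
```

Anyone who can read the wire still sees the message, so this provides integrity without confidentiality; encryption, discussed next, addresses the latter.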
To keep wires out of the TCB we resort to encryption, which makes it possible to have a channel whose security does not depend on the security of any wires or intermediate systems through which the messages pass. Encryption works by computing from the data of the original message, called the clear text or plaintext, some different data, called the ciphertext, which is actually transmitted. A corresponding decryption operation at the receiver takes the ciphertext and computes the original plaintext. In a good encryption scheme the rules for encryption and decryption are simple, but computing the plaintext from the ciphertext, or vice versa, without knowing the rules is too difficult to be practical. This should be true even for one who already knows a great deal of other plaintext and its corresponding ciphertext.

Encryption thus provides a channel with confidentiality and integrity. All the parties that know the encryption rules are possible senders, and those that know the decryption rules are possible receivers. Obtaining many secure channels requires having many sets of rules, one for each channel; this is done by dividing the rules into two parts, the algorithm and the key. The algorithm is fixed, and everyone knows it. The key can be expressed as a reasonably short sequence of characters, a few hundred at most. It is different for each secure channel and is known only to the possible senders or receivers. It must be fairly easy to generate new keys that cannot be easily guessed. The two kinds of encryption algorithms are described below.
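The split between a public, fixed algorithm and a per-channel key can be illustrated with a toy stream cipher. This is an illustration only, not from the report and not secure for real use; it simply shows that everything except the short key can be public.

```python
import hashlib

# Toy illustration of the algorithm/key split (NOT a secure cipher; for
# exposition only). The algorithm below is public and fixed; only the
# key differs from channel to channel. Encryption XORs the plaintext
# with a keystream derived from the key.

def keystream(key: bytes, length: int) -> bytes:
    """Derive a pseudorandom byte stream from the key."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def encrypt(key: bytes, plaintext: bytes) -> bytes:
    stream = keystream(key, len(plaintext))
    return bytes(p ^ k for p, k in zip(plaintext, stream))

decrypt = encrypt  # XORing with the same keystream restores the plaintext
```

Without the key, the ciphertext is unintelligible; with it, decryption is trivial. Generating a fresh key per channel yields many independent secure channels from one algorithm.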
It is important to have some understanding of the technical issues involved in order to appreciate the policy debate about controls that limit the export of popular forms of encryption (Chapter 6) and influence what is actually available on the market.7

Symmetric (secret or private) key encryption, in which the same key is used to send and receive (i.e., to encrypt and decrypt). The key must be known only to the possible senders and receivers. Decryption of a message using the secret key shared by a receiver and a sender can provide integrity for the receiver, assuming the use of suitable error-detection measures. The Data Encryption Standard (DES) is the most widely used published symmetric encryption algorithm (NBS, 1977).

Asymmetric (public) key encryption, in which different keys are used to encrypt and decrypt. The key used to encrypt a message for confidentiality in asymmetric encryption is made publicly known by the intended receiver and identified as being associated with him, but the corresponding key used to decrypt the message is known only to that receiver. Conversely, a key used to encrypt a message for integrity (to digitally sign the message) in asymmetric
encryption is known only to the sender, but the corresponding key used to decrypt the message (to validate the signature) must be publicly known and associated with that sender. Thus the security services that ensure confidentiality and integrity are provided by different keys in asymmetric encryption. The Rivest-Shamir-Adleman (RSA) algorithm is the most widely used form of public-key encryption (Rivest et al., 1978).

Known algorithms for asymmetric encryption run at relatively slow rates (a few thousand bits per second at most), whereas it is possible to buy hardware that implements DES at rates of up to 45 megabits per second, and an implementation at a rate of 1 gigabit per second is feasible with current technology. A practical design therefore uses symmetric encryption for handling bulk data and uses asymmetric encryption only for distributing symmetric keys and for a few other special purposes. Appendix B's "Cryptography" section gives details on encryption.

A digital signature provides a secure channel for sending a message to many receivers who may see the message long after it is sent and who are not necessarily known to the sender. Digital signatures may have many important applications in making a TCB smaller. For instance, in the purchasing system described above, if an approved order is signed digitally, it can be stored outside the TCB, and the payment component can still trust it. See the Appendix B section headed "Digital Signatures" for a more careful definition and some discussion of how to implement digital signatures.

Authenticating Channels

Given a secure channel, it is still necessary to find out who is at the other end, that is, to authenticate it. The first step is to authenticate a channel from one computer system to another. The simplest way to do this is to ask for a password. Then if there is a way to match up the password with a principal, authentication is complete.
The trouble with a password is that the receiver can misrepresent himself as the sender to anyone else who trusts the same password. As with symmetric encryption, this means that one needs a separate password to authenticate himself to every system that one trusts differently. Furthermore, anyone who can read (or eavesdrop on) the channel also can impersonate the sender. Popular computer network media such as Ethernet or token rings are vulnerable to such abuses. The need for a principal to use a unique symmetric key to authenticate himself to every different system can be addressed by using a trusted
third party to act as an intermediary in the cryptographic authentication process, a concept that has been understood for some time (Branstad, 1973; Kent, 1976; Needham and Schroeder, 1978). This approach, using symmetric encryption to achieve authentication, is now embodied in the Kerberos authentication system (Miller et al., 1987; Steiner et al., 1988). However, the requirement that this technology imposes, namely the need to trust a third party with keys that may be used (directly or indirectly) to encrypt the principal's data, may have hampered its widespread adoption.

Both of these problems can be overcome by challenge-response authentication schemes, which make it possible to prove that a secret is known without disclosing it to an eavesdropper. The simplest scheme to explain is based on asymmetric encryption, although schemes based on symmetric encryption (Kent et al., 1982) have been developed, and zero-knowledge techniques have been proposed (Chaum, 1983). The challenger finds out the public key of the principal being authenticated, chooses a random number, and sends it to the principal encrypted using both the challenger's private key and the principal's public key. The principal decrypts the challenge using his private key and the public key of the challenger, extracts the random number, encrypts the number with his private key and the challenger's public key, and sends back the result. The challenger decrypts the result using his private key and the principal's public key; if he gets back the original number, he knows that the principal must have done the encrypting.8

How does the challenger learn the principal's public key? The CCITT X.509 standard defines a framework for authenticating a secure channel to a principal with an X.500 name; this is done by authenticating the principal's public key using certificates that are digitally signed.
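The heart of this exchange can be sketched with textbook RSA. This is an illustration only, with toy parameters far too small for real use; unlike the full protocol described above, the sketch omits the challenger's own key pair and has the principal transform the challenge with his private key alone, which the challenger then checks with the matching public key.

```python
import secrets

# Simplified challenge-response sketch using textbook RSA with toy
# parameters (never use keys this small in practice).

p, q = 61, 53
n = p * q          # public modulus (3233)
e = 17             # public exponent
d = 2753           # private exponent: (e * d) mod lcm(p-1, q-1) == 1

def respond(challenge: int) -> int:
    """Principal: prove possession of the private key d."""
    return pow(challenge, d, n)

def verify(challenge: int, response: int) -> bool:
    """Challenger: recover the challenge with the public key e."""
    return pow(response, e, n) == challenge

challenge = secrets.randbelow(n)   # the challenger's random number
assert verify(challenge, respond(challenge))
```

Only the holder of d can produce a response that the public key e maps back to the challenge, so a correct response proves possession of the secret without ever transmitting it.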
Such a certificate, signed by a trusted authority, gives a public key, K, and asserts that a message signed by K can be trusted to come from the principal. The standard does not define how other channels to the principal can be authenticated, but technology for doing this is well understood. An X.509 authentication may involve more than one agent. For example, agent A may authenticate agent B, who in turn authenticates the principal. (For a more thorough discussion of this sort of authentication, see X.509 (CCITT, 1989b) and subsequent papers that identify and correct a flaw in the X.509 three-way authentication protocol (e.g., Burrows et al., 1989).)

Challenge-response schemes solve the problem of authenticating one computer system to another. Authenticating a user is more difficult, since users are not good at doing encryption or remembering large, secret quantities. One can be authenticated by what one knows (a
password), what one is (as characterized by biometrics), or what one has (a "smart card" or token).

The use of a password is the traditional method. Its drawbacks have already been explained and are discussed in more detail in the section titled "Passwords" in Appendix B.

Biometrics involves measuring some physical characteristics of a person—handwriting, fingerprints, or retinal patterns, for example—and transmitting this information to the system that is authenticating the person (Holmes et al., 1990). The problems are forgery and compromise. It may be easy to substitute a mold of someone else's finger, especially if the impersonator is not being watched. Alternatively, anyone who can bypass the physical reader and simply inject the bits derived from the biometric scanning can impersonate the person, a critical concern in a distributed system environment. Perhaps the greatest problem associated with biometric authentication technology to date has been the cost of equipping terminals and workstations with the necessary input devices.9

By providing the user with a tiny computer that can be carried around and will act as an agent of authentication, a smart card or token reduces the problem of authenticating a user to the problem of authenticating a computer (NIST, 1988). A smart card fits into a special reader and communicates electrically with a system; a token has a keypad and display, and the user keys in a challenge, reads the response, and types it back to the system (see, for example, the Racal Watchword product). (At least one token authentication system, Security Dynamics' SecurID, relies on time as an implicit challenge, and thus its token requires no keypad.)
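A token that uses time as an implicit challenge can be sketched as follows. This is an illustrative sketch in the spirit of the scheme just mentioned, not the actual SecurID algorithm: token and host share a secret and a clock, and each computes the same short code for the current 30-second window.

```python
import hashlib
import hmac
import struct
import time

# Sketch of a time-based token code (illustrative; the commercial
# product's algorithm differs). The shared secret never crosses the
# network, only the short, rapidly expiring code does.

def token_code(secret: bytes, t: float, step: int = 30) -> int:
    counter = struct.pack(">Q", int(t) // step)     # current time window
    digest = hmac.new(secret, counter, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                      # dynamic truncation
    value = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return value % 1_000_000                        # six display digits

# The host accepts a submitted code if it matches the code it computes
# for the current window (a real system also tolerates small clock skew).
secret = b"shared token secret"
code = token_code(secret, time.time())
```

Because the code changes every window, an eavesdropper who captures one code cannot reuse it later, unlike a static password.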
A smart card or token is usually combined with a password to keep it from being easily used if it is lost or stolen; automatic teller machines require a card and a personal identification number (PIN) for the same reason.

Security Perimeters

A distributed system can become very large; systems with 50,000 computers exist today, and they are growing rapidly. In a large system no single agent will be trusted by everyone, and security must take account of this fact. Security is only as strong as its weakest link. To control the amount of damage that a security breach can do and to limit the scope of attacks, a large system may be divided into parts, each surrounded by a security perimeter. The methods described above can in principle provide a high level of security even in a very large system that is accessible to many malicious principals. But implementing these methods throughout a system is sure to be difficult
and time-consuming, and ensuring that they are used correctly is likely to be even more difficult. The principle of "divide and conquer" suggests that it may be wiser to divide a large system into smaller parts and to restrict severely the ways in which these parts can interact with each other.

The idea is to establish a security perimeter around part of a system and to disallow fully general communication across the perimeter. Instead, carefully managed and audited gates in the perimeter allow only certain limited kinds of traffic (e.g., electronic mail, but not file transfers). A gate may also restrict the pairs of source and destination systems that can communicate through it.

It is important to understand that a security perimeter is not foolproof. If it allows the passing of electronic mail, then users can encode arbitrary programs or data in the mail and get them across the perimeter. But this is unlikely to happen by mistake, for it requires much more deliberate planning than do the more direct ways of communicating inside the perimeter using terminal connections. Furthermore, a mail-only perimeter is an important reminder of system security concerns. Users and managers will come to understand that it is dangerous to implement automated services that accept electronic mail requests from outside and treat them in the same fashion as communications originating inside the perimeter.

As with any security measure, a price is paid in convenience and flexibility for a security perimeter: it is difficult to do things across the perimeter. Users and managers must decide on the proper balance between security and convenience. See Appendix B's "Security Perimeters" section for more details.

Methodology

An essential part of establishing trust in a computing system is ensuring that it was built according to proper methods. This important subject is discussed in detail in Chapter 4.
CONCLUSION

The technical means for achieving greater system security and trust are a function of the policies and models that have been articulated and developed to date. Because most work to date has focused on confidentiality policies and models, the most highly developed services and the most effective implementations support requirements for confidentiality. What is currently on the market and known to users thus reflects only some of the need for trust technology. Research
topics described in Chapter 8 provide some direction for redressing this imbalance, as does the process of articulating GSSP described in Chapter 1, which would both nourish and draw from efforts to develop a richer set of policies and models. As noted in Chapter 6, elements of public policy may also affect what technology is available to protect information and other resources controlled by computer systems—negatively, in the case of export controls, or positively, in the case of federal procurement goals and regulations.

NOTES

1. Terminology is not always used consistently in the security field. Policies are often called "requirements"; sometimes the word "policy" is reserved for a broad statement and "requirement" is used for a more detailed statement.

2. DOD Directive 5200.28, "Security Requirements for Automatic Data Processing (ADP) Systems," is the interpretation of this policy for computer security (encompassing requirements for personnel, physical, and system security). The Trusted Computer Security Evaluation Criteria (TCSEC, or Orange Book, also known as DOD 5200.28-STD; U.S. DOD, 1985d) specifies security evaluation criteria for computers that are used to protect classified (or unclassified) data.

3. That is, who caused it to be made, in the context of the computer system; legal responsibility is a different matter.

4. The simplest such chain involves all the agents in the path, from the system up through the hierarchy to the first ancestor that is common to both the system and the principal, and then down to the principal. Such a chain will always exist if each agent is prepared to authenticate its parent and children. This scheme is simple to explain; it can be modified to deal with renaming and to allow for shorter authentication paths between cooperating pairs of principals.

5. The government's Tempest (Transient Electromagnetic Pulse Emanations Standard) program is concerned with reduction of such emanations. Tempest requirements can be met by using Tempest products or by shielding whole rooms where unprotected products may be used. NSA has evaluated and approved a variety of Tempest products, although nonapproved products are also available.

6. In some circumstances a third secure channel property, availability, might be added to this list. If a channel exhibits secure availability, a sender can, with high probability, be confident that his message will be received, even in the face of malicious attack. Most communication channels incorporate some facilities designed to ensure availability, but most do so only under the assumption of benign error, not in the context of malicious attack. At this time there is relatively little understanding of practical, generic methods of providing communication channels that offer availability in the face of attack (other than approaches provided to deal with natural disasters or for certain military communication systems).

7. For example, the Digital Equipment Corporation's development of an architecture for distributed system security was reportedly constrained by the availability of specific algorithms: "The most popular algorithm for symmetric key encryption is the DES (Data Encryption Standard). … However, the DES algorithm is not specified by the architecture and, for export reasons, ability to use other algorithms is a requirement. The preferred algorithm for asymmetric key cryptography, and the only known algorithm with the properties required by the architecture, is RSA. …" (Gasser et al., 1989, p. 308)
8. This procedure proves the presence of the principal but gives no assurance that the principal is actually at the other end of the channel; it is possible that an adversary controls the channel and is relaying messages from the principal. To provide this assurance, the principal should also encrypt some unambiguous identification of the channel with his private key, thus certifying that he is at one end. If the channel is secured by encryption, the encryption key identifies it. Since the key itself must not be disclosed, a one-way hash (see Appendix B) of the key should be used instead.

9. Another problem with retina scans is that individuals concerned about potential health effects sometimes object to use of the technology.