The National Academies | 500 Fifth St. N.W. | Washington, D.C. 20001
Copyright © National Academy of Sciences. All rights reserved.
Software for Networked Information Systems

INTRODUCTION

Background

Computing power is becoming simultaneously cheaper and more dispersed. General-purpose computers and access to global information sources are increasingly commonplace on home and office desktops. Perhaps most striking is the exploding popularity of the World Wide Web. A Web browser can interact with any Web site, and Web sites offer a wide variety of information and services.

A less visible consequence of cheap, dispersed computing is the ease with which special-purpose networked information systems (NISs) can now be built. An NIS built to support the activities of a health care provider, such as a medium-sized health maintenance organization (HMO) serving a wide geographic area, is used as an illustration here and throughout this chapter. HMO services might include maintenance of patient records, support for administration of hospitals and clinics, and support for equipment in laboratories. The NIS would, therefore, comprise computer systems in hospital departments (such as radiology, pathology, and pharmacy), in neighborhood clinics, and in centralized data centers. By integrating these individual computer systems into an NIS, the HMO management would expect both to reduce costs and to increase the quality of patient care. For instance, although data and records such as laboratory test results, x-ray or other images, and treatment logs previously might have traveled independently, the information now can be transmitted and accessed together.

In building an NIS for an HMO, management is likely to have chosen a "Web-centric" implementation using the popular protocols and facilities of the World Wide Web and the Internet. Such a decision would be sensible for the following reasons:

The basic elements of the system, such as Web servers and browsers, can now be commercial off-the-shelf (COTS) components and, therefore, are available at low cost.

A large, growing pool of technical personnel is familiar with the Web-centric approach, so the project will not become dependent on a small number of individuals with detailed knowledge of locally written software.

The technology holds promise for extensions into consumer telemedicine, whereby patients and health care providers interact by using the same techniques as are commonly used on the rest of the Internet.

Clearly, the HMO's NIS must exhibit trustworthiness: it must engender feelings of confidence and trust in those whose lives it affects. Physicians must be confident that the system will display the medical record of the patient they are seeing when it is needed and will not lose information; patients must be confident that physician-entered prescriptions will be properly transmitted and executed; and all must be confident that the privacy of records will not be compromised. Achieving this trustworthiness, however, is not easy.

NIS trustworthiness mechanisms basically concern events that are not supposed to happen. Nonmalicious users living in a benign and fault-free world would be largely unaffected were such mechanisms removed from a system. But some users may be malicious, and the world is not fault free. Consequently, reliability, availability, security, and all other facets of trustworthiness require mechanisms to foster the necessary trust on the part of users and other affected parties.
Only with their failure or absence do trustworthiness mechanisms assume importance to a system's users. Users seem unable to evaluate the costs of not having trustworthiness mechanisms except when they experience actual damage from incidents (see Chapter 6 for an extended discussion). So, while market forces can help foster the deployment of trustworthiness mechanisms, these forces are unlikely to do so in advance of directly experienced or highly publicized violations of trustworthiness properties.

Although the construction of trustworthy NISs is today in its infancy, lessons can be learned from experience in building full-authority and other freestanding, high-consequence computing systems for applications such as industrial process control and medical instrumentation. In such systems, one or more computers directly control processes or devices whose malfunction could lead to significant loss of property or life. Even systems in which human intervention is required for initiating potentially dangerous events can become high-consequence systems when human users or operators place too much trust in the information being displayed by the computing system. To be sure, there are differences between NISs and traditional high-consequence computing systems. An intent of this chapter is to identify those differences and to point out lessons from high-consequence systems that can be applied to NISs, as well as unique attributes of NISs that will require new research.

The Role of Software

Software plays a major role in achieving the trustworthiness of an NIS, because it is software that integrates and customizes general-purpose components for some task at hand. In fact, the role of software in an NIS is typically so pervasive that the responsibilities of a software engineer differ little from those of a systems engineer. NIS software developers must therefore possess a systems viewpoint,2 and systems engineers must be intimately familiar with the strengths (and, more importantly, the limitations) of software technology.

With software playing such a pervasive role, defects can have far-reaching consequences. It is notoriously difficult to write defect-free software, as the list of incidents in, for example, Leveson (1987) or Neumann (1995) confirms. Beyond the intrinsic difficulty of writing defect-free software, there are constraints that result from the nature of NISs.
These constraints derive from schedule and budget; they mean that a software developer has only limited freedom in selecting the elements of the software system and in choosing a development process:

An NIS is likely to employ commercial operating systems, purchased "middleware," and other applications, as well as special-purpose code developed specifically for the NIS. The total source code size for the system could range from tens to hundreds of millions of lines. In this setting, it is infeasible to start from scratch in order to support trustworthiness. This is a particularly dangerous state of affairs, since designers may assume that system operation is being monitored, when in fact it is not (Leveson, 1995).

2Once succinctly stated as, "You are not in this alone." That is, you need to consider not only the narrow functioning of your component but also how it interacts with other components, users, and the physical world in achieving system-level goals. Another aspect of the "systems viewpoint" is a healthy respect for the potential of unexpected side effects.

Future NISs will, of necessity, evolve from the current ones. There is no alternative, given the size of the systems, their complexity, and the need to include existing services in new systems. Techniques for supporting trustworthiness must take this diversity of origin into account. It cannot be assumed that NISs will be conceived and developed without any reuse of existing artifacts. Moreover, components reused in NISs include legacy components that were not designed with such reuse in mind; they tend to be large systems or subsystems having nonstandard and often inconvenient interfaces. In the HMO example, clinical laboratories and pharmacies are likely to have freestanding computerized information systems that exemplify such legacy systems.

Commercial off-the-shelf software components must be used to control development cost, development time, and project risk. A commercial operating system with a variety of features can be purchased for a few hundred dollars, so development of specialized operating systems is uneconomical in almost all circumstances. But the implication is that achieving and assessing the trustworthiness of a networked information system necessarily occur in an environment including COTS software components (operating systems, database systems, networks, compilers, and other system tools) with only limited access to internals or control over their design.

Finally, the design of NIS software is likely to be dictated, at least in part, by outside influences such as regulations, standards, organizational structure, and organizational culture. These outside influences can lead to system architectures that aggravate the problems of providing trustworthiness.
For example, in a medical information system, good security practices require that publicly accessible terminals be logged off from the system after relatively short periods of inactivity so that an unauthorized individual who happens upon an unattended terminal cannot use it. But in emergency rooms, expecting a practitioner to log in periodically is inconsistent with the urgency of emergency care that should be supported by an NIS in this setting.

Fortunately, success in building an NIS does not depend on writing software that is completely free of defects. Systems can be designed so that only certain core functionality must be defect free; defects in other parts of the system, although perhaps annoying, become tolerable because their impact is limited by the defect-free core functionality. It now is feasible to contemplate a system having millions of lines of source code and embracing COTS and legacy components, since only a fraction of the code has to be defect free. Of course, that approach to design does depend on being able to determine or control how the effects of defects propagate. Various approaches to software design can be seen as providing artillery for attacking the problem, but none has proved a panacea. There is still no substitute for talented and experienced designers.

Development of a Networked Information System

The development of an NIS proceeds in phases that are similar to the phases of development for other computerized information systems:

Decide on the structure or architecture of the system.
Build and acquire components.
Integrate the components into a working and trustworthy whole.

The level of detail at which the development team works forms a V-shaped curve. Effort starts at the higher, systems level, then dips down into details as individual software components are implemented and tested, and finally returns to the system level as the system is integrated into a cohesive whole.

Of the three phases, the last is the most problematic. Development teams often find themselves in the integration phase with components that work separately but not together. Theoretically, an NIS can grow by accretion, with service nodes and client nodes being added at will. The problem is that (as illustrated by the Internet) it is difficult to ensure that the system as a whole will exhibit desired global properties and, in particular, trustworthiness properties. On the one hand, achieving a level of connectivity and other basic services is relatively easy. These are the services that general-purpose components, such as routers, servers, and browsers, are designed to provide. And even though loads on networks and demands on servers are hard to predict, adverse outcomes are readily overcome by the addition or upgrade of general-purpose components. On the other hand, the consequences of failures or security breaches propagating through the system are hard to predict, to prevent, and to analyze when they do occur.
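The concern above about how failures propagate motivates the design idea, described earlier, of confining trustworthiness to a small defect-free core: all critical state changes are funneled through one component that validates every request at its boundary, so defects in the much larger surrounding code cannot corrupt critical data. The sketch below is only illustrative; the record fields and validity checks are invented for the HMO example.

```python
# Sketch of defect containment: all updates to the record store pass
# through a small, heavily checked core, so defects in the much larger
# surrounding code (user interfaces, report generators, COTS glue)
# cannot corrupt records.  Fields and checks here are hypothetical.

class RecordCoreError(Exception):
    pass

class RecordCore:
    """The only component permitted to mutate patient records."""

    def __init__(self):
        self._records = {}

    def update(self, patient_id, field, value):
        # Validate at the trust boundary instead of trusting callers.
        if not isinstance(patient_id, str) or not patient_id:
            raise RecordCoreError("bad patient id")
        if field not in {"allergies", "prescriptions", "notes"}:
            raise RecordCoreError(f"unknown field {field!r}")
        self._records.setdefault(patient_id, {})[field] = value

    def read(self, patient_id):
        # Return a copy so callers cannot mutate the store directly.
        return dict(self._records.get(patient_id, {}))

core = RecordCore()
core.update("p-001", "allergies", ["penicillin"])
try:
    core.update("p-001", "billing_hack", 0)   # a defective caller
except RecordCoreError:
    pass                                      # defect is contained
```

Only the core must be defect free; a buggy caller is rejected at the boundary rather than silently corrupting records.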
Thus, basic services are relatively simple to provide, whereas global and specialized services and properties, especially those supporting trustworthiness, are difficult to provide.

SYSTEM PLANNING, REQUIREMENTS, AND TOP-LEVEL DESIGN

Planning and Program Management

A common first step in any development project is to produce a planning and a requirements document. The planning document contains information about budget and schedules. Cost estimation and scheduling are hard to do accurately, so producing a planning document is not a straightforward exercise. Just how much time a large project will require, how many staff members it will need (and when), and how much it will cost cannot today be estimated with precision. The techniques that exist, such as the constructive cost model (COCOMO) (Boehm, 1981), are only as good as the data given them and the suitability of their models for a given project. Estimation is further complicated if novel designs and the implementation of novel features are attempted, practices common in software development and especially common in leading-edge applications such as an NIS.

Although every attempt might be made to employ standard components (e.g., operating system, network, Web browsers, database management systems, and user-interface generators) in building an NIS, the ways in which the components are used are likely to be sufficiently novel that generalizing from past experiences with the components may be useless for estimating project costs and schedules. For example, it is not hard to connect browsers through a network to a server and then display what is on the server, but the result does not begin to be a medical records system, with its varied and often subtle trustworthiness requirements concerning patient privacy and data integrity. The basic services are even farther from a complete telemedicine system, which must be trusted to correctly convey patient data to experts and their diagnoses back to paramedical personnel. All in all, confidence in budget and schedule estimates for an NIS, as for any engineering artifact, can be high only when the new system is similar to systems that already have been built. Such similarity is rare in the software world and is likely to be even rarer in the nascent field of NIS development.
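To make the COCOMO point concrete, the basic form of the model published by Boehm (1981) estimates effort and schedule from size alone. The coefficient table below uses the published Basic COCOMO constants; treating an NIS as an "embedded" (most constrained) project, and the 500 KLOC figure, are assumptions for illustration only. As the text notes, such estimates are only as good as the historical data behind them.

```python
# Basic COCOMO (Boehm, 1981): effort and schedule estimated from size.
# Coefficients are the published Basic COCOMO constants; classifying an
# NIS as "embedded" is an illustrative assumption, not a recommendation.

COEFFICIENTS = {
    # mode: (a, b, c, d) where effort = a * kloc**b (person-months)
    #                    and schedule = c * effort**d (months)
    "organic":       (2.4, 1.05, 2.5, 0.38),
    "semi-detached": (3.0, 1.12, 2.5, 0.35),
    "embedded":      (3.6, 1.20, 2.5, 0.32),
}

def cocomo_basic(kloc: float, mode: str = "embedded"):
    """Return (person-months, elapsed months, average staff size)."""
    a, b, c, d = COEFFICIENTS[mode]
    effort = a * kloc ** b
    schedule = c * effort ** d
    return effort, schedule, effort / schedule

if __name__ == "__main__":
    # A hypothetical 500 KLOC of locally written NIS code.
    effort, months, staff = cocomo_basic(500, "embedded")
    print(f"effort   {effort:8.0f} person-months")
    print(f"schedule {months:8.1f} months")
    print(f"staffing {staff:8.0f} people on average")
```

Running this for 500 KLOC yields an effort on the order of six thousand person-months over roughly three and a half years, which illustrates why small errors in the size estimate or model choice translate into large budget and schedule errors.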
The difficulties of cost estimation and scheduling explain why some projects are initiated with unrealistic schedules and assignments of staff and equipment. The problem is compounded in commercial product development (as opposed to specialized, one-of-a-kind system development) by marketing concerns. For software-intensive products, early arrival in the marketplace is often critical to success in that marketplace. This means that software development practice becomes distorted to maximize functionality and minimize development time, with little attention paid to other qualities. Thus, functionality takes precedence over trustworthiness.

A major difficulty in project management is coping with ambiguous and changing requirements. It is unrealistic to expect correct and complete knowledge of requirements at the start of a project. Requirements change as system development proceeds and the system, and its environment, become better understood. Moreover, software frequently is regarded (incorrectly) as something that can be changed easily at any point during development, and software change requests then become routine. The effect of the changes, however, can be traumatic and lead to design compromises that affect trustworthiness.

Another difficulty in project management is selecting, tailoring, and implementing the development process that will be used. The Waterfall development process (Pressman, 1986), in which each phase of the life cycle is completed before the next begins, oversimplifies. So, when the Waterfall process is used, engineers must deviate from it in ad hoc ways. Nevertheless, organizations ignore better processes, such as the Spiral model (Boehm, 1988; Boehm and DeMarco, 1997), which incorporates control and feedback mechanisms to deal with interaction of the life-cycle phases.

Also contributing to difficulties in project management and planning is the high variance in capabilities and productivity that has been documented for different software engineers (Curtis, 1981). An order-of-magnitude variation in productivity is not uncommon between the most and the least productive programmers. Estimating schedules, assigning manpower, and managing a project under such circumstances are obviously difficult tasks.

Finally, the schedule and cost for a project can be affected by unanticipated defects or limitations in the software tools being employed. For example, a flawed compiler might not implement certain language features correctly or might not implement certain combinations of language features correctly. Configuration management tools (e.g., Rochkind, 1975) provide other opportunities for unanticipated schedule and cost perturbation. For use in an NIS, a configuration management tool not only must track changes in locally developed software components but also must keep track of vendor updates to COTS components. None of the difficulties are new revelations.
Brooks, in his classic work The Mythical Man-Month (Brooks, 1975), noted similar problems more than two decades ago. It is both significant and a cause for concern that this book remains relevant today, as evidenced by the recent publication of a special 20th anniversary edition. The difficulties, however, become even more problematic within the context of large and complex NISs.

Requirements at the Systems Level

Background

There is ample evidence that the careful use of established techniques in the development of large software systems can improve their quality. Yet many development organizations do not employ techniques that have been known for years to contribute to success. Nowhere is this refusal to learn the lessons of history more pronounced than with respect to requirements documents.

Whether an NIS or a simple computer game is being implemented, a requirements document is useful. In special-purpose systems, it forms a contract between the customer and the developer by stating what the customer wants and thereby what the developer must build. In projects aimed at producing commercial products, it converts marketing and business objectives into technical terms. In the development of large systems, it serves as a vehicle for communication among the various engineering disciplines involved. And it also serves as a vehicle for communication between different software engineers responsible for developing software, as well as between the software engineers and those responsible for presenting the software to the outside world, such as a marketing team.

It is all too common, however, to proceed with system development without first analyzing and documenting requirements. In fact, requirements analysis and documentation are sometimes viewed as unnecessary or misdirected activities, since they do not involve creating executable code and are thought to increase time to market. Can system requirements not be learned by inspecting the system itself? Requirements derived by such a posteriori inspections, however, run the risk of being incomplete and inaccurate. It is not always possible to determine a posteriori which elements of an interface are integral and which are incidental to a particular implementation. In the absence of a requirements document, project staff must maintain a mental picture of the requirements in order to respond to questions about what should or could be implemented.
Each putative requirements change must still be analyzed and negotiated, only now the debate occurs out of context and risks overlooking relevant information. Such an approach might be adequate for small systems, but it breaks down for systems having the size and complexity of an NIS.

The System Requirements Document

The system requirements document states in as much detail as possible what the system should (and should not) do. To be useful for designers and implementers, a requirements document should be organized as a reference work. That is, it should be arranged so that one can quickly find the answer to a detailed question (e.g., What should go into an admissions form?). Such a structure, more like a dictionary than a textbook, makes it difficult for persons unfamiliar with the project to grasp how the NIS is supposed to work. As a consequence, requirements documents are supplemented (and often supplanted) with a concept of operations (Conops) that describes, usually in the form of scenarios (so-called "use cases"), the operation of the NIS. A Conops for the example HMO system might, for example, trace the computer operations that support a patient from visiting a doctor at a neighborhood clinic, through diagnosis of a condition requiring hospitalization, admission and treatment at the hospital, discharge, and follow-up visits to the original clinic. Other scenarios in the Conops might include home monitoring of chronic conditions, emergency room visits, and so forth. The existence of two documents covering the same ground raises the possibility of inconsistencies. When they occur, it is usually the Conops that governs, because the Conops is the document typically read (and understood) by the sponsors of the project.

Review and approval of system requirements documents may involve substantial organizational interaction and compromise when once-independent systems are networked and required to support overall organizational (as opposed to specific departmental) objectives. The compromises can be driven more by organizational dynamics than by technical factors, a situation that may lead to a failure to meet basic objectives later on. That risk is heightened in the case of the trustworthiness requirements, owing (as is discussed below) to the difficulty of expressing such requirements and compounded by the difficulty of predicting the consequences of requiring certain features. In the case of the HMO system, for example, advocates for consumer telemedicine might insist on home computer access to the network in ways that are incompatible with maintaining even minimal medical records secrecy in the face of typical hackers.
Anticipating and dealing with such a problem require predicting what sorts of attacks could be mounted, what defenses might be available in COTS products, and how attacks will propagate through an NIS whose detailed design might not be known for several years. Making the worst-case assumption (i.e., all COTS products are completely vulnerable and all defenses must be mounted through the locally developed software of the NIS) will likely lead to unacceptable development costs. Similar situations arise for other dimensions of trustworthiness, such as data integrity or availability.

Notation and Style

Requirements documents are written first in ordinary English, which is notorious for imprecision and ambiguity. Most industrial developers do not use even semiformal specification notations, such as the SCR/A-7 tabular technique (Heninger, 1980). The principal reason for using natural language (in addition to the cynical observation that without ambiguity there can be no consensus) is that, despite significant R&D investment in the 1970s (Ross, 1977), no notation for system-level requirements has shown sufficiently commanding advantages to achieve dominant acceptance.

Finally, many if not most software developers are forced to lead "unexamined lives." The demand for their services is so great that they must move from one project to the next without an opportunity for reflection or consideration of alternatives to the approaches they used before. The paradoxical result of this situation is that the process of developing software, which has had revolutionary impact on many aspects of society and technology, is itself quite slow to change.

One common strategy for coping with the problems inherent in natural language is to divide the requirements into two classes: criteria for success (often called "objectives" or "goals") and criteria for failure (sometimes called "absolute requirements"). The criteria for success can be a matter of degree: situations where "more is better" without clear cutoff points. The criteria for failure are absolute conditions, such as causing a fatality, that render success in other areas irrelevant. In the HMO example, a criterion for success might be the time needed to transfer a medical record from the hospital to an outpatient facility; quicker is better, but unless some very unlikely delays are experienced, the system is acceptable. A criterion for failure might be inaccessibility of information about a patient's drug allergies. If the patient dies from an allergic reaction that could have been prevented by the timely delivery of drug allergy data, then nothing else the system has done right (such as the smoothness of admission, proper assignment of diagnostic codes, or the correct interfacing with the insurance carrier) really matters.

It is often posited that requirements should state what a particular criterion is but not how that criterion should be achieved.
In real-world systems development, this dictum can lead to unnecessarily convoluted and indirect formulations of requirements. The issue is illustrated by turning to building codes, which are a kind of requirements document. Building codes distinguish between performance specifications and de- sign specifications. A performance specification states, "Interior walls should resist heat of x degrees for y minutes." A design specification states, "Interior walls should use 5/8-inch Type X sheetrock." Perfor- mance specifications leave more room for innovation, but determining whether they have been satisfied is more difficult. Design specifications tend to freeze the development of technology by closing the market to innovations, but it is a simple matter to determine whether any given design specification has been fulfilled. More realistic guidance for what belongs in a requirements document is the following: If it defines either failure or success, it belongs in the requirements document, no matter how specific or detailed it is.
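The two classes of criteria described above can be made concrete by representing them separately, so that no amount of graded success can offset an absolute failure. The sketch below uses the HMO examples from the text; the metric names, the scoring function, and the field names are hypothetical illustrations, not part of any real requirements notation.

```python
# Requirements split into graded success criteria ("more is better")
# and absolute failure criteria, following the distinction in the text.
# Metric names, thresholds, and the scoring function are hypothetical.

from dataclasses import dataclass

@dataclass
class Assessment:
    acceptable: bool      # False iff any failure criterion is tripped
    score: float          # degree of success on graded criteria (0..1]
    violations: list

def assess(measurements: dict) -> Assessment:
    violations = []

    # Failure criterion: an absolute condition (inaccessible
    # drug-allergy data) that renders all other success irrelevant.
    if not measurements.get("allergy_data_accessible", False):
        violations.append("drug-allergy data inaccessible")

    # Success criterion: record-transfer time, where quicker is better
    # and there is no hard cutoff short of pathological delays.
    transfer_s = measurements.get("record_transfer_seconds", float("inf"))
    score = 1.0 / (1.0 + transfer_s / 60.0)   # decays gently with delay

    return Assessment(acceptable=not violations, score=score,
                      violations=violations)

# A run with slow transfers but accessible allergy data is acceptable;
# a fast system that loses allergy data is not.
ok = assess({"allergy_data_accessible": True,
             "record_transfer_seconds": 300})
bad = assess({"allergy_data_accessible": False,
              "record_transfer_seconds": 2})
```

The design choice is that `acceptable` depends only on the failure criteria, mirroring the text's point that a fatality-class failure makes success elsewhere irrelevant.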

A distinction is sometimes made between functional requirements and nonfunctional requirements. When this distinction is made, functional requirements are concerned with services that the system should provide and are usually stated in terms of the system's interfaces; nonfunctional requirements define constraints on the development process, the structure of the system, or resources used during execution (Sommerville, 1996). For example, a description of expected system outputs in response to various inputs would be considered a functional requirement. Stipulations that structured design be employed during system development, that average system response time be bounded by some value, or that the system be safe or secure exemplify nonfunctional requirements.

Nonfunctional requirements concerning execution theoretically can be translated into functional requirements. Doing that translation requires knowledge of system structure and internals. The resulting inferred functional requirements may concern internal system interfaces that not only are unmentioned in the original functional requirements but also may not yet be known. Moreover, performing the translation invariably will involve transforming informal notions, such as "secure," "reliable," or "safe," into precise requirements that can be imposed on the internals and interfaces of individual modules. Formalizing informal properties at all and decomposing systemwide global properties into properties that must be satisfied by individual components are technically very challenging tasks, often beyond the state of the art (Abadi and Lamport, 1993; McLean, 1994).

Where to Focus Effort in Requirements Analysis and Documentation

The process of requirements analysis is complicated by the fact that any NIS is part of some larger system with which it interacts.
An understanding of the application domain itself and mastery of a variety of engineering disciplines other than software engineering may be necessary to perform requirements analysis for an NIS. Identification of system vulnerabilities is one process for which a broad understanding of the larger system context (including users, operators, and the physical environment) is particularly important. Techniques have been developed to deal with some of these issues. Modeling techniques, such as structured analysis (Constantine and Yourdon, 1979), have been developed for constructing system descriptions that can be analyzed and reviewed by customers. Rapid prototyping tools (Tanik et al., 1989) offer a means to answer specific questions about the requirements for a new system, and prototyping is today a popular way to determine user interface requirements.

Requirements are thereby checked without formalizing an entire set of requirements, which, as observed above in the section on system-level requirements, is likely to be neither complete nor stable. Some of the better-known successful industrial uses of formal methods for analyzing requirements include these:22

With the Software Cost Reduction (SCR) program's tool suite, engineers at Rockwell were able to detect 24 errors, many of them significant, in the requirements specification for a commercial flight guidance system (Miller, 1998).

Also using the SCR tool suite, Lockheed engineers formalized the operational flight program for the C-130J Hercules aircraft and found six errors in nondeterminism and numerous type errors.23

An informal English specification for the widely deployed aircraft collision avoidance system TCAS II was abandoned for a formal version written in requirements state machine language (RSML) after the English specification was deemed too complex and unwieldy. That formal specification has since been mechanically checked for completeness and consistency (Heimdahl and Leveson, 1996).

Formal methods were originally developed as an alternative to exhaustive testing for increasing one's confidence that a piece of software satisfies a detailed behavioral specification. To date, this use for formal methods has been applied outside the laboratory only for relatively small safety-critical or high-consequence computing systems, for which development cost is not really a concern but flaws are. Examples include the verification of safety-critical software used in the C-130 Hercules aircraft (Croxford and Sutton, 1995), parts of the next-generation command-and-control ground system for the ARIANE rocket launcher (Devauchelle et al., 1997), and highly secure operating systems (Saydjari et al., 1989).
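The completeness and consistency checks performed mechanically by SCR- and RSML-style tools can be sketched for a toy condition table. A specification is incomplete if some combination of conditions matches no row and nondeterministic if some combination matches rows with conflicting outputs. The alarm table below is hypothetical, and real tools handle far richer notations; this only illustrates the exhaustive check.

```python
# A toy version of the completeness/consistency checks that SCR- and
# RSML-style tools perform mechanically on tabular requirements.
# Each row maps a pattern of condition values (None = don't care) to a
# required output; the example alarm table is hypothetical.

from itertools import product

def check_table(n_conditions, rows):
    """rows: list of (pattern tuple of True/False/None, output).
    Returns (unspecified combinations, conflicting combinations)."""
    missing, ambiguous = [], []
    for combo in product([False, True], repeat=n_conditions):
        matches = [out for pattern, out in rows
                   if all(p is None or p == c
                          for p, c in zip(pattern, combo))]
        if not matches:
            missing.append(combo)            # incompleteness
        elif len(set(matches)) > 1:
            ambiguous.append(combo)          # nondeterminism
    return missing, ambiguous

# Conditions: (sensor_failed, value_out_of_range)
table = [
    ((True,  None), "raise_alarm"),
    ((False, True), "raise_alarm"),
    # (False, False) is unintentionally left unspecified, and the
    # first and third rows overlap with conflicting outputs:
    ((True,  True), "shut_down"),
]

missing, ambiguous = check_table(2, table)
print("unspecified:", missing)      # [(False, False)]
print("conflicting:", ambiguous)    # [(True, True)]
```

Exhaustive enumeration is feasible here because the condition space is tiny; industrial tools use symbolic techniques to make the same checks scale.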
Constructing extremely large proofs is infeasible today and for the foreseeable future, so formal methods requiring the construction of proofs for an entire system are not practical when developing an NIS having tens to hundreds of millions of lines of code. Even if size were not an issue, COTS components are rarely accompanied by the formal specifications necessary for doing formal verification of an NIS built from COTS components. It would be wrong, however, to conclude that formal verification cannot contribute to the construction of an NIS.

22In addition to Clarke and Wing (1996) and Craigen et al. (1993), further examples of this use of formal methods appear in Easterbrook et al. (1998).

23As Connie Heitmeyer, U.S. Naval Research Laboratory, described at the NRC's Information Systems Trustworthiness Committee workshop, Irvine, CA, February 5-6, 1997.

For one thing, critical components of an NIS can be subject to formal verification, thereby reducing the number of flaws having system-disabling impact.24 The aircraft hand-off protocol (Marzullo et al., 1994) in the Advanced Automation System air-traffic control system built by IBM Federal Systems Division illustrates such an application of formal methods. Second, entire (large) systems can be subject to formal verification of properties that are checkable mechanically. This is the impetus for recent interest by the software engineering community in so-called lightweight formal methods, like the LCLint tool, which is able to check C programs for a variety of variable type and use errors (Detlefs, 1996), and Eraser, a tool for detecting data races in lock-based multithreaded programs (Savage et al., 1997).

Size problems can be circumvented by subjecting a model of the NIS to analysis instead of analyzing the entire NIS. The model might be smaller than the original in some key dimension, as when confidence is built in a memory cache-controller by analyzing a version that handles only a small number of cache lines. Alternatively, a model might be smaller than the original by virtue of the details it ignores: checking a high-level description of an algorithm or architecture rather than checking its implementation in a real programming language. Illustrative of this latter approach are the various logics and tools for checking high-level descriptions of cryptographic protocols (Burrows et al., 1990; Lowe and Roscoe, 1997; Meadows, 1992). For instance, with a logic of authentication (Burrows et al., 1990), successive drafts of the CCITT X.509 standard were analyzed and bugs were found, including a vulnerability to replay attacks even when keys have not been compromised.
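The downscaled cache-controller idea can be sketched concretely. The toy model below (two caches, one line, MSI-style states; the transition rules are a simplified invention for illustration, not a real bus protocol) is small enough that a breadth-first search can visit every reachable state and check an exclusivity invariant exhaustively, which is exactly what an explicit-state model checker does:

```python
from collections import deque

# Deliberately downscaled model: two caches sharing one line, with
# per-cache states I(nvalid), S(hared), M(odified).
def successors(state):
    for i in range(2):
        other = 1 - i
        # cache i satisfies a read: its copy becomes Shared, and a
        # Modified copy elsewhere is downgraded (write-back implied)
        r = list(state)
        r[i] = "S"
        if r[other] == "M":
            r[other] = "S"
        yield tuple(r)
        # cache i satisfies a write: its copy becomes Modified, and the
        # other copy is invalidated
        w = list(state)
        w[i] = "M"
        w[other] = "I"
        yield tuple(w)

def coherent(state):
    # safety property: a Modified copy must be the only valid copy
    if "M" in state:
        return state.count("M") == 1 and all(s in ("M", "I") for s in state)
    return True

def check(initial=("I", "I")):
    # breadth-first exploration of every reachable state of the model
    seen, frontier = {initial}, deque([initial])
    while frontier:
        s = frontier.popleft()
        if not coherent(s):
            return s          # counterexample state
        for t in successors(s):
            if t not in seen:
                seen.add(t)
                frontier.append(t)
    return None               # property holds in all reachable states

print(check())   # -> None: the exclusivity invariant holds everywhere
```

Confidence gained this way is about the model, not the implementation; the bet, as the text notes, is that the dimension being shrunk (here, the number of caches and lines) is not where the bugs hide.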
Observe that a great deal of benefit can be derived from formal methods without committing a project to the use of formal notations, either for baseline specifications or throughout. Some argue that formal methods analyses are more effective when performed later, to shake out those last few bugs, than earlier, when less costly techniques can still bear fruit. A well-documented example of the industrial use of formal methods in building an NIS was the development by Praxis of the CCF display information system (CDIS) component of the central control function (CCF) air traffic management subsystem in the United Kingdom (Hall, 1996).25

24At least for those properties that can be described formally.

25This system involved 100 processors linked by dual local area networks and consisted of approximately 197,000 lines of C code (excluding comments), a specification document of approximately 1,200 pages, and a design document of approximately 3,000 pages.

Here, various formal methods were used at different stages of the development

process: VDM (Jones, 1986) was used during requirements analysis, VVSL (Middelburg, 1989) was used for writing a formal specification for the system, and CSP (Hoare, 1985) was used for describing concurrency in CDIS and its environment. With automated assistance, proofs of correctness were constructed for a few critical protocols. And Hall (1996) reports that productivity for the project was the same as or better than has been measured on comparable projects that used only informal methods. Moreover, the defect rate for the delivered software was between two and ten times better than has been reported for comparable software in air traffic control applications that did not use formal methods.

Beyond the successful industrial uses of formal methods discussed above and in the work cited, there are other indications that formal methods have come of age. Today, companies are marketing formal verification tools for use in hardware design and synthesis.26 And there are anecdotal reports that the number of doctoral graduates in mechanized formal methods is now insufficient to fill the current demands of industry.27

Although once there was a belief that the deployment of formal methods required educating the entire development team, most actual deployments have simply augmented a development team with formal methods experts. The job of these experts was beautifully characterized by J S. Moore:28

Like a police SWAT team, members are trained in the use of "special weapons," in particular, mathematical analysis tools. But they are also extremely good at listening, reading between the lines, filling in gaps, generalizing, expressing precisely the ideas of other people, explaining formalisms, etc. Their role is not to bully or take credit, but to formalize a computing system at an appropriate level of abstraction so that certain behaviors can be analyzed.
Here, the absence of shared application assumptions with the development team actually benefits the formal methods expert by facilitating the discovery of unstated assumptions.

Formal methods are gaining acceptance and producing results for industry. What are the impediments to getting broader use and even

26Examples include Formal Check from Lucent Technologies, RuleBase from IBM Corporation, VFormal from Compass, and Checkoff from View Logic.

27As John Rushby described at the NRC's Information Systems Trustworthiness Committee workshop, Irvine, CA, February 5-6, 1997.

28Position statement on the state of formal methods technology submitted for the committee's workshop held on February 5-6, 1997, in Irvine, CA. Moore credits Carl Pixley of Motorola with the SWAT-team simile.

further leverage from formal methods? With minor exceptions (Taylor, 1989), the formal methods and testing communities have worked independently of each other, to the advantage of neither. Also, the need for better-integrated tools has been articulated by researchers and formal methods practitioners alike (Craigen et al., 1993), and research efforts are now being directed toward combining, for example, model checkers and proof checkers. Another trend is the development of editors and library support for managing larger proofs and for facilitating the development of reusable models and theories.

Over the last decade, formal methods researchers survived only by devoting a significant fraction of their effort to performing realistic demonstration exercises (and these have helped to move formal methods from the research laboratory into industrial settings). More fundamental research should be a priority. Significant classes of properties remain difficult or impossible to analyze, with fault tolerance and security high on the list. Methods for decomposing a global property into local ones (which could then be checked more easily) would provide a basis for attacking limitations that bar some uses of formal methods today.

Finally, there is a growing collection of pragmatic questions about the use of formal methods. A key to building usable models of NISs is knowing what dimensions can be safely ignored. Answering that question will require a better understanding of the role of approximation and of simplifying assumptions in formal reasoning. Frictionless planes have served mechanical engineers well; what are the analogous abstractions for computing systems in general and NISs in particular? Idealized models of arithmetic, for example, can give misleading results about real computations, which have access only to finite-precision fixed- or floating-point arithmetic.
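That last point is easy to demonstrate. Assuming IEEE-754 double precision (the default in most languages), reasoning that treats addition as associative over the reals licenses conclusions that the deployed code will not honor:

```python
# Over the reals both groupings are equal; over IEEE-754 doubles they differ.
lhs = (0.1 + 0.2) + 0.3
rhs = 0.1 + (0.2 + 0.3)
print(lhs == rhs)   # False
print(lhs, rhs)     # 0.6000000000000001 0.6

# Reordering a sum changes its value: the 1.0 below is absorbed when added
# to 1e16 first, because it falls under the rounding granularity there.
print(sum([1e16, 1.0, -1e16]))   # 0.0
print(sum([1e16, -1e16, 1.0]))   # 1.0
```

A proof carried out over idealized reals would certify all four results as equal; the running system disagrees, which is precisely the kind of unstated modeling assumption the text warns about.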
And any assumption that might be invalidated constitutes a system vulnerability, so analysis predicated on assumptions will be blind to certain system vulnerabilities. There are also questions about the application of formal methods: Where can they give the greatest leverage during system development? When does adding details to a model become an exercise in diminishing returns, given that most errors in requirements and specifications are errors of omission (and therefore are likely to be caught only as details are added)? And there is a question that is intimately linked to the problem of identifying and characterizing threats: How does one gain confidence that a formal specification is accurate?

Testing

Testing is a highly visible process; it provides confidence that a system will operate correctly, because the system is seen to be operating

correctly during testing. And industry today relies heavily on testing. Unfortunately, most real systems have inputs that can take on large numbers of possible values. Testing all combinations of the input values is impossible. (This is especially problematic for systems employing graphical user interfaces, where the number of possible point-and-click combinations is unworkably large.) So, in practice, only a subset of all possible test cases is checked, and testing rarely yields any quantifiable information about the trustworthiness of a program. The characteristics of networked information systems (geographic distribution of inputs and outputs, uncontrollable and unmonitorable subsystems such as networks and legacy systems, and large numbers of inputs) make this class of system especially sensitive to the inadequacy of testing only subsets of the input space.

Much of the research in testing has been directed at dealing with problems of scale. The goal has been to maximize the knowledge gained about a component or subsystem while minimizing the number of test cases required. Approaches based on statistical sampling of the input space have been shown to be infeasible if the goal is to demonstrate ultrahigh levels of dependability (Butler and Finelli, 1993), and approaches based on coverage measures do not provide quantification of useful metrics such as mean time to failure. The result is that, in industry, testing is all too often defined to be complete when budget limits are reached, arbitrary milestones are passed, or defect detection rates drop below some threshold. There is clearly room for research, especially to deal with the new complications that NISs bring to the problem: uncontrollable and unobservable subsystems.

System Evolution

Software systems typically are modified after their initial deployment to correct defects, to permit the use of new hardware, and to provide new services.
Accommodating such evolution is difficult. Unless great care is taken, the changes can cause the system structure to degenerate. That, in turn, can lead to new defects being introduced with each subsequent change, since a poorly structured system is both difficult to understand and difficult to modify. In addition, coping with system evolution requires managing the operational transition to new versions of that system. System upgrade, as this is called, frequently leads to unexpected difficulties, despite extensive testing of the new version before the upgrade. In some cases, withdrawal of the new system once it has been introduced is a formidable problem, because data formats and file contents have already changed. The popular press is full of incidents in which system failures are attributed to system upgrades gone awry.

New facilities can be added to an NIS, and especially a Web-based NIS, with deceptive ease: a new server that provides the desired service is connected to the network. However, such action can affect performance and reliability. The dispersed nature of an NIS user community can make it difficult to gauge the impact of new features. And the lack of quality-of-service controls can make one NIS a hostage to changes in the load or features of another.

Another potential area of difficulty for NIS evolution is having critical COTS components change or be rendered obsolete. The advent of so-called "push" technology, in which commercial off-the-shelf software is silently and automatically updated when the user visits the vendor's Web site, can cause COTS components to drift away from the configuration that existed during test and acceptance; the situation leads to obscure and difficult-to-locate errors.

Findings

1. Very little is known about the integration of subsystems into an NIS. Yet methods for network integration are critical for building an NIS. NISs pose new challenges for integration because of their distributed nature and the variability of network behavior.

2. Even though technical reviews are generally considered by the practitioner community to be effective, the utility of technical reviews for establishing trustworthiness properties is not well documented.

3. Formal methods are most effective when the property of interest is subtle but can be rigorously defined, and when either the description of the object being analyzed is relatively small or the formal method being used supports analyses that can be automated.

4. Formal methods are moving from more manual methods toward computer-aided and fully mechanized approaches.

5.
Formal methods are being used with success in commercial and industrial settings for hardware development and requirements analysis, and with some success for software development.

6. Formal methods should be regarded as but one piece of technology for eliminating design errors in hardware and software. Formal methods are particularly well suited for identifying errors that become apparent only in scenarios not likely to be tested or testable.

7. Fundamental research problems in formal methods should not be neglected in favor of demonstration exercises. Research progress in core

areas will provide a basis for making significant advances in the capabilities of the technology.

8. Although the large size of an NIS and the use of COTS limit the use of formal methods for analyzing the entire system, formal verification can still contribute to the development process.

9. Testing subsets of a system does not adequately establish confidence in an NIS, given its distributed nature and uncontrollable and unobservable subsystems.

10. Research in testing that addresses issues of scale and concurrency is needed.

11. Postdeployment modification of software can have a significant negative impact on NIS trustworthiness and security.

12. Research directed at better integration of testing and formal methods is likely to have payoffs for increasing assurance in trustworthy NISs.

REFERENCES

Abadi, Martin, and Leslie Lamport. 1993. "Composing Specifications," ACM Transactions on Programming Languages and Systems, 15(1):73-132.
Boehm, B. 1981. Software Engineering Economics. Englewood Cliffs, NJ: Prentice-Hall International.
Boehm, B. 1988. "A Spiral Model of Software Development and Enhancement," IEEE Computer, 21(5):61-72.
Boehm, B., and T. DeMarco. 1997. "Software Risk Management," IEEE Software, 14(3):17-19.
Bollinger, Terry, and Clement McGowan. 1991. "A Critical Look at Software Capability Evaluations," IEEE Software, 8(4):25-41.
Brock, Bishop, Matt Kaufmann, and J Strother Moore. 1996. "ACL2 Theorems About Commercial Microprocessors," pp. 275-293 in Proceedings of Formal Methods in Computer-aided Design. Berlin: Springer-Verlag.
Brodman, Judith G., and Donna L. Johnson. 1996. "Return on Investment from Software Process Improvement as Measured by U.S. Industry," Crosstalk: The Journal of Defense Software Engineering, 9(4). Reprint available online.
Brooks, Frederick P., Jr. 1975. The Mythical Man-Month: Essays on Software Engineering. Reading, MA: Addison-Wesley.
Brown, Nat, and Charlie Kindel. 1998.
Distributed Component Object Model Protocol DCOM/1.0. Microsoft Corporation, January. Available online.
Burrows, Michael, Martin Abadi, and Roger Needham. 1990. "A Logic of Authentication," ACM Transactions on Computer Systems, 8(1):18-36.
Butler, R., and G. Finelli. 1993. "The Infeasibility of Quantifying the Reliability of Life-critical Real-time Software," IEEE Transactions on Software Engineering, 19(1):3-12.
Clarke, Edmund M., O. Grumberg, H.S. Jha, D.E. Long, K.L. McMillan, and L.A. Ness. 1993. "Verification of the Futurebus+ Cache Coherence Protocol," IFIP Transactions A (Computer Science and Technology), A32:15-30.
Clarke, Edmund M., and Jeannette M. Wing. 1996. "Formal Methods: State of the Art and Future Directions," ACM Computing Surveys, 28(4):626-643.

Clingen, C.T., and T.H. van Vleck. 1978. "The Multics System Programming Process," pp. 278-280 in Proceedings of the 3rd International Conference on Software Engineering. New York: IEEE Press.
Computer Science and Telecommunications Board (CSTB), National Research Council. 1997. Ada and Beyond: Software Policies for the Department of Defense. Washington, DC: National Academy Press.
Constantine, L.L., and E. Yourdon. 1979. Structured Design. Englewood Cliffs, NJ: Prentice-Hall.
Craigen, Dan, Susan Gerhart, and Ted Ralston. 1993. An International Survey of Industrial Applications of Formal Methods. Gaithersburg, MD: National Institute of Standards and Technology, Computer Systems Laboratory, March.
Craigen, Dan, Susan Gerhart, and Ted Ralston. 1995. "Formal Methods Reality Check: Industrial Usage," IEEE Transactions on Software Engineering, 21(2):90-98.
Croxford, M., and J. Sutton. 1995. "Breaking Through the V and V Bottleneck," pp. 334-354 in Proceedings of Ada in Europe, Frankfurt/Main, Germany. New York: Springer.
Curtis, Bill. 1981. "Substantiating Programmer Variability," Proceedings of the IEEE, 69(7):846.
DeRemer, F., and H.H. Kron. 1976. "Programming-in-the-Large Versus Programming-in-the-Small," IEEE Transactions on Software Engineering, 2(3):80-86.
Detlefs, D. 1996. "An Overview of the Extended Static Checking System," pp. 1-9 in Proceedings of the First Workshop on Formal Methods in Software Practice. New York: ACM Press.
Devauchelle, L., P.G. Larsen, and H. Voss. 1997. "PICGAL: Lessons Learnt from a Practical Use of Formal Specification to Develop a High-Reliability Software," European Space Agency SP 199, 409:159-164.
Diaz, Michael, and Joseph Sligo. 1997. "How Software Process Improvement Helped Motorola," IEEE Software, 14(5):75-81.
Digital Equipment Corporation. 1997. Workshop on Security and Languages. Palo Alto, CA: Digital Equipment Corporation, Systems Research Center, October 30-31.
Available online.
Dill, David L., A.J. Drexler, A.J. Hu, and C.H. Yang. 1992. "Protocol Verification as a Hardware Design Aid," pp. 522-525 in Proceedings of the IEEE International Conference on Computer Design: VLSI in Computers and Processors. Los Alamitos, CA: IEEE Computer Society Press.
Dill, David L., and John Rushby. 1996. "Acceptance of Formal Methods: Lessons from Hardware Design," IEEE Computer, 29(4):16-30.
Dion, Raymond. 1993. "Process Improvement and the Corporate Balance Sheet," IEEE Software, 10(4):28-35.
Easterbrook, Steve, Robyn Lutz, Richard Covington, John Kelly, and Yoko Ampo. 1998. "Experiences Using Lightweight Formal Methods for Requirements Modeling," IEEE Transactions on Software Engineering, 24(7):4-13.
Fagan, M.E. 1986. "Advances in Software Inspections," IEEE Transactions on Software Engineering, 12(7):744-751.
Fayad, Mohamed, and Mauri Laitinen. 1997. "Process Assessment Considered Harmful," Communications of the ACM, 40(11):125-128.
Garlan, David, and Mary Shaw. 1996. Software Architecture: Perspectives on an Emerging Discipline. Englewood Cliffs, NJ: Prentice-Hall.
Glass, R.L. 1981. "Persistent Software Errors," IEEE Transactions on Software Engineering, 7(2):162-168.
Hall, Anthony. 1996. "Using Formal Methods to Develop an ATC Information System," IEEE Software, 13(6):66-76.

Hamilton, Graham, ed. 1997. JavaBeans. Palo Alto, CA: Sun Microsystems.
Heimdahl, M., and Nancy G. Leveson. 1996. "Completeness and Consistency in Hierarchical State-based Requirements," IEEE Transactions on Software Engineering, 22(6):363-377.
Heninger, K. 1980. "Specifying Software Requirements for Complex Systems: New Techniques and Their Application," IEEE Transactions on Software Engineering, 6(1):2-13.
Herbsleb, James, and Dennis Goldenson. 1996. "A Systematic Survey of CMM Experience and Results," pp. 323-330 in Proceedings of the 18th International Conference on Software Engineering (ICSE). Los Alamitos, CA: IEEE Computer Society Press.
Hernandez, J.A. 1997. The SAP R/3 Handbook. New York: McGraw-Hill.
Hoare, C.A.R. 1985. Communicating Sequential Processes. Englewood Cliffs, NJ: Prentice-Hall.
Honeywell Corporation. 1975. Aerospace and Defense Group Software Program, Final Report. Waltham, MA: Honeywell Corporation, Systems and Research Center.
Jones, C.B. 1986. Systematic Software Development Using VDM. Englewood Cliffs, NJ: Prentice-Hall.
Kitson, David, and Stephen Masters. 1993. "An Analysis of SEI Software Process Assessment Results: 1987-1991," pp. 68-77 in Proceedings of the 15th International Conference on Software Engineering (ICSE-15). Los Alamitos, CA: IEEE Computer Society Press.
Kuehlmann, A., A. Srinivasan, and D.P. LaPotin. 1995. "Verity: A Formal Verification Program for Custom CMOS Circuits," IBM Journal of Research and Development, 39(1/2):149-165.
Lampson, Butler W. 1983. "Hints for Computer System Design," Operating Systems Review, 17(5):33-48.
Lawlis, Patricia K., Robert M. Flowe, and James B. Thordahl. 1995. "A Correlational Study of the CMM and Software Development Performance," Crosstalk: The Journal of Defense Software Engineering, 8(9):21-25.
Leveson, Nancy. 1995. Safeware. Reading, MA: Addison-Wesley.
Leveson, Nancy G. 1987. Software Safety.
Pittsburgh, PA: Carnegie Mellon University, Software Engineering Institute, July.
Lowe, Gavin, and Bill Roscoe. 1997. "Using CSP to Detect Errors in the TMN Protocol," IEEE Transactions on Software Engineering, 23(10):659-669.
Manes, Stephen. 1998. "Settlement Near in Technical Help-line Suit," New York Times, March 3, p. F2.
Marzullo, K., Fred B. Schneider, and J. Dehn. 1994. "Refinement for Fault Tolerance: An Aircraft Hand-off Protocol," pp. 39-54 in Foundations of Ultradependable Parallel and Distributed Computing, Paradigms for Dependable Applications. Amsterdam, The Netherlands: Kluwer.
McGarry, Frank, Steve Burke, and Bill Decker. 1997. "Measuring Impacts of Software Process Maturity in a Production Environment," Proceedings of the 21st Goddard Software Engineering Laboratory Software Engineering Workshop. Greenbelt, MD: Goddard Space Flight Center.
McLean, John. 1994. "A General Theory of Composition for Trace Sets Closed Under Selective Interleaving Functions," pp. 79-93 in Proceedings of the IEEE Computer Society Symposium on Research in Security and Privacy. Los Alamitos, CA: IEEE Computer Society Press.
Meadows, Catherine. 1992. "Applying Formal Methods to the Analysis of a Key Management Protocol," Journal of Computer Security, 1(1):5-36.
Meyer, Bertrand. 1988. Object-oriented Software Construction. Englewood Cliffs, NJ: Prentice-Hall.

Microsoft Corporation and Digital Equipment Corporation. 1995. The Component Object Model Specification (COM). Microsoft Corporation and Digital Equipment Corporation, October. Available online.
Middelburg, C.A. 1989. "VVSL: A Language for Structured VDM Specifications," Formal Aspects of Computing, 1(1):115-135.
Miller, Steven P. 1998. "Specifying the Mode Logic of a Flight Guidance System in CoRE and SCR," pp. 44-53 in Proceedings of the 2nd Workshop on Formal Methods in Software Practice. New York: ACM Press.
Musser, David R., and Atul Saini. 1996. STL Tutorial and Reference Guide: C++ Programming with the Standard Template Library. Reading, MA: Addison-Wesley.
Neumann, Peter G. 1995. Computer Related Risks. New York: ACM Press.
Ousterhout, John K. 1998. "Scripting: Higher-level Programming for the 21st Century," IEEE Computer, 31(3):23-30.
Owre, Sam, John Rushby, Natarajan Shankar, and Friedrich von Henke. 1995. "Formal Verification for Fault-tolerant Architectures: Prolegomena to the Design of PVS," IEEE Transactions on Software Engineering, 21(2):107-125.
Parnas, D.L. 1974. "On a 'Buzzword': Hierarchical Structure," pp. 335-342 in Programming Methodology: A Collection of Articles by Members of the IFIP Congress, D. Gries, ed. Berlin: Springer-Verlag.
Paulk, Mark C., Bill Curtis, Mary Beth Chrissis, and Charles V. Weber. 1993. "Capability Maturity Model for Software, Version 1.1," IEEE Software, 10(4):18-27.
Pfleeger, S.L., N. Fenton, and S. Page. 1994. "Evaluating Software Engineering Standards," IEEE Computer, 27(9):71-79.
Porter, A.A., H.P. Siy, C.O. Toman, and L.G. Votta. 1997. "An Experiment to Assess the Cost-Benefits of Code Inspections in Large Scale Software Development," IEEE Transactions on Software Engineering, 23(6):329-346.
Potts, Colin, Kenji Takahashi, and Annie I. Anton. 1994. "Inquiry-based Requirements Analysis," IEEE Software, 11(2):21-32.
Pressman, Roger S. 1986.
Software Engineering: A Practitioner's Approach. New York: McGraw-Hill.
Raymond, Eric, and Guy L. Steele. 1991. The New Hacker's Dictionary. Cambridge, MA: MIT Press.
Rochkind, Marc J. 1975. "The Source Code Control System," IEEE Transactions on Software Engineering, 1(4):364-370.
Ross, Douglas T. 1977. "Guest Editorial: Reflections on Requirements," IEEE Transactions on Software Engineering, 3(1):2-5.
Rushby, J. 1995. Formal Methods and Their Role in Certification of Critical Systems. Menlo Park, CA: SRI International, March.
Savage, Stefan, Michael Burrows, Greg Nelson, Patrick Sobalvarro, and Thomas E. Anderson. 1997. "Eraser: A Dynamic Data Race Detector for Multi-threaded Programs," Operating Systems Review, 31(5):27-37.
Saydjari, O. Sami, J.M. Beckman, and J.R. Leaman. 1989. "LOCK Trek: Navigating Uncharted Space," pp. 167-175 in Proceedings of the IEEE Symposium on Security and Privacy. Los Alamitos, CA: IEEE Computer Society Press.
Sommerville, Ian. 1996. Software Engineering. 5th Ed. Reading, MA: Addison-Wesley.
Srivas, Mandayam K., and Steven P. Miller. 1995. "Formal Verification of the AAMP5 Microprocessor," Applications of Formal Methods, Michael G. Hinchey and Jonathan P. Bowen, eds. Englewood Cliffs, NJ: Prentice-Hall.
Tanik, Murat M., and Raymond T. Yeh, guest eds. 1989. "Rapid Prototyping in Software Development," IEEE Computer Magazine, Vol. 22, Special Issue (5).

Taylor, T. 1989. "FTLS-based Security Testing for LOCK," Proceedings of the 12th National Computer Security Conference. Washington, DC: U.S. Government Printing Office.
Thompson, Kenneth. 1984. "Reflections on Trusting Trust," Communications of the ACM, 27(8):761-763.
Watts, Humphrey, and Bill Curtis. 1991. "Comment on 'A Critical Look,'" IEEE Software, 8(4):42-46.
Watts, Humphrey, Terry Snyder, and Ronald Willis. 1991. "Software Process Improvement at Hughes Aircraft," IEEE Software, 8(4):11-23.
Weissman, Clark. 1995. "Penetration Testing," Information Security, M.D. Abrams, S. Jajodia, and H.J. Podell, eds. Los Alamitos, CA: IEEE Computer Society Press.