2 Software Engineering and the Role of Ada in DOD Systems

Recent progress in software engineering includes the development of models and technology to improve software processes and architectures. This chapter highlights some of these approaches as a framework for crafting a DOD software policy that is more broadly conceived than the current policy on programming languages, and as a method of evaluating Ada's role in DOD systems. In addition to evaluating Ada's capability for supporting software engineering processes, this chapter compares Ada with other third-generation programming languages (3GLs). Technical comparisons between Ada and other languages can be made with greater confidence than can quantitative comparisons of programming language performance: the committee found the quality of the data available for such empirical analyses to be lacking, and in addition to discussing these data limitations it suggests ways to improve the collection of, and access to, needed data. In the absence of data reliable enough to serve as a basis for sound conclusions, the committee's findings rest on a combination of those data, technical comparisons, anecdotal project experience, and, ultimately, the deliberations of the committee and its expert judgment.

SOFTWARE ENGINEERING PROCESS AND ARCHITECTURE

In the 1980s, when DOD's current programming language policy was first established, implementation of such a policy was perceived to be the most straightforward approach to improving DOD's software engineering capability. But in the past decade software engineering technology and practices have changed fundamentally.
Examples of important developments include advanced tools and techniques, such as computer-aided software engineering (CASE) tools, application generators, and object-oriented methods; process improvement, including iterative/spiral development processes, the Software Engineering Institute's Capability Maturity Model, the Air Force Software Development Capability Evaluation, and the ISO 9000 quality standard; product-line management, such as architecture-driven processes and components and common operating environments; and technology for heterogeneous software, including Open Systems, Internet, and Common Object Request Broker Architecture standards.

Reflecting these fundamental changes, a consistent theme emerged throughout the committee's deliberations and in presentations from industry and DOD experts: programming language is important, but not as important as a thorough understanding of application requirements, a mature software development process, and a good software architecture. While understanding of requirements is certainly an important factor in project success, it is a largely language-independent aspect of software engineering and thus is not emphasized in the following discussion. Process maturity and architecture quality, on the other hand, are aspects of software engineering that are influenced and supported by programming languages and the environments in which software is developed. Furthermore, one very important aspect of "good" processes and architectures is their ability to accommodate changes in requirements.1

While there is much debate over what constitutes an architecture,2 the following observations summarize architecture's importance and its close linkage with modern software development processes:3

- Achieving a stable software architecture represents a significant project milestone at which the critical decisions to make, buy, or reuse software should have been resolved. This milestone in a project's life cycle represents a transition from the exploratory stage of a project (characterized by discovery and resolution of numerous unknowns and sources of risk) to the development phase (characterized by management according to a particular development plan).4
- Architecture representations provide a basis for balancing the trade-offs between the problem space (i.e., requirements and constraints) and the solution space (i.e., the product design, implementation, and quality).
- The software architecture and process provide the framework for most of the important (i.e., high-payoff or high-risk) human communications among the analysis, design, implementation, and test activities.
- Poor architectures and immature processes often underlie project failures. A mature process, an understanding of the primary requirements, and a demonstrable architecture are important prerequisites for predictable outcomes.
- Architecture development and process definition are the intellectual steps that map a problem to its solution without violating existing constraints; these tasks require human innovation and cannot be automated.

DOD's formulation of an improved policy regarding its use of the Ada programming language should take into consideration the fundamental need for improved software architectures, more effective and mature development processes, and increased process automation. Because DOD's requirements for quality (generally high reliability, state-of-the-art performance, and maintainability by DOD personnel) usually cannot be compromised, DOD software development projects often require increased funds and/or extended schedules. The four subsections below describe process and architecture as elements fundamental to needed improvements in software economics; it is this significance of architecture and process that motivates the committee's belief that DOD should expand its software policy to encompass more than just a programming language policy. The discussion below focuses on improving the cost-effectiveness of DOD software and achieving a better return on investment (ROI); it is assumed that quality is held fixed at the levels necessary for DOD systems.
Economics of Software Engineering

Most software development costs are a function of four basic parameters:

- The size of the software end product, in terms of human-generated source code and documentation;
- The process employed to produce the software;
- The environment and tools employed to produce the software; and
- The expertise of the personnel involved, that is, the capability of the software developers.

One very important aspect of software economics (as represented in today's software cost models) is that the relationship between cost and size exhibits diseconomies of scale: the cost per unit of functionality increases with software size. This relationship stems primarily from the complexity of managing interpersonal and interteam communications; as the number of team members increases, the complications arising from the team members' differing perspectives and backgrounds increase even more rapidly. Given the factors affecting software development costs, economic leverage is best achieved by focusing on technologies that enable the following:5

- Reducing the size or complexity (or improving the architecture) of what needs to be developed;
- Improving the development process; and
- Employing better environments and tools to automate the process.

Most software experts would distinguish among the above factors; they would also acknowledge significant interrelationships. For example, tools enable reductions in the amount of source code as well as process improvements; attempts to reduce size lead to process improvements; and improved processes drive tool requirements. These interrelationships mean that even though programming languages do not directly determine project outcomes, they can have significant indirect effects.

Reducing the Complexity of Software Products

In general, the most significant step toward reducing cost and improving ROI is to design an architecture that achieves product requirements and quality goals with the minimum amount of human-generated source material.
This is the primary motivation behind the development of high-order languages (e.g., Ada 83 and Ada 95, C++, and fourth-generation programming languages), the use of automatic code generators (CASE tools and graphical user interface builders), the reuse of commercial off-the-shelf (COTS) software products (operating systems, "windowing" environments, database management systems, "middleware," and networks), and reliance on object orientation (encapsulation, abstraction, component reuse, and architecture frameworks). Since the difference between large and small projects has a greater-than-linear impact on life-cycle cost, using the highest-level language and appropriate COTS or non-developmental items can lower costs significantly, especially in warfighting domains, where large-scale systems are the norm. Furthermore, simpler is generally better: reducing a program's size usually makes it more understandable, easier to change, and more reliable. One typical negative side effect is that higher-level abstractions tend to degrade performance, that is, to increase resource consumption, whether in processor cycles, memory usage, or communications bandwidth. Fortunately, these drawbacks have been greatly offset by improvements in hardware performance, compiler technology, and code optimization (although much less so on embedded platforms). Ada, and particularly Ada 95, allows for reduction in the source size of software products through language features that support abstraction, object-oriented programming, and component integration (e.g., of reusable components, COTS products, and legacy components). The language C++ provides similar advantages in the commercial market.

In numerous DOD domains, a common approach today is to maximize the use of COTS products.6 While this is certainly desirable as a means of reducing the overall amount of custom development, it has often not been a "silver bullet" in practice. Table 2.1 identifies some of the advantages and disadvantages of employing COTS products, compared to custom software, in DOD domains. The advantages of using COTS rather than custom software are significant, but there are still application areas (particularly in warfighting) in which the advantages of the control over reliability, performance, or rapid enhancement provided by custom software are compelling. The disadvantages of COTS are not sufficient to rule out that approach peremptorily, but they point to areas in which architectural trade-off analysis and risk management approaches will be needed.

Improving Software Processes

The importance of a mature software development process has been well established (CSTB, 1987; DOD, 1987b). Modern software development processes have moved away from the conventional waterfall model, in which each stage of the development process depends on completion of the previous stage. While there are variations, current concepts call for an initial version of a system to be constructed rapidly early in the development process, with an emphasis on addressing the high-risk areas, stabilizing the basic architecture, and refining the driving requirements (with extensive user input where possible).
Development then proceeds as a series of iterations ("spirals," "increments," "generations," "releases," and other terms have been used), building on the core architecture until the desired level of functionality, performance, and robustness is achieved. Software cost models, such as the COCOMO model (Boehm et al., 1996), have been updated to reflect the use of modern software development processes and can be used to quantify the importance of process. The parameters defining the effects of process on the cost and schedule estimates produced by COCOMO include the following:

- Application familiarity: the developer's degree of domain experience;
- Process flexibility: the degree of contractual rigor, ceremony, and freedom of change inherent in the project "contract," "vision," and "plan";
- Architecture definition and risk resolution: the degree of technical feasibility demonstrated prior to commitment to full-scale production;
- Teamwork: the degree of cooperation and shared vision among stakeholders (buyers, developers, users, and personnel responsible for verification, validation, and maintenance, among others); and
- Software process maturity: the maturity level of the development organization, as defined by the Capability Maturity Model (Paulk et al., 1993).

Cost estimates produced by COCOMO 2.0 show that the difference between a good and a bad process for a large program (300,000 lines of source code) will often exceed a factor of 1.3 in the length of time it will take a team to develop the software product, a factor of 2 in cost, and a factor of 5 in quality (delivered defect rate). Realization of this relationship has led to significant investments and advances in software process improvement techniques over the past 10 years, exemplified by DOD investment in the Capability Maturity Model, developed by the Software Engineering Institute at Carnegie Mellon University.
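The shape of the cost relationships described above can be sketched with a toy COCOMO-style model. The coefficients and the "bad process" multiplier below are invented for illustration; they are not the calibrated COCOMO 2.0 values.

```python
# Toy COCOMO-style effort model. Coefficients a, b and the process
# multiplier are illustrative assumptions, not calibrated COCOMO 2.0 values.
def effort_person_months(ksloc, process_multiplier, a=2.9, b=1.10):
    """Effort = a * size^b * multiplier; b > 1 models diseconomies of scale."""
    return a * (ksloc ** b) * process_multiplier

# A "bad" process modeled as a 2x effort multiplier on a 300-KSLOC program.
good = effort_person_months(300, process_multiplier=1.0)
bad = effort_person_months(300, process_multiplier=2.0)
print(round(bad / good, 1))  # 2.0, by construction of the multiplier

# Diseconomy of scale: effort per KSLOC grows as the program grows.
per_ksloc_small = effort_person_months(30, 1.0) / 30
per_ksloc_large = effort_person_months(300, 1.0) / 300
print(per_ksloc_large > per_ksloc_small)  # True
```

The exponent b > 1 is what makes size reduction (the previous subsection) so valuable, and the multiplicative process term is what makes process improvement pay off across the whole estimate.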
Table 2.1 Advantages and Disadvantages of Commercial Off-the-Shelf (COTS) and Custom Software

Integration of COTS

  Advantages:
  - Predictable license costs
  - Broadly used, mature technologies
  - Immediately available
  - Dedicated support organization
  - Hardware/software independence
  - Rich functionality
  - Frequent upgrades

  Disadvantages:
  - Up-front license fees
  - Recurring maintenance fees
  - Dependence on vendor
  - Sacrifices in efficiency
  - Constraints on functionality
  - Integration not always trivial
  - No control over upgrades and maintenance
  - Unnecessary features that consume extra resources
  - Reliability often unknown or inadequate
  - Scale difficult to change
  - Incompatibilities among vendors
  - Licensing and intellectual property issues
  - Difficulties in testing and evaluation

Custom Development

  Advantages:
  - Complete freedom
  - Smaller, often simpler
  - Often better performance
  - Control of development and enhancement
  - Control of reliability trade-offs

  Disadvantages:
  - Development expensive, unpredictable
  - Availability date unpredictable
  - Maintenance expensive
  - Portability often expensive
  - Drains expert resources

Influence of Software Environments, Tools, and Languages on the Software Engineering Process

The tools and environment employed in the software engineering process generally have a linear effect on the productivity of the process. Compilers, editors, "debuggers," configuration management tools, traceability tools, quality assurance analysis tools, test tools, and user interfaces provide the foundation for carrying out the process and maximizing automation. However, the maturity, availability, and performance of these tools (and their procurement and maintenance costs) must be taken into account. Cost models indicate that tools and automation generally enable cost savings ranging from 30 to 60 percent (see Box 3.2 in Chapter 3). As mentioned above, environments and tools have indirect effects; however, they also enable certain improvements in process and reductions in size that have much greater impacts.
Thus, the view that the quality of the software engineering process is independent of the programming language can be misleading. Language standardization has led to tools for automated support of configuration control and to increased automation of quality assessment (through interface specification, compilation and consistency analysis, readability, and "inspection automation"). These, in turn, have led to practical and significant process improvements, such as iterative development, architecture-driven design, and automation of documentation (Royce, 1990). Furthermore, languages like Ada 95 add object-oriented features, which have enhanced their versatility. Such features, in some cases, allow Ada 95 programs to implement the same function as Ada 83 programs with a significant reduction in the number of source lines of code.7 These improvements are not unique to Ada 95, but absent the technical features and automation support that are standard in the Ada environment (compiler, library manager, and debugger), many of these process improvements have been impractical in other languages.

Over the past 10 years, Ada 83 has supported the process and software engineering design goals described above by enabling (1) integration of components (through abstraction and encapsulation, language standardization, separation of module specification from body, strong typing, and library management) that allows structured design early in the life cycle and incremental improvement of the breadth, depth, and performance of the evolving design through multiple iterations; (2) reduction of rework via early definition and verification of architectural interfaces prior to coding; (3) incorporation of configuration management discipline, separate compilation, and interface and implementation partitioning directly in the language, thus enabling environments that are much more controlled and are instrumented for single- and multiple-team development and management of continuous change; and (4) reliability features that allow errors to be identified automatically earlier in the life cycle by compile-time and run-time consistency checks. Ada 95 further improves this language support.

The primary point of this section, namely that the software engineering community has benefited greatly from the use of Ada, owing mostly to the language's support for the transition to better processes and better architectures, was predicted by Fred Brooks a decade ago (Brooks, 1986):

I predict that a decade from now, when the effectiveness of Ada is assessed, it will be seen to have made a substantial difference, but not because of any particular language feature, nor indeed of all of them combined. Neither will the new Ada environments prove to be the cause of the improvements.
Ada's greatest contribution will be that switching to it occasioned training programmers in modern software design techniques.

It is from such a perspective that the committee analyzed the business case for use of Ada (Chapter 3) and, as a result, recommends a broader software engineering policy for DOD (Chapter 4).

TECHNICAL EVALUATION OF ADA 95 AND OTHER THIRD-GENERATION PROGRAMMING LANGUAGES

This section provides a brief technical evaluation of the programming languages Ada 95, C, C++, and Java, based on the summaries of language features given in Appendix B and focusing specifically on attributes related to the development of real-time critical systems. The committee's evaluation led it to conclude that, for real-time critical systems, Ada 95 is superior to the other languages from a technical and software engineering standpoint. It is important to recognize that some facets of this technical evaluation may change over the next several years as the other languages, particularly Java, mature and evolve in response to applications with requirements for higher integrity or real-time multimedia interaction, for example.

Criteria related to critical systems development fall into two categories: (1) compile-time and run-time checking to support encapsulation and safety, and (2) support for hard real-time systems.8 Criteria related to encapsulation and safety include:

- Support of user-defined abstractions and enforcement of modularity and information hiding;
- Compile-time enforcement of type distinctions;
- Run-time management of pointers, arrays, and variant structures; and
- Support for software fault tolerance and recovery.
With respect to these criteria, Ada 95 and Java fare well. Ada 95 offers more support than does Java for compile-time type distinctions by (1) providing generic templates and (2) simplifying the expression of strong type distinctions between otherwise structurally equivalent numeric, enumeration, array, and pointer types. Java and Ada provide stronger enforcement of modularity and information hiding than do C and C++, because C and C++ both provide "back doors" that allow external access to internal variables. Java and Ada also provide the following safety features: (1) default null initialization of pointers, and run-time checks for null on all dereferences of pointers; (2) run-time checks for out-of-bounds indexing into an array and for attempts to select the wrong variant from a subtype hierarchy; and (3) run-time exceptions indicating all failures of run-time checks, allowing programmer-specified exception handlers to implement appropriate fault tolerance and recovery actions. In C and C++, there are no checks to prevent the creation of dangling references to data structures, making the use of pointers more error prone.

Criteria related to real-time systems development include:

- Support for safe, static allocation of all run-time data structures;
- Predictability of constructs with respect to real-time deadlines; and
- Language support for real-time-oriented interactions between multiple threads of control.

With respect to these criteria, Ada 95 provides a number of advantages, including mechanisms for statically preallocating data structures while still allowing safe and convenient manipulation of such structures with pointers. In Java, all non-primitive data structures are allocated dynamically (on the "heap"), with the attendant danger of run-time storage exhaustion and unpredictable storage allocation and reclamation times. Using only "static" structures in Java could make this allocation predictable, but in many cases this restriction would create additional problems. Ada 95 provides data-oriented synchronization mechanisms that reduce overhead and minimize the potential for high-priority threads being delayed indefinitely while waiting for release of resources held by lower-priority threads (priority "inversion"). Java provides some multithreading primitives in the language and the standard library, but the standard Java locking mechanism offers no support for limiting the amount of time a high-priority thread will wait for a low-priority thread. The interthread interaction model in Java is based on explicit notification rather than state-oriented guards, increasing the likelihood of race conditions, which can lead to uncertainty in data access. The C and C++ languages do not directly address multithreading or support for real-time processing.

For critical real-time code, Ada 95 emerges as technically superior to Java, C, and C++. The Java language has not yet been standardized, and its design is still somewhat in flux; it may evolve to provide further support for critical and/or real-time systems. C, C++, and Ada can also be expected to continue to evolve, albeit at a slower pace. From a business-case standpoint, it is too early for DOD to consider Java in this application domain. Java might evolve into a language with strong real-time support capabilities, or it might not. For the foreseeable future, Ada provides the strongest available support for high-assurance, real-time software development. As languages develop attractive new capabilities, DOD should be prepared to perform periodic technical comparisons, such as the one provided here and in Appendix B. But as discussed in Chapter 3, such a technical comparison is only one part of the business case associated with establishing a software management policy.
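The bounded-waiting idea discussed above can be illustrated in miniature. The sketch below uses Python purely as illustrative executable pseudocode (Python is not one of the languages under evaluation): a task bounds how long it will block on a resource held by another task, rather than waiting indefinitely, so that it can take a fallback action.

```python
import threading
import time

resource = threading.Lock()

def holder_task():
    # A lower-priority task holds the resource for half a second.
    with resource:
        time.sleep(0.5)

holder = threading.Thread(target=holder_task)
holder.start()
time.sleep(0.1)  # let the holder acquire the lock first

# Bounded wait: give up after 0.1 s instead of blocking indefinitely,
# leaving the caller free to take a fallback or recovery action.
acquired = resource.acquire(timeout=0.1)
if acquired:
    resource.release()
else:
    print("timed out waiting for resource; taking fallback action")
holder.join()
```

Real-time languages and kernels go further than this sketch (priority ceilings and inheritance bound the wait analytically rather than by timeout), but the sketch captures the capability the text says the standard Java locking mechanism of the time lacked.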
AVAILABLE COMPARISONS OF ADA 83 AND OTHER THIRD-GENERATION PROGRAMMING LANGUAGES

Over the past 5 years several studies have concluded that, for custom software development, Ada 83 is more effective than its leading alternatives (Cobol, C, C++, Fortran, Jovial, and CMS-2) in improving software maintainability and reliability, improving overall life-cycle cost, and enabling management of the risks of large-scale development (Mosemann, 1991; Masters, 1996). These studies are based on various mixes of expert opinion and project results. The data supporting the conclusions are generally not normalized or controlled, and none of the studies to date has been rigorously peer reviewed. This section summarizes the available information comparing Ada 83 with other leading 3GLs. This information falls into three main categories:

1. Analyses of language features: comparisons of how the features of programming languages contribute to desired properties such as reliability, maintainability, and efficient programming;
2. Comparisons of empirical data: comparisons based on data collected from completed projects in various languages; and
3. Anecdotal experience from projects: qualitative responses to project outcomes.

Analyses of Language Features9

Analyses of language features have the advantage of being based on full and open examination of well-defined language features. Their main disadvantage is that they are partial at best and are particularly weak in assessing the complex trade-offs among language features made in the course of actual projects. The most thorough of these analyses can be found in a 1985 Federal Aviation Administration study (IBM, 1985) comparing Ada 83 with C, Pascal, Jovial, and Fortran, and in a 1991 Software Engineering Institute study (Weiderman, 1991) covering Ada 83 and C++.
Both studies used the same evaluation scales, covering the desired properties of capability, efficiency, availability/reliability, maintainability/extensibility, life-cycle cost, and risk. Figure 2.1 summarizes the results of these analyses, comparing Ada 83 and C (IBM, 1985) and Ada 83 and C++ (Weiderman, 1991) against the theoretical maximum score (higher numbers indicate better performance; the full definitions of criteria and numerical results are provided in Appendix D). The differences in the ratings for Ada 83 in the 1985 and 1991 studies are probably good indicators of the variability attendant on evaluations of this nature.10 Allowing for this range of variability, the most significant difference shown in Figure 2.1 is Ada 83's much stronger rating in the availability/reliability area, corroborating the results of this committee's comparative analysis in the preceding section ("Technical Evaluation of Ada 95 and Other Third-Generation Programming Languages") and in Appendix B.

Comparisons of Empirical Data

The major advantage of empirical project data is that the data represent the end results of projects and reflect the various features of each language. The major disadvantage is that the varying conditions associated with disparate projects make it difficult to assess sources of variability caused by differing definitions, assumptions, and contexts. Moreover, many of the results come from proprietary (and thus unavailable) data on project productivity and quality, such as those presented in Jones (1994) and Reifer (1996).11 The major sources of empirical data, and the information derived from them, are summarized in Table 2.2. More details on the data are provided in Appendix D.
FIGURE 2.1 Comparisons of language features. SOURCES: Software Engineering Institute (SEI) data from Weiderman (1991); Federal Aviation Administration (FAA) data from IBM (1985).

One major problem with empirical data is that quantitative determination of important measures such as cost per source line of code (cost/SLOC) and defects per 1,000 source lines of code (defects/KSLOC) is confounded by differences in the expressive power of a source line of code in different programming languages. One way of normalizing is to look at source lines of code per function point (Jones, 1995). However, as shown in Table 2.3, these ratios have wide variability: Lubashevsky (1996) reports variations in source lines of code per function point exceeding factors of 2 for C and 6 for C++. Finally, there are differences in expressiveness for the same language across different application domains. Appendix D points out that the similarity in the Jones (1994) data for C, Ada, and C++ relating to cost per function point and defects per function point (see Table 2.2) appears simply to reflect Jones's (1995) mean values of SLOC per function point for C, Ada, and C++. Thus, the Jones data appear to indicate that cost/SLOC and defects/KSLOC show little variation across programming languages. While this conclusion is perhaps warranted for cost, it is at considerable variance with the data from other studies on defects/KSLOC. The differences in cost/SLOC values given by Reifer (1996) appear to be overshadowed by potential differences in the relative expressive power of a line of code in Ada, C, and C++. However, the lower number of defects/KSLOC reported for Ada by Reifer (1996) is still significant, particularly with respect to embedded weapon systems software.
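The normalization problem can be made concrete with a small sketch. The SLOC-per-function-point means are the Jones (1995) values shown in Table 2.3; the flat $1,000 cost per function point is a hypothetical figure chosen purely for illustration.

```python
# Mean source lines of code per function point (Jones, 1995; see Table 2.3).
sloc_per_fp = {"Ada 83": 71, "C": 128, "C++": 53}

# Hypothetical: suppose every language cost exactly $1,000 per function
# point. Cost per source line would still differ across languages, purely
# because of the differing expressive power of a line in each language.
cost_per_fp = 1000.0
cost_per_sloc = {lang: cost_per_fp / s for lang, s in sloc_per_fp.items()}

apparent_ratio = cost_per_sloc["Ada 83"] / cost_per_sloc["C"]
print(round(apparent_ratio, 2))  # ~1.8: Ada 83 looks costlier per line
```

In other words, identical cost per unit of functionality would make Ada 83 appear roughly 1.8 times as expensive per line as C, which is why per-SLOC comparisons across languages are confounded unless expressiveness is normalized out.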
Table 2.2 Comparisons of Languages: Summary of Empirical Data

(Zeigler, 1995): Compilers, tools; data and analysis available; 1 Ada project of 2 total.
  Cost ($ per source line of code): Ada 83, 6.6; C, 10.5.
  Defects per KSLOC:a Ada 83, 0.10; C, 0.68.

(NASA-SEL, 1994, 1995):b Engineering; data and analysis available; 15 Ada projects of 120 total.
  Cost: Ada 83 40% lower than Fortran.
  Defects per KSLOC: Fortran, 4.8; Ada 83, 2.10.

(Reifer, 1996): Military information systems; data and analysis not available; 190 total projects (number of Ada projects unknown).
  Cost ($ per source line of code): Ada 83, 30; C, 25; C++, 25.
  Defects per KSLOC: Ada 83, 3.00; C, 6.00; C++, 4.00.

(Reifer, 1996): Embedded weapon systems; data and analysis not available; 190 total projects (number of Ada projects unknown).
  Cost ($ per source line of code): Ada 83, 150; C, 175.
  Defects per KSLOC: Ada 83, 0.30; C, 0.80; C++, 0.60.

(Jones, 1994):c Telecommunications; data and analysis not available; many projects.
  Cost: Ada 83, 1760; C, 2966; C++, 1180.
  Defects: Ada 83, 0.17; C, 0.29; C++, 0.14.

a KSLOC: 1,000 source lines of code.
b McGarry et al. (1994); Waligora et al. (1995).
c Values in terms of function points, rather than lines of code.
Table 2.3 Source Lines of Code per Function Point

Language   Low   Mean   High
Ada 83      60     71     80
C           60    128    170
C++         30     53    125

SOURCE: Data from Jones (1995).

Two sources of data are particularly sound with respect to comparability of projects and availability of data and analysis. The Zeigler (1995) study offers the most thorough analysis of whether the differences in the cost and defect data for Ada and C could be caused by other factors. With respect to the expressiveness of a line of code, Zeigler analyzed lines of code per feature (LOC/feature) and found that Ada was about 16 percent more "verbose" than C (109 LOC/feature for Ada, compared with 94 for C). Applying this correction to the data indicates that Ada outperformed C by a factor of 1.37 in cost per feature and by a factor of 5.9 in defects per feature. Zeigler also analyzed the potential confounding effects of relative software complexity, personnel capability, and learning-curve effects associated with C and Ada, and presents a good case that these factors did not introduce any significant bias. Zeigler thus provides a strong argument that Ada programs had lower life-cycle costs and fewer defects than C programs in the large project (more than 1 million lines each of Ada and C) in compilers and tools that was the basis for his study. However, the Zeigler study is suggestive, rather than definitive, about the applicability of this result to other domains and other teams (the development teams on the projects studied were stable and composed of highly capable, seasoned personnel).

The NASA Software Engineering Laboratory (SEL) projects described by McGarry et al. (1994) also provide some helpful comparative data but do not cover most DOD domains of interest. The SEL projects are highly precedented flight dynamics engineering applications that are not embedded and do not require real-time functionality.
SEL's analyses of Ada initially concluded that, owing to greater reuse, Ada projects enjoyed significant cost and schedule reductions compared with those using Fortran. Subsequently, application of the Ada object-oriented reuse approach in Fortran projects yielded comparable gains. Significantly, however, over the period from 1988 to 1994, the defect rate associated with Ada was less than half that seen with Fortran. A subsequent NASA-SEL study (Waligora et al., 1995) corroborated the reduction in defect rate with Ada and concluded that Ada development costs were 40 percent less than those of Fortran for equivalent functionality. This conclusion was based on analysis indicating that Ada's generic features achieved reuse with many fewer statements than Fortran's repeated code. Both the SEL and Zeigler analyses also concluded that programming language was not the dominant factor influencing software productivity and quality: SEL found several other variables (object-oriented reuse, use of Cleanroom techniques, code reading) to be more significant, and Zeigler cites architecture and design, configuration management, testing, process, programmer expertise, and management skills as more significant than the particular programming language used.

In summary, based on analysis of available empirical data and comparisons of language features, a conclusion that Ada is superior in ensuring availability and reliability and in reducing defects appears warranted. The evidence is not strong enough to assert Ada's superiority with respect to cost, but when considered with other data (Appendix D), and given the lack of solid evidence indicating less expensive custom software development in other languages, a case can be made that using Ada provides cost savings in building custom software, particularly for real-time, high-assurance warfighting applications.
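Zeigler's feature-normalized ratios can be cross-checked arithmetically against the per-line figures reported in Table 2.2; the sketch below is a back-of-the-envelope restatement of the published numbers, not additional data.

```python
# Per-line figures for the Zeigler (1995) study, as given in Table 2.2.
cost_per_sloc = {"Ada": 6.6, "C": 10.5}        # dollars per source line
defects_per_ksloc = {"Ada": 0.10, "C": 0.68}   # delivered defects
loc_per_feature = {"Ada": 109, "C": 94}        # Ada ~16% more "verbose"

cost_per_feature = {l: cost_per_sloc[l] * loc_per_feature[l]
                    for l in ("Ada", "C")}
defects_per_feature = {l: defects_per_ksloc[l] * loc_per_feature[l] / 1000
                       for l in ("Ada", "C")}

cost_ratio = cost_per_feature["C"] / cost_per_feature["Ada"]
defect_ratio = defects_per_feature["C"] / defects_per_feature["Ada"]
print(round(cost_ratio, 2))    # 1.37, matching Zeigler's reported factor
print(round(defect_ratio, 1))  # 5.9, matching Zeigler's reported factor
```

The recomputation shows that the factors of 1.37 (cost per feature) and 5.9 (defects per feature) follow directly from the per-SLOC data once the verbosity correction is applied, which is reassuring about the internal consistency of the study's figures.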
Anecdotal Experience from Projects

The current DOD base of experience with Ada is substantial. In DOD software inventories, Ada represents about one-third of weapon systems code (Hook et al., 1995), amounting to more than 50 million source lines of operational code in many of DOD's most crucial systems. The Aerospace Industries Association (AIA) has stated that its member companies have all "... greatly benefited from the software engineering support features of Ada, including reduced error rates," and notes that Ada has had a "substantial positive impact" especially in "large, high visibility projects such as F22, BSY-2, Boeing 777, and Peace Shield" (AIA, 1996). However, AIA also notes that "some [projects] have suffered because Ada support tools were not robust nor available when needed, or because Ada presented interface difficulties in heterogeneous environments" (AIA, 1996); AIA has advocated ending DOD's Ada mandate. Representatives from the DOD services related to the committee numerous instances of success in Ada projects that were delivered (to varying degrees) on budget, on schedule, and with satisfied users. Some of the most compelling data in this regard, drawn from a broad range of projects, were provided by Robert Kent of the Air Force Electronic Systems Center (ESC). According to Mr. Kent, ESC's experience is that "Ada projects have a much higher success rate than non-Ada projects." He substantiated this claim with several case studies indicating that a substantial number of very large mission-critical applications (greater than 1 million source lines) have been successfully delivered and maintained in Ada. While the financial success of Ada projects is not universal, some of the results from the case studies indicate that a mature software organization will perform better with Ada than with other languages.
Other case studies presented to the committee to illustrate successful Ada development include the following: Air Force: CCPDS-R, Cobra Dane System Modernization, REACT, STARS demonstration project,12 PRISM; Navy: BSY-2, AEGIS; and Army: FAADC2I. Ten years ago, it was very difficult to find a single software success story in any programming language. Today, there are several, and most of the large-scale, successful DOD projects have employed Ada as one of the technologies supporting their efforts to improve both processes and architectures. Successful Ada users outside DOD include DOE (AdaSAGE), NASA (the Space Station program and the Software Engineering Laboratory), and numerous international organizations (Transport Canada, Canadian Department of National Defense, Celsius Tech, Eurocontrol, Australian Commonwealth, United Kingdom Ministry of Defense). Commercial organizations have also used Ada in their products. A primary example is Boeing Corporation, which, like DOD, sought a single, common programming language for its commercial mission-critical software (Box 2.1).

THE NEED TO INSTITUTE COLLECTION OF DATA FOR SOFTWARE METRICS

The committee searched for sources of data that could provide a strong scientific basis for concluding that Ada is or is not a superior programming language in any given application domain. With respect to such confirmatory data, the committee concluded the following:
BOX 2.1 Use of Ada on Boeing Commercial Airplanes

Decision to use Ada. In the late 1970s, Boeing began to use airborne software on its 757 and 767 airplanes. Given the state of practice, a large variety of languages and language processors were used, making the application of standards difficult. In 1985, Boeing's Commercial Airplane Group (BCAG) initiated a program to solve the airborne software language problem. The first step in the initiative was to choose a preferred language. The major criteria were support for structured programming practices, a high-order language, a block-structured language, portability, and understandability. Additional goals included the use of software engineering principles such as information hiding, abstract data types, and strong type checking; the ability to specify interfaces precisely; and the use of standardized fixed-point and floating-point arithmetic. The evaluation process involved consulting with several key suppliers and resulted in the selection of the Ada language.

Preparation to use Ada. BCAG relies on its suppliers to provide airborne software for its commercial airplanes. Hence, preparation to use Ada had to be a joint program with its suppliers. The charter of the joint program included evaluating compilers, preparing personnel to use Ada, and sharing operational experiences in the use of Ada. Several joint meetings were held over a period of years, and a newsletter was published to share information and insights. Boeing also prepared guidelines to define a subset of Ada for use in safety-critical applications. The guidelines benefited from input from the joint program and other industry sources.

Experience with Ada. The use of Ada has significantly reduced the number of different programming languages used on the Boeing 777. Ada was used on 60 percent of the systems on the 777 and represented 70 percent of the lines of code developed.
No correlation was found between the language used and the number of problems found on a system. The other principal language used in new development was C. The richness and complexity of the Ada language helped knowledgeable users with mature tools achieve modest gains in productivity. However, the complexity of the language caused problems for other users, who had to work through compiler problems. A key lesson was that the need for retraining was not adequately understood.

Future plans for use of Ada. BCAG expects to continue its use of Ada for airborne applications. A standard language allows the use of tools to aid in the development of software that would be difficult or impossible to implement in a multiple-language environment. The use of Ada in the future would be improved by greater consistency among the available compilers.

SOURCE: Leonard L. Tripp, Boeing Commercial Airplane Group, personal communication, August 27, 1996.

The data are uneven. The Ada community has collected a good deal of data on DOD's and other organizations' experience with Ada, but comparable data are not available on DOD's experience with C, C++, and other languages.

The data are largely unavailable. The software data that have been collected reside largely in proprietary databases held by DOD contractors, consultants, or commercial cost-modeling or market analysis firms.
The data lack common points of comparison and are incommensurable. Some of the data are known to be inconsistent with respect to rules for counting lines of code, functionality, effort, and defects. Other data are accumulated with no knowledge of their degree of consistency.

The data are incomplete. Most of the data collected address quantities such as size, level of effort, and number of defects, but do not take into account the environmental variables (e.g., cost drivers) associated with the quantities measured. Also, it is unclear to what extent the data collected are fully representative of project experience (e.g., perhaps only the good projects collect or report data).

Data are collected but are not systematically organized. Many DOD organizations collect software data for project monitoring and control, as well as environmental data such as those indicating process maturity. But the data are not organized and stored in a repository that could facilitate analysis to support DOD software engineering process- and product-improvement efforts and policy analyses.

Most of the available data generally support the conclusion that Ada is preferable for DOD warfighting applications, and the committee did not find any data to refute that conclusion. But for future policy analyses and initiatives to improve DOD software engineering practices, a stronger base of DOD software metrics data describing project outcomes would more than repay the investment necessary to develop it. Without more reliable data, decision making will have only a weak foundation. On an individual project level, DOD has endorsed the concept of using metrics data to improve software process management through its endorsement of the Software Engineering Institute's Capability Maturity Model, which includes quantitative process management as a key process. Within DOD, several local efforts to collect and analyze data for evaluating software add considerable value.
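The shortcomings listed above all amount to records that omit their own measurement conventions and context. A minimal sketch of a self-describing metrics record follows; the field names are illustrative inventions, not drawn from any DOD, SEI, or NASA-SEL standard.

```python
# Illustrative sketch of a metrics record that addresses the gaps noted
# above: it pins down the counting rules and carries the environmental
# cost drivers alongside the raw quantities, so records from different
# projects can be compared. All field names are HYPOTHETICAL.
from dataclasses import dataclass, field

@dataclass
class ProjectMetrics:
    project: str
    language: str
    sloc: int
    sloc_counting_rule: str        # e.g. "logical statements, no comments"
    effort_person_months: float
    defects: int
    defect_counting_rule: str      # e.g. "post-delivery, all severities"
    cost_drivers: dict = field(default_factory=dict)  # e.g. maturity, reuse

    def defects_per_ksloc(self) -> float:
        # Normalized quality figure, meaningful only when two records
        # share the same sloc_counting_rule and defect_counting_rule.
        return 1000 * self.defects / self.sloc

record = ProjectMetrics(
    project="example",
    language="Ada 95",
    sloc=120_000,
    sloc_counting_rule="logical statements, no comments",
    effort_person_months=310.0,
    defects=96,
    defect_counting_rule="post-delivery, all severities",
    cost_drivers={"process_maturity": "CMM level 3", "reuse_percent": 35},
)
print(f"{record.defects_per_ksloc():.1f} defects/KSLOC")
```

Recording the counting rules and cost drivers with each observation is what makes later cross-project and cross-language analysis defensible, which is precisely what the committee found missing.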
Some good examples are the Air Force Cost Analysis Agency, the Army Software Test and Evaluation Program, the Navy Undersea Warfare Center, and the Defense Logistics Agency's Columbus, Ohio, facility. However, DOD is not applying this practice at a more strategic level and is thus missing a major opportunity to improve its overall software cost-effectiveness. DOD should establish a sustained commitment to collecting and analyzing consistent software experience data. Foundations for such a program already exist. The Joint Logistics Commanders' "Practical Software Measurement" guidebook (DOD, 1996a) provides good case studies and guidelines for tailoring and focusing measurement of software capabilities on activities that add value. The Software Engineering Institute's software core metrics reports (Carleton et al., 1992) provide a foundation for collecting consistent data across projects and organizations. NASA's Software Engineering Laboratory (McGarry et al., 1994) provides a good model. Also, a good start toward a DOD software metrics initiative is represented by its National Software Data and Information Repository; this effort needs some improvement and has languished for lack of a sustained commitment. As DOD's chief information officer, the Assistant Secretary of Defense (C3I) is the logical focal point for establishing and sustaining a DOD-wide software metrics initiative. The initiative would need a precise scope, strong staffing, and focused management, but the examples above provide evidence that such investments can generate significant positive results.

NOTES

1. Barry M. Horowitz, president of MITRE Corporation, has noted the following with respect to requirements and architecture: "Both government and industry typically put almost all of their efforts into the initial performance and functionality of a program in spite of the fact that these will change substantially over the life of the system.
At the same time, there is a near-total lack of attention to an architectural baseline that would form a stable foundation for incorporating the system's changing requirements" (Horowitz, 1991, p. 10).
2. No technical standard exists for software architecture; however, the IEEE Software Engineering Standards Committee has created a planning group to investigate the issue. See "Standards Annual Report-1996," located at http://www.computer.org/standard/anreport/toc.htm.

3. Horowitz (1991) emphasizes the importance of architecture; more recently, a Defense Science Board Task Force emphasized the importance of software architecture and estimated that "a well-formulated architecture might reduce costs of changes/upgrades by 30-50%" (DOD, 1994a).

4. See the contents of the life-cycle architecture milestone and associated rationale in Boehm (1996).

5. While personnel capability and understanding of requirements are also important cost factors, these topics are excluded from the discussion below because they are mostly language-independent variables.

6. This trend is driven by the dual-use initiative within DOD (DOD, 1995b) and by legislative changes, namely the Federal Acquisition Reform Act of 1996.

7. One source (Jones, 1995) found that the mean value of source code statements per function point is 71 for Ada 83 and 49 for Ada 95, a 30 percent reduction. The amount of empirical evidence is small, however.

8. Other sets of criteria would apply for other classes of applications.

9. No independent evaluations of language features were located by the committee, prompting the analysis presented above in the section titled "Technical Evaluation of Ada 95 and Other Third-Generation Programming Languages" and in Appendix B of this report. Most evaluations have been carried out by government agencies or at their direction.

10. Some of the differences in efficiency and risk ratings may be due to increased Ada maturity, but the decline in availability/reliability is more likely due to differences in interpretation of the evaluation criteria.

11. Both of these authors are software consultants; Capers Jones is president of Software Productivity Research Inc., and Donald Reifer, formerly director of DOD's Ada Joint Program Office, is with Reifer Consultants Inc.

12. See Frazier and Bailey (1996) for a recent discussion of STARS demonstration project outcomes.