
Summary of a Workshop on Software-Intensive Systems and Uncertainty at Scale (2007)

Chapter 2: Summary of Workshop Discussions


SESSION 1: PROCESS, ARCHITECTURE, AND THE GRAND SCALE

Panelists: John Vu, Boeing, and Rick Selby, Northrop Grumman Corporation
Moderator: Michael Cusumano

Panelist presentations and general discussions at this session were intended to explore the following questions from the perspectives of software development for government and commercial aerospace systems:

• What are the characteristics of successful approaches to architecture and design for large-scale systems and families of systems?
• Which architecture ideas can apply when systems must evolve rapidly?
• What kinds of management and measurement approaches could guide program managers and developers?

Synergies Across Software Technologies and Business Practices Enable Successful Large-Scale Systems

Context matters in trying to determine the characteristics of successful approaches—different customer relationships, goals and needs, pacing of projects, and degree of precedent all require different practices. For example, different best practices may apply depending on what sort of system or application is under development. Examples discussed include commercial software products, IT and Internet financial services, airplanes, and government aerospace systems.

• Different systems and software engineering organizations have different customers and strategies. They may produce a variety of deliverables, such as a piece of software, an integrated hardware-software environment, or very large, complicated, interconnected, hardware-software networked systems.

• Different systems and software engineering organizations have different goals and needs. Product purposes vary—user empowerment, business operations, and mission capabilities. Projects can last from a month to 10 or 12 years. The project team can be one person or literally thousands. The customer agreement can be a license, service-level agreement, or contract. There can be a million customers or just one—for example, the government. The managerial focus can be on features and time to market; cycle time, workflow, and uptime; or reliability, contract milestones, and interdependencies; and so on.

• While some best practices, such as requirements and design reviews and incremental and spiral life cycles, are broadly applicable, specific practice usually varies. Although risk management is broadly applicable, commercial, financial, and government system developers may adopt different kinds of risk management. While government aerospace systems developers may spend months or years doing extensive system modeling, this may not be possible in other organizations or for other types of products. Commercial software organizations may focus on daily builds (that is, each day compiling and possibly testing the entire program or system incorporating any new changes or fixes); for aerospace systems, the focus may be on weekly or 60-day builds. Other generally applicable best practices that vary by market and organization include parallel small teams, design reuse, domain-specific languages, opportunity management, trade-off studies, and portability layers. These differences are driven by the different kinds of risks that drive engineering decisions in these sectors.

• Government aerospace systems developers, along with other very large software-development enterprises, employ some distinctive best practices. These include independent testing teams and, for some aspects of the systems under consideration, deterministic, simple designs. These practices are driven by a combination of engineering, risk-management, and contractual considerations.

In a very large organization, synergy across software technologies and business practices can contribute to success. (Very large in this case means over 100,000 employees throughout a supply chain doing systems engineering, systems development, and systems management; managing multiple product lines; and building systems with millions of lines of code.)

Participants explored the particular case of moderately precedented systems and major components with control-loop architectures. (Precedent refers to the extent to which we have experience with systems of a similar kind. More specifically, there can be precedent with respect to requirements, architecture and design, infrastructure choices, and so on. Building on precedent leads to routinization, reduced engineering risk, and better predictability, with lower variance, of engineering outcomes.) For systems of this kind there are technology and business practice synergies that have worked well. Here are some examples noted by speakers:

• Decomposition of large systems to manage risk. With projects that typically take between 6 and as many as 24 months to deliver, incremental decomposition of the system can reduce risk, provide better visibility into the process, and deliver capability over time. Decomposition accelerates system integration and testing.

• Table-based design, oriented to a system engineering view in terms of states and transitions, both nominal and off-nominal. This enables the use of clear, table-driven approaches to address nominal modes, failure modes, transition phases, and different operations at different parts of the system operations.

• Use of built-in, domain-specific (macro) languages in a layered architecture. The built-in, command-sequencing macro language defines table-driven executable specifications. This permits a relatively stable infrastructure and a run-time system with low-level, highly deterministic designs yet extensible functionality. It also allows automated testing of the systems.

• Use of precedented and well-defined architectures for the task management structure that incorporates a simple task structure, deterministic processing, and predictable timelines. For example, a typical three-task management structure might have high-rate (32 ms) tasks, minor-cycle (128 ms) tasks, and background tasks. The minor cycle reads and executes commands, formats telemetry, handles fault protection, and so forth. The high-rate cycle handles message traffic between the processors. The background cycle adds capability that takes a longer processing time. This is a reusable processing architecture that has been used for over 30 years in spacecraft and is aimed at the construction of highly reliable, deterministic systems. (A minimal illustrative sketch of this task structure appears after this list.)

• Gaining advantages from lack of fault proneness in reused components by achieving high levels of code, design, and requirement reuse. One example of code reuse was this: Across 25 NASA ground systems, 32 percent of software components were either reused or modified from previous systems (for spacecraft, reuse was said to be as high as 80 percent). Designs and requirements can also be reused. Typically, there is a large backward-compatibility set of requirements, and these requirements can be reused. Requirements reuse is very common and very successful even though the design and implementation might each be achieved differently. Design reuse might involve allocation of function across processors in terms of how particular algorithms are structured and implemented. The functions might be implemented differently in the new system, for example, in components rather than custom code or in different programming languages. This is an example of true design reuse rather than code reuse.
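To make the task-structure and table-driven ideas above more concrete, here is a minimal sketch of a three-rate-group executive with table-driven command dispatch. It is written in Python purely for exposition; the 32 ms and 128 ms rates echo the example above, while the command names, handlers, and cycle counts are invented for illustration and do not describe any actual flight software.

```python
"""Minimal sketch of a three-rate-group executive with table-driven command
dispatch, in the spirit of the spacecraft task structure described above.
Command names, handlers, and cycle counts are illustrative assumptions."""

import time

# Table-driven command set: the "macro language" is reduced here to a
# dictionary mapping command mnemonics to handler functions.
def set_mode(args):
    print(f"minor-cycle: mode set to {args}")

def dump_telemetry(args):
    print("minor-cycle: telemetry formatted and queued")

COMMAND_TABLE = {
    "SET_MODE": set_mode,        # nominal mode transitions
    "DUMP_TLM": dump_telemetry,  # telemetry formatting
}

# A small uplinked command sequence (an executable specification).
command_queue = [("SET_MODE", "SAFE"), ("DUMP_TLM", None)]

def high_rate_task():
    """32 ms rate group: shuttle message traffic between processors."""
    pass  # in a real system this would service inter-processor buffers

def minor_cycle_task():
    """128 ms rate group: read and execute commands, format telemetry,
    handle fault protection."""
    if command_queue:
        name, args = command_queue.pop(0)
        COMMAND_TABLE[name](args)  # table-driven dispatch

def background_task():
    """Background rate group: longer-running, lower-priority work."""
    pass

def executive(cycles=8, tick_ms=32):
    """Deterministic cyclic executive: every tick runs the high-rate task;
    every 4th tick (128 ms) runs the minor cycle; background work fills
    whatever time remains in the tick."""
    for tick in range(cycles):
        start = time.monotonic()
        high_rate_task()
        if tick % 4 == 0:
            minor_cycle_task()
        background_task()
        # Sleep off the remainder of the tick to hold a fixed timeline.
        elapsed = time.monotonic() - start
        time.sleep(max(0.0, tick_ms / 1000.0 - elapsed))

if __name__ == "__main__":
    executive()
```

The value of fixed rate groups is determinism: every activity is bound to a known period, so timelines are predictable and the command table can be exercised by automated tests.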

In addition to these synergies, it was suggested that other types of analyses could also contribute to successful projects. Data-driven statistical analyses can help to identify trends, outliers, and process improvements to reduce or mitigate defects. For example, higher rates of component interactions tend to be correlated with more faults, as well as more fault-correction effort. Risk analyses prioritize risks according to cost, schedule, and technical safety impacts. Charts that show project risk mitigation over time and desired milestones help to define specific tasks and completion criteria. It was suggested that each individual risk almost becomes a microcosm of its own project, with schedules and milestones and progressive mitigation of that risk to add value.
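As a small illustration of the kind of data-driven analysis described above, the following sketch correlates per-component interaction counts with recorded fault counts and flags components that sit well above the trend. All values and the outlier threshold are fabricated for illustration; a real analysis would draw on the project's own measurement repository.

```python
"""Sketch of a simple data-driven defect analysis: correlate each
component's interaction count with its fault count and flag outliers.
All values and thresholds are invented for illustration."""

from math import sqrt

# (component, interactions with other components, faults recorded)
DATA = [
    ("guidance",   12,  9),
    ("telemetry",   7,  4),
    ("power_mgmt",  3,  1),
    ("comms",      15, 14),
    ("thermal",     5,  2),
    ("fault_prot",  9, 11),
]

def pearson(xs, ys):
    """Pearson correlation coefficient, computed directly."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def least_squares(xs, ys):
    """Slope and intercept of the ordinary least-squares fit y = a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    slope = cov / var
    return slope, my - slope * mx

interactions = [d[1] for d in DATA]
faults = [d[2] for d in DATA]
print(f"interaction/fault correlation r = {pearson(interactions, faults):.2f}")

slope, intercept = least_squares(interactions, faults)
for name, inter, f in DATA:
    expected = slope * inter + intercept
    if f - expected > 2:  # arbitrary illustrative threshold
        print(f"outlier candidate: {name} ({f} faults vs. ~{expected:.1f} expected)")
```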

One approach to addressing the challenge of scale is to divide and conquer. Of course, arriving at an architectural design that supports decomposition is a prerequisite for this approach, which can apply across many kinds of systems development efforts. Suggestions included the following:

• Divide the organization into parallel teams. Divide very large 1,000-person teams into parallel teams; establish a project rhythm of design cycles and incremental releases. This division of effort is often based on a system architectural structure that supports separate development paths (an example of what is known as Conway's law—that software system structures tend to reflect the structures of the organizations they are developed by). Indeed, without agreement on architectural fundamentals—the key internal interfaces and invariants in a system—division of effort can be a risky step.

• Innovate and synchronize. Bring the parallel teams together, whether the task is a compilation or a component delivery and interface integration. Then stabilize, usually through some testing and usage period.

• Encourage coarse-grain reuse. There is a lot of focus on very fine-grain reuse, which tends to involve details about interfaces and dependencies; there is also significant coarse-grain opportunity to bring together both legacy systems and new systems. A coarse-grain approach makes possible the accommodation of systems at different levels of maturity. Examples of success in coarse-grain reuse are major system frameworks (such as e-commerce frameworks), service-based architectures, and layered architectures.

• Automate. Automation is needed in the build process, in testing, and in metrics.

Uncertainty is inherent in the development of software-intensive systems and must be reassessed constantly, because there are always unknowns and incomplete information. Waiting for complete information is costly, and it can take significant time to acquire the information—if it is possible to acquire it at all. Schedules and budgets are always limited and almost never sufficient. The goal, it was argued, should be to work effectively and efficiently within the resources that are available and discharge risks in an order that is appropriate to the goals of the system and the nature of its operating environment: Establish the baseline design, apply systematic risk management, and then apply opportunity management, constantly evaluating the steps needed and making decisions about how to implement them. Thus, it was suggested that appropriate incentives and analogous penalty mechanisms at the individual level and at the organization or supplier level can change behavior quickly. The goal is thus for the incentive structure to create an opportunity to achieve very efficient balance through a "self-managing organization." In a self-managing organization, it was suggested, the leader has the vision and is an evangelist rather than a micromanager, allowing others to manage using systematic incentive structures.

Some ways to enable software technology and business practices for large-scale systems were suggested:

• Creating strategies, architectures, and techniques for the development and management of systems and software, taking into account multiple customers and markets and a broad spectrum of projects, from small scale through large.

• Disseminating validated experiences and data for best practices and the circumstances when they apply (for example, titles like "Case Studies of Model Projects").

• Aligning big V waterfall-like systems engineering life-cycle models with incremental/spiral software engineering life-cycle models. (The V model is a V-shaped, graphical representation of the systems development life cycle that defines the results to be achieved in a project, describes approaches for developing these results, and links early stages, on the left side of the V, with evaluation and outcomes, on the right side. For example, requirements link to operational testing and detailed design links to unit testing.)

• Facilitating objective interoperability mechanisms and benchmarks for enabling information exchange.

• Lowering entry barriers for research groups and nontraditional suppliers to participate in large-scale system projects (Grand Challenges, etc.).

• Encouraging advanced degree programs in systems and software engineering.

• Defining research and technology roadmaps for systems and software engineering.

• Collaborating with foreign software developers.

Process, Architecture, and Very Large-Scale Systems

Remarks during this portion of the session were aimed at thinking outside the box about what the state of the art in architectures might look like in the future for very large-scale, complex systems that exhibit unpredictable behavior. The primary context under discussion was large-scale commercial aircraft development—the Boeing 777 has a few million lines of code, for example, and the new 787 has several million and climbing.

It was argued that very large-scale, highly complex systems and families of systems require new thinking, new approaches, and new processes to architect, design, build, acquire, and operate. It was noted that these new systems are going from millions of lines of code to tens of millions of lines of code (perhaps in 10 years to billions of lines of code and beyond); from hundreds of platforms (servers) to thousands, all interconnected by heterogeneous networks; from hundreds of vendors (and subcontractors) to thousands, all providing code; and from a well-defined user community to dynamic communities of interdependent users in changing environments. It was suggested that the issue for the future—10 or 20 years from now—is how to deal with the potential billion lines of code and tens of thousands of vendors in the very diverse, open-architecture-environment global products of the future, assembled from around the world. According to the forward-looking vision presented by speakers, these systems may have the following characteristics:

• Very large-scale systems would integrate multiple systems, each of them autonomous, having distinctive characteristics, and performing its own functions independently to meet distinct objectives.

• Each system would have some degree of intelligence, with the objectives of enabling it to modify its relationship to other component systems (in terms of functionality) and allowing it to respond to changes, perhaps unforeseen, in the environment. When multiple systems are joined together, the significant emergent capabilities of the resulting system as a whole would enable common goals and objectives.

• Each very large-scale system would share information among the various systems in order to address more complex problems.

• As more systems are integrated into a very large-scale system, the channels connecting all systems through which information flows would become more robust and continue to grow and expand throughout the life cycle of the very large-scale system.

It was argued that a key benefit of a very large-scale system is the interoperability between operational systems that allows decision makers to make better, more informed decisions more quickly and accurately. From a strategic perspective, a very large-scale system is an environment where operational systems have the ability to share information among geographically distributed systems with appropriate security and to act on the shared information to achieve their desired business goals and objectives. From an operational perspective, a very large-scale system is an environment where each operational subsystem performs its own functions autonomously, consistent with the overall strategy of the very large-scale system.

The notion of continuous builds or continuous integration was also discussed. Software approaches that depend on continuous integration—that is, where changes are integrated very frequently—require processes for change management and integration management. These processes are incremental and build continuously from the bottom up to support evolution and integration, instead of from the top down, using a plan-driven, structured design. They separate data and functions for faster updates and better security. To implement these processes, decentralized organizations and an evolving concept of operations are required to adapt quickly to changing environments.
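The change-management and integration-management discipline that continuous integration demands can be illustrated with a deliberately simplified sketch of an integration gate: each pending change is accepted into the shared baseline only if an automated build and test run succeeds. The sketch assumes a git working tree with a make-based build and a "test" target; the file names are placeholders, and the sketch stands in for whatever toolchain a real program would actually use.

```python
"""Simplified continuous-integration gate: apply a change, rebuild,
re-run the automated tests, and accept or revert accordingly.  The paths
and build commands are placeholder assumptions for illustration."""

import subprocess
from dataclasses import dataclass

@dataclass
class Change:
    author: str
    description: str
    patch_file: str  # placeholder path to the change set

def run(*cmd) -> bool:
    """Run a command; return True on success."""
    return subprocess.run(cmd).returncode == 0

def integrate(baseline: str, change: Change) -> bool:
    """Apply one change to the baseline checkout, rebuild, and re-run the
    tests; revert and reject the change if any step fails."""
    if not run("git", "-C", baseline, "apply", change.patch_file):
        print(f"REJECT {change.description}: patch does not apply cleanly")
        return False
    built = run("make", "-C", baseline) and run("make", "-C", baseline, "test")
    if not built:
        run("git", "-C", baseline, "checkout", "--", ".")  # revert working tree
        print(f"REJECT {change.description}: build or tests failed")
        return False
    run("git", "-C", baseline, "commit", "-am", change.description)
    print(f"ACCEPT {change.description}")
    return True

if __name__ == "__main__":
    pending = [Change("dev-a", "add telemetry decoder", "changes/0001.patch")]
    for change in pending:
        integrate("baseline", change)
```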

The overall architectural framework for large-scale systems described by some participants in this session consists of five elements:

• Governance. These describe the rules, regulations, and change management that control the total system.

• Operational. These describe how each operational system can be assembled from preexisting or new components (building blocks) to operate in its own new environment so it can adapt to change.

• Interaction. These describe the communication (information pipeline) and interaction between operational systems that may affect the very large system and how the very large system will react to the inputs from the operational systems.

• Integration and change management. These describe the processes for managing change and the integration of systems that enable emergent capabilities.

• Technical. These depict the technology components that are necessary to support these systems.

It was suggested that large-scale systems of the future that will cope with scale and uncertainty would be built from the bottom up, starting with autonomous building blocks to enable the rapid assembly and integration of these components to effectively evolve the very large-scale system. The architectural framework would ensure that each building block would be aligned to the total system. Building blocks would be assembled by analyzing a problem domain through the lens of an operational environment or mission for the purpose of creating the characteristics and functionality that would satisfy the stakeholders' requirements. In this mission-focused approach, all stakeholders and modes of operations should be clearly identified; different user viewpoints and needs should be gathered for the proposed system; and stakeholders must state which needs are essential, which are desirable, and which are optional. Prioritization of stakeholders' needs is the basis for the development of such systems; vague and conflicting needs, wants, and opinions should be clarified and resolved; and consensus should be built before assembling the system.

At the operational level, the system would be separated from current rigid organization structures (people, processes, technology) and would evolve into a dynamic concept of operation by assembling separate building blocks to meet operational goals. The system manager should ask: What problem will the system solve? What is the proposed system used for? Should the existing system be improved and updated by adding more functionality, or should it be replaced? What is the business case?
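As a toy illustration of assembling a system from building blocks against prioritized stakeholder needs, the sketch below selects blocks that cover all essential needs and reports any gaps. The needs, priorities, and block capabilities are invented for illustration and are not drawn from the workshop.

```python
"""Toy sketch of mission-focused assembly: prioritized stakeholder needs
drive which autonomous building blocks are selected.  Needs, priorities,
and block capabilities below are invented for illustration only."""

from dataclasses import dataclass, field

@dataclass
class Need:
    name: str
    priority: str  # "essential", "desirable", or "optional"

@dataclass
class BuildingBlock:
    name: str
    provides: set = field(default_factory=set)  # needs this block satisfies

needs = [
    Need("track_assets", "essential"),
    Need("share_situational_picture", "essential"),
    Need("historical_reporting", "desirable"),
    Need("mobile_access", "optional"),
]

catalog = [
    BuildingBlock("sensor_fusion", {"track_assets"}),
    BuildingBlock("message_bus", {"share_situational_picture"}),
    BuildingBlock("archive_service", {"historical_reporting"}),
]

def assemble(needs, catalog):
    """Select blocks that cover every essential need; report uncovered gaps."""
    wanted = {n.name for n in needs if n.priority == "essential"}
    selected, covered = [], set()
    for block in catalog:
        if block.provides & wanted:
            selected.append(block)
            covered |= block.provides
    return selected, wanted - covered

blocks, gaps = assemble(needs, catalog)
print("selected blocks:", [b.name for b in blocks])
print("unmet essential needs:", gaps or "none")
```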

To realize this future, participants suggested that research is needed in several areas, including these:

• Governance (rules and regulations for evolving systems).
• Interaction and communication among systems (including the possibility of negative interactions between individual components and the integrity, security, and functioning of the rest of the system).
• Integration and change management.
• User's perspective and user-controlled evolution.
• Technologies supporting evolution.
• Management and acquisition processes.
• An architectural structure that enables emergence.
• Processes for decentralized organizations structured to meet operational goals.

SESSION 2: DOD SOFTWARE CHALLENGES FOR FUTURE SYSTEMS

Panelists: Kristen Baldwin, Office of the Under Secretary of Defense for Acquisition, Technology and Logistics, and Patrick Lardieri, Lockheed Martin
Moderator: Douglas Schmidt

Panelist presentations and general discussions during this session were intended to explore two questions, from two perspectives: that of the government and that of the government contractor:

• How are challenges for software in DoD systems, particularly cyber-physical systems, being met in the current environment?
• What advancements in R&D, technology, and practices are needed to achieve success as demands on software-intensive system development capability increase, particularly with respect to scale, complexity, and the increasingly rapid evolution in requirements (and threats)?

DoD Software Engineering and System Assurance

An overview of various activities relating to DoD software engineering was given. Highlights from the presentation and discussion follow. The recent Acquisition & Technology reorganization is aimed at positioning systems engineering within the DoD, consistent with a renewed emphasis on software. The director of Systems and Software Engineering now reports directly to the Under Secretary of Defense for Acquisition and Technology. The mission of Systems and Software Engineering, which addresses evolving system- and software-engineering challenges, is as follows:

• Shape acquisition solutions and promote early technical planning.
• Promote the application of sound systems and software engineering, developmental test and evaluation, operational test and evaluation to determine operational suitability and effectiveness, and related technical disciplines across DoD's acquisition community and programs.
• Raise awareness of the importance of effective systems engineering and raise program planning and execution to the state of the practice.
• Establish policy, guidance, best practices, education, and training in collaboration with the academic, industrial, and government communities.

• Provide technical insight to program managers and leadership to support decision making.

DoD's Software Center of Excellence is made up of a community of participants including industry, DoD-wide partnerships, national partnerships, and university and international alliances. It will focus on supporting acquisition; improving the state of the practice of software engineering; providing leadership, outreach, and advocacy for the systems engineering communities; and fostering resources that can meet DoD goals. These are elements of DoD's strategy for software, which aims to promote world-class leadership for DoD software engineering.

Findings from some 40 recent program reviews were discussed. These reviews identified seven systemic issues and issue clusters that had contributed to DoD's poor execution of its software program, which were highlighted in the session discussion. The first issue is that software requirements are not well defined, traceable, and testable. A second issue cluster involves immature architectures; integration of commercial-off-the-shelf (COTS) products; interoperability; and obsolescence (the need to refresh electronics and hardware). The third cluster involves software development processes that are not institutionalized, missing or incomplete planning documents, and inconsistent reuse strategies. A fourth issue is software testing and evaluation that lacks rigor and breadth. The fifth issue is lack of realism in compressed or overlapping schedules. The sixth issue is that lessons learned are not incorporated into successive builds—they are not cumulative. Finally, software risks and metrics are not well defined or well managed.

To address these issues, DoD is pursuing an approach that includes the following elements:

• Identification of software issues and needs through a software industrial base assessment, a National Defense Industrial Association (NDIA) workshop on top software issues, and a defense software strategy summit. The industrial base assessment, performed by CSIS (Center for Strategic and International Studies, Defense-Industrial Initiatives Group, 2006, Software Industrial Base Assessment: Phase I Report, October 4), found that the lack of comprehensive, accurate, timely, and comparable data about software projects within DoD limits the ability to undertake any bottom-up analysis or enterprise-wide assessments about the demand for software. Although the CSIS analysis suggests that the overall pool of software developers is adequate, the CSIS assessment found an imbalance in the supply of and demand for the specialized, upper echelons of software developer/management cadres. These senior cadres can be grown, but it takes time (10 or more years) and a concerted strategy.

In the meantime, management/architecture/systems engineering tools might help improve the effectiveness of the senior cadres. Defense business system/COTS software modification also places stress on limited pools of key technical and management talent. Moreover, the true cost and risk of software maintenance deferral are not fully understood.

• Creation of opportunities and partnerships through an established network of government software points of contact; chartering of the NDIA Software Committee; information exchanges with government, academia, and industry; and planning of a systems and software technology conference. Top issues emerging from the NDIA Defense Software Strategy Summit in October 2006 included establishment and management of software requirements, the lack of a software voice in key system decisions, inadequate life-cycle planning and management, the high cost and ineffectiveness of traditional software verification methods, the dearth of software management expertise, inadequate technology and methods for assurance, and the need for better techniques for COTS assessment and integration.

• Execution of focused initiatives such as Capability Maturity Model Integration (CMMI) support for integrity and acquisition, a CMMI guidebook, a handbook on engineering for system assurance, a systems engineering guide for systems of systems (SoSs), the provision of software support to acquisition programs, and a vision for acquisition reform. SoSs to be used for defense require special considerations for scale (a single integrated architecture is not feasible), ownership and management (individual systems may have different owners), legacy (given budget considerations, current systems will be around for a long time), changing operations (SoS configurations will face changing and unpredictable operational demands), criticality (systems are integrated via software), and role of the network (SoSs will be network-based, but budget and legacy challenges may make implementation uneven). To address a complex SoS, an initial (incremental) version of the DoD's SoS systems engineering guide is being piloted; future versions will address enterprise and net-centric considerations, management, testing, and sustaining engineering.

The issue of system assurance—reducing the vulnerability of systems to malicious tampering or access—was noted as a fundamental consideration, to the point that cybertrust considerations can be a fundamental driver of requirements, architecture and design, implementation practice, and quality assurance. (A separate National Research Council study committee is exploring the issue of cybersecurity research and development broadly, and its report, Toward a Safer and More Secure Cyberspace, will be published in final form in late 2007. See http://cstb.org/project_cybersecurity for more information.)

Because current assurance, safety, and protection initiatives are not aligned, a comprehensive strategy for system assurance initiatives is being developed, including standards activities and guidance to put new methods into practice.

One additional challenge for DoD is that, given its shortage of software resources and critical dependence on software, it cannot afford to have stovepipes in its community. To that end, the DoD Software Center of Excellence is intended to be a focal point for the community. Areas to be explored include, for example, agile methods, software estimation (a harder problem to address for unprecedented systems than for precedented ones), and software testing.

Challenges in Developing DoD Cyber-physical Systems

This session explored challenges in building cyber-physical systems for DoD—systems that integrate physical processes and computer processes in a real-time distributed fashion—from the perspective of a large contractor responsible for a wide range of systems and IT services. Cyber-physical systems are increasingly systems- and software-intensive—for example, in 1960, only 8 percent of the F-4 fighter capability was provided by software; in 2000, 85 percent of the F-22 fighter capability was provided by software. Such systems are distributed, real-time systems with millions of lines of code, driven by multiple sensors reporting at a variety of timescales and by multiple weapon system and machinery control protocols. Current examples of cyber-physical systems include the Joint Strike Fighter (JSF) and Future Combat Systems (FCS); examples of forthcoming technologies would be teams of autonomous robots or teams of small, fast surface ships. Characteristics of these systems exemplify the challenges of uncertainty and scale; they include

• Large scale—tens of thousands of functional and performance requirements, 10 million lines of code, and 100 to 1,000 software configuration items;
• Simultaneous conflicting performance requirements—real-time processing, bounded failure recovery, security;
• Implementation diversity—programming languages, operating systems, middleware, complex deployment architectures, 20- to 40-year system life cycles, stringent certification standards; and
• Complex deployment architectures—systems of systems; mixed wired, wireless, and satellite networks; multi-tiered servers; personal digital assistants (PDAs) and workstations; and multiple system configurations.

These systems are challenging, complex, and costly. Accordingly, system design challenges are frequently simplified by deferring or eliminating capability to bound costs and delivery dates. Nevertheless, cost overruns and schedule delays are common. The GAO reported that in fiscal year 2003 (FY03) the DoD spent $21 billion on research, development, testing, and evaluation (RDT&E) for new warfighting systems; about 40 percent of that may have been spent on reworking software to remedy quality-related issues. (Government Accountability Office (GAO), 2004, "Defense acquisitions: Stronger management practices are needed to improve DOD's software-intensive weapon acquisitions," Report to the Committee on Armed Services, U.S. Senate, GAO-04-393, March.) For the F/A-22, the GAO reported that Air Force officials do not understand avionics software instability well enough to predict when they will be able to resolve its problems.

(See Government Accountability Office (GAO), 2003, "Tactical aircraft: Status of the F/A-22," Statement of Alan Li, Director, Acquisition and Sourcing Management, Testimony Before the Subcommittee on Tactical Air and Land Forces, Committee on Armed Services, House of Representatives, GAO-03-603T, February. See also GAO, 2005, "Tactical aircraft: F/A-22 and JSF acquisition plans and implications for tactical aircraft modernization," Statement of Michael Sullivan, Director, Acquisition and Sourcing Management Issues, Testimony Before the Subcommittee on AirLand, Committee on Armed Services, U.S. Senate, GAO-05-519T, April 6, which concluded: "The original business case elements—needs and resources—set at the outset of the program are no longer valid, and a new business case is needed to justify future investments for aircraft quantities and modernization efforts. The F/A-22's acquisition approach was not knowledge-based or evolutionary. It attempted to develop revolutionary capability in a single step, causing significant technology and design uncertainties and, eventually, significant cost overruns and schedule delays"; and GAO, 2007, "Tactical aircraft: DOD needs a joint and integrated investment strategy," Report to the Chairman, Subcommittee on Air and Land Forces, Committee on Armed Services, House of Representatives, GAO-07-415, April, which concluded: "We have previously recommended that DOD develop a new business case for the F-22A program before further investments in new aircraft or modernization are made. DOD has not concurred with this recommendation, stating that an internal study of tactical aircraft has justified the current quantities planned for the F-22A. Because of the frequently changing OSD-approved requirements for the F-22A, repeated cost overruns, significant remaining investments, and delays in the program we continue to believe a new business case is required and that the assumptions used in the internal OSD study be validated by an independent source.")

Because of the complex interrelationships between parts of these cyber-physical systems and the high degree of interactive complexity, piecewise deployment of partial systems is not helpful. An example given was a situation regarding the JSF, where changing an instruction memory layout to accommodate built-in test processing unexpectedly damaged the system's ability to meet timing requirements.

It was suggested that as a result of experiences such as the one with the Aegis Combat System, where Aegis Baseline 6, Phase I, deployment was delayed for months because of integration problems between two independently designed cyber-physical systems, certification communities have become extremely conservative and require a static configuration for certification. (Note that the previous discussion noted the desirability of not having a static configuration in early stages of development.)

Despite software's centrality and criticality in DoD cyber-physical systems and in warfighting in general, participants suggested that it is underemphasized in high-level management reviews. For example, the Quadrennial Defense Review calls for more complex systems for advanced warfighting capabilities but mentions software only twice.

Some inherent scientific and research challenges underlying engineering and engineering management of DoD cyber-physical systems cited by workshop participants include these:

• The management of knowledge fragmentation—fragmentation among people and teams, geographic areas, organizations, and temporal boundaries;
• Design challenges—the many problems that cannot be clearly defined without specifying the solution and for which every solution is a one-time-only operation (these are sometimes referred to as "wicked problems"); and
• Team collaboration complexity—thousands of requirements, huge teams (hundreds or thousands of engineers), with frequent turnover and highly variable ranges of skill.

With respect to knowledge fragmentation (that is, knowledge split across individual minds, knowledge split across different phases of the development cycle and the life cycle, knowledge split across different artifacts, and knowledge split across various components of an organization), system engineering today is a concurrent top-down process. There is ad hoc coordination among engineers (domain engineers, system engineers, software engineers) at different levels and loose semantic coupling between design and specifications. There are some problems where it is difficult to say what to do without specifying how and thereby committing to an implementation; participants noted that current tools do not generally help manage the tremendous interdependence between the specification of the problem and the realization of a solution. Solutions are not necessarily right or wrong, and designers have to iterate rapidly, switching repeatedly between thinking about problem and solution concerns, along the lines of Fred Brooks' description of throw-away prototyping (Frederick P. Brooks, 1995, The Mythical Man-Month: Essays in Software Engineering, Addison-Wesley Professional).

The process is slow, it is error prone because interaction is ad hoc, it uses imprecise English prose, and automated checking is relegated to the lowest level where formal specifications exist. Matters become even worse as the program or system grows in size and complexity.

Large teams managing complex systems must grapple with the issues of large scale in a complex collaborative environment. Interactive complexity has two dimensions: coupling (tight or loose) and interactions (complex or linear). Systems with high interactive complexity—for example, nuclear power plants and chemical plants—possess numerous hidden interactions that can lead to system accidents and hazards. Interactive complexity can complicate reuse. Well-known cyber-physical system accidents cited by participants included the Ariane 5, which reportedly reused a module developed for Ariane 4. That module assumed that the horizontal velocity component would not overflow a 16-bit variable. This was true for Ariane 4 but not for Ariane 5, leading to self-destruction roughly 40 seconds after the launch. (J.L. Lions, 1996, "Ariane 5 Flight 501 Failure: Report by the Inquiry Board," Paris, July 19; downloaded from http://www.ima.umn.edu/~arnold/disasters/ariane5rep.html on March 15, 2007.)
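The Ariane 5 example turns on an implicit range assumption in reused code. The following sketch, in Python purely for exposition, contrasts a conversion that silently wraps into a signed 16-bit range with one that makes the assumption explicit; the velocity values are invented and are not the actual flight profiles.

```python
"""Illustrative sketch of the Ariane-style hazard: a reused conversion
routine assumes a value always fits in a signed 16-bit integer.  The
numbers below are invented for illustration."""

INT16_MIN, INT16_MAX = -32768, 32767

def to_int16_unchecked(value: float) -> int:
    """Legacy-style conversion: silently wraps into 16 bits, the way a
    raw hardware store would."""
    return ((int(value) + 32768) % 65536) - 32768

def to_int16_checked(value: float) -> int:
    """Defensive conversion: make the range assumption explicit."""
    if not INT16_MIN <= value <= INT16_MAX:
        raise OverflowError(f"horizontal bias {value} exceeds 16-bit range")
    return int(value)

# The reused module's assumption held for the original vehicle...
original_vehicle_bias = 28000.0   # fits in 16 bits
# ...but not for the faster new vehicle.
new_vehicle_bias = 51000.0        # exceeds 16 bits

print(to_int16_unchecked(original_vehicle_bias))  # 28000, as expected
print(to_int16_unchecked(new_vehicle_bias))       # wraps to a nonsense value
try:
    to_int16_checked(new_vehicle_bias)
except OverflowError as err:
    print("checked conversion rejected the input:", err)
```

In the actual failure, the unprotected conversion raised an exception that shut down the inertial reference system; the point of the sketch is only that the reuse assumption was implicit and was never revalidated for the new operating envelope.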

Cyber-physical systems typically have high interactive complexity. New systems have more resource sharing, which leads to hidden dependencies. There is limited design-time support to understand or reduce interactive complexity. Modeling and analytic techniques are difficult to employ and often are underutilized. Simulations may not capture the system that is actually built; diagrams are not sufficient to convey all consequences of decisions. Thus, present cyber-physical systems rely on human ingenuity at design time and extensive system testing to manage interactive complexity. They also rely on particular knowledge of and experience with specific vendor-sourced components in the "technology stack." For this reason, the structure of the stack tends to resist change, impeding architectural progress and increasing complexity in these systems. The resulting long and costly development efforts are expected to run into system accidents.

Elements of a research agenda for cyber-physical systems that perform predictably were discussed. One goal of such research would be to find ways to manage the uncertainty that arises from the highly coupled nature and interactive complexity of system design at very large scale. Two areas were the focus of discussions at this session:

• Platform technology. One example of a platform technology would be generation of custom run-time infrastructures. Current run-time infrastructure is deployed in general-purpose layers that are not designed for specific applications. It is a significant challenge to configure controls across the layers to achieve performance requirements; analysis is difficult because of many hidden dependencies and because of complex interfaces and capabilities. The generation of custom run-time infrastructure (e.g., WebSprocket) reduces system complexity. Another such technology would be certifiable dynamic resource management services. Current certification processes are based on extensive analysis and testing (hundreds of man-years) of fixed system configurations. Furthermore, these are human-intensive evaluation processes with limited technological support that occur over the design, development, and production life cycle. There is no way to achieve the same level of assurance for untested system configurations that may be generated by an adaptive system in the run-time environment—and these are the kinds of systems that are likely to be deployed in the future.

• System design tools. Model-centric system design would allow evaluation of the design before final implementation by developing prototyping systems and using forms of static verification. Domain-specific modeling languages enable unambiguous system specifications. Model generation tools could be used to make models the center of the development process, synchronized with software artifacts. Tools that enable automated characterization of the behavior of third-party and COTS applications would be helpful. And program transformation tools could be used to make the legacy code base and COTS software compatible with new platforms. (A small illustrative sketch of model-centric checking appears after this list.)
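As a deliberately small illustration of the model-centric idea, the sketch below treats a state-transition table as the specification and runs two static checks against it before any implementation exists. The mode names, events, and checks are invented for illustration and are not from the workshop presentations.

```python
"""Toy model-centric check: a state-transition table serves as an
unambiguous specification, and simple static verification runs against
the model before any code is written.  Modes and events are invented."""

from collections import deque

# The "model": {state: {event: next_state}}
MODEL = {
    "STANDBY":   {"arm": "ARMED"},
    "ARMED":     {"launch": "ACTIVE", "disarm": "STANDBY"},
    "ACTIVE":    {"fault": "SAFE_MODE", "complete": "STANDBY"},
    "SAFE_MODE": {"reset": "STANDBY"},
}
INITIAL = "STANDBY"

def undefined_targets(model):
    """Static check 1: every transition must land on a defined state."""
    return [(s, e, t) for s, trans in model.items()
            for e, t in trans.items() if t not in model]

def unreachable_states(model, initial):
    """Static check 2: every state must be reachable from the initial state."""
    seen, frontier = {initial}, deque([initial])
    while frontier:
        for target in model[frontier.popleft()].values():
            if target in model and target not in seen:
                seen.add(target)
                frontier.append(target)
    return set(model) - seen

if __name__ == "__main__":
    print("undefined transition targets:", undefined_targets(MODEL) or "none")
    print("unreachable states:", unreachable_states(MODEL, INITIAL) or "none")
```

From the same table one could generate a skeleton implementation or test cases, which is the sense in which the model, rather than the code, becomes the artifact that everything else is synchronized with.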

In addition to platform technologies and design tools, cultural elements are also needed to address the challenges of cyber-physical system development. Speakers noted some aspects of these elements:

• Independent, neutral-party benchmarking and evaluation—speakers believed that there is currently insufficient funding for this type of work.
• Development challenges that are realistic and at scale, allowing credible evaluation of technology solutions (measure technologies, not just artifacts).
• System design education as part of the undergraduate curriculum.

SESSION 3: AGILITY AT SCALE

Panelists: Kent Beck and Cynthia Andres, Three Rivers Institute
Moderator: Douglas Schmidt

This session addressed the application and applicability of extreme programming's "agile techniques" to very large, complex systems from the perspectives of technology, development practices, and social psychology. For this session, both speakers and workshop participants were asked the following question:

• How can the engineering and management values that the "agile" community has identified be achieved for larger-scale systems and for systems that must evolve in response to rapidly changing requirements?

Values and Sponsorship

Participants noted that extreme programming (XP)—an agile software development methodology—was one of the first methodologies to be explicit about the value system behind its approach and about what is fundamental to this perspective on software development. (For a brief summary of some of the underlying values in agile software development, see "The Agile Manifesto," available online at http://www.agilemanifesto.org/. These values include a focus on individuals over process, working software over documentation, and responsiveness to change over following a particular plan.) Different development approaches have their own underlying values. Speakers argued that over the 10 years or so since the coining of extreme programming, the key to success seems to be sponsorship—senior-level commitment to adopting XP ideas within an organization. Trying extreme programming can be disruptive, stirring up internal tension and controversy. Effective sponsors advocate among their peers and mitigate these effects. Senior-level sponsors also can help acquire resources to foster teamwork and communication.

When trying extreme programming, it was suggested that people tend to focus initially on the more visible and explicit changes to practices, such as pair programming, weekly releases, sitting together in open rooms, or using a test-first approach. If a fundamental value shift is taking place, practices will change accordingly. However, under pressure, people tend to revert to their old ways. Without support at higher levels for changes in approach and underlying values, and without sustaining that support through periods of organizational discomfort during the transition, simply trying to put new or different practices in place is not very effective. One speaker noted that senior-level commitment and sponsorship are therefore key to changing values and conveying these changes to larger groups and organizations.

Human Issues in Software Development

One speaker noted that many of the challenges in software development are human issues: People are the developers and people write the software.

Limitations to what can be done with software are often limitations of human imagination and of how much complexity can be managed in one person's mind. Innovation requires fresh ideas, and if all parties are thinking similarly, not as many ideas are generated. Many problems have multiple solutions—a key is to sort out which solutions are sufficient and doable. It was suggested that one way to promote innovation is to encourage diversity: Small projects with diverse groups can be effective in that fostering interaction and coordination across disciplines often results in a stronger, richer set of ideas to choose from.

It was suggested that having interesting problems to work on is a nonmonetary motivator for many software and computer science practitioners. A good example of nonmonetary incentives is open source technology. Participants also noted that marketing innovation, intellectual curiosity, and creativity as organizational goals is important. The perception or image of the work can be crucial to attracting new hires, who may know that the organization's work involves a lot of processes, requires great care, and takes a long time, but not necessarily that it is also interesting and creative work.

Trust, Communication, and Risk

Speakers argued that much of the effort in extreme programming comes down to finding better ways of building trust. Examples were given of ways to begin conversations and to put people in contact with one another in order to establish trust. These include the techniques of Appreciative Inquiry (talking about what works), World Café (acquiring the collective knowledge, insight, and synergies of a group in a fairly short time), and Open Space (people talk about the concerns that they have and the issues that matter to them in breakout sessions whose highlights are reported to the rest of the group). (See the Appreciative Inquiry Commons, http://appreciativeinquiry.case.edu/; the World Café, http://www.theworldcafe.com/; and Open Space, http://www.openspaceworld.org/.) Some of the technical aspects of extreme programming are useful ways for programmers to demonstrate their trustworthiness. It was suggested that enhancing communication—in part, by using these communication techniques—could be useful to DoD. Tools to make developers' testing activities more visible, not just to themselves but also to teammates, contribute to accountability and transparency in development and increase communication as well.

There are technical things that can be done to reduce risk in projects. Some risk-reduction principles persist throughout all of extreme programming (XP).

However, the bulk of the XP experience is not at the scale that DoD systems manifest. Therefore, one issue raised at the meeting was whether and how the agile programming/extreme programming experience can scale. Three risk-reduction techniques were mentioned:

• Reduce the amount of work that is half done. Half-done work is an inherent risk. The feedback cycle has not been closed. No value has yet been received from the effort that has been expended; the mix of done and undone work occupies and distracts people. By gradually reducing the inventory of half-done work, a project can be made to run more smoothly with lower overall risk.

• Find ways to defer the specification of requirement detail. If a project experiences requirement churn, the chances are that too much detail has been specified too soon. There is a clear case to be made for much more carefully specifying the goals of a project up front but not the means for accomplishing those goals.

• Test sooner. The longer a bug lives, the more expensive it becomes. One way of addressing that situation and improving the overall effectiveness of development is finding ways to validate software sooner, such as by developer testing. Integration is part of that testing.

Several research topics were discussed at this session:

• Techniques for communication. Examine how teams actually communicate and how they could communicate more effectively. (One example is some research under way at the University of Sheffield, where psychologists watch teams using XP methodologies and then report on the psychological effects of using XP, as opposed to other metrics such as defect rates; results suggest that people are happier doing things this way. See http://www.shef.ac.uk/dcs/research/groups/vt/research/observatory.html for more information.)

• Encouragement of multidisciplinary work and collaboration.

• Learning how to value simplicity. In complex systems, fewer components in the architecture mean fewer possible unpredictable interactions.

• Empirical research in software. One example of the results of such research was noted—namely, the appearance of power-law distributions for object usage in software. That is, many objects are used only once, some are used multiple times, and very few are used very frequently. Exploring the implications of this and other phenomena may provide insight into development methodologies and how to manage complexity and scale. (A small illustrative tally appears after this list.)

• Testing and integration techniques. In a complicated deployment environment, finding better ways to get more assurance sooner in the cycle and more frequently should improve software development as a whole.
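The power-law observation above can be made concrete with a small tally of how often each object is used; the counts below are invented, and a real study would mine them from a code base or from execution traces.

```python
"""Illustrative tally of object-usage frequencies.  The usage counts are
invented; a real study would mine them from code or execution traces."""

from collections import Counter

# Hypothetical reference counts per object/class in some system.
usage_counts = [1, 1, 1, 1, 1, 1, 2, 1, 3, 1, 2, 1, 5, 1, 2, 9, 1, 2, 34, 1]

# Frequency of frequencies: how many objects were used exactly k times?
freq_of_freq = Counter(usage_counts)
for k in sorted(freq_of_freq):
    print(f"used {k:>2} times: {freq_of_freq[k]} objects")

# The heavy-tailed shape shows up directly: most objects are used once,
# while a handful account for most of the uses.
print(f"{freq_of_freq[1]}/{len(usage_counts)} objects were used exactly once")
print(f"most heavily used object accounts for {max(usage_counts)} uses")
```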

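The power-law observation in the empirical-research item above can be made concrete with a small sketch. The example below is purely illustrative (the trace and the object names are invented, not workshop data): it tallies how often each object in a hypothetical usage trace is touched and prints the heavy-tailed shape, with many objects used exactly once and only a handful used heavily.

```python
from collections import Counter

# Hypothetical usage trace: each entry names the object touched by an operation.
# In a real empirical study this would come from instrumented runs of a system.
trace = (["config"] * 500 + ["logger"] * 120 + ["session"] * 40 +
         ["cache"] * 12 + ["parser"] * 3 +
         [f"helper_{i}" for i in range(300)])   # 300 objects used exactly once

usage = Counter(trace)               # object -> number of uses
by_count = Counter(usage.values())   # number of uses -> how many objects

for count in sorted(by_count):
    print(f"{by_count[count]:4d} objects used {count} time(s)")

# A rough power-law signature: the number of objects used k times falls off
# steeply with k (hundreds of one-shot objects, a handful of heavily reused ones).
```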
Session 4: Quality and Assurance with Scale and Uncertainty

Panelists: Joe Jarzombek, Department of Homeland Security; Kris Britton, National Security Agency; Mary Ann Davidson, Oracle Corporation; Gary McGraw, Cigital
Moderator: William Scherlis

Panelist presentations and general discussions in this session were intended to address the following questions, from government and industry perspectives:

• What are the particular challenges for achieving assurances for software quality and cybersecurity attributes as scale and interconnection increase?

• What are emerging best practices and technologies?

• What kinds of new technologies and practices could assist? This includes especially interventions that can be made as part of the development process rather than after the fact. Interventions could include practices, processes, tools, and so on.

• How should cost-effectiveness be assessed?

• What are the prospects for certification, both at a comprehensive level and with respect to particular critical quality attributes?

The presentations began by describing the goals and activities of two federal programs in software assurance and went on to explore present and future approaches.

Software Assurance Considerations and the DHS Software Assurance Program

The U.S. Department of Homeland Security (DHS) has a strategic program (discussed in more detail later in this section) to promote integrity, security, and reliability in software.14 This program for software assurance emphasizes security; the risk exposures associated with reliance on software leave a lot of room for improvement.

14  The definition of software assurance that DHS uses comes out of the Committee on National Security Systems: namely, it is the level of confidence that software is free from vulnerabilities, either intentionally designed into the software or accidentally inserted at any time during its life cycle, and that the software functions in the intended manner. More generally, "assurance" is about confidence—that is, it is a human judgment, not an objective test/verification/analytic result but rather a judgment based on those results.

In industry as well as government there is increased concern about security. Security is difficult to measure: it is difficult to quantify or to assess relative progress in improving it. Participants noted the need for more comprehensive diagnostic capabilities and standards on which to base assurance claims. Two suggestions were made:

• The software assurance tool industry has not been keeping pace with changes in software systems—tools that provide point solutions are available, but much of the software industry cannot apply them. As testing processes become more complex, costly, and time consuming, the testing focus frequently narrows to functional testing.

• Tools are not interoperable. This leads to more standards but, paradoxically, less standardization. Less standardization, in turn, leads to decreased confidence and lower levels of assurance.

One remedy for this situation would have the following elements:

• Government, in collaboration with industry and academia, works to raise expectations on product assurance. This would help to advance more comprehensive diagnostic capabilities, methodologies, and tools to mitigate risks.

• Acquisition managers and users start to factor information about suppliers' software development processes and the risks posed by the supply chain into their decision making and acquisition/delivery processes. Information about evaluated products would become available, and products in use could be securely configured.

• Suppliers begin to deliver quality products with requisite integrity and make assurance claims about their IT and the software's safety, security, and dependability. To do this, they would need to have and use relevant standards, qualified tools, independent third-party certifiers, and a qualified workforce.

It was suggested that software is an industry that demands only minimal levels of responsible practice compared to some other industries and that this is part of the challenge. But raising the level of responsible practice could increase sales to customers that demand high-assurance products.

From the perspective of the DHS, critical infrastructure around the United States is often not owned or operated by U.S. interests. As cyberspace and physical space become increasingly intertwined and software-controlled or -enabled, these interconnections and controls are often implemented using the Internet. This presents a target-rich environment, especially given the asymmetries at work: According to one speaker, extrapolating from data on average defect rates, a deployed software package of a million lines of code will have 6,000 defects. Even if only 1 percent of those defects introduce security-related vulnerabilities, that leaves 60 different vulnerabilities for an adversary to exploit. In an era riddled with asymmetric cyberattacks, claims about system reliability, integrity, and safety must also address the built-in security of the enabling software. Security is an enabler for reliability, integrity, and safety.

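The asymmetry argument above rests on simple defect-density arithmetic. The sketch below only reproduces the speaker's back-of-the-envelope numbers; the density of roughly 6 defects per thousand lines of code is an assumption implied by the example, not a measured value.

```python
# Reproduce the speaker's back-of-the-envelope arithmetic (illustrative numbers).
lines_of_code = 1_000_000
defects_per_kloc = 6          # assumed average defect density implied by the example
security_fraction = 0.01      # assume 1 percent of defects are security relevant

defects = lines_of_code / 1000 * defects_per_kloc    # 6,000 latent defects
vulnerabilities = defects * security_fraction        # 60 exploitable vulnerabilities

print(f"{defects:,.0f} defects, roughly {vulnerabilities:,.0f} of them exploitable; "
      "an adversary needs only one.")
```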
Cyber-related disruptions have an economic and business impact because they can lead to the loss of money and time, delayed or cancelled products, and loss of sensitive information, reputation, and even life. From a CEO/CIO perspective, disruptions and security flaws can mean having to redeploy staff to deal with problems and increase IT security, reduced end-user productivity, delayed products, and unanticipated patch management costs. Results from a survey of CIOs in 2006 by the CIO Executive Council indicate that reliable and vulnerability-free software is a top priority. In that same survey, respondents expressed "low to medium confidence" that software is "free from" flaws, vulnerabilities, and malicious code. The majority of these CIOs would like vendors to certify and test software using qualified tools.

Speakers noted that the second national software summit had identified major gaps in requirements for tools and technologies, as well as major shortcomings in the state of the art and the state of the practice for developing error-free software. A national software strategy was recommended in order to enhance the nation's capability to routinely develop trustworthy software products and ensure the continued competitiveness of the U.S. software industry. This strategy focused on improving software trustworthiness, educating and fielding the software workforce, re-energizing software R&D, and encouraging innovation in the U.S. industry.15 In addition to the gaps and shortcomings identified at that software summit, a recent PITAC report on national priorities for cybersecurity listed secure software engineering and software assurance among the top ten goals.16

15  Center for National Software Studies, 2005, "Software 2015: A National Software Strategy to Ensure U.S. Security and Competitiveness," April 29, available online at www.cnsoftware.org/nss2report.

16  President's Information Technology Advisory Committee (PITAC), 2005, Cybersecurity: A Crisis of Prioritization, February.

Software assurance contributes to trustworthy software systems. The goals of the DHS Software Assurance (SwA) program promote the security of software across its development, acquisition, and implementation

life cycles.17 The SwA program is scoped to address trustworthiness, predictable execution, and conformance to requirements, standards, and procedures. It is structured to target people, process, technology, and acquisition.

17  The MITRE Web site, http://www.cwe.mitre.org, can be used to track SwA progress.

The SwA program is process-agnostic, providing practical guidance in assurance practices and methodologies for process improvement. A developer's guide and glossary discussed during this session, Securing the Software Life Cycle, is not a policy or standard. Instead, it focuses on touch points and artifacts throughout the life cycle, offering foundational knowledge, best practices, and tools and resources for building assurance in. Integrating security into the systems engineering life cycle enables the implementation of software assurance.

It was suggested that software assurance would be well served by standards that assign names to practices or collections of practices. Standards are needed to facilitate communication between buyer and seller, government and industry, insurer and insured. They are needed to improve information exchange and interoperability among practices and among tools. The goal is to close the gap between art and practice and raise the minimum level of responsible practice. Some current standards efforts for software and system life cycle processes include ISO SC7, ISO SC22, ISO SC27, and IEEE S2ESC.

A critical aspect, it was suggested, is language: articulating structured assurance claims supported by evidence and reasoning (a toy illustration appears at the end of this subsection). For example, the Object Management Group (OMG) has been working with industry and federal agencies to support collaboration on a common framework for the analysis and exchange of information related to software trustworthiness. This framework can be used for building and assembling software components, including legacy systems and large systems and networks: Looking only at product evaluation overlooks the places where systems and networks are most vulnerable, because it is the interaction of all the components as installed that becomes the problem.

One of the challenges often noted regarding standardization of practices is the lag between identification of a best practice and its codification into a standard. This is particularly challenging in areas such as software assurance, where there is rapid evolution of technologies, practices, and related standards. In the future, the goal is for customers to have expectations for product assurance—including information about evaluated products, suppliers' process capabilities, and secure configurations of software—and for suppliers to be able to distinguish themselves by delivering quality products with the requisite integrity and to be able to make assurance claims based on relevant standards.

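The notion of structured assurance claims supported by evidence and reasoning lends itself to a toy illustration. The sketch below is not the OMG framework or any particular standard; it simply treats a claim as a statement plus cited evidence plus subclaims, and counts a claim as supported only when it has evidence and all of its subclaims are supported. The claim wording and evidence names are invented.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Claim:
    """A toy structured assurance claim: a statement, the evidence offered
    for it, and any subclaims it depends on."""
    statement: str
    evidence: List[str] = field(default_factory=list)
    subclaims: List["Claim"] = field(default_factory=list)

    def supported(self) -> bool:
        # A claim counts as supported here only if it cites some evidence
        # and every subclaim is itself supported (a deliberately naive rule).
        return bool(self.evidence) and all(c.supported() for c in self.subclaims)

top = Claim(
    statement="The component resists injection attacks",
    evidence=["static-analysis report 2024-03", "code review sign-off"],
    subclaims=[
        Claim("All inputs are validated", evidence=["unit tests for validators"]),
        Claim("No dynamic SQL is constructed", evidence=[]),  # missing evidence
    ],
)

print(top.supported())  # False: the second subclaim has no supporting evidence
```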
The National Security Agency Center for Assured Software

According to the historical perspective offered by one speaker, the DoD assurance requirements of 30 years ago mostly focused on what became the National Information Assurance Partnership (NIAP) and the trusted product evaluation program.18 Developers were known and trusted intrinsically. By contrast, in today's environment, the market for software is a global one: Even U.S. companies are international. DoD has become increasingly concerned about malicious intent. Malicious code done very well is going to look like an accident.

18  NIAP is a U.S. government initiative originated to meet the security testing needs of both information technology consumers and producers and is operated by the NSA (see http://www.niap-ccevs.org/). The Trusted Product Evaluation Program (TPEP) was started in 1983 to evaluate COTS products against the Trusted Computer System Evaluation Criteria (TCSEC).

Unfortunately, it was argued, assurance is gained today the same way as it was 30 years ago. The mechanisms used to build confidence in general-purpose software are also being used for DoD software: functional testing, penetration testing, design and implementation analysis, advanced development environments, trusted developers, process, discipline, and so forth. The intention is for the Center for Assured Software to contribute to the advancement of measurable confidence in software.

In today's environment, vendors do not have an incentive to be involved early in the design process, so testing typically is done after the fact, with a third-party orientation. The problem with this model is that it is all about penetration analysis, not building security in, and trust is bestowed by a third party. Moreover, this model does not scale very well. In one speaker's view, assurance models for COMSEC devices will not, for example, scale to the DoD's Joint Tactical Radio System (JTRS) program. In addition, composition has always been a problem in the context of assurance: The current state of knowledge about how to compose systems well, and to know what has been composed, is inadequate for assurance, and that inadequacy is compounded by the problem of malicious intent.

A challenge for NSA's Center for Assured Software is to be able to scale assurance. To do that, the future assurance paradigm needs to acknowledge the role of the development process in the assurance argument. How software is built, what processes are used, and what tools are used all have to be part of the assurance argument. That is a subtle but important shift in the paradigm.

In the speaker's view, the way to achieve scale in the development process and the way to gain assurance in the development process and in third-party analysis is by increasing the extent of automation.

The current paradigm does not embrace that means of achieving scale very well. Previous measurement techniques mostly entailed humans looking for vulnerabilities. What the Center for Assured Software is trying to do is find correlations between assurance and positive things that can be measured—for instance, the properties that are important—to give confidence that the software is indeed built appropriately. Another area where work is needed, it was suggested, is to create a science of composition that enables making an argument for levels of assurance at scale. In the mid-1980s, there were attempts to do that with the Trusted Database Management System Interpretation of the Trusted Computer System Evaluation Criteria (often referred to as the Orange Book), but the results did not scale very well.

Participants mentioned a variety of ideas being pursued in industry and academia in response to business and government needs in the area of software assurance: anomaly identification, model checking, repeatable methodology for assurance analysis and evaluation, and intermediate representation of executable code.19

19  Other approaches include static analysis, extended static checkers, and rule-based automatic code analysis.

Suggested research areas mentioned during this discussion include these:

• Assurance composition,
• Verifiable compilation,
• Software annotation,
• Model checking,
• Safe languages and automated migration from unsafe languages,
• Software understanding, and
• Measurable attributes that have strong correlation to assurance.

More broadly, participants suggested that it will be important to understand how to build confidence from all of these (and other) approaches and to improve them. In particular, it will be important to understand how they "combine" (that is, what multiple techniques collectively convey regarding confidence), since it is at best highly unlikely that one technique will ever by itself be sufficient.

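Model checking, one of the ideas listed above, can be shown at toy scale. The sketch below is a minimal explicit-state reachability check over a hand-written model of two processes acquiring a lock with a non-atomic test-then-set; it is illustrative only, and real model checkers handle vastly larger state spaces and richer properties. The search finds the interleaving in which both processes observe the lock as free and then both enter their critical sections, the kind of subtle defect that testing alone can easily miss.

```python
from collections import deque

# Toy model: two processes acquire a lock with a non-atomic test-then-set.
# A state is (pc0, pc1, lock), where each pc is "idle", "testing", or "critical".

def step(pcs, i, new_pc, new_lock):
    updated = list(pcs)
    updated[i] = new_pc
    return (updated[0], updated[1], new_lock)

def successors(state):
    pcs, lock = state[:2], state[2]
    for i in (0, 1):
        if pcs[i] == "idle" and lock == 0:      # observe the lock as free
            yield step(pcs, i, "testing", lock)
        elif pcs[i] == "testing":               # set the lock and enter
            yield step(pcs, i, "critical", 1)
        elif pcs[i] == "critical":              # leave and release the lock
            yield step(pcs, i, "idle", 0)

def find_violation(initial):
    """Breadth-first search of reachable states for a mutual-exclusion
    violation (both processes in their critical sections at once)."""
    seen, frontier = {initial}, deque([initial])
    while frontier:
        state = frontier.popleft()
        if state[0] == "critical" and state[1] == "critical":
            return state
        for nxt in successors(state):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return None

print(find_violation(("idle", "idle", 0)))  # ('critical', 'critical', 1): the race
```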
Software Assurance: Present and Future

This vendor-perspective presentation and discussion focused on COTS (it was suggested that 80 percent of DoD systems have at least some COTS components) and on taking tactical, practical, and economical steps at the component level to improve assurance. As scale and interconnectivity increase, it was argued that the assurance bar for software quality and cybersecurity attributes can be raised by (1) raising the component assurance bar (resources are finite, and organizations can spend too much time and too many resources trying to patch their way to security) and (2) getting customers to understand and accept that assurance for custom software can be raised if they are willing to pay more (if customers do not know about costs that are hidden, they cannot accept or budget for them).

One set of best practices and technologies to write secure software was described. It includes

• Secure coding standards,
• Developer training in secure coding,
• Enabled, embedded security points of contact (the "missionary model"),
• Security as part of development, including functional, design, and test work (with threat modeling),
• Regressions (including destructive security tests),
• Automated tools (home grown and commercial, of multiple flavors),
• Locked-down configurations (delivering products that are secure on installation), and
• Release criteria for security.

However, these practices are not routinely taught in universities. Neither the software profession nor the industry as a whole can simply rely on a few organizations doing these kinds of things.

Discussion identified some necessary changes in the long run:

• University curricula.  It was argued that university programs should do a better job of teaching secure coding practices and training future developers to pay attention to security as part of software development. If the mindset of junior developers does not change, the problem will not be solved. One participant said, "Process won't fix stupidity or arrogance." Incentives to be mindful of security should be integrated throughout the curriculum. When security is embedded throughout the development process, a small core of security experts is not sufficient. One challenge is how to balance the university focus on enduring knowledge and skills against the need for developers to understand particular practices and techniques specific to current technologies.

• Automation.  Automated tools are promising and will be increasingly important, but they are not a cure-all. Automated tools are not yet ready for universal prime time for a number of reasons, including: Tools need to be trained to understand the code base; programmers have

difficulty establishing sound and complete rules; most of today's tools look only for anticipated vulnerabilities (e.g., buffer overruns) and cannot be readily adapted to new classes of vulnerabilities; there are often too many false positives; scalability is an issue; one size does not fit all (it is premature for standards), and therefore multiple tools are needed; and there is not a good system for rating tools.

Conventional wisdom holds that people will not pay more for secure software. However, people already are paying for bad security—a 2002 study by the National Institute of Standards and Technology (NIST) reported that the consequences of bad software cost $59 billion a year in the United States.20

20  See NIST, 2002, "Planning Report 02-3: The Economic Impacts of Inadequate Infrastructure for Software Testing." Available online at http://www.nist.gov/director/prog-ofc/report02-3.pdf.

It was argued that from a development standpoint, security cost-effectiveness should be measured pragmatically. However, a simple return on investment (ROI) is not the right metric. From a developer's perspective, the goal should be the highest net present value (NPV) for cost avoidance of future bugs—not raw cost savings or the ROI from fixing bugs in the current year. Another way of valuing security is opportunity cost savings—what can be done (e.g., building new features) with the resources saved from not producing patches.

From the customer's perspective, it is the life-cycle cost of applying a patch weighed against the expected cost of the harm from not applying the patch. Customers want predictable costs, and the perception is that they cannot budget for the cost of applying patches (even though the real cost driver is the consequences of unpatched systems). If customers know what they are getting, they can plan for a certain level of risk at a certain cost. The goal is to find the match between expected risk for the customer and for the vendor—how suitable the product is for its intended use.

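The preference expressed above for the net present value of avoided future costs over a single-year return on investment can be made concrete with a small, entirely invented calculation (none of the figures below are workshop data). Under these numbers the first-year ROI looks sharply negative while the multiyear NPV is positive, which is the distinction the speaker was drawing.

```python
def npv(cash_flows, discount_rate):
    """Net present value of a series of yearly cash flows, year 0 first."""
    return sum(cf / (1 + discount_rate) ** year
               for year, cf in enumerate(cash_flows))

# Illustrative figures only: spend 400 (in some unit) on secure-development
# practices now, and avoid an estimated 150 per year in patching and
# incident-response costs over the following five years.
investment = -400
avoided_costs = [150] * 5
flows = [investment] + avoided_costs

print(f"Single-year ROI: {(avoided_costs[0] + investment) / -investment:.0%}")
print(f"Five-year NPV at 8%: {npv(flows, 0.08):.1f}")
```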
Certification is a way of assessing what is "good."21 But participants were not optimistic when considering prospects for certification of development processes. There is too much disagreement and ideology surrounding development processes. However, there can be some commonality around aspects of good development processes. Certifying developers is also problematic. In engineering, there are accredited degree programs and clear licensing requirements. The awarding of a degree in computer science is not analogous to licensing an engineer because there is not the same common set of requirements, especially robustness and safety requirements. In addition, it can be difficult to replicate the results of software engineering processes, making it hard to achieve confidence such that developers are willing to sign off on their work. Moreover, it was argued that with current curricula, developers generally do not even learn the basics of secure coding practice. In most educational programs there is little to no focus on security, safety, or the possibility that the code is going to be attacked. It was argued that curricula need to change and that computer science graduates should be taught to "assume an enemy."

21  A recent NRC study examines the issue of certification and dependability of software systems. See information on the report Software for Dependable Systems: Sufficient Evidence? at http://cstb.org/project_dependable.

Automated tools can give better assurance to the extent that vendors use them in development and fix what they find. Running evaluation tools after the fact on something already in production is not optimal.22 It was suggested that there is potential for some kind of "goodness meter" (a complement to the "badness meter" described in the next section) for tool use and effectiveness—what tool was used, what the tool can and cannot find, what the tool did and did not find, the amount of code covered, and whether tool use was verified by a third party.

22  It was suggested that vendors should not be required to vet products against numerous tools. It was also suggested that there is a need for some sort of Common Criteria reform with mutual recognition in multiple venues, eliminating the need to meet both Common Criteria and testing requirements. Vendors, for example, want to avoid having to give governments the source code for testing, which could compromise intellectual property, and want to avoid revealing specifics on vulnerabilities (which may raise security issues and also put users of older versions of the code more at risk). Common Criteria is an international standard for computer security. Documentation for it can be found at http://www.niap-ccevs.org/cc-scheme/cc_docs/.

Software Security: Building Security In

Discussions in this session focused on software security as a systems problem as opposed to an application problem. In the current state of the practice, certain attributes of software make software security a challenge: (1) connectivity—the Internet is everywhere and most software is on it or connected to it; (2) complexity—networked, distributed, mobile code is hard to develop; and (3) extensibility—systems evolve in unexpected ways and are changed on the fly. This combination of attributes also contributes to the rise of malicious code.

Massively multiplayer online games (MMOGs) are bellwethers of things to come in terms of sophisticated attacks and exploitation of vulnerabilities. These games experience the cutting edge of what is going on in software hacking and attacks today.23

23  World of Warcraft, for example, was described as essentially a global information grid with approximately 6 million subscribers and half a million people playing in real time at any given time. It has its own internal market economy, as well as a significant black market economy.

Attacks against such games are also at the forefront of so-called rootkit24 technology. Examining attacks on large-scale games may be a guide to what is likely to happen in the non-game world.

24  A rootkit is a set of software tools that can allow hackers to continue to gain undetected, unauthorized access to a system following an initial, successful attack by concealing processes, files, or data from the operating system.

It was suggested that in 2006, security started to become a differentiator among commercial products. Around that time, companies began televising ads about security and explicitly offering security features in new products. Customers were more open to the idea of using multiple vendors to take advantage of diversity in features and suppliers.

Security problems are complicated. There is a difference between implementation bugs, such as buffer overflows or unsafe system calls, and architectural flaws, such as compartmentalization problems in design or insecure auditing. As much attention needs to be paid to looking for architectural or requirements flaws as is paid to looking for coding bugs. Although progress is being made in automation, both processes still need people in the loop. When a tool turns up bugs or flaws, it gives some indication of the "badness" of the code—a "badness-o-meter" of sorts. But when use of a tool does not turn up any problems, this is not an indication of the "goodness" of the code. Instead, one is left without much new knowledge at all.

Participants emphasized that software security is not application security. Software is everywhere—not just in the applications. Software is in the operating system, the firewall, the intrusion detection system, the public key infrastructure, and so on. These are not "applications." Application security methods work from the outside in. They work for COTS software, require relatively little expertise, and are aimed at protecting installed software from harm and malicious code. System software security works from the inside out, with input into and analysis of design and implementation, and it requires a lot of expertise.

In one participant's view, security should also be thought of as an emergent property of software, just like quality. It cannot be added on. It has to be designed in. Vendors are placing increased emphasis on security, and most customers have a group devoted to software security. It was suggested that the tools market is growing, for both application security (a market of between $60 million and $80 million) and software security (a market of about $20 million, mostly for static analysis tools). Consulting services, however, dwarf the tools market.

One speaker described the "three pillars" of software security:

• Risk management, tied to the mission or line of business. Financial institutions such as banks and credit card consortiums are in the lead here, in part because Sarbanes-Oxley made banks recognize their software risk.

• Touchpoints, or best practices. The top two are code review with a tool and architectural risk analysis.

• Knowledge, including enterprise knowledge bases about security principles, guidelines, and rules; attack patterns; vulnerabilities; and historical risks.

Session 5: Enterprise Scale and Beyond

Panelists: Werner Vogels, Amazon.com, and Alfred Spector, AZS-Services
Moderator: Jim Larus

The speakers at this session focused on the following topics, from the perspective of industry:

• What are the characteristics of successful approaches to addressing scale and uncertainty in the commercial sector, and what can the defense community learn from this experience?

• What are the emerging software challenges for large-scale enterprises and interconnected enterprises?

• What do you see as emerging technology developments that relate to this?

Life Is Not a State-Machine: The Long Road from Research to Production

Discussions during this session centered on large-scale Web operations, such as that of Amazon.com, and what lessons about scale and uncertainty can be drawn from them. It was argued that in some ways, software systems are similar to biological systems. Characteristics and activities such as redundancy, feedback, modularity, loose coupling, purging, apoptosis (programmed cell death), spatial compartmentalization, and distributed processing are all familiar to software-intensive systems developers, and yet these terms can all be found in discussions of robustness in biological systems. It was suggested that there may be useful lessons to be drawn from that analogy.

Amazon.com is very large in scale and scope of operations: It has seven Web sites; more than 61 million active customer accounts and over 1.1 million active seller accounts, plus hundreds of thousands of

registered associates; over 200,000 registered Web services developers; over 12,500 employees worldwide; and more than 20 fulfillment centers worldwide. About 30 percent of Amazon's sales are made by third-party sellers; almost half of its sales are to buyers outside the United States. On a peak shipping day in 2006, Amazon made 3.4 million shipments.

Amazon.com's technical challenges include how to manage millions of commodity systems, how to manage many very large, geographically dispersed facilities in concert, how to manage thousands of services running on these systems, how to ensure that the aggregate of these services produces the desired functionality, and how to develop services that can exploit commodity computing power. It, like other companies providing similar kinds of services, faces challenges of scale and uncertainty on an hourly basis. Over the years, Amazon has undergone numerous transformations—from retailer to technology provider, from single application to platform, from Web site and database to a massively distributed parallel system, from Web site to Web service, from enterprise scale to Web scale.

Amazon's approach to managing massive scale can be thought of as "controlled chaos." It continuously uses probabilistic and chaotic techniques to monitor business patterns and how its systems are performing (a toy caricature of this style of monitoring appears later in this subsection). As its lines of business have expanded, these techniques have had to evolve—for example, focusing on tracking customer returns as a negative metric does not work once product lines expand into clothing (people are happy to order multiple sizes, keep the one that fits, and return the rest).

Amazon builds almost all of its own software because the commercial and open source infrastructure available now does not suit Amazon.com's needs. The old technology adoption life cycle from product development to useful acceptance was between 5 and 25 years. Amazon and similar companies are trying to accelerate this cycle. However, it was suggested that for an Amazon developer to select and use a research technology is almost impossible. In research, there are too many possibilities to choose from, experiments are unrealistic compared to real life, and underlying assumptions are frequently too constrained. In real life, systems are unstable, parameters change and things fail continuously, perturbations and disruptions are frequent, there are always malicious actors, and failures are highly correlated. In the real world, when the system fails, the mission of the organization cannot stop—it must continue.25

25  Examples of systems where assumptions did not match real life include the Titanic, the Tacoma Narrows bridge, and the Estonian ferry disaster.

Often, complexity is introduced to manage uncertainty. However, there may well exist what one speaker called "conservation laws of complexity." That is, in a complex interconnected system, complexity cannot be reduced absolutely; it can only be moved around. If uncertainty is not taken into account in large-scale system design, it makes adoption of the chosen technology fairly difficult. Engineers in real life are used to dealing with uncertainty. Assumptions are often added to make uncertainty manageable.

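The "controlled chaos" style of monitoring mentioned earlier in this subsection, probabilistic tracking of business metrics rather than deterministic control, can be caricatured in a few lines. This is not Amazon's system; it is a generic rolling-statistics alarm of the sort such monitoring might build on, with invented numbers.

```python
from collections import deque
from statistics import mean, stdev

def monitor(metric_stream, window=20, threshold=3.0):
    """Flag observations that deviate sharply from recent behavior.

    A crude probabilistic monitor: keep a rolling window of a business metric
    (orders per minute, say) and raise an alarm when a new value falls more
    than `threshold` standard deviations from the window's mean.
    """
    recent = deque(maxlen=window)
    for t, value in enumerate(metric_stream):
        if len(recent) >= window:
            mu, sigma = mean(recent), stdev(recent)
            if sigma > 0 and abs(value - mu) > threshold * sigma:
                yield t, value, mu
        recent.append(value)

# Invented data: steady traffic with one sudden drop at t=60.
stream = [100 + (t % 7) for t in range(120)]
stream[60] = 20

for t, value, mu in monitor(stream):
    print(f"t={t}: observed {value}, recent mean {mu:.1f} -- investigate")
```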
At Amazon, the approach is to apply Occam's razor: If there are competing systems to choose from, pick the system that has the fewest assumptions. In general, assumptions are the things that are really limiting and could limit the system's applicability to real life.

Two different engineering approaches were contrasted, one with the goal of building the best possible system (the "right" system) whatever the cost, and the other with the more modest goal of building a smaller, less ambitious system that works well and can evolve. The speaker characterized the former as being incredibly difficult, taking a long time, and requiring the most sophisticated hardware. By contrast, the latter approach can be faster, it conditions users to expect less, and it can, over time, be improved to a point where performance almost matches that of the best possible system.

It was also argued that traditional management does not work for complex software development, given the lack of inspection and control. Control requires determinism, which is ultimately an illusion. Amazon's approach is to optimize team communication by reducing team size to a maximum of 8 to 10 people (a "two-pizza team"). For larger problems, decompose the problem and reduce the size of the team needed to tackle the subproblems to a two-pizza group. If this cannot be done, it was suggested, then do not try to solve that problem—it's too complicated.

A general lesson that was promoted during this session was to let go of control and the notion that these systems can be controlled. Large systems cannot be controlled—they are not deterministic. For various reasons, it is not possible to consider all the inputs. Some may not have been included in the original design; requirements may have changed; the environment may have changed. There may be new networks and/or new controllers. The problem is not one of control; it is dealing with all the interactions among all the different pieces of the system that cannot be controlled. Amazon.com's approach is to let these systems mature incrementally, with iterative improvement to yield the desired outcome during a given time period.

The Old, the Old, and the New

In this session's discussions, the first "old" was the principle of abstraction-encapsulation-reuse. Reuse is of increasing importance everywhere as the sheer quantity of valuable software components continues to grow. The second "old" was the repeated quest (now using Web services

and increasingly sophisticated software tools) to make component reuse and integration the state of practice. Progress is being made in both of these areas, as evidenced by investment and anecdotes. The "new" discussed was the view that highly structured, highly engineered systems may have significant limitations. Accordingly, it was argued, "semantic integration," more akin to Internet search, will play a more important role.

There are several global integration agendas. Some involve broad societal goals such as trade, education, health care, and security. At the firm or organization level, there is supply chain integration and N-to-1 integration of many stores focusing on one consumer, as in the case of Amazon and its many partners and vendors. In addition, there is collaborative engineering, multidisciplinary R&D, and much more.

Why is global integration happening? For one thing, it is now technically possible, given ubiquitous networking, faster computers, and new software methodologies. People, organizations, computation, and development are distributed, and networked systems are now accepted as part of life and business, along with the concomitant benefits and risks (including security risks). An emerging trend is the drive to integrate these distributed people and processes to get efficiency and cost-effective development derived from reuse.

Another factor is that there are more software components to integrate and assemble. Pooling of the world's software capital stock is creating heretofore unimaginably creative applications. Software is a major element of the economy. It was noted that by 2004, the amount of U.S. commercial capital stock relating to software, computer hardware, and telecommunications accounted for almost one-quarter of the total capital stock of business; about 40 percent of this is software. Software's real value in the economy could even be understated because of accounting rules (depreciation), price decreases, and improvements in infrastructure and computing power. The IT agenda and societal integration reinforce each other.

Core elements of computer science, such as algorithms and data structures, are building blocks in the integration agenda. The field has been focusing more and more on the creation and assembly of larger, more flexible abstractions. It was suggested that if one accepts that the notion of abstraction-encapsulation-reuse is central, then it might seem that service-oriented computing is a done deal. However, the challenge is in the details: How can the benefits of the integration agenda be achieved throughout society? How are technologists and developers going to create these large abstractions and use them?

When the Internet was developed, some details—such as quality of service and security—were left undone. Similarly, there are open

challenges with regard to integration and service-oriented approaches. What are the complete semantics of services? What security inheres in the service being used? What are the failure modes and dependencies? What is the architectural structure of the world's core IT and application services? How does it all play out over time? What is this hierarchy that occurs globally or, for the purposes of this workshop, perhaps even within DoD or within one of the branches of the military?

Service-oriented computing is computing whereby one can create, flexibly deploy, manage, meter and charge for (as appropriate), secure, locate, use, and modify computer programs that define and implement well-specified functions having some general utility (services), often recursively using other services developed and deployed across time and space, and where computing solutions can be built with a heavy reliance on these services. Progress in service-oriented computing brings together information sharing, programming methodologies, transaction processing, open systems approaches, distributed computing technologies, and Web technologies.

There is now a huge effort on the part of industry to develop application-level standards. In this approach, companies are presented with the definition of some structure that they need to use to interoperate with other businesses, rather than, for example, having multiple individual fiefdoms within each company develop unique customer objects.

The Web services approach generally implies a set of services that can be invoked across a network. For many, Web services comprise things such as Extensible Markup Language (XML) and SOAP (a protocol for exchanging XML-based messages over computer networks), along with a variety of Web service protocols that have now been defined and are heavily used, developed, produced, and standardized (many in a partnership between IBM and Microsoft). Web services are on the path to full-scale, service-oriented computing; it was argued that this path can be traced back to the 1960s and the airlines' Sabre system, continuing through Arpanet, the Internet, and the modern World Wide Web.

Web services based on abstraction-encapsulation-reuse are a new approach to applying the structure-oriented engineering tradition to information technology (IT). For example, integration steps include the precise definition of function (analogous to the engineering specifications and standards for transportation system construction), architecture (analogous to bridge design, for example), decomposition, rigorous component production (steel beams, for example), careful assembly, and managed change control. The problem is, there may be limits to this at scale. In software, each of these integration steps is difficult in itself. Many projects are inherently multiorganizational, and rapid changes have dire consequences for traditional waterfall methodologies.

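Because the discussion above leans on Web-services plumbing such as XML and SOAP, a minimal sketch may help fix ideas. The example below uses Python's standard library to build a bare SOAP 1.1-style envelope; the inventory service, operation name, and parameters are invented for illustration, and a real deployment would also involve interface descriptions and transport details not shown here.

```python
import xml.etree.ElementTree as ET

SOAP_NS = "http://schemas.xmlsoap.org/soap/envelope/"   # SOAP 1.1 envelope namespace
SVC_NS = "urn:example:inventory"                        # hypothetical service namespace
ET.register_namespace("soap", SOAP_NS)
ET.register_namespace("inv", SVC_NS)

# Envelope/Body skeleton that SOAP messages share.
envelope = ET.Element(f"{{{SOAP_NS}}}Envelope")
body = ET.SubElement(envelope, f"{{{SOAP_NS}}}Body")

# Hypothetical operation on a hypothetical inventory service.
request = ET.SubElement(body, f"{{{SVC_NS}}}CheckStockLevel")
ET.SubElement(request, f"{{{SVC_NS}}}PartNumber").text = "AX-204"
ET.SubElement(request, f"{{{SVC_NS}}}Warehouse").text = "East"

# The serialized message is what a client would send over the network to the service.
print(ET.tostring(envelope, encoding="unicode"))
```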
It was argued that "semantic integration," a dynamic, fuzzier integration more akin to Internet search, will play a larger role in integration than more highly structured engineering of systems. Ad hoc integration is a more humble approach to service-based integration, but it is also more dynamic and interpretive. Components that are integrated may be of lower generality (not a universal object) and quality (not so well specified). Because they will be of lower generality, perhaps with different coordinate systems, there will have to be automated impedance matching between them. Integration may take place on an intermediate service, perhaps in a browser. Businesses are increasingly focusing on this approach for the same reasons that simple approaches have always been favored. This is a core motivational component of the Web 2.0 mash-up focus. Another approach to ad hoc integration uses access to massive amounts of information—with no reasonable set of predefined, parameterized interfaces, annotation and search will be used as the integration paradigm.

It is likely that there will be tremendous growth in the standards needed to capitalize on the large and growing IT capital plant. There will be great variability from industry to industry and from place to place around the world, depending on the roles of the industry groups involved, differential regulations, applicable types of open source, and national interests. Partnerships between the IT industry and other industries will be needed to share expertise and methodologies for creating usable standards, working with competitors, and managing intellectual property.

A number of topics for service-oriented systems and semantic integration research were identified, some of which overlap with traditional software system challenges. The service-oriented systems research areas and semantic integration research areas spotlighted included these:

• Basics.  Is there a practical, normative general theory of consistency models? Are services just a remote procedure call invocation or a complex split between client and server? How are security and privacy to be provided for the real world, particularly if one does not know what services are being called? How does one utilize parallelism? This is an increasingly important question in an era of lessening geometric clock-speed growth.

• Management.  With so many components and so much information hiding, how does one manage systems? How does one manage intellectual property?

• Global properties.  Can one provide scalability generally? How does one converge on universality and standards without bloat? What systems can one deploy as really universal service repositories?

• Economics.  What are realistic costing/charging models and implementations?

• Social networking.  How does one apply social networking technology to help?

• Ontologies of vast breadth and scale.

• Automated discovery and transformation.

• Reasoning in the control flow.

• Use of heuristics and redundancy.

• Search as a new paradigm.

Complexity grows despite all that has been done in computer science. There is valuable, rewarding, and concrete work for the field of computer science in combating complexity. This area of work requires focus. It could prove as valuable as direct functional innovation. Participants identified several research areas to address complexity relevant to service-oriented systems and beyond, including meaning, measuring, methodology, system architecture, science and technology, evolutionary systems design, and legal and cultural change.
