In general, the concept of "testing in" quality is costly and ineffectual; software quality is achieved in the requirements, architecture, specification, design, and coding activities. The problem of doing just enough testing to remove uncertainty regarding critical performance issues and to support the decisions that must be made in the software life cycle is amenable to solution by statistical science. The question is not whether to test, but when to test, what to test, and how much to test.

Statistical testing enables efficient collection of empirical data that will remove uncertainty about the behavior of the software-intensive system and support economic decisions regarding further testing, deployment, maintenance, and evolution. A statistical principle of fundamental importance is that the population to be studied must first be characterized, and that characterization must include the infrequent and exceptional as well as the common and typical. It must be possible to represent all questions of interest and all decisions to be made in terms of this characterization. When applied to software testing, the population is the set of all possible scenarios of use, with each scenario represented according to its frequency of occurrence. The operational usage model is a formalism presented in this paper that enables the application of many statistical principles to software testing and forms the basis for efficient testing in support of decision making.
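As a concrete illustration, an operational usage model of this kind is commonly expressed as a Markov chain: states represent points in a usage scenario, and transition probabilities encode the expected frequency of each next step in the field. The sketch below is a minimal, hypothetical example (the login/query/update application and all probabilities are invented for illustration, not taken from this paper) showing how random test cases can be sampled from such a model so that frequent scenarios are tested in proportion to their expected use.

```python
import random

# Hypothetical operational usage model for a simple login-then-work
# application, expressed as a Markov chain: each state maps to a list
# of (next_state, probability) pairs.  The probabilities are invented
# here to stand in for observed or estimated frequencies of use.
USAGE_MODEL = {
    "Start":  [("Login", 1.0)],
    "Login":  [("Query", 0.7), ("Update", 0.2), ("Logout", 0.1)],
    "Query":  [("Query", 0.4), ("Update", 0.3), ("Logout", 0.3)],
    "Update": [("Query", 0.5), ("Logout", 0.5)],
    "Logout": [("End", 1.0)],
}

def generate_test_case(model, rng, start="Start", end="End"):
    """Random-walk the usage chain from start to end, yielding one
    statistically typical scenario of use (a sequence of states)."""
    state, path = start, [start]
    while state != end:
        next_states, weights = zip(*model[state])
        state = rng.choices(next_states, weights=weights)[0]
        path.append(state)
    return path

# Fixed seed so the sampled scenarios are reproducible.
rng = random.Random(42)
for _ in range(3):
    print(" -> ".join(generate_test_case(USAGE_MODEL, rng)))
```

Because test cases are drawn in proportion to expected field usage, observed failure rates during testing become statistically meaningful estimates of field reliability, which is what supports the deployment decisions discussed above.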

Most usage modeling and related statistical testing experience to date is with embedded real-time systems, application program interfaces, and graphical user interfaces. One very advanced industrial user of this technology is the mass storage device industry. Use of this technology has led to extensive test automation, significant reduction in the time these software-intensive products spend in testing, improved feedback to the developers regarding product deficiencies and quality, improved advice to management regarding suitability for deployment, and greatly improved field reliability of products shipped.

From a statistical point of view, all the topics in this paper follow sound problem-solving principles and are direct applications of well-established theory and methodology. From a software testing point of view, the application of statistical science is relatively new and rapidly evolving, as an increasing range of statistical principles is applied to a growing variety of systems. Statistical testing is used in pockets of industry and agencies of government, including DoD, on both experimental and routine bases. This paper is a composite of what is in hand and within reasonable reach in the application of statistical science to software testing.

The National Academies of Sciences, Engineering, and Medicine
500 Fifth St. N.W. | Washington, D.C. 20001

Copyright © National Academy of Sciences. All rights reserved.