Statistics, Testing, and Defense Acquisition

Background Papers

Panel on Statistical Methods for Testing and Evaluating Defense Systems

Michael L. Cohen, Duane L. Steffey, and John E. Rolph, Editors

Committee on National Statistics

Commission on Behavioral and Social Sciences and Education

National Research Council

NATIONAL ACADEMY PRESS
Washington, D.C.







NOTICE: The project that is the subject of this report was approved by the Governing Board of the National Research Council, whose members are drawn from the councils of the National Academy of Sciences, the National Academy of Engineering, and the Institute of Medicine. The members of the committee responsible for the report were chosen for their special competences and with regard for appropriate balance.

The National Academy of Sciences is a private, nonprofit, self-perpetuating society of distinguished scholars engaged in scientific and engineering research, dedicated to the furtherance of science and technology and to their use for the general welfare. Upon the authority of the charter granted to it by the Congress in 1863, the Academy has a mandate that requires it to advise the federal government on scientific and technical matters. Dr. Bruce Alberts is president of the National Academy of Sciences.

The National Academy of Engineering was established in 1964, under the charter of the National Academy of Sciences, as a parallel organization of outstanding engineers. It is autonomous in its administration and in the selection of its members, sharing with the National Academy of Sciences the responsibility for advising the federal government. The National Academy of Engineering also sponsors engineering programs aimed at meeting national needs, encourages education and research, and recognizes the superior achievements of engineers. Dr. William A. Wulf is president of the National Academy of Engineering.

The Institute of Medicine was established in 1970 by the National Academy of Sciences to secure the services of eminent members of appropriate professions in the examination of policy matters pertaining to the health of the public. The Institute acts under the responsibility given to the National Academy of Sciences by its congressional charter to be an adviser to the federal government and, upon its own initiative, to identify issues of medical care, research, and education. Dr. Kenneth I. Shine is president of the Institute of Medicine.

The National Research Council was organized by the National Academy of Sciences in 1916 to associate the broad community of science and technology with the Academy's purposes of furthering knowledge and advising the federal government. Functioning in accordance with general policies determined by the Academy, the Council has become the principal operating agency of both the National Academy of Sciences and the National Academy of Engineering in providing services to the government, the public, and the scientific and engineering communities. The Council is administered jointly by both Academies and the Institute of Medicine. Dr. Bruce Alberts and Dr. William A. Wulf are chairman and vice chairman, respectively, of the National Research Council.

The project that is the subject of this report is supported by Contract DASW01-94-C-0119 between the National Academy of Sciences and the Director of Operational Test and Evaluation at the Department of Defense. Any opinions, findings, conclusions, or recommendations expressed in this publication are those of the author(s) and do not necessarily reflect the view of the organizations or agencies that provided support for this project.

International Standard Book Number 0-309-06627-1

Copyright 1999 by the National Academy of Sciences. All rights reserved.

Printed in the United States of America

PANEL ON STATISTICAL METHODS FOR TESTING AND EVALUATING DEFENSE SYSTEMS

JOHN E. ROLPH (Chair), Marshall School of Business, University of Southern California
MARION BRYSON, North Tree Management, Monterey, California
HERMAN CHERNOFF, Department of Statistics, Harvard University
JOHN D. CHRISTIE, Logistics Management Institute, McLean, Virginia
LOUIS GORDON, Filoli Information Systems, Palo Alto, California
KATHRYN B. LASKEY, Department of Systems Engineering and Center of Excellence in C3I, George Mason University
ROBERT C. MARSHALL, Department of Economics, Pennsylvania State University
VIJAYAN N. NAIR, Department of Statistics, University of Michigan
ROBERT T. O'NEILL, Division of Biometrics, Food and Drug Administration, U.S. Department of Health and Human Services, Rockville, Maryland
STEPHEN M. POLLOCK, Department of Industrial and Operations Engineering, University of Michigan
JESSE H. POORE, Department of Computer Science, University of Tennessee
FRANCISCO J. SAMANIEGO, Division of Statistics, University of California, Davis
DENNIS E. SMALLWOOD, Department of Social Sciences, U.S. Military Academy, West Point, New York

MICHAEL L. COHEN, Study Director
DUANE L. STEFFEY, Consultant
ANU PEMMARAZU, Research Assistant
ERIC M. GAIER, Consultant
CANDICE S. EVANS, Sr. Project Assistant

COMMITTEE ON NATIONAL STATISTICS 1996-1997

NORMAN M. BRADBURN (Chair), National Opinion Research Center, University of Chicago
JULIE DAVANZO, The RAND Corporation, Santa Monica, California
WILLIAM F. EDDY, Department of Statistics, Carnegie Mellon University
JOHN F. GEWEKE, Department of Economics, University of Minnesota, Minneapolis
JOEL B. GREENHOUSE, Department of Statistics, Carnegie Mellon University
ERIC A. HANUSHEK, W. Allen Wallis Institute of Political Economy, Department of Economics, University of Rochester
RODERICK J.A. LITTLE, Department of Biostatistics, University of Michigan
CHARLES F. MANSKI, Department of Economics, University of Wisconsin
WILLIAM NORDHAUS, Department of Economics, Yale University
JANET L. NORWOOD, The Urban Institute, Washington, District of Columbia
EDWARD B. PERRIN, School of Public Health and Community Medicine, University of Washington
PAUL ROSENBAUM, Department of Statistics, Wharton School, University of Pennsylvania
KEITH F. RUST, Westat, Inc., Rockville, Maryland
FRANCISCO J. SAMANIEGO, Division of Statistics, University of California, Davis

MIRON L. STRAF, Director

Contents

Preface   vii

Strategic Information Generation and Transmission: The Evolution of Institutions in DoD Operational Testing
Eric M. Gaier, Logistics Management Institute
Robert C. Marshall, Pennsylvania State University   1

On the Performance of Weibull Life Tests Based on Exponential Life Testing Designs
Francisco J. Samaniego and Yun Sam Chong, University of California, Davis   41

Application of Statistical Science to Testing and Evaluating Software Intensive Systems
Jesse H. Poore, University of Tennessee, Knoxville
Carmen J. Trammell, CTI PET Systems, Inc.   124


Preface

The Panel on Statistical Methods for Testing and Evaluating Defense Systems had a broad mandate: to examine the use of statistics in conjunction with defense testing. This involved examining methods for software testing, reliability test planning and estimation, validation of modeling and simulation, and use of modern techniques for experimental design. Given the breadth of these areas, including the great variety of applications and special issues that arise, making a contribution in each of them required that the panel's work and recommendations be at a relatively general level.

However, a variety of more specific research issues were either brought to the panel's attention by members of the test and acquisition community (e.g., what was referred to as Dubin's challenge, addressed in the panel's interim report) or were identified by members of the panel. In many of these cases, the panel thought that a more in-depth analysis, or a more detailed application of its suggestions or recommendations, would either be useful as input to its deliberations or could help communicate the more individual views of panel members to the defense test community. This resulted in several research efforts.

Given various criteria, especially immediate relevance to the test and acquisition community, the panel decided to make available three technical or background papers, each authored by a panel member jointly with a colleague. These papers are individual contributions and are not a consensus product of the panel; however, the panel has drawn from them in preparing its final report, Statistics, Testing, and Defense Acquisition. The panel found each of these papers extremely useful, and they are strongly recommended to readers of the final report. The remainder of this preface provides the reason for including each paper and a short introduction.

"Strategic Information Generation and Transmission: The Evolution of Institutions in DoD Operational Testing" by Eric Gaier and Robert Marshall

This paper examines the historical evolution of operational testing in the Department of Defense (DoD) through the theory of principal-agent games. In this model of defense test and acquisition, test information is both strategically generated and strategically conveyed. The game-theoretic model is used to better understand the incentives, and hence the behavior, of key participants in defense acquisition, especially the program manager for a defense system in development, DOT&E, and Congress. While the model oversimplifies many aspects of defense test and acquisition, it can be profitably used to indicate possible detrimental impacts of the incongruent incentives of the various participants in defense acquisition and to suggest methods for their avoidance.

"On the Performance of Weibull Life Tests Based on Exponential Life Testing Designs" by Frank Samaniego and Yun Sam Chong

This paper examines the consequences of assuming exponential times to first failure during the design of a test when the times to first failure instead follow the two-parameter Weibull distribution. When the Weibull assumption holds, the paper demonstrates improvements over hypothesis tests that continue to rely on the exponential assumption. In addition, it explores the benefits, especially the possible reduction of time on test, of making use of the Weibull assumption at the test planning stage. There are two reasons to consider this paper. First, in situations in which Weibull models have been identified in the past, there are major advantages to using its methods and tables for test design and evaluation. Further, since the Weibull model is one of several alternatives to the exponential model in a variety of reliability contexts, e.g., for testing repairable systems and systems with dependent failure rates, the gains from use of a more appropriate model can be generalized to several other possibilities that need to be explored when the exponential model is deficient.

"Application of Statistical Science to Testing and Evaluating Software Intensive Systems" by Jesse Poore and Carmen Trammell

This paper examines a method for statistical testing of software. Statistical approaches to testing enable the efficient collection of empirical data that limit and measure uncertainty about the behavior of a software-intensive system, and they support decisions regarding the benefits of further testing, deployment, maintenance, and evolution of the software. In statistical software testing, the population is the set of all scenarios of use, organized through an operational usage model: the states of use of the system and the allowable transitions among those states are identified and represented in the form of one or more Markov chains. The methods have the advantage of being based on the software architecture and are readily validated. Usage models can also be represented as the solution to a linear program, and experimental design techniques, e.g., combinatorial designs and partition testing, can be used in conjunction with this approach to achieve efficient coverage of all states. The method also provides an economic stopping criterion. This novel approach to testing software-intensive systems therefore has a number of advantages over current methods used by DoD and should be considered as an alternative.

JOHN E. ROLPH, Chair
Panel on Statistical Methods for Testing and Evaluating Defense Systems
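As a rough illustration of the Markov chain usage-model idea summarized in the last paper description, the sketch below encodes a small operational usage model and draws random scenarios of use from it; each random walk from invocation to termination is one statistically generated test case. The states, transition probabilities, and all names here are hypothetical, invented for illustration, and are not drawn from the Poore and Trammell paper.

```python
import random

# Hypothetical usage model for a simple interactive system. Each key is a
# state of use; each entry lists the allowable transitions and the
# probability that a user takes them. "Exit" is the absorbing final state.
USAGE_MODEL = {
    "Invoke":   [("MainMenu", 1.0)],
    "MainMenu": [("Query", 0.6), ("Update", 0.3), ("Exit", 0.1)],
    "Query":    [("MainMenu", 0.8), ("Exit", 0.2)],
    "Update":   [("MainMenu", 0.9), ("Exit", 0.1)],
}

def draw_scenario(model, start="Invoke", end="Exit"):
    """Random walk through the chain; the visited path is one test case."""
    path, state = [start], start
    while state != end:
        next_states, probs = zip(*model[state])
        state = random.choices(next_states, weights=probs, k=1)[0]
        path.append(state)
    return path

# A small test suite: scenarios occur with the same relative frequencies
# expected in operational use, so test results support statistical
# statements about fielded behavior.
suite = [draw_scenario(USAGE_MODEL) for _ in range(5)]
for case in suite:
    print(" -> ".join(case))
```

In practice the transition probabilities would be estimated from field data or customer interviews, and coverage and reliability measures computed over the chain would supply the economic stopping criterion mentioned above.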
