Evaluation Center, in relevant areas such as experimental design and reliability theory. The service test agencies, particularly the Army Operational Test and Evaluation Command, make use of statistical consultants. Also, DOT&E has access to expert statistical assistance at the Institute for Defense Analyses. All of the service test agencies have military staff who work with both dedication and professionalism, but their statistical training prepares them to apply standard methodology rather than to produce customized solutions as needed. Finally, the service test agencies use, and occasionally develop, statistical software to support test design and evaluation. RAPTOR, developed at the Air Force Operational Test and Evaluation Center and used to evaluate the reliability of component systems, is a particularly relevant and impressive example.

However, the DoD test community generally has limited access to, and makes little use of, individuals with highly advanced training in statistics, specifically, the level of training typical of a doctorate from a graduate program in statistics. The panel knows of only one Ph.D.-level statistician currently employed full time in any of the three largest service test agencies. The service test agencies also make too little use of the statistical expertise at the Naval Postgraduate School, the Center for Naval Analyses, the Institute for Defense Analyses (through DOT&E), Aerospace, RAND, and other similar institutions. It appears that the Army, Navy, and Air Force test agencies rarely consult academic statisticians, even on test design and evaluation issues for multibillion-dollar systems.

CONSTRAINTS ON ACCESS TO STATISTICAL EXPERTISE

Some of the limited interaction with statistical experts is understandable. First, as stated above, test design and evaluation are heavily constrained: budgets force small sample sizes, and test facilities impose scheduling and other limitations; Navy tests seem particularly constrained. Evaluations are often limited by time, and they focus on the calculation of means and percentages and, sometimes, significance tests for the identified measures used in the decision on whether to enter full-rate production; this focus reduces incentives for more thorough and sophisticated analyses. These constraints can at times limit the utility gained through interaction with statistical experts. (We note, however, that many of the constraints would disappear with adoption of a test and acquisition strategy recommended in Chapter 3.) Yet these constraints can also increase the value of interactions with statistical experts, since constraints present nonstandard design problems; budgetary limitations make efficient test design even more important; and evaluations can be expedited by



Copyright © National Academy of Sciences. All rights reserved.