Statistical Methods for Testing and Evaluating Defense Systems: Interim Report
little evidence seen for use of statistical methods to help interpret results from simulations; (3) the literature on the use of simulations is deficient in its statistical content; and (4) simulations cannot identify the “unknown unknowns.”
A number of positions can be taken on the use of simulation for operational testing, ranging from (1) simulation is the future of operational testing; through (2) simulation, when properly validated, can play a role in assessing effectiveness, but not suitability; (3) simulation can be useful in helping to identify the scenarios most important to test; and (4) simulation can be useful in planning operational tests only with respect to effectiveness; to (5) simulation in its current state is relatively useless in operational testing. The panel is not yet ready to express its position on this question.
Simulations typically do not consider reliability, availability, and maintainability issues; do not control carefully for human factors; and are not “consolidative models” (Bankes, 1993), that is, they do not consolidate known facts about a system and cannot, for the purpose at hand, safely be used as surrogates for the system itself. Simulations often are not based entirely on physical, engineering, or chemical models of the system or its components. The absence of these features clearly reduces the utility of simulations for operational testing. This raises two questions: At what level should a simulation focus in order to replicate, as well as possible, the operational behavior of a system? Should the simulation model the entire system or individual components? The panel is more optimistic about simulating system components than entire systems.
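The component-level approach can be illustrated with a minimal sketch. The example below is not drawn from the report; it assumes a hypothetical series system whose components have exponentially distributed times to failure with stated mean times between failures (MTBFs), and estimates mission reliability by Monte Carlo simulation of the components rather than of the system as a whole:

```python
import random

def simulate_component(mtbf_hours: float, mission_hours: float,
                       rng: random.Random) -> bool:
    """Draw one exponential time to failure for a component; return True
    if it survives the mission. (Illustrative only: a real component
    model would rest on validated physical or engineering data.)"""
    return rng.expovariate(1.0 / mtbf_hours) > mission_hours

def estimate_system_reliability(mtbfs, mission_hours,
                                n_trials=100_000, seed=1):
    """Monte Carlo estimate of mission reliability for a series system:
    the system succeeds on a trial only if every component survives."""
    rng = random.Random(seed)
    successes = sum(
        all(simulate_component(m, mission_hours, rng) for m in mtbfs)
        for _ in range(n_trials)
    )
    return successes / n_trials

# Hypothetical component MTBFs (hours) for a 10-hour mission.
print(estimate_system_reliability([500.0, 800.0, 1200.0],
                                  mission_hours=10.0))
```

For this series structure the analytic answer is exp(-10 × (1/500 + 1/800 + 1/1200)) ≈ 0.96, so the simulated estimate can be checked against it; in practice it is exactly this kind of closed-form check that is unavailable for full-system simulations, which is one reason component-level validation is more tractable.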
As our work goes forward, we need to expand our understanding of current practice in the Navy and Air Force, especially with respect to their validation of simulations. We will also examine the use of distributed interactive simulation in operational testing to determine the particular application of statistics in that area. Key issues requiring further investigation because of their complexity include the proper role of simulation in operational testing, the combination of information from field tests and results from simulations, and proper use of probabilistic and statistical methodology in simulation.
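One standard statistical device for combining information of the kind mentioned above is inverse-variance (precision-weighted) pooling of independent estimates. The sketch below is not a method endorsed by the panel, and the numbers are hypothetical; it simply shows how an estimate from a small field test and one from a simulation run might be pooled when both are unbiased estimates of the same performance measure:

```python
def combine_estimates(field_mean, field_var, sim_mean, sim_var):
    """Inverse-variance pooling of two independent, unbiased estimates
    of the same quantity. Each source is weighted by its precision
    (the reciprocal of its variance)."""
    w_field = 1.0 / field_var
    w_sim = 1.0 / sim_var
    pooled_mean = (w_field * field_mean + w_sim * sim_mean) / (w_field + w_sim)
    pooled_var = 1.0 / (w_field + w_sim)
    return pooled_mean, pooled_var

# Hypothetical inputs: a small field test (estimate 0.80, variance 0.04)
# and a validated simulation (estimate 0.85, variance 0.01).
mean, var = combine_estimates(0.80, 0.04, 0.85, 0.01)
print(mean, var)  # pooling leans toward the lower-variance source
```

The pooled variance is always smaller than either input variance, which is the appeal of combining sources; the hard part, flagged in the text above, is that the simulation estimate may carry unmodeled bias, in which case naive pooling understates uncertainty.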
To accomplish the above, we intend to meet with experts on simulation from the Navy Operational Test and Evaluation Force and the Air Force Operational Test and Evaluation Center to determine their current procedures, and to examine the procedures used to validate simulations used, or proposed for use, in operational testing. We will also meet with simulation experts from the Institute for Defense Analyses and DoD to solicit their views on the proper role of simulation in operational testing.