TABLE 5-1 Reliability Assessment of the Command Launch Unit of the Javelin in Several Testing Situations: mean time between operational mission failures (in hours) for the Reliability Qualification Test I, Reliability Qualification Test II, Reliability Development Growth Test, Preproduction Qualification Test, Dirty Battlefield Test, Force Development Test and Experimentation, and Initial Operational Test.
morale, state of alertness or fear, and anticipation of the scoring rules to be used), and require extra-statistical analysis.
Simulations Cannot Identify the “Unknown Unknowns”
Information gained from a simulation is necessarily limited by the information put into it. While simulations can be an important adjunct to testing when appropriately validated for the purpose for which they are used, no simulation can discover a system problem that arises from factors not included in the models on which the simulation is built. For example, one system experienced unexpected reliability problems in field tests because soldiers were using an antenna as a handle, causing it to break. A problem of this kind would rarely be discovered by simulation.
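This limitation can be made concrete with a small sketch. The Python code below is illustrative only: the failure modes and rates are invented and are not taken from any actual program. It contrasts a Monte Carlo reliability simulation built solely from failure modes observed in developmental testing with field behavior that includes one additional, unmodeled mode (the antenna used as a handle). Because the unmodeled mode can never occur inside the simulation, the simulated mean time between failures is systematically optimistic.

```python
import random

random.seed(1)

# Failure modes known from developmental testing (failures per hour).
# These rates are hypothetical, chosen only to illustrate the point.
modeled_modes = {"electronics": 1 / 500, "optics": 1 / 800}

# A field-only mode absent from the simulation's model.
unmodeled_field_mode = {"antenna_as_handle": 1 / 150}

def mtbf(modes, n_trials=100_000):
    """Estimate mean time between failures when each mode is an independent
    exponential process; the system fails at the earliest mode failure."""
    total = 0.0
    for _ in range(n_trials):
        total += min(random.expovariate(rate) for rate in modes.values())
    return total / n_trials

sim_only = mtbf(modeled_modes)                           # what the simulation predicts
field = mtbf({**modeled_modes, **unmodeled_field_mode})  # what the field produces

print(f"Simulated MTBF (modeled modes only): {sim_only:6.0f} hours")
print(f"Field MTBF (with unmodeled mode):    {field:6.0f} hours")
```

Under these invented rates the simulation predicts roughly 300 hours between failures while the field yields roughly 100; no amount of replication within the simulation closes a gap caused by a mode the model does not contain.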
As another example, consider Table 5-1, which reports the mean time between operational mission failures for the command launch unit of the Javelin (a man-portable anti-tank missile). Note that as troop handling becomes more typical of field use, the mean time between operational mission failures decreases. It is therefore reasonable to conclude that the failure modes differ across the various test situations (granting that some were removed during the development process). However, since a simulation designed to incorporate reliability would most likely include failure modes typical of developmental testing rather than operational testing, such a simulation could never replace operational tests. The challenge, then, is to identify the most appropriate ways simulation can be used in concert with field tests in an overall cost-effective approach to testing.
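As a minimal sketch of how figures like those in Table 5-1 are commonly estimated, the code below assumes exponential interfailure times and a failure-truncated test plan, for which the standard chi-square interval for the mean time between failures applies. The test names echo the table, but the hours on test and failure counts are invented for illustration, not the report's data.

```python
from scipy.stats import chi2

# Hypothetical test records: (hours on test, observed operational mission
# failures). Invented numbers, arranged to show the decreasing-MTBF pattern.
records = {
    "Reliability Qualification Test": (3_000, 10),
    "Dirty Battlefield Test":         (3_000, 25),
    "Initial Operational Test":       (3_000, 40),
}

def mtbf_with_ci(hours, failures, conf=0.90):
    """Point estimate and two-sided CI for MTBF under an exponential model.
    For a failure-truncated test, 2*hours/MTBF ~ chi-square with 2*failures df."""
    alpha = 1 - conf
    point = hours / failures
    lower = 2 * hours / chi2.ppf(1 - alpha / 2, 2 * failures)
    upper = 2 * hours / chi2.ppf(alpha / 2, 2 * failures)
    return point, lower, upper

for name, (hours, failures) in records.items():
    point, lo, hi = mtbf_with_ci(hours, failures)
    print(f"{name:32s} MTBF = {point:5.0f} h  (90% CI {lo:4.0f} to {hi:4.0f} h)")
```

A comparison of this kind also shows why the differing failure modes matter: the exponential model pools all failures into a single rate per test situation, so a shift in the dominant failure mode between developmental and operational testing appears only as a change in the estimated rate, not as a change the model can anticipate.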
The panel is concerned that (1) rigorous validation of models and simulations for operational testing is infrequent, and external validation is at times used to overfit a model to field experience; (2) there is