The operational test design, or shot matrix, in the Test and Evaluation Master Plan (see Table 2-1) lists eight test events that vary according to such factors as range of engagement, target location error, logic of targeting software, type of tank formation, aimpoint, time of day, tank speed and spacing, and threat environment. Three levels are specified for the range of engagement: near minimum, near maximum, and a medium range specified as either “2/3s” or “ROC,” the required operational capability. Target location error has two levels: median and one standard deviation (“1 sigma”) above the median level. (The distinction between centralized and decentralized is unimportant in this context.) The logic of targeting software (primary vs. alternate) and type of tank formation (linear vs. dispersed) are combined into a single, two-level factor. Aimpoint distance is either short or long, and aimpoint direction is either left or right. The aimpoint factors are not expected to have an important effect on system performance. (The payload inventory is also unimportant in this context.) Tanks are expected to travel at lower speeds and in denser formations during night operations; therefore, tank speed and spacing are combined with time of day into a single two-level factor (day vs. night). Three different threat environments are possible: benign, Level 1, and Level 2 (most severe).
Clearly, in view of the limited sample size, many potentially influential factors are not represented in the shot matrix.
One approach to operational testing for the ATACMS/BAT system would be to design a large fractional factorial experiment for those factors thought to have the greatest influence on system performance. The number of effective replications can then be increased if the assumption that all of the included design factors are influential turns out to be incorrect. Assuming that the aimpoint factors are inactive, a complete factorial experiment for the ATACMS/BAT system would require 2³ × 3² = 72 design points. However, fractional factorial designs with two- and three-level factors could provide much information while using substantially fewer replications than a complete factorial design. Of course, these designs are less useful when higher-order interactions among factors are significant. (For a further discussion of factorial designs, see Appendix B, as well as Box and Hunter, 1961.)
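The counting argument above can be sketched directly. The following is a minimal illustration, not the actual test design: the coding of factors to ±1 and 0/1/2 levels, and the choice of defining relation, are our assumptions for the sake of example.

```python
from itertools import product

# Three two-level factors (software/formation, day vs. night, target
# location error) coded -1/+1; two three-level factors (range of
# engagement, threat environment) coded 0/1/2.
full = list(product([-1, 1], [-1, 1], [-1, 1], [0, 1, 2], [0, 1, 2]))
print(len(full))  # 2**3 * 3**2 = 72 design points

# A half-fraction of the 2^3 part via the defining relation I = ABC:
# retain only runs whose three two-level settings multiply to +1. This
# keeps all main effects estimable (each aliased with a two-factor
# interaction) at half the number of two-level combinations.
half = [run for run in full if run[0] * run[1] * run[2] == 1]
print(len(half))  # 36 design points
```

Even this simplest fraction halves the test burden; sharper fractions of the mixed 2- and 3-level design are possible but involve more delicate aliasing trade-offs.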
Another complication is that environment (or scenario) is a factor with more than two settings (levels). In the extreme, the ATACMS/BAT operational test results might be regarded as samples from several different populations representing test results from each environment. Since it will not be possible to evaluate the system separately in several unrelated settings, some consolidation of scenarios is needed. It is necessary to understand how to consolidate scenarios by identifying the underlying physical characteristics that have an impact on the performance measures, and to relate the performance of the system, possibly through use of a parametric model, to the underlying characteristics of those environments. This is essentially the issue discussed in Appendix C.
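One way such a parametric consolidation might look: index each environment by an underlying physical characteristic (here an invented severity score) and model performance as a smooth function of that score, rather than treating the environments as unrelated populations. All numbers below are hypothetical, for illustration only.

```python
import math

# Hypothetical data: each threat environment gets an assumed severity
# score x and an invented observed kill rate p.
data = {
    "benign":  (0.0, 0.85),
    "Level 1": (1.0, 0.70),
    "Level 2": (2.0, 0.50),
}

# Fit logit(p) = b0 + b1 * x by least squares on the logit scale.
xs = [x for x, _ in data.values()]
ys = [math.log(p / (1 - p)) for _, p in data.values()]
n = len(xs)
xbar, ybar = sum(xs) / n, sum(ys) / n
b1 = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
      / sum((x - xbar) ** 2 for x in xs))
b0 = ybar - b1 * xbar

# The fitted model lets us interpolate performance at an untested
# intermediate severity (x = 1.5), which per-environment analysis cannot do.
p_pred = 1 / (1 + math.exp(-(b0 + b1 * 1.5)))
```

The payoff is that all shots contribute to estimating two parameters, rather than each environment's shots estimating a separate, unconnected kill rate.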
While the above fractional factorial approach has the advantage of characterizing system performance equally well in each scenario, we can see some benefits of the current OPTEC approach if we assume that most interest centers on the “central” scenario, or the scenario of most interest. In the current OPTEC approach, the largest number of test units is allocated to this scenario, while the others are used to study one-factor-at-a-time perturbations around this scenario, such as going from day to night or from linear to dispersed formation. This approach could be well suited to gathering information on such issues while not losing too much efficiency at the scenario of most interest. And if it turns out that changing one or more factors has no effect, the information from these settings can be pooled to gain further efficiency at the scenario of most interest.
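The allocation and pooling logic described above can be made concrete. The scenario labels, shot counts, and choice of perturbed factors below are illustrative assumptions, not the actual OPTEC allocation.

```python
# Assumed central scenario and two one-factor-at-a-time perturbations.
central = {"time": "day", "formation": "linear", "threat": "Level 1"}
perturbations = [
    {**central, "time": "night"},          # day -> night
    {**central, "formation": "dispersed"}, # linear -> dispersed
]

# Hypothetical allocation: most shots at the central scenario, fewer at
# each perturbation (totals chosen to match the eight test events).
allocation = [(central, 4)] + [(p, 2) for p in perturbations]
total_shots = sum(n for _, n in allocation)

# If a perturbed factor later proves inactive (e.g., night operations
# perform no differently), its shots can be pooled with the central
# scenario, raising the effective sample there from 4 to 6.
effective_central = 4 + 2
```

The design thus hedges: it buys limited information on each perturbation up front, with the option of recovering most of that investment at the central scenario if the perturbations prove inconsequential.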