  • Provide flexibility for identifying new UGV requirements.

  • Be clearly aligned with an objective.

  • Be clearly defined, including the data that are to be collected for each metric.

  • Identify a clear cause and effect or audit trail for the data being collected and the technology being evaluated.

  • Minimize the number of variables being measured, with an identified process for deconflicting data that may be affected by more than one variable.

  • Identify nonquantifiable effects (e.g., leadership, training) and impacts of system wash-out (i.e., a technology’s individual performance is lost in the host system performance), and control (or reduce) them as much as possible.

  • Provide documentation of each measure (a minimal sketch of one such documented metric appears after this list).
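
To make these criteria concrete, the following is a minimal sketch, in Python, of how a single metric definition might be documented. All field names and example values are hypothetical illustrations of the criteria above, not an established Army or DOD schema.

    from dataclasses import dataclass, field

    # Minimal sketch of a documented UGV metric definition. Field names
    # are hypothetical illustrations of the criteria listed above.
    @dataclass
    class MetricDefinition:
        name: str              # the clearly defined metric
        objective: str         # the objective the metric aligns with
        issue: str             # the issue that generated the metric
        data_to_collect: list  # data elements recorded for each trial
        audit_trail: str       # cause-and-effect link from data to technology
        confounding_variables: list = field(default_factory=list)
        deconfliction_process: str = ""  # how multi-variable data are separated

    # Hypothetical example: a command-latency metric for a small-unit UGV.
    latency_metric = MetricDefinition(
        name="operator-to-vehicle command latency",
        objective="maintain command efficiency of the small unit leader",
        issue="impact of UGVs on timeliness of orders",
        data_to_collect=["command timestamp", "vehicle response timestamp"],
        audit_trail="latency log links each order to observed vehicle behavior",
        confounding_variables=["radio bandwidth", "operator workload"],
        deconfliction_process="hold bandwidth constant across trials",
    )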

Examples of UGV-specific metric-generating issues include:

  • Example operational issues

    • Command and control issues may include the impact of UGVs on the command efficiency (e.g., timeliness of orders, understanding of the enemy situation and intentions) of a small unit leader or of a unit commander and his staff.

      • What is the cognitive workload (e.g., all critical events observed, accuracy of orders) of a small unit leader or unit commander and staff?

      • What can an array of UGVs do that a single UGV cannot?

      • What information needs to be shared to support collaboration among systems (manned and unmanned)?

  • Example technical issues

    • Mobility issues such as those enumerated in the High Mobility Robotic Platform study (U.S. Army, 1998) will provide a basis for UGV mobility metrics. UGV-specific mobility issues, such as the mobility metrics used in the DARPA Tactical Mobile Robot program, should also be considered. The measures should be based on the specific applications being evaluated (e.g., logistics follower on structured roads, soldier robot over complex terrain).

    • Other supporting technology issues

      • How do data compression techniques impact UGV performance?

      • What is the optimal trade-off between local processing and bandwidth? (See the back-of-the-envelope sketch after this list.)

      • How much of a role should automatic target recognition (ATR) play in perception?
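
The local-processing-versus-bandwidth trade-off above lends itself to a back-of-the-envelope calculation. The Python sketch below compares the downlink bandwidth needed to stream raw video, compressed video, and locally generated target reports; every rate and ratio in it is an illustrative assumption, not a measured value.

    # Back-of-the-envelope comparison of three perception architectures.
    # All numbers are illustrative assumptions, not measured values.
    FRAME_W, FRAME_H, BYTES_PER_PIXEL, FPS = 640, 480, 3, 10

    raw_bps = FRAME_W * FRAME_H * BYTES_PER_PIXEL * 8 * FPS  # raw video stream
    compressed_bps = raw_bps / 50  # assume roughly 50:1 video compression
    reports_bps = 20 * 200 * 8     # assume 20 target reports/s, 200 bytes each

    for label, bps in [("raw video", raw_bps),
                       ("compressed video", compressed_bps),
                       ("on-board detection only", reports_bps)]:
        print(f"{label:>24}: {bps / 1e6:7.3f} Mbit/s")

The orders-of-magnitude spread (roughly 74 Mbit/s versus 1.5 Mbit/s versus 0.03 Mbit/s under these assumptions) shows the shape of the trade: pushing processing on board cuts bandwidth dramatically, at the cost of on-board computing and power and of denying the operator the raw imagery.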

There are many more issues and metrics that need to be defined for Army UGV assessments. The process must be to identify UGV objectives first, then the issues generated by each objective, then the hypotheses for each issue, and finally the measures needed to prove or disprove the hypotheses.
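
One way to keep this top-down derivation explicit and auditable is to record it as a hierarchy. The sketch below, whose example content is entirely invented, shows the intended chain from objective through issue and hypothesis to measures:

    # Hypothetical sketch of the objective -> issue -> hypothesis -> measure
    # chain; all example content is invented for illustration.
    assessment = {
        "objective": "UGVs reduce risk to dismounted soldiers",
        "issues": [{
            "issue": "Can a soldier-robot team clear a danger area "
                     "as fast as a manned team?",
            "hypotheses": [{
                "hypothesis": "A soldier-robot team clears a building in "
                              "no more time than a manned team of equal size",
                "measures": ["time to clear", "simulated casualties",
                             "fraction of threats detected"],
            }],
        }],
    }

    for issue in assessment["issues"]:
        for hyp in issue["hypotheses"]:
            print(f"{assessment['objective']} -> {issue['issue']} -> "
                  f"{hyp['hypothesis']}: measures {hyp['measures']}")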

A major goal going forward must be a science-based program for collecting data sufficient to develop predictive models of UGV performance. Such models would have immediate payoff in support of system engineering; they would also provide a sound basis for developing concepts of operation that reflect real vehicle capabilities and for establishing requirements for human operators, e.g., how many might be required in a given situation. Uncertainty in these last two areas remains a major impediment to eventual operational use.
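
As one illustration of the kind of predictive model intended, the sketch below applies the simple "fan-out" relation from the human-robot interaction literature, which estimates how many vehicles one operator can supervise from how long a vehicle can safely be neglected relative to how long each servicing interaction takes. The relation and the parameter values here are illustrative assumptions, not validated Army planning factors.

    import math

    def operators_required(num_ugvs, neglect_time_s, interaction_time_s):
        """Estimate operators needed from the simple fan-out relation:
        one operator can supervise roughly neglect/interaction + 1 vehicles.
        All parameter values are illustrative assumptions."""
        fan_out = neglect_time_s / interaction_time_s + 1
        return math.ceil(num_ugvs / fan_out)

    # Illustrative case: semi-autonomous UGVs that run 120 s between
    # interventions of 30 s each, fielded as a 10-vehicle section.
    print(operators_required(10, neglect_time_s=120, interaction_time_s=30))  # -> 2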

MODELING AND SIMULATION

Modeling and simulation (M&S) is an essential tool for analyzing and designing the complex technologies needed for UGVs. Much has been written on the use of simulations to aid in system design, analysis, and testing. The DOD has also developed a process for simulation-based acquisition. However, little work has been done to integrate models and simulations into the system engineering process to assess the impact of various technologies on system performance and life-cycle costs.

To fully realize the benefits of M&S, the use of M&S tools must begin in the conceptual design phase, where S&T initiatives have the most impact (Butler, 2002). For example, early in the conceptual design phase M&S can be used to evaluate a technology’s impact on the effectiveness of a UGV concept, determine whether all of the functional design specifications are met, and improve the manufacturability of a UGV. By using simulations in this fashion, S&T programs can support significant reductions in design cycle time and in the overall life-cycle cost of future UGVs. Simulations make it possible to perform experiments that cannot be realized in the real world because of physical, environmental, economic, or ethical restrictions. For this reason, they should play a central role in the methodology used to assess UGV concepts and designs.
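
As a minimal illustration of M&S in the conceptual design phase, the sketch below runs a toy Monte Carlo trade study of one design parameter (nominal sensor detection range) against one concept-level measure of effectiveness. The model, its 50-m standoff criterion, and its noise level are all invented for illustration.

    import random

    def p_mission_success(nominal_range_m, n_trials=10_000):
        """Toy Monte Carlo model: a scouting UGV succeeds if it detects the
        threat before closing inside a 50-m standoff. Detection distance is
        assumed to vary normally around the nominal sensor range."""
        successes = 0
        for _ in range(n_trials):
            detect_at = random.gauss(nominal_range_m, 0.2 * nominal_range_m)
            successes += detect_at > 50.0
        return successes / n_trials

    for rng in (50, 100, 200):
        print(f"nominal detection range {rng:>3} m: "
              f"P(success) ~ {p_mission_success(rng):.2f}")

Even a toy study like this shows how a simulation can rank design alternatives before any hardware exists.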

Just as with training, UGV system experimentation could be supported by any one, or a mix, of the following types of simulations:

  • Live simulations—real people operating real systems.

  • Virtual simulations—real people operating simulated systems.

  • Constructive simulations—simulated people operating simulated systems (note: real people stimulate, or make inputs to, such simulations but are not involved in determining the outcomes).


