The robot had a planner for path generation and obstacle avoidance. The perception system used an interest operator (Moravec, 1977) that selected features that were highly distinctive from their surroundings. It could track and match about 30 image features, creating a sparse range map for navigation. The robot required about 10 minutes for each 1-meter move, or about 5 hours to go 30 meters, using a 1-MIP processor. Outdoors, using the same algorithms, the Cart could go only about 15 meters before becoming confused, most frequently defeated by shadows that created false edges to track. Moravec’s work demonstrated the utility of a three-dimensional representation of a robot’s environment and further highlighted the difficulties of perception for outdoor mobility.
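The interest operator is simple enough to sketch. The following Python fragment is a minimal illustration of the idea, not Moravec’s actual implementation; the window size, shift set, and threshold are assumptions. Each pixel is scored by the minimum sum of squared differences between its local window and the same window shifted one pixel in several directions, so only patches that are distinctive in every direction (corner-like features) score highly:

```python
import numpy as np

def moravec_interest(image, window=4, threshold=500.0):
    """Return (row, col) coordinates of interest points in a grayscale image.

    The interest value at a pixel is the minimum of the sums of squared
    differences between its window and the same window shifted one pixel
    in four directions; only corner-like patches score highly.
    """
    img = image.astype(np.float64)
    h, w = img.shape
    half = window // 2
    interest = np.zeros_like(img)
    shifts = [(0, 1), (1, 0), (1, 1), (1, -1)]
    for r in range(half + 1, h - half - 1):
        for c in range(half + 1, w - half - 1):
            patch = img[r - half:r + half + 1, c - half:c + half + 1]
            ssds = []
            for dr, dc in shifts:
                shifted = img[r - half + dr:r + half + 1 + dr,
                              c - half + dc:c + half + 1 + dc]
                ssds.append(np.sum((patch - shifted) ** 2))
            interest[r, c] = min(ssds)
    # Keep local maxima above the threshold as interest points.
    points = []
    for r in range(1, h - 1):
        for c in range(1, w - 1):
            v = interest[r, c]
            if v > threshold and v == interest[r - 1:r + 2, c - 1:c + 2].max():
                points.append((r, c))
    return points
```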

By the end of 1981, the limited ventures into the real world had highlighted the difficulties posed by complex shapes, curved surfaces, clutter, uneven illumination, and shadows. With some understanding of the issues, research had begun to develop along several lines that laid the groundwork for the next generation of perception systems. This work included extensive research on image segmentation4 and classification, with particular emphasis on the use of color (Laws, 1982; Ohlander et al., 1978; Ohta, 1985); improved algorithms for edge detection5 (Canny, 1983); and the use of models for road following (Dickmanns and Zapp, 1986). At this time there was less work on the use of three-dimensional data. Although Shakey (with a rangefinder) and Moravec’s Cart (with stereo) had demonstrated the value of dense range data to path planning, stereo remained computationally prohibitive, and the range finders of the time, which required much less computation, were too slow.6

THE ALV ERA, 1984–1991

In 1983, DARPA launched an ambitious computing research effort, the Strategic Computing program. The program featured several integrating applications to focus the research projects and demonstrate their value to DARPA’s military customers. One of these applications was the Autonomous Land Vehicle (ALV),7 and it would achieve the next major advance in outdoor autonomous mobility.

The ALV had two qualitative objectives: (1) it had to operate exclusively outdoors, both on-road and off-road, across a spectrum of conditions, including improved and unimproved roads, variable illumination and shadows, obstacles both on- and off-road, and terrain of varying roughness and vegetative background; and (2) it had to be entirely self-contained, with all computing done onboard. Quantitatively, the Strategic Computing Plan (DARPA, 1983) called for the vehicle to operate on-road beginning in 1985 at speeds up to 10 km/h, increasing to 80 km/h by 1990, and off-road beginning in 1987 at speeds up to 5 km/h, increasing to 10 km/h by 1989. On-road obstacle avoidance was to begin in 1986 with fixed polyhedral objects spaced about 100 meters apart.

The ALV was an eight-wheeled all-terrain vehicle with 18 inches of ground clearance and good off-road mobility (Figure D-1). A fiberglass shelter large enough to enclose generators, computers, and other equipment was mounted on the chassis. The navigation system included an inertial land navigation system that provided heading and distance traveled, ultrasonic sensors that measured the pose of the vehicle with respect to the ground for correcting the scene-model geometry, and Doppler radars that measured ground speed on both sides of the ALV to correct for wheel-slippage errors in odometry.
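The role of the Doppler ground-speed measurement is easy to illustrate. The toy dead-reckoning step below is hypothetical: the fusion rule and the disagreement threshold are assumptions for illustration, not the ALV’s actual navigation logic. It simply prefers the radar speed over wheel odometry when slip makes the two disagree:

```python
import math

def dead_reckon_step(x, y, heading_rad, v_wheel, v_doppler, dt):
    """One dead-reckoning update of position (x, y).

    Prefers the Doppler ground speed when wheel odometry disagrees with
    it, since wheel slip makes odometry read high on loose terrain.
    The 0.1 m/s disagreement threshold is an arbitrary illustrative value.
    """
    v = v_doppler if abs(v_doppler - v_wheel) > 0.1 else v_wheel
    return (x + v * dt * math.cos(heading_rad),
            y + v * dt * math.sin(heading_rad))
```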

The ALV had two sensors for perception: a color video camera (512 (h) × 485 (v) pixels, 8 bits each of red, green, and blue) and a custom two-mirror laser scanner with a 30°-vertical by 80°-horizontal field of view that produced a 256 (h) × 64 (v) range image with about 3-inch resolution, a 20-meter range, a 2-Hz frame rate,8 and an instantaneous field of view of 5–8.5 mrad. The scanner also yielded a 256 × 64 reflectance image. Using optics and signal processing, the laser scanner produced in real time a depth map suitable for obstacle avoidance that was impractical to compute from stereo. The ALV was the first mobile robot to carry both of these sensors.
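For concreteness, the Python sketch below shows how a range image with the scanner’s stated field of view maps to three-dimensional points, assuming uniform angular sampling across that field of view (an assumption; the actual scanner geometry and vehicle mounting are not described here):

```python
import numpy as np

def range_image_to_points(rng, hfov_deg=80.0, vfov_deg=30.0):
    """Convert a range image (meters) to an Nx3 array of 3-D points.

    rng has shape (rows, cols), e.g. (64, 256) for the ALV scanner.
    Each pixel's azimuth/elevation is taken from its position in a
    uniformly sampled field of view; x is forward, y left, z up.
    """
    rows, cols = rng.shape
    az = np.deg2rad(np.linspace(-hfov_deg / 2, hfov_deg / 2, cols))
    el = np.deg2rad(np.linspace(-vfov_deg / 2, vfov_deg / 2, rows))
    az, el = np.meshgrid(az, el)          # both now shape (rows, cols)
    x = rng * np.cos(el) * np.cos(az)
    y = rng * np.cos(el) * np.sin(az)
    z = rng * np.sin(el)
    return np.stack([x.ravel(), y.ravel(), z.ravel()], axis=1)
```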

The first algorithm used for road following (Turk et al., 1988) projected the red, green, blue (RGB) color space into two dimensions and used a linear discriminant function to separate road from nonroad pixels in the image. The algorithm was fast, but it had to be tuned by hand for different roads and different conditions. Using this algorithm, the ALV made a 1-km traverse in 1985 at an average speed of 3 km/h,
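A minimal sketch of this style of classifier follows. The particular two-dimensional projection (chromaticity) and the Fisher discriminant fit are assumptions for illustration, not the exact formulation of Turk et al.; the hand-labeled training pixels correspond to the per-road, per-condition tuning the text mentions:

```python
import numpy as np

def classify_road(rgb, road_samples, nonroad_samples):
    """Label each pixel road/nonroad with a linear discriminant applied
    in a 2-D projection of RGB (here, (r, g) chromaticity)."""
    def project(px):
        px = px.astype(np.float64)
        s = px.sum(axis=-1, keepdims=True) + 1e-9
        return (px / s)[..., :2]          # 3-D RGB -> 2-D chromaticity

    # Fit Fisher's linear discriminant from hand-labeled training pixels.
    a = project(road_samples).reshape(-1, 2)
    b = project(nonroad_samples).reshape(-1, 2)
    sw = np.cov(a.T) * (len(a) - 1) + np.cov(b.T) * (len(b) - 1)
    w = np.linalg.solve(sw, a.mean(0) - b.mean(0))
    bias = w @ (a.mean(0) + b.mean(0)) / 2.0
    return project(rgb) @ w > bias        # True where the pixel looks like road
```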

4. One of the fundamental steps in image understanding is to segment, or partition, a scene into a set of regions, each homogeneous with respect to a set of properties, that correspond to semantically meaningful entities in the scene. Classification (sometimes called interpretation) can be thought of as labeling those regions. The division between the two is fuzzy: depending on how segmentation is done, the desired degree of classification might be obtained as part of segmentation, or additional processing may be required. For example, three-dimensional information could be used with geometric criteria to segment objects into obstacle-like or not; classification by their characteristics (e.g., a rock or a bush) would then be required to determine whether the objects are traversable. Segmentation and classification can be based on color (all the way to full spectra), ratios of colors, edges, geometry, texture, or other features, alone or in combination.
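As a concrete illustration of the geometric criterion described above, the Python sketch below flags map cells whose vertical extent exceeds a step threshold; the grid cell size and 0.3-meter threshold are illustrative assumptions, not values from any actual system:

```python
import numpy as np

def obstacle_cells(points, cell=0.25, max_step=0.3):
    """Flag grid cells whose vertical extent exceeds the step a vehicle
    can climb -- a purely geometric 'obstacle-like or not' criterion.

    points: Nx3 array of (x, y, z) in meters.
    Returns the set of (i, j) grid-cell indices flagged as obstacle-like.
    """
    keys = [tuple(k) for k in np.floor(points[:, :2] / cell).astype(int)]
    span = {}
    for key, z in zip(keys, points[:, 2]):
        lo, hi = span.get(key, (z, z))
        span[key] = (min(lo, z), max(hi, z))
    return {key for key, (lo, hi) in span.items() if hi - lo > max_step}
```

Deciding whether a flagged region is a rock (not traversable) or a bush (perhaps traversable) is the classification step, which geometry alone cannot supply.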

5. Edges worked well to segment the prismatic objects of the “blocks world” and were useful, but not sufficient, for natural scenes. The edge-finding algorithms developed for the straight edges and controlled illumination of the stylized, less complex blocks world were not appropriate for natural scenes, which contained curved shapes and offered many opportunities for false alarms caused by changes in illumination. For example, the edges of shadows and sunlit spots are typically much stronger than those of roads.

6. The range finder had to be repositioned to acquire each point in turn, which made data acquisition slow.

7. Martin Marietta was the integrator. Participants included SRI, CMU, the University of Maryland, Advanced Decision Systems, Hughes Research Laboratories, the U.S. Army Engineer Topographic Laboratories, and ERIM.

8. This solved the slow data-acquisition problem of earlier range finders. A laser scanner, sometimes referred to as a LADAR (laser detection and ranging), is a scanning laser range finder.

