Chapter 16: Diverse Geospatial Information Integration | Data for Science and Society: The Second National Conference on Scientific and Technical Data | U.S. National Committee for CODATA | National Research Council

U.S. National Committee for CODATA
National Research Council
Promoting Data Applications for Science and Society: Technological Challenges and Opportunities
 


16

Diverse Geospatial Information Integration

Daniel Gordon




     It's a pleasure to be here. I am president of Autometric, Inc., which is a visualization company that specializes in handling geospatial data sets. Autometric has been around for 42 years, and we were involved in the very early space-based imaging systems, remote sensing, photogrammetry, and tying the imagery to Earth so we could extract the coordinates from it. In the 1960s, we were also involved in mapping the Moon for the Apollo missions to create the three-dimensional data sets that supported the lunar landings.

     I am going to talk about some issues associated with bringing together diverse geospatial data sets.1 I will frame this presentation in the way that we do it at Autometric, which is in the context of current and future industry trends. I would like to start at the beginning, from our point of view, which is the creation of an underlying geometric database. At a global scale you would have a model of Earth. The global model of the Earth requires ground elevation information. I think it's exciting that the recent shuttle mission collected a fantastic three-dimensional data set of the whole Earth using radar imagery (synthetic aperture radar), especially if you think of the attributes associated with that model, the imagery and the scale. We start by creating an underlying infrastructure that is highly attributed and has lots of "geometry" like the radar data. Once the global model information foundation is established, then we get ready to start the clock to show change with time. And in the course of starting the clock, we have to understand what the "objects" (or information) are going to do next, and so we have to begin to think about their behavior.

     There are two categories that we work with in the course of trying to bring different sets of information together. The first would be connections to real-time data, where we are just monitoring objects as they exist. The second would be connections to models and other tools that predict impacts, future states, and so forth. In the National Oceanic and Atmospheric Administration (NOAA) area, the objects we're talking about modeling are gridded data fields that may represent forecasts of tomorrow's weather or changes in the ocean currents, for example.
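The gridded data fields mentioned above can be sampled at arbitrary points, which is what a visualization system does when draping a forecast over terrain. Below is a minimal, illustrative sketch of bilinear interpolation on such a grid; the grid values and function name are invented for this example and are not from Autometric's software.

```python
def bilinear(grid, x, y):
    """Sample a 2-D gridded field (e.g., a temperature forecast) at a
    fractional grid coordinate (x, y) using bilinear interpolation."""
    x0, y0 = int(x), int(y)          # cell's upper-left grid indices
    fx, fy = x - x0, y - y0          # fractional position within the cell
    top = grid[y0][x0] * (1 - fx) + grid[y0][x0 + 1] * fx
    bot = grid[y0 + 1][x0] * (1 - fx) + grid[y0 + 1][x0 + 1] * fx
    return top * (1 - fy) + bot * fy

# A tiny 2x2 forecast grid of temperatures (deg C); values are invented.
forecast = [[10.0, 12.0],
            [14.0, 16.0]]
value = bilinear(forecast, 0.5, 0.5)  # sample at the center of the cell
```

The same interpolation applies whether the field holds tomorrow's temperatures or ocean-current components; only the grid contents change.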

     In each industry segment, there are organizations that attempt to define standards. The Open Geographic Information Systems (GIS) Consortium is trying to define methods for handling the underlying data. In the area of the real-time data feeds, there are Department of Defense and commercial communications standards. The issue that we try to address at Autometric is not necessarily associated just with handling information in a particular discipline, but with what we have to do to begin to solve larger problems that span disciplines.

     For example, Autometric had a program that focused on supporting modeling and simulation of satellite systems. We had customers who wanted to look at Earth as a whole, and customers who wanted to put satellites in motion around Earth. The interesting thing that happened, which caused us to go through a period of fantastic growth, is that even though these customer bases were extremely different and did not communicate with one another, they were pulling us in the same direction. That direction was to integrate dissimilar types of information to provide a more complete and comprehensive picture.

     The folks that were looking at Earth as a whole and doing space modeling simulation were interested in increasing the resolution of Earth, adding high-resolution mountain ranges and even coarse models of urban areas to get a context of the size and location of buildings. They wanted to add higher-resolution data to the very basic Earth model that they had. The people that were flying around valleys were interested in taking a step back and looking at areas as large as countries, hundreds of miles on a side, but they were also interested in understanding where the satellite systems they needed to access were located in order to communicate with them. They were fundamentally more interested in the regional scale, but they also were moving in the direction where they wanted to think globally.

     We had two completely separate customer bases that wanted the same thing. They wanted a high-resolution model of Earth. They wanted to be able to move around it interactively, and they also wanted to be able to do modeling and simulation in those environments.

     So what was the "integrator" or the "context" that we found to bring the information together? If we have diverse geospatial information, or information from many different sources, we need a common environment to bring these things together. We forced the integration of these two capabilities and created what we call a whole-Earth environment. Once you have the whole-Earth context to work in at increasing levels of resolution, everybody wants to move objects around in it when the clock starts. The key thing was to create an open interface that people could access either to inject real-time data into or to connect to their models.

     We started the effort in the early 1990s and, at that time, did not have the advantage of things that Microsoft has created now or that are widely used, such as component technology. The component technology has enabled us to make it very easy for people who have their own data, their own models, and their own real-time feeds to plug into this environment where the integration occurs.
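The open interface described above, where real-time feeds and predictive models plug into one shared environment, can be sketched roughly as follows. This is a hypothetical illustration in Python, not Autometric's actual component interface; all class and method names are invented.

```python
from abc import ABC, abstractmethod

class DataFeed(ABC):
    """Anything that can supply time-stamped object states to the shared
    whole-Earth environment: a real-time sensor feed or a predictive model."""

    @abstractmethod
    def poll(self, t):
        """Return a dict of object id -> (lat, lon, alt_m) at time t."""

class WholeEarthEnvironment:
    """The integration point: register feeds, then advance the clock."""

    def __init__(self):
        self.feeds = []
        self.objects = {}

    def register(self, feed):
        self.feeds.append(feed)

    def tick(self, t):
        # Merge every feed's current view into one shared picture.
        for feed in self.feeds:
            self.objects.update(feed.poll(t))
        return self.objects

class OrbitModel(DataFeed):
    """Toy predictive model: one satellite circling the equator at a
    constant 4 degrees of longitude per time unit."""
    def poll(self, t):
        return {"sat-1": (0.0, (t * 4.0) % 360.0 - 180.0, 800_000.0)}

env = WholeEarthEnvironment()
env.register(OrbitModel())
state = env.tick(10.0)
```

The point of the abstract base class is the same as the component technology described in the talk: anyone with their own model or feed implements one small interface and the environment handles the integration.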

     Another line of thought that we tried to follow is one of scale: global, regional, local, and personal scales. The change in scale also has an effect on the timeliness of the data, or the requirement for timeliness. For real-time data, obviously you want access to them as soon as possible. However, consider imagery on a global scale: if you are looking at 1-kilometer or 4-kilometer data, or even the old Multispectral Scanner data on the Landsats, the images are not going to change very much from one to the next. The data will change with the seasons, but you cannot pick up things that people are doing in the landscape. With the launch of the Ikonos satellite, 1-meter data are becoming available, and in them you see many changes associated with natural variability but also with human impacts, from construction, for example. In other words, the "scales of change" you can observe depend on your viewing altitude and on the technology of the sensor.
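The trade-off between scale and timeliness is easy to see with a back-of-envelope calculation: each thousandfold improvement in ground resolution multiplies the pixel count for the same coverage by a million. The sketch below is purely illustrative; the land-area figure is approximate and the function name is invented.

```python
def pixels_for_earth(gsd_m):
    """Approximate pixel count to image Earth's land surface once at a
    given ground sample distance (GSD) in meters."""
    land_area_m2 = 149e12  # roughly 149 million km^2 of land (approximate)
    return land_area_m2 / (gsd_m ** 2)

coarse = pixels_for_earth(1000)  # 1-km global data (AVHRR-class)
fine = pixels_for_earth(1)       # 1-m data (Ikonos-class)
ratio = fine / coarse            # a factor of one million more pixels
```

That million-to-one ratio is why 1-meter data cannot be refreshed globally anywhere near as often as 1-kilometer data.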

     These differences in scale also represent areas of the industry that have their own standards associated with them. A local scale, in which you are moving around urban environments, is usually associated with organizations that are concerned about real-time movement. Some good examples are flight models, perspective viewing, and transportation or logistics problems. In the course of trying to bring together these different scales in a global environment, with connections to models and to real-time data feeds, the issue really isn't so much at an individual discipline level, but working between disciplines.

     A key issue related to data is timely availability and perishability. We are very excited about the launch of the new satellite systems, although this complicates things. The new imagery affects not only resolution and viewing detail, but also the timeliness of the data: you are not going to get a lot of 1-meter data that are updated frequently enough to show changes. We are also working a great deal on the Web with a number of companies. One in particular is ORBIMAGE, which has a site called Terraserver.com, where a lot of Russian data are made available.

     Another key data issue is accessibility and the need to provide continuous access to data. Accessibility is increasing, but it is not the issue we focus on most. We are more concerned with the common complaint that there are too many data. Our focus is on two questions: What are we going to do with the data once large volumes of them are available, and can we get access to them?

     Finally, two other issues related to data integration are defining data format standards and providing better data compression and communications. It is important that standards for data exchange be established. Autometric has focused on using self-defining data sets, which allow users to read the data based on information in the file headers. These self-defining formats make data of various sorts more easily accessible and allow users to establish the information structures that are most convenient for them. Beyond format, data volume is one of the most difficult problems to solve, because the data must be transported from one user to another and then accessed. Today's primary way of dealing with volume is new hardware that is faster or has better communication or transmission schemes. Autometric faces this issue daily, since imagery is one of the largest kinds of data sets to handle and is the mainstay of our core remote sensing business; the images provide the focus for our visual displays. We therefore support the development of better lossy and lossless data compression techniques to provide more compact data transfers and improved data archives.
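The self-defining idea can be sketched as a file whose header describes its own payload, so a reader needs no external schema. The record layout below (a length-prefixed JSON header followed by raw doubles) is invented for illustration and is not Autometric's actual format.

```python
import io
import json
import struct

def write_self_defining(stream, name, values):
    """Write a tiny self-defining record: a JSON header describing the
    payload (name, element count, element type), then the raw payload."""
    header = json.dumps({"name": name, "count": len(values),
                         "dtype": "float64"}).encode("utf-8")
    stream.write(struct.pack(">I", len(header)))   # 4-byte header length
    stream.write(header)                           # self-describing header
    stream.write(struct.pack(f">{len(values)}d", *values))

def read_self_defining(stream):
    """Read a record using only the information in its own header."""
    (hlen,) = struct.unpack(">I", stream.read(4))
    header = json.loads(stream.read(hlen))
    payload = struct.unpack(f">{header['count']}d",
                            stream.read(8 * header["count"]))
    return header["name"], list(payload)

buf = io.BytesIO()
write_self_defining(buf, "elevations_m", [120.5, 98.0, 143.25])
buf.seek(0)
name, data = read_self_defining(buf)
```

Because the header travels with the data, any consumer can interpret the bytes without prior agreement beyond the framing convention itself, which is the accessibility benefit the talk describes.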

     Autometric's focus has always been on visualization and therefore on the fidelity of the visualization. We are not as interested in providing real-time movement through these high-resolution data sets. Our concern is more to give someone a look at the highest-quality data. More often than not, we get in trouble when we try to throw away data as a result of compression, whether they involve imagery or geometrical polygonal data. Consequently, our interest in better lossless compression techniques is high on our list of desired capabilities.
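The lossless requirement stated above is simple to demonstrate: a lossless round trip must return every byte exactly. A minimal sketch using Python's standard-library zlib, with a synthetic "image row" invented for the example:

```python
import zlib

# A synthetic image row: regular, smooth data compresses well losslessly.
row = bytes(i % 16 for i in range(4096))

compressed = zlib.compress(row, level=9)
restored = zlib.decompress(compressed)

# Lossless: every byte comes back exactly. A lossy scheme would trade
# this guarantee for a smaller file, which is what a fidelity-focused
# visualization pipeline wants to avoid.
assert restored == row
ratio = len(row) / len(compressed)   # compression ratio achieved
```

For imagery with less regularity than this synthetic row, lossless ratios are far more modest, which is why the talk treats better lossless techniques as a research priority rather than a solved problem.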

     So how do we provide an integrated environment to support visualization and analysis? Everything that we do starts with the whole-Earth environment. It is a multiresolution environment that supports global, regional, and local analysis, and we are interested in the standards being formed in different disciplines. For example, we support forums like the Open GIS Consortium. But we are more interested in extending the focus of that organization, the one we know best, into areas that at first glance might be viewed as outside its domain. For instance, how do we get the consortium to think more about temporal factors? The GIS industry is descended from mapmakers, so very little thought has been given to time and to modeling the behavior of objects moving around in a rich GIS environment. It is not clear how to attack the problem, but since we know the GIS community best, that is where we will try to expand the focus. We want to provide an analysis capability that supports data coming from different disciplines.
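One common way to organize such a multiresolution whole-Earth model (a standard technique, not necessarily the one Autometric used) is a quadtree of tiles: level 0 covers the whole Earth, and each level quarters its parent's tiles, so a regional view and its local-scale detail share an address prefix. A minimal sketch with an invented addressing function:

```python
def tile_for(lat, lon, level):
    """Map a (lat, lon) in degrees to a quadtree tile address at the given
    level, where level 0 is the whole Earth in a single tile. Each entry
    in the address is a quadrant number 0-3."""
    x = (lon + 180.0) / 360.0      # normalize longitude to [0, 1)
    y = (90.0 - lat) / 180.0       # normalize latitude, north at top
    address = []
    for _ in range(level):
        xbit, x = int(x * 2), (x * 2) % 1.0   # which half, then recurse
        ybit, y = int(y * 2), (y * 2) % 1.0
        address.append(ybit * 2 + xbit)       # quadrant within the parent
    return address

# Deeper levels refine the same point: a coarse regional address is the
# prefix of the finer local address for the same location.
region = tile_for(38.9, -77.0, 3)   # Washington, D.C. area, coarse
local = tile_for(38.9, -77.0, 6)    # same point, three levels finer
```

The prefix property is what lets a viewer stream only the tiles it needs as the viewpoint descends from global to local scale.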

     We are going to run a short video that demonstrates Autometric's visual computing capability. The interesting thing about the video is that there is no magic. Everything you see is available in our commercial products and is shipped to interested customers. [Video shown at this point.]

     The video showed the analysis of imagery to generate two-dimensional and three-dimensional information. The whole-Earth concept was portrayed with satellites moving about Earth. The notion of changing scales was captured as the three-dimensional view slowly approached Earth and increasing levels of detail appeared in the imagery. Weather forecast models from NOAA were analyzed and animated to show how current and future weather conditions could be displayed, and three-dimensional clouds created from NOAA weather satellite imagery were shown animating over time. As the altitude of the viewpoint over Earth continued to decrease, the video showed scenes in which the viewer was skimming across mountains and valleys in extremely good detail.

     I think it's interesting to note that the video has features that looked like the modeling of a gridded data field that you've seen in weather and environmental applications. You also saw modeling of communication systems from a different industry. You were moving interactively around a three-dimensional environment, which is representative of what you have seen in a visual simulation application. You also saw very rich geographic data sets, which you would see in a GIS system. All of these things are beginning to be brought together now, and for us the complication isn't so much what's going on in GIS, but how we bring all of it together. We are in the business of supporting decision making in many different ways, and data fusion is critical to providing information quickly and in context. This is consistent with our corporate motto, which expresses our basic objective: Autometric is "changing the way you view the world."



Notes

1 For additional information, see D. Gordon, A. Powell, and P. Zuzolo, "Diverse Geospatial Information Integration," paper submitted to the U.S. National Committee for CODATA's Conference on Data for Science and Society, March 13-14, 2000, Washington, D.C.



Copyright 2001 the National Academy of Sciences
