Virtual Reality: Scientific and Technological Challenges

10 Networking and Communications

Computer networks offer real-time interaction among people and processes without regard to their location. This capability, coupled with virtual environments (VE), makes telepresence applications, distance learning, distributed simulation, and group entertainment possible. It has been suggested that the most promising use of VE and networks will be in applications in which people at different locations need to jointly discuss a three-dimensional object, such as radiologists using a VE representation of a CT (computer-assisted tomography) scan (Pausch, 1991) or aeronautical engineers using a distributed virtual wind tunnel (Bryson and Levit, 1992).

Another exciting concept is that of virtual libraries, like the one being developed at the Microelectronics Center of North Carolina. This project, Cyberlib, will allow patrons to enter "the information space independently" or go to a "virtual reference desk" from anywhere across the United States via the Internet (Johnson, 1992). We already have virtual newspapers: the San Jose Mercury News publishes its entire text (including classified advertisements) via the America Online service, using graphically based software for Macintosh and IBM personal computers.

VE and high-speed networks are the tools that will allow us to explore Mars and the earth's oceans. The National Aeronautics and Space Administration's Ames Research Center is examining the use of VE to control robot explorers. Scientists at the Monterey Bay Research Institute are integrating such diverse technologies as computer simulations, robotics, and VE with a sophisticated undersea local-area network to explore the nation's newest marine sanctuary.
The Department of Defense's Advanced Research Projects Agency (ARPA) has also recognized the importance of networks with regard to VEs in one of its seven science and technology thrusts. Since 1984 ARPA has funded the simulation network (SIMNET) system, which enables simultaneous participation of approximately 300 players in simulated aircraft and ground combat vehicles, located in Europe and the United States, on the same virtual battlefield. ARPA is currently working on a much larger technology demonstration to "create a synthetic theater of war" using the Defense Simulation Internet (DSI). This network will link thousands of real and simulated forces all across the United States (Reddy, 1992).

It is anticipated that, in the future, high-speed networks will allow VE systems to take advantage of distributed resources, including shared databases, multimedia sources, and processors, thereby providing the computational power required for building the most demanding applications. High-speed networks will also give VE applications access to huge datasets generated by space probes, dynamic climatic information from weather models, and real-time imaging systems such as ultrasound.

One less serious but rather lucrative combination of VE and networking that is currently in use is the multi-user interactive VE games offered by the GEnie and Sierra Online services. These games are provided over slow telephone lines with limited graphics. Other cooperative arrangements are planned between VE and cable television; examples include the promise of multi-user games by Sega and Nintendo and the development of virtual Wal-Mart department stores by the Home Shopping Network.

STATUS OF THE TECHNOLOGY

Distributed VE will require enormous bandwidth to support multiple users, video, audio, and possibly the exchange of three-dimensional graphic primitives and models in real time.
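A back-of-envelope sketch shows the scale of the problem. The player count, update rate, and update size below are illustrative assumptions (roughly SIMNET-scale state traffic), not measurements of any deployed system:

```python
# Aggregate network load if every player periodically broadcasts its state
# onto a shared medium. All workload figures are assumed for illustration.

def aggregate_bandwidth_bits(players, updates_per_sec, update_bytes):
    """Total offered load, in bit/s, on a shared broadcast segment."""
    return players * updates_per_sec * update_bytes * 8

# Assumed workload: 300 players, 15 state updates/s, 144-byte updates.
load = aggregate_bandwidth_bits(300, 15, 144)
print(f"{load / 1e6:.3f} Mbit/s")   # 5.184 Mbit/s
```

Even this modest state traffic consumes half of a 10 Mbit/s Ethernet before any video, audio, or model exchange is added.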
Moreover, new protocols and techniques are required to handle the mix of data appropriately over a network link. The technologies providing these gains in performance blur the traditional distinction between local-area and wide-area networks (LANs and WANs). There is also a convergence between networks that have traditionally carried only voice and video over point-to-point links (circuit switching) and those that have handled only data such as electronic mail and file transfers (packet switching).

Wide-Area Networks

The fabric of our national telecommunications infrastructure is being radically altered by the rapid installation of fiber optic cabling capable of
operating at gigabit speeds for long-haul traffic. The change has come so fast that AT&T wrote off $3 billion of equipment in a single year while replacing its analog plant with digital systems. Long-distance carriers have been quietly installing synchronous optical network (Sonet) switches to support those speeds. Sonet is a U.S. and international standard for optical signals; it defines the synchronous frame structure for multiplexed digital traffic and the operating procedures for fiber optic switching and transmission systems. Sonet allows lower-speed channels such as OC-3 (155 Mbit/s) to be inserted into and extracted from the main rate. Sonet also defines data transmission speeds to 2.4 Gbit/s, and the major carriers believe that this can be extended to 10 Gbit/s for a single fiber link (Ballart and Ching, 1989). It is likely that data rates will go much higher in the next century. Japan's telephone company, NTT, has announced transmission of a 20 Gbit/s data stream over 600 miles of fiber, and it is working to increase throughput to 100 Gbit/s using soliton technology. Both MCI and Sprint have announced that they will have Sonet completely deployed by 1995.

The new switches will also incorporate asynchronous transfer mode (ATM) technology. ATM (2.4 Gbit/s) provides fast variable-rate packet switching using fixed-length 53-byte cells. This permits ATM networks to carry both asynchronous and isochronous (video and voice) transmissions at Sonet speeds. ATM also supports multicasting, and the Consultative Committee on International Telegraph and Telephone (CCITT) has written a standard for interfacing ATM with Sonet. AT&T has announced that it will provide WAN ATM services in 1994.

Two other high-speed services are being offered today: switched multimegabit data service (SMDS) and frame relay.
SMDS, based on the Institute of Electrical and Electronics Engineers (IEEE) 802.6 MAN standard, is connectionless, uses frames and fixed-length cells, and offers speeds up to 34 Mbit/s, with plans to upgrade to 155 Mbit/s. It is currently offered only in local metropolitan areas. Frame relay is connection-oriented (dial-up) and offers speeds up to 1.544 Mbit/s. Although neither of these services is considered well suited for voice or video applications, they are likely to reduce the data transmission cost of WAN services.

The major carriers are not alone in this effort to push wide-area networking to faster speeds. The backbone of the Internet, NSFnet, has been completely upgraded to T-3 (45 Mbit/s) and will transition to OC-3 (155 Mbit/s) by 1994. The backbone rate is expected to go to OC-12 (622 Mbit/s) by 1996. The National Science Foundation is responsible for this effort as part of the overall National Research and Education Network (NREN) project, which is one of the four components in the U.S. High Performance
Computing and Communications (HPCC) Program, established by Vice President Gore. Part of the project involves the installation of OC-12 networks at several regional test beds: Aurora, Casa, Blanca, Nectar, and Vistanet. These test beds will be pursuing "Grand Challenge" applications ranging from medical imaging to interactive visualization using ATM, SMDS, and Sonet technologies (Johnson, 1992). The Internet Engineering Task Force, foreseeing future increases in WAN speeds, has been conducting experiments over the Internet multicast backbone network to develop a standard for sending multimedia applications over packet-switching networks. The Real-Time Transport Protocol that is currently being tested supports packet video and audio and could be used with ATM as it becomes more pervasive.

At the local loop (the telephone line between the central office and customers), intense competitive pressures in the cable and telephone industries are spurring the development of new technologies that allow the currently installed copper lines to operate at megabit speeds without expensive repeaters. The high-bit-rate digital subscriber loop is an encoding scheme being used now to deliver duplex T1 service. Another scheme that is in the trial stage, the asymmetric digital subscriber loop (ADSL), provides 1.5 Mbit/s in one direction and 16 kbit/s in the other. With the use of new compression standards such as those of the Consultative Committee on International Telegraph and Telephone (H.261—P × 64) and the Moving Picture Experts Group (MPEG), ADSL-II, a follow-on technology with 3 to 4 Mbit/s transport capability, could carry real-time video, audio, and VE data (Hsing, 1993).

The rewiring of the local loop has also begun. Tele-Communications Inc. (TCI), the nation's largest cable company, has announced that it intends to upgrade by 1996 the broadband lines to more than 90 percent of its customers with fiber.
TCI is doing this in order to support increased channel capacity, high-definition television (HDTV—which, when uncompressed, requires 1.2 Gbit/s bandwidth), and VE services such as games from Sega/Genesis. TCI also will try to counter the threat of the telephone companies entering the lucrative market. AT&T has had a test bed for developing the fiber optic local loop for several years near Pittsburgh. Bell Atlantic, a regional phone carrier, is conducting tests in its employees' homes of a system that delivers movies over the telephone line. The Regional Bell Operating Companies recently proposed to the Clinton administration a plan to rewire the local loop with fiber optic cable within 10 years in exchange for permission to enter the information services market and manufacture telecommunications equipment. A bill that failed to pass Congress in 1993 would have permitted the regional Bell operating companies to enter the cable business.
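The gap between the uncompressed HDTV rate quoted above (1.2 Gbit/s) and ADSL-II's 3 to 4 Mbit/s makes the role of compression concrete. The arithmetic below uses only the figures quoted in the text:

```python
# Compression ratio needed to fit uncompressed HDTV onto an ADSL-II line.
# Both rates are the figures quoted in the chapter.

hdtv_uncompressed_bps = 1.2e9   # uncompressed HDTV, bit/s
adsl2_bps = 4e6                 # upper end of ADSL-II's 3-4 Mbit/s range

ratio = hdtv_uncompressed_bps / adsl2_bps
print(f"{ratio:.0f}:1 compression required")   # 300:1
```

At the 3 Mbit/s low end the required ratio rises to 400:1, which is why compression standards such as MPEG are essential to these plans.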
Local-Area Networks

The connection from a three-dimensional graphics workstation to high-speed WANs is most likely to come from a LAN. (Table 10-1 presents LAN capabilities.) Most LANs use Ethernet (10 Mbit/s), which is inadequate for the high-performance demands of VE and multimedia. Several companies have endorsed proposed standards for 100 Mbit/s Ethernet. In addition, through the use of switching hubs (which are collapsed backbones with gigabit-speed backplanes), a workstation can use all the bandwidth available on an Ethernet segment.

FDDI (Fiber Distributed Data Interface—100 Mbit/s) is used extensively in supercomputer centers. However, most host interfaces operate in the 20-50 Mbit/s range. A new standard for FDDI over unshielded twisted pair wiring should make FDDI more affordable for general computing. Unfortunately, neither FDDI nor Ethernet is ideal for isochronous data, because the protocols provide no guaranteed data rate or prioritization. The American National Standards Institute (ANSI) has developed FDDI-II to address this problem by dynamically allocating bandwidth to isochronous applications. ANSI is also working on FDDI Follow-On, with completion planned for the middle of the decade; it is likely to be designed for speeds up to 1.25 Gbit/s. In addition, the IEEE has established a working group that issued a final draft of a standard, the Integrated Services LAN Interface, which defines a LAN that carries voice, data, and video traffic over unshielded twisted pair wires.

The high-performance parallel interface (HPPI) is an ANSI standard supporting 32- and 64-bit interfaces that run at rates of 800 and 1,600 Mbit/s, respectively. It is a switched architecture and operates over a distance of 25 m on copper cables connecting supercomputers and their peripheral devices.
A serial version of HPPI for fiber optic cables has been proposed to extend the range to 10 km.

TABLE 10-1 Local Area Network (LAN) Capabilities

LAN Technology   Capacity (Mbit/s)   Year of Final Standard   Status
Ethernet         10                  1985                     In use
FDDI             100                 1989                     In use
HPPI             800/1,600           1992                     In use
Fiber Channel    132.8-1,064.2       1993                     Some products available
ATM              45-622              1993                     Some products available
FDDI-FO          1,250               Approx. 1995             Not available

The NREN Casa test bed researchers from
Los Alamos, the California Institute of Technology, the Jet Propulsion Laboratory, the San Diego Supercomputing Center, and the University of California, Los Angeles, are developing HPPI-Sonet interfaces to connect supercomputers over multiple OC-3 circuits, providing 1.2 to 2.5 Gbit/s bandwidth (Catlett, 1992).

Fiber Channel is a proposed ANSI standard for very-high-speed LANs. It is designed to connect more than 4,000 computers and peripherals over several kilometers at data rates up to 1,062.4 Mbit/s. Fiber Channel will provide a number of upper-layer network services that HPPI does not, and it has the backing of IBM and Sun Microsystems. Another proposed standard, the Scalable Coherent Interface, has a potential speed of 8 Gbit/s (Catlett, 1992).

ATM has also been deployed for local-area networks. Several vendors, including Fore Systems Inc. and Adaptive Corp., are selling ATM switches for LANs. Fore Systems sells interface cards for SGI, DEC, and Sun workstations; each workstation is linked via fiber optic cable to a switch at 140 Mbit/s. The Aurora and Nectar test beds are investigating the use of ATM host interfaces for supercomputers (Catlett, 1992). The allure of ATM is that it might eliminate the distinction between wide-area and local-area networks, providing high-speed connectivity from desktops across the United States.

Issues to be Addressed

Despite the apparent momentum in the development of network technology, there are still many problems that impede its use for VE. First, the hardware technology is emerging so rapidly that standards are in flux. For example, ATM is not completely specified because the standards bodies representing the system users and developers have not agreed on all the protocol requirements. How ATM maps to the upper-layer software protocols for transport services and routing is not yet clear (Cavanaugh and Salo, 1992).
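ATM's fixed cell size makes its framing cost easy to quantify. The sketch below assumes the basic 53-byte cell with a 5-byte header and deliberately ignores adaptation-layer trailers and padding details:

```python
import math

CELL_SIZE = 53      # bytes per ATM cell
CELL_PAYLOAD = 48   # payload bytes per cell (after the 5-byte header)

def atm_cells(packet_bytes):
    """Cells needed to carry one packet (simplified: no AAL overhead)."""
    return math.ceil(packet_bytes / CELL_PAYLOAD)

def framing_overhead(packet_bytes):
    """Fraction of bytes on the wire that is not packet payload."""
    cells = atm_cells(packet_bytes)
    return 1 - packet_bytes / (cells * CELL_SIZE)

# An Ethernet-sized 1,500-byte packet occupies 32 cells.
print(atm_cells(1500), f"{framing_overhead(1500):.1%}")
```

The roughly 10 percent cell tax is the price ATM pays for the fixed-length cells that make fast hardware switching and mixed isochronous traffic practical.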
Other protocols, such as FDDI-II, probably will not be widely implemented unless ATM fails to succeed. Operating at gigabit speeds also presents a new set of problems for networking. New methods of handling congestion are required because of the high ratio of propagation time to cell transmission time (Habib and Saadawi, 1991): by the time a computer in New York sends a message telling a host in San Francisco to stop sending data, a gigabit of information will already have been transmitted. Latency also becomes a major concern. Much as a jet aircraft can be severely damaged by a small bird, short delays can cause major disruptions for high-speed networks and VE applications that demand real-time performance. The most likely bottlenecks identified in the Aurora project at the
University of Pennsylvania will be in the network interfaces, memory architectures, and operating systems of the computers on either end. For example, the Fore Systems ATM interface for the SGI Indigo can handle only 20 Mbit/s of data, even though the medium can deliver 140 Mbit/s. The slow progress in increasing the interface performance of FDDI is an example of the lag in technologies we may see as high-speed networks are fully deployed. Nor have memory speeds kept up with the strides made in central processing unit and network performance. At the operating system level, most VE applications are built on commercial versions of UNIX, a system that is not designed for real-time performance.

Additional problems are introduced by questions surrounding the adequacy of current transport protocols, such as the Transmission Control Protocol (TCP), that provide an interface between the operating system and the network. Furthermore, there is strong debate concerning the efficiency of the new generation of interface protocols, such as the Versatile Message Transaction Protocol (VMTP) and the Xpress Transfer Protocol (XTP, designed to be implemented in silicon) (Rudin and Williamson, 1989). Other protocols, such as ST-II, developed by Bolt Beranek and Newman (BBN) for the Defense Simulation Internet, and the OSI suite are challenging the Internet Protocol (IP) as networked applications like VE demand a host of integrated network services, such as multicast support and resource information for dynamic bandwidth allocation. BBN has also developed dead-reckoning techniques to abstract data from simulators, thus reducing the communications load on the network (Miller et al., 1989). Developers of distributed VE should follow this example and examine how to balance network bandwidth requirements against better efforts at preprocessing the data.
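A minimal sketch of the dead-reckoning idea, with illustrative names and an assumed error threshold (the actual BBN/SIMNET algorithms are more elaborate): each receiver extrapolates a vehicle from its last reported state, and the sender transmits a fresh update only when that extrapolation drifts too far from the truth.

```python
# Toy dead reckoning: straight-line extrapolation plus a sender-side
# error threshold. The threshold and state layout are assumptions.

def extrapolate(pos, vel, dt):
    """Receiver's estimate of position dt seconds after the last update."""
    return tuple(p + v * dt for p, v in zip(pos, vel))

def needs_update(true_pos, last_pos, last_vel, dt, threshold=1.0):
    """Sender transmits only if its true position has drifted more than
    `threshold` units from what receivers are extrapolating."""
    est = extrapolate(last_pos, last_vel, dt)
    error = sum((t - e) ** 2 for t, e in zip(true_pos, est)) ** 0.5
    return error > threshold

# Last update: position (0, 0), velocity (10, 0) units/s. One second later
# the vehicle is actually at (10, 0.5); drift is 0.5, so no packet is sent.
print(needs_update((10.0, 0.5), (0.0, 0.0), (10.0, 0.0), 1.0))   # False
```

Steady motion then costs almost no network traffic; only maneuvers trigger packets, which is exactly the load reduction the text describes.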
Finally, perhaps the greatest impediment to distributed VE is the lack of overall standards in the field, extending from file formats for three-dimensional models, to graphic and video images, to audio, to application interfaces. The IEEE standard for Distributed Interactive Simulation applications is in its infancy and is incomplete. Although it does not define a standard applicable to the diverse requirements of VE, it marks a milestone because it reflects a widening understanding of the relationship between VE and networking technology. A related concern is a networking architecture that can accommodate the connection of large numbers of communicating devices of different kinds, delivering and using different kinds of services. Standards are needed for implementing architectural concepts as well as for achieving interoperability. A recent report of the National Research Council (1994) points out the importance of an open data network architecture (based on shared technical standards that enable users and providers to interact freely and that permit technology upgrades) in fulfilling the promise
of the National Information Infrastructure. This lesson also is applicable to distributed VE.

COMMUNICATIONS SOFTWARE

Communications software passes changes in the virtual world model to other players on the network and allows the entry of previously undescribed players into the system. It is quite easy to exceed the capabilities of a single workstation when constructing a virtual world, especially if one expects multiple players in that world. As we move to a networked environment, we go beyond graphics and interface software issues to a much more complicated system involving database consistency issues. A standard message protocol between workstations is needed to communicate changes to the world.

For small systems, it is important to ensure that all players on the network have the same world models and descriptions as time moves forward in the VE action. In systems with fewer than 500 players, each node in the virtual world has a complete model of the world; the current SIMNET system looks like this and uses Ethernet and T1 links. For systems with more players (1,000 to 300,000, as envisioned for the Defense Department's Louisiana Maneuvers project), it is not reasonable to propagate complete models of the world; instead, one must consider rolling in the world model just as aircraft simulators roll in terrain. Although only a few researchers are examining this problem, there is at least an abstraction that might be relevant in the work of Gelernter (1991), entitled "Mirror Worlds." Mirror Worlds presents the notion of information tuples (like a distributed blackboard) and tuple operations: publish, read, and consume. Such an abstraction allows flexibility in communicating any type of information throughout a large, distributed system, flexibility that is necessary for constructing large virtual worlds.
Although the abstraction appears appropriate, efficient real-time implementations remain an open research problem.

SIMNET, a simulation of entities on a battlefield, is currently the largest communications network in any VE (Pope, 1989; Thorpe, 1987). It is a standard for distributed interactive simulations, developed under ARPA auspices, that began running in 1988. Distributed means that the processing of the simulation can take place on different hosts on a network. Interactive means that the simulation can be dynamic and guided by human operators. SIMNET is also a network protocol with a well-defined set of communication packets called Protocol Data Units (PDUs). In addition to packet definitions, SIMNET defines algorithms for the dead reckoning of vehicles whose velocities and directions can be predicted; the purpose of the dead reckoning is to minimize traffic on the computer network. SIMNET currently uses T1 links for long distance and
Ethernet for local communications. The protocol does not use TCP/IP, multicasting, or any other network services. SIMNET sits on top of the device driver/link layer and, consequently, requires that processes reading and writing SIMNET packets run with root privileges. SIMNET is limited in that it can support engagements of at most approximately 300 players. Also, the SIMNET protocol does not allow for generalized information transfer.

Distributed interactive simulation (DIS) is the latest standard for communicating Defense Department simulations (IST, 1991), with a latency of less than 100 ms in situations in which players are expected to interact. This latency bound is meant to minimize human perception of lag, which can induce simulator sickness. DIS has common goals with SIMNET but, as its replacement, is far more ambitious: it is intended to overcome the limitations imposed by SIMNET and to include packet definitions not present in SIMNET. DIS uses the Defense Simulation Internet (DSI) as its support network. DSI is currently being installed in more than 150 sites throughout the United States. The NPSNET project is connected to this network and currently displays records propagated by the DIS 2.0.3 draft standard. DIS uses IP multicasting services and does not require that its processes run with root privileges. DIS is planned to grow, with the eventual communicating player load expanding to 10,000 to 300,000 players. A DIS-compliant version of NPSNET, NPSNET-IV, was recently demonstrated at SIGGRAPH '93; approximately 50 players, located in Washington, D.C.; Anaheim, California; Dayton, Ohio; and San Diego and Monterey, California, were connected. The hardware communications medium was Ethernet plus a gateway machine to a T1 WAN.

There are additional players in the development of networked virtual environments; a good source is the Networked Virtual Environments and Teleoperation special issue of Presence (1994).
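The publish/read/consume tuple operations from Gelernter's Mirror Worlds, mentioned earlier, can be sketched in a few lines. This toy, single-process version only illustrates the interface; distributed, real-time implementations are precisely the open research problem the text identifies.

```python
# Toy tuple space: publish/read/consume over in-memory tuples.
# A None in a pattern acts as a wildcard field.

class TupleSpace:
    def __init__(self):
        self._tuples = []

    def publish(self, tup):
        """Place a tuple into the shared space."""
        self._tuples.append(tup)

    def _match(self, pattern, tup):
        return len(pattern) == len(tup) and all(
            p is None or p == t for p, t in zip(pattern, tup))

    def read(self, pattern):
        """Return a matching tuple without removing it, or None."""
        return next((t for t in self._tuples if self._match(pattern, t)), None)

    def consume(self, pattern):
        """Return and remove a matching tuple, or None."""
        tup = self.read(pattern)
        if tup is not None:
            self._tuples.remove(tup)
        return tup

space = TupleSpace()
space.publish(("vehicle-17", "position", 120.0, 45.5))
print(space.read(("vehicle-17", "position", None, None)))     # found
print(space.consume(("vehicle-17", "position", None, None)))  # found, removed
print(space.read(("vehicle-17", "position", None, None)))     # None
```

Because any process can publish tuples of any shape, the abstraction supports the generalized information transfer that SIMNET's fixed PDU set does not.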
RESEARCH NEEDS

Advances in network hardware and communications software are key to the full realization of virtual environments. The following paragraphs detail some of the key network and communications needs generated by virtual environments.

Hardware

Although high-speed networks will provide the required computational power to build large-scale VE systems, there are several problems to be addressed. First, the actual cost of WANs may present a problem for the development of large-scale VEs. The current price of a one-year T1 (1.5 Mbit/s) link is beyond the budget of most VE research groups, and we already know that T1 is too slow. This is an issue of accessibility. Establishing
a subsidized, nationwide, open VE network may be one way to eliminate this bottleneck to networked VE development.

Second, network host interface slowdowns are another significant problem for the large-scale VEs of the future. We have high-speed interfaces today, but the layers of UNIX system software make these interfaces hard to utilize at their full rates.

Third, because latency across long distances is a permanent problem for VE systems (i.e., the speed of light), there is a need to build software mechanisms, such as dead reckoning or predictive modeling, to cope with latency. Single-packet DIS transfers require approximately 300 ms between Monterey, California, and Washington, D.C., today using the NPSNET and UNIX software layers and Ethernet/T1 links. While NPSNET dead-reckoning algorithms for vehicles provide some predictive capability, these algorithms are not generalizable to other player paradigms. In addition, there is a long way to go to reach the 10 ms necessary for two participants on opposite ends of a long-distance network to work together cooperatively in real time.

Fourth, network hardware technology is evolving so rapidly that standards are in flux, and we are currently at risk of having them set by the large entertainment conglomerates. At present, there is only one governmental effort in network standard setting, the DIS packet format, IEEE standard 1278, and this standard can be considered a stopgap. DIS was developed as a standard of communication for military vehicle simulators and operates with Ethernet/T1 links and a particular software architecture for virtual environments. It was not designed for generalized information exchange between large-scale virtual environments; it is not even general enough to handle the articulations necessary for an animated, walking human figure.
Packet-switching standards should be set that take into consideration the requirements of distributed VE systems with huge databases and potentially large numbers of participants.

Software

The problem of generalized information distribution in a large-scale virtual environment of more than 300 participants is not yet solved. There are interesting abstractions but no real solutions (Gelernter, 1991). For example, the solutions offered by the Department of Defense's DIS standard are limited to 300 participants or fewer and are too specific and complex to be useful to the general VE development community. In the near future, both the military and civilian communities will have requirements for environments capable of handling as many as 10,000 to 300,000 simultaneous participants. There are several fundamental infrastructure and software problems associated with the research to develop the communications
software needed to solve the technical issue of interaction among thousands of participants.

One of the primary infrastructure problems is that only a few university research groups are working on networking large-scale distributed VEs, and they are constrained in at least two ways: first, the networks are operating within extremely limited bounds, namely the Department of Defense's DIS bounds; and second, they are very expensive to run.

DIS is an applications protocol developed under ARPA and U.S. Army contract as the networking protocol for Department of Defense simulators. Although DIS is known to have significant problems (i.e., it is limited in capability and too large for what it needs to do), it has been made an IEEE standard. The Department of Defense is now putting all of its development resources into using this protocol to move from the SIMNET-sized limit of approximately 300 participants (using Ethernet/T1 links) toward the 10,000 to 300,000 participant level. What is needed is a major research initiative to investigate DIS alternatives that will allow a generalized exchange of information between the distributed participants of large-scale virtual environments. The new protocol needs to be extensible, a feature that does not appear to be part of DIS.

A second infrastructure issue is the high cost of research into large-scale networked virtual environments. Very few universities can afford to dedicate the T1 lines (with installation expenses of $40,000 and operating costs of $140,000 per year) needed to support these activities. At present, only two universities in the United States have such dedicated resources: the University of Central Florida and the Naval Postgraduate School. Various approaches, such as an open VE network and the necessary applications protocol, should be considered for providing research universities with access to the needed facilities.
Unless costs are significantly reduced, a concerted development effort on software solutions for networked VE cannot begin.

A critical ingredient in the development of large-scale networks for VE is the interest of the entertainment industry in introducing telecomputer and interactive video games into the home. To date, cooperative financial arrangements have been made between manufacturers of video games and large corporations already in the telecommunications business. The focus of these arrangements is to provide low-end, relatively inexpensive systems for large numbers of participants, with an eye to making a profit. Like the Department of Defense, the video game industry is not interested in generalizability of information transfer, nor is it interested in openness and accessibility. The danger is that the video game industry will set the networking protocol standards at the low end and the Defense/DIS community will set the standards at the high end. Neither of these standards is general enough for the widespread VE application development we would like to see.