
2 COMPUTATION
Pages 28-62



From page 28...
... But these applications are usually limited and specialized to the physics of the specific situation, whereas digital processing offers almost unlimited flexibility of application through its software programmability. Computation has a wide range of digital hardware support from special-purpose application-specific integrated circuits (ASICs)
From page 29...
... Desktop computers support the fusion of information and data from multiple sources; provide the capabilities for identifying, extracting, and displaying the meaningful information contained in the masses of data available; and supply the tools for planning and implementing naval force and logistics functions. Long-range sensors, such as radar and sonar, are thoroughly dependent on embedded signal processors to transform the raw data into detailed information
From page 30...
... In other scenarios, embedded computers in missiles support the tracking sensors and provide the automatic target recognition, aim point selection, and detailed weapons guidance commands required to accurately counter hostile threats. Applications of computation are indicated in Box 2.1.
From page 31...
... Most parameters that characterize microelectronics, such as increase in clock speed, decrease in size and cost, components per chip manufacturable with high yield, and the like, have been growing exponentially for the last two or three decades. These component-level technology growth patterns combine with computer architectural, algorithmic, and software advances to produce the explosive growth of computational power we are witnessing today.
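The exponential component-level growth described above can be sketched numerically. This is a minimal illustration, assuming a hypothetical doubling period of 18 months; that period, and the starting value of 1.0, are illustrative assumptions, not figures taken from this report.

```python
# Illustrative sketch of steady exponential growth in a component-level
# parameter (e.g., components per chip). The 18-month doubling period is
# an assumed value for illustration only.

def extrapolate(value_now, years, doubling_period_years=1.5):
    """Project a parameter forward assuming steady exponential growth."""
    return value_now * 2 ** (years / doubling_period_years)

# A parameter doubling every 18 months grows roughly a hundredfold per decade.
growth_over_decade = extrapolate(1.0, 10)
print(round(growth_over_decade))
```

The point of the sketch is that even a modest, steady doubling period compounds into the "explosive growth" the text describes within a single decade.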
From page 32...
... SOURCE: Computation from data in Conrad, Michael, 1986, "The Lure of Molecular Computing," IEEE Spectrum, 23(10):55-60, October. See also information in Figures 4.2 and 4.7 in Chapter 4 of this report.
From page 33...
... Chapter 4 of this report, dealing with sensors, provides more detailed discussions of these issues and gives a more complete discussion of silicon and other semiconductor technologies relevant to present and future micro- and nanoelectronics. Computer Performance With the vacuum tubes replaced by transistors, computers have exploited the ever-smaller, faster, and cheaper development of microelectronic integrated circuits to provide a similar exponential growth in available computational power.
From page 34...
... The costs for high-performance computers have continued to rise with time as the supercomputers have grown more powerful, from the $4 million IBM 704 in 1956 to the Cray T-family of supercomputers that sell in the range of $10 million to $12 million each today. On the other hand, the costs for the functional/affordable computers have fallen dramatically as the processors have soared exponentially in performance: a DEC PDP-8 sold for several tens of thousands of dollars in 1965, but a 200-MHz Pentium Pro PC can be purchased today almost anywhere for only a few thousand dollars, with a performance more than 1,000 times that of the PDP-8.
From page 35...
... The Alpha chip, which powers the DEC workstation, clocks well above the PC state of the art at about 500 MHz and is a very capable microprocessor. There is no difficulty in envisioning this kind of chip migrating into PCs, and so the projected growth curves for functional/affordable computing will easily be met for the next decade or so.
From page 36...
... It seems likely that the architectures will continue to evolve throughout the window of this study, with completely new concepts appearing from time to time. In terms of how instructions and data interact, multiprocessor architectures are broadly classified into two categories: single instruction multiple data (SIMD)
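The SIMD/MIMD distinction above can be illustrated with a small sketch. These are pure-Python stand-ins for the two styles, not real parallel hardware: SIMD broadcasts one instruction stream across many data elements, while MIMD lets each processor run its own instruction stream on its own data. The function names are hypothetical.

```python
# Sketch of the two multiprocessor categories named in the text.

def simd_step(operation, data):
    """SIMD: a single operation applied, in lockstep, to every data element."""
    return [operation(x) for x in data]

def mimd_step(tasks):
    """MIMD: each (operation, datum) pair is an independent instruction stream."""
    return [operation(x) for operation, x in tasks]

print(simd_step(lambda x: x * 2, [1, 2, 3]))                   # [2, 4, 6]
print(mimd_step([(abs, -5), (len, "radar"), (max, (1, 9))]))   # [5, 5, 9]
```

In the SIMD call every element sees the same doubling operation; in the MIMD call each "processor" executes a different operation, which is why MIMD machines suit irregular workloads.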
From page 37...
... Hardware Technology Microprocessors Complex Instruction Set Computing Original microprocessor CPU designs incorporated hardware mechanisms for implementing all potentially useful elementary instructions as the assembly language. Because the combinations of instructions that a programmer could employ were very large and unconstrained, designers added hardware mechanisms to deal with all potential on-board hardware conflicts.
From page 38...
... By limiting the instruction set supported directly by the hardware, the instruction decode logic could be reduced to 20 to 30 percent of the chip real estate versus 75 to 80 percent for CISC chips. A reduced instruction set with limited pipelining and instruction decode logic was first seen on the Cray-2 series of computers and achieved outstanding performance, exploiting the resulting small chip sizes and the fast clock speeds that the small chips permitted.
From page 39...
... The Intel Pentium chips resemble superscalar processors in terms of on-chip functionality except that they have relatively large, CISC-like instruction sets. The performance is measured separately for integer operations (SPECint95)
From page 40...
... A laboratory curiosity today, free-space optical interconnects between boards and chips will soon find their place in computer technology. Although the optical componentry is still lacking in some respects, in diode threshold power and reliability in particular, as clock speeds increase the optical interconnects will become more acceptable than electrical ones.
From page 41...
... Computers, from PCs to multiprocessor supercomputers, store data in a hierarchical form, with very-fast-access, small on-chip cache memories at the top, and very large, slow, off-line mass data storage at the bottom. Figure 2.4 illustrates this hierarchy.
From page 42...
... In the middle are the off-board hard disks and solid-state devices with access times measured in milliseconds. As discussed above, many CPUs, particularly the superscalar RISCs, incorporate significant amounts of cache memory directly on the processor chip, while increasing amounts of discrete random access memory (RAM)
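The hierarchy described above (on-chip cache at the top, RAM in the middle, millisecond-class disks below) determines an average memory access time. The sketch below computes that average under illustrative assumptions; the specific latencies and hit rates are hypothetical, not figures from this report.

```python
# Sketch: average access time through a storage hierarchy like the one in
# the text. All latencies and hit rates below are assumed for illustration.

def average_access_time(levels):
    """levels: list of (hit_rate, access_time_seconds), fastest level first.
    Each hit rate is conditional on having missed every faster level;
    the last level must have hit rate 1.0 to catch all remaining accesses."""
    total, reach = 0.0, 1.0      # reach = fraction of accesses that get this far
    for hit_rate, access_time in levels:
        total += reach * hit_rate * access_time
        reach *= (1.0 - hit_rate)
    return total

hierarchy = [
    (0.95, 5e-9),    # on-chip cache: 95% hit rate, ~5 ns (assumed)
    (0.99, 60e-9),   # discrete RAM: catches 99% of cache misses (assumed)
    (1.00, 8e-3),    # hard disk: ~8 ms, catches everything else (assumed)
]
print(f"average access time: {average_access_time(hierarchy) * 1e9:.1f} ns")
```

Even with these generous hit rates, the millisecond disk term dominates the average, which is why each added layer of fast intermediate storage pays off so heavily.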
From page 43...
... Today, practical area densities of magnetic hard disks can exceed those of comparable optical CDs by as much as an order of magnitude. The optical CD drive is very similar in performance to the magnetic hard drive, in that it provides fast random access to any data recorded on the disk at rates and access times that approach those of the magnetic drives.
From page 44...
... The parallel readout enables continuous data readout rates of about 3 megabytes/s today and can be expected to reach 10 megabytes/s by 2000. As is characteristic of all tape storage concepts, random-access times are on the order of seconds (for tens of meters of tape running at tens of meters per second)
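The parenthetical estimate above is a simple distance-over-speed calculation: random access on tape means physically winding to the target position. The sketch below reproduces that back-of-envelope arithmetic; the particular distance and speed chosen are illustrative values within the "tens of meters" ranges the text gives.

```python
# Sketch of the tape random-access estimate in the text: seek time is the
# wind distance divided by tape speed. 50 m and 20 m/s are assumed examples.

def tape_seek_seconds(distance_m, tape_speed_m_per_s):
    """Time to wind the tape a given distance at a given speed."""
    return distance_m / tape_speed_m_per_s

# Tens of meters of tape at tens of meters per second -> order of seconds.
print(tape_seek_seconds(50, 20))  # 2.5
```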
From page 45...
... [Chart: optical disk (130 mm), 1993 through 2000.] FIGURE 2.5 Comparison of optical and magnetic data storage.
From page 46...
... , i.e., a specific block of code. The function point, a useful metric introduced in the 1970s by Allan Albrecht of IBM, measures both the content of computer programs and the productivity of the programmer by counting the number of steps (input, output, memory access, and general processing)
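The function-point idea attributed to Allan Albrecht can be sketched as a weighted count of a program's externally visible steps by category. The categories follow the excerpt; the weights below are hypothetical, since real function-point analysis uses standardized weights and adjustment factors not given in this text.

```python
# Sketch of function-point counting. The weights are illustrative
# assumptions, not the standardized values used in practice.

HYPOTHETICAL_WEIGHTS = {
    "input": 4,
    "output": 5,
    "memory_access": 10,
    "general_processing": 7,
}

def function_points(counts):
    """Weighted sum of counted steps, by category."""
    return sum(HYPOTHETICAL_WEIGHTS[kind] * n for kind, n in counts.items())

# A small program with 3 inputs, 2 outputs, 1 memory access, 2 processing steps:
print(function_points(
    {"input": 3, "output": 2, "memory_access": 1, "general_processing": 2}))  # 46
```

Because the count depends on what the program does rather than how many statements it takes, the metric can compare both program content and programmer productivity across languages.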
From page 47...
... Software Growth As computational resources have grown, the size of the tasks undertaken has grown correspondingly. Unpleasantly, this exponential growth in the magnitude of software systems has serious consequences for cost and reliability.
From page 48...
... SOURCE: Adapted from Buckley, CAPT B.W., USN, 1996, "Software Production Efficiency Lags Computer Performance Growth," a viewgraph included in the presentation "Future Technologies for Naval Aviation," Naval Research Laboratory, Washington, D.C., July 22. [Chart axes: bins 0-10 through 91-100 versus program size (statements), 0 to 2,500.]
From page 49...
... One possible solution to the software development cost, schedule, and quality problem is to develop software logic synthesis capabilities. This should be an ongoing topic of R&D for the future.
From page 50...
... Supported by several high-order languages (e.g., Smalltalk and C++) , OOP promises to be a major factor in future software developments.
From page 51...
... Many statistical models are available that have been created to address these scenarios. These should be routinely applied to the software development process to improve and control practices and the quality of the final products.
From page 52...
... Applications The applications of computation are practically limitless, ranging from explicit mathematical calculations for the solution of equations describing physical phenomena and engineering analysis or the interpretation of sensor data, to the production and management of text files for documents and communication, to the generation and display of graphics, from simple line drawings to full-color, high-resolution, natural-looking synthetic scenes for virtual reality. The panel has chosen to classify these applications into three categories in terms of the computer hardware required to support them, as listed below: 1.
From page 53...
... The multiprocessor supercomputers currently under development generally consist of distributed arrays of cooperating CPU chips that are identical to those that power the desktop computers, e.g., PowerPC or Pentium Pro chips. These same chips may find their way into embedded applications as well.
From page 54...
... Modeling of complex engineering structures for the simulation-based design of ocean platforms, weapons, and other naval structures and systems; and 3. Real-time virtual reality support for viewing large, situational awareness databases, for training, and perhaps for control of complex systems.
From page 55...
... CFD analysis will permit speedup and reduced costs of design and manufacturing and will improve prediction of performance. The goal of solving the inverse problem, developing an optimized design based on specification of performance parameters, will be brought closer to practicality through computational fluid dynamics and advanced visualization techniques.
From page 56...
... Mathematical calculations, word processing, spreadsheets, database management, graphics, CAD, finite element analysis, thermal or mechanical analysis, use of the Internet, and so on are all commonplace and well supported by the available hardware. In addition to all of these, and of particular interest to the Navy and Marine Corps, are those applications related to achieving complete situational awareness, that is, data fusion and information mining.
From page 57...
... capabilities will improve, digital beamforming of radar, sonar, and optical beams will become commonplace, and the all-digital radar will emerge with significant size and weight reductions and superior performance through extensive use of digital processing. As computer capabilities continue to increase, while simultaneously decreasing in size and cost, the computational support for many sensors will merge with the sensing element; that is, both components will be implemented on the same semiconductor chip, giving rise to miniature smart sensors or sensors-on-a-chip.
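Digital beamforming as named above can be illustrated with a minimal delay-and-sum beamformer, which steers a uniform line array toward a chosen angle purely in software. This is a sketch under assumed conditions (a narrowband plane wave, half-wavelength element spacing); the function names and signal are illustrative.

```python
# Sketch: narrowband delay-and-sum digital beamforming on a uniform line
# array. Geometry (half-wavelength spacing, 8 elements) is assumed.
import math

def delay_and_sum(snapshot, spacing_wavelengths, steer_angle_rad):
    """Phase-align each element's complex sample toward steer_angle, then sum."""
    out = 0j
    for n, sample in enumerate(snapshot):
        phase = 2 * math.pi * spacing_wavelengths * n * math.sin(steer_angle_rad)
        out += sample * complex(math.cos(-phase), math.sin(-phase))
    return out / len(snapshot)

# A plane wave arriving from 30 degrees on a half-wavelength-spaced array...
angle = math.radians(30)
snapshot = [complex(math.cos(p), math.sin(p))
            for p in (2 * math.pi * 0.5 * n * math.sin(angle) for n in range(8))]

# ...sums coherently when the beam is steered at it, and cancels off-beam.
print(abs(delay_and_sum(snapshot, 0.5, angle)))      # ~1.0
print(abs(delay_and_sum(snapshot, 0.5, 0.0)) < 0.2)  # True
```

Because the steering is just arithmetic on digitized samples, the same hardware can form many beams at once or re-steer instantly, which is the advantage the text ascribes to the all-digital radar.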
From page 58...
... Molecular computing differs fundamentally from digital computation in that information is processed as physical patterns rather than bit by bit, and the system learns dynamically as it adapts to its environment. Since the human brain is a form of molecular computer that is small in size, uses very little power, and performs extremely well, particularly on recognition tasks and ill-defined memory queries, it has often been conjectured that, ultimately, molecular rather than digital computation will be the more effective computing paradigm.
From page 59...
... , the number and size of the weapons needed to counter any given target can be reduced, thereby offering large future savings in acquisition costs and logistics. At the next level, functional and affordable desktop computers will facilitate the netting of resources, the fusion of data from multiple sources, the extraction of the meaningful information contained therein, and its display to provide the degree of situational awareness and real-time control needed to optimize the effectiveness of future naval forces operations.
From page 60...
... For embedded computing, several families of DSP chips have been created and could grow in the future at exponential rates, if pursued. Generally, computer hardware components with ever-increasing capabilities will be supplied by the commercial sector, but the government and the Navy Department will still need to fund basic research, especially the research needed to advance the fundamental underlying technologies.
From page 61...
... The simple exponential extrapolations for both high-performance computing and functional and affordable computing can be relied on to represent the order of magnitude of computational capabilities available in any given time frame. This will be true for the near term, i.e., the next 5 to 15 years, and perhaps throughout the whole of the next 40 years.
From page 62...
... On the other hand, growth in computational throughput is accomplished not only through faster hardware, but also through the use of more parallel units in multiprocessor architectures. Any slowing of the growth rate of computational power owing to lithography limitations could be compensated for by parallel hardware.

