Getting Up to Speed: The Future of Supercomputing (2005)


3
Brief History of Supercomputing

This chapter touches on the role, importance, and special needs of supercomputing.1 It outlines the history of supercomputing, the emergence of supercomputing as a market, the entry of the Japanese supercomputing manufacturers, and the impact of supercomputing on the broader computer market and on progress in science and engineering. It focuses on hardware platforms and only touches on other supercomputing technologies, notably algorithms and software. A more detailed discussion of current supercomputing technologies is provided in Chapter 5.

THE PREHISTORY OF U.S. SUPERCOMPUTING

The development of computer technology in the United States was inextricably linked to U.S. government funding for research on cryptanalysis, nuclear weapons, and other defense applications in its first several decades.2 Arguably, the first working, modern, electronic, digital computer was the Colossus machine, put into operation at Bletchley Park, in the United Kingdom, in 1943. Although it was designed and employed to break a specific German cipher system, this machine was in fact a true electronic computer and could be used, in principle, on a range of problems. The existence of this machine was classified until the 1970s.

1  

An expanded version of much of the analysis in this chapter will be found in “An Economic History of the Supercomputer Industry,” by Kenneth Flamm, 2004.

2  

In Chapter 3, “Military Roots,” of Creating the Computer: Government, Industry, and High Technology (Brookings Institution Press, 1988), Kenneth Flamm lays out the entire panorama of government-funded projects in the late 1940s and 1950s that essentially created the early U.S. computer industry. Another good but less comprehensive source ends in the very early 1950s, when high-volume production was 20 machines: N. Metropolis, J. Howlett, and Gian-Carlo Rota, A History of Computing in the Twentieth Century (Academic Press, 1980).

U.S. personnel working with Bletchley Park during World War II played a major role in creating the early U.S. computer industry in the decade following the war. In particular, U.S. engineers at the Naval Computing Machinery Laboratory (a National Cash Register plant in Dayton, Ohio, deputized into the war effort) were building copies or improved versions of Bletchley Park electronic cryptanalysis machines, as well as computers of their own design. American engineers involved in this effort included William Norris and Howard Engstrom—Norris later founded Engineering Research Associates (ERA), then Control Data; Engstrom was later deputy director of the National Security Agency (NSA)—and Ralph Palmer, who was the principal technical architect of IBM’s move into electronic computers in the 1950s. Of the 55 people in the founding technical group at ERA, where Seymour Cray had his first design job in computers, 40 came from Navy communications intelligence in Washington, 5 from the Navy lab in Dayton, and 3 from the Naval Ordnance Laboratory.3

The ENIAC, built in 1945 at the University of Pennsylvania and often credited as the first functioning electronic computer, was a larger, plug-programmable computer designed to compute artillery ballistics tables.4 Ironically, it came into existence, indirectly, as a result of the code-breaking efforts of the U.S. intelligence community. The U.S. Army’s Ballistic Research Laboratory (BRL) had originally funded a ballistics computer project at National Cash Register and had turned down a competing proposal from J. Presper Eckert and John Mauchly at the University of Pennsylvania. BRL reconsidered this decision after the National Cash Register Dayton group was drafted into producing cryptanalysis machines for the Navy and finally decided to fund the ENIAC project.

3  

See Flamm, 1988, pp. 36-41, 43-45.

4  

As is the case for many other technologies, there has been a heated debate about who should be credited as the inventor of the first digital computer. In addition to the Colossus and the ENIAC, the following are worth mentioning: Konrad Zuse built a relay-based automatic digital computer in Germany in 1939-1941. A similar system, the Automatic Sequence Controlled Calculator (ASCC), also called the Mark I, was conceived by Howard Aiken and designed and built by IBM in 1939-1944. John Vincent Atanasoff and Clifford Berry worked on an electronic digital computer at Iowa State University in 1937-1942. Although the project was not completed, Atanasoff and Berry won a patent case against Eckert and Mauchly in 1973, invalidating the latter’s patent on the ENIAC as the first automatic electronic computer.


Princeton mathematician and War Department consultant John von Neumann heard about the existence of the ENIAC project at the BRL and involved himself in the project.5 It is reported that some of the early atomic bomb calculations (in which von Neumann was involved) made use of the ENIAC even before it was formally delivered to the Army. The link between both cryptanalytical and nuclear design applications and high-performance computing goes back to the very first computers.

ENIAC’s designers, Eckert and Mauchly, built the first working stored program electronic computer in the United States in 1949 (the BINAC) and delivered it to Northrop Aircraft, a defense contractor. A number of advanced machines had been built in Britain by that time—Britain was actually leading in the construction of working electronic computers in the late 1940s. A massive U.S. government investment in computer technology in the 1950s was critical to the rapid rise of U.S. companies as the undisputed leaders in the field.

The second and third computers in the United States were the SEAC (built for the National Bureau of Standards, now renamed NIST) and the ERA 1101 (built for predecessors to the National Security Agency). Both went into operation in 1950, runners-up in the United States to the Eckert-Mauchly BINAC.

The first Eckert and Mauchly-designed computer targeting a commercial market, the UNIVAC, was delivered to the Census Bureau in 1951. The experimental MIT Whirlwind computer, built with Navy and later Air Force funding, also went into operation in 1951.

Von Neumann, who had brought British computing theoretician Alan Turing to Princeton in the 1930s and was much influenced by this contact, began work on the conceptual design of a general-purpose scientific computer for use in calculations of military interest in 1946, but a working machine was not completed until 1951. This machine was intended to be a tool for scientists and engineers doing numerical calculations of the sort needed in nuclear weapons design. Versions of the first machine installed at the Institute for Advanced Study in Princeton, the IAS machine, were built and installed at Los Alamos (the MANIAC I) in 1952 and Oak Ridge (the ORACLE) in 1953; these were the first computers installed at the nuclear weapons laboratories.6 The IAS design, sponsored by the nuclear weapons laboratories, was highly influential. But the laboratories were so pressed for computing resources before these machines were delivered that they did their calculations on the SEAC at the National Bureau of Standards and ran thermonuclear calculations on the floor of the UNIVAC factory in Philadelphia.

5  

Nancy Stern. 1981. From ENIAC to UNIVAC: An Appraisal of the Eckert-Mauchly Computers. Digital Press.

6  

The Argonne National Laboratory built AVIDAC (Argonne’s Version of the Institute’s Digital Automatic Computer), which was operational prior to the IAS machine.

Volume computer production did not begin until 1953. In that year, the first ERA 1103 was delivered to the cryptanalysts in the intelligence community, as was the first IBM 701 Defense Calculator. Twenty ERA 1103s and 19 IBM 701s were built; all were delivered to DoD customers.

NSA was the primary sponsor of high-performance computing through most of the 1950s following the 1103. It sponsored the Philco 210 and the Philco 211 and cosponsored the IBM 7030 Stretch as part of its support for the Harvest system. DoD supported the development of the IBM 7090 for use in a ballistic missile early warning system.

Energy lab-sponsored computers did not play a leading role at the frontiers of high-performance computing until the late 1950s. The Atomic Energy Commission (AEC) set up a formal computer research program in 1956 and contracted with IBM for the Stretch system and with Sperry Rand (which acquired both the Eckert-Mauchly computer group and ERA in the 1950s) for the Livermore Advanced Research Computer (LARC). The cosponsorship of the Stretch system by NSA and AEC required IBM to meet the needs of two different customers (and applications) in one system. It was said that balancing those demands was an important factor in the success of IBM’s System/360.

SUPERCOMPUTERS EMERGE AS A MARKET

With the emergence of specific models of computers built in commercial volumes (in that era, the double digits) in the 1950s, and the dawning realization that computers were applicable to a potentially huge range of scientific and business data processing tasks, smaller and cheaper computers began to be produced in significant numbers. In the early 1950s, machines produced in volume were typically separated by less than an order of magnitude in speed. By the late 1950s, the fastest, most expensive computers were three to four orders of magnitude more powerful than the smallest models sold in large numbers. By the early 1970s, that range had widened even further, with a spread now exceeding four orders of magnitude in performance between highest performance machines and small business or scientific computers selling in volume (see Figure 3.1).

In the late 1950s, the U.S. government, motivated primarily by national security needs to support intelligence and nuclear weapons applications, institutionalized its dominant role in funding the development of cutting-edge high-performance computing technology for these two sets of military applications. Arguably, the first supercomputers explicitly intended as such, designed to push an order of magnitude beyond the fastest available commercial machines, were the IBM 7030 Stretch and Sperry Rand UNIVAC LARC, delivered in the early 1960s.7

FIGURE 3.1 Early computer performance. Included in this figure are the best-performing machines according to value of installations, number of installations, and millions of operations per second (MOPS). SOURCE: Kenneth Flamm. 1988. Creating the Computer: Government, Industry, and High Technology. Washington, D.C.: Brookings Institution Press.

These two machines established a pattern often observed in subsequent decades: The government-funded supercomputers were produced in very limited numbers and delivered primarily to government users. But the technology pioneered in these systems would find its way into the industrial mainstream a generation or two later in commercial systems. For example, one typical evaluation holds that “while the IBM 7030 was not considered successful, it spawned many technologies incorporated in future machines that were highly successful. The transistor logic was the basis for the IBM 7090 line of scientific computers, then the 7040 and 1400 lines. Multiprogramming, memory protection, generalized interrupts, the 8-bit byte were all concepts later incorporated in the IBM 360 line of computers as well as almost all third-generation processors and beyond. Instruction pipelining, prefetch and decoding, and memory interleaving were used in later supercomputer designs such as the IBM 360 Models 91, 95, and 195, as well as in computers from other manufacturers. These techniques are now used in most advanced microprocessors, such as the Intel Pentium and the Motorola/IBM PowerPC.”8 Similarly, LARC technologies were used in Sperry Rand’s UNIVAC III.9

7  

The term “supercomputer” seems to have come into use in the 1960s, when the IBM 7030 Stretch and Control Data 6600 were delivered.

Yet another feature of the supercomputer marketplace also became established over this period: a high mortality rate for the companies involved. IBM exited the supercomputer market in the mid-1970s. Sperry Rand exited the supercomputer market a few years after many of its supercomputer designers left to found the new powerhouse that came to dominate U.S. supercomputers in the 1960s—the Control Data Corporation (CDC).

CONTROL DATA AND CRAY

From the mid-1960s to the late 1970s, the global supercomputer industry was dominated by two U.S. companies: CDC and its offspring, Cray Research. Both companies traced their roots back to ERA, which had been absorbed by Sperry Rand in 1952. A substantial portion of this talent pool (including Seymour Cray) left to form a new company, CDC, in 1957. CDC was to become the dominant manufacturer of supercomputers from the mid-1960s through the mid-1970s. Government users, particularly the intelligence community, funded development of CDC’s first commercial offering, the CDC 1604. In 1964 CDC shipped its first full-scale supercomputer, the CDC 6600, a huge success. In addition to offering an order of magnitude jump in absolute computational capability (see Figure 3.1), it did so very cost effectively. As suggested by Figure 3.2, computing power was delivered by the 6600 at a price comparable to or lower than that of the best cost/performance in mainstream commercial machines.10

8  

Historical information on the IBM 7030 is available online from Wikipedia at <http://en.wikipedia.org/wiki/IBM_7030>.

9  

See <http://en.wikipedia.org/wiki/IBM_7030>; G. Gray, “The UNIVAC III Computer,” Unisys History Newsletter 2(1) (revised 1999), <http://www.cc.gatech.edu/gvu/people/randy.carpenter/folklore/v2n1.html>.

10  

The benchmarks, the performance metrics, and the cost metrics used for that figure are considerably different from those used today, but the qualitative comparison is generally accepted.


FIGURE 3.2 Cost/performance over time. Based on data collected by John McCallum at <http://www.jcmit.com/cpu-performance.htm>. NEC Earth Simulator cost corrected from $350 million to $500 million. Note that “normalized” MIPS (millions of instructions per second) is constructed by combining a variety of benchmarks run on these machines over this 50-year period, using scores on multiple benchmarks run on a single machine to do the normalization.

At this point, there was no such thing as a commodity processor. All computer processors were custom produced. The high computational performance of the CDC 6600 at a relatively low cost was a testament to the genius of its design team. Additionally, the software tools that were provided by CDC made it possible to efficiently deliver this performance to the end user.

Although the 6600 gave CDC economic success at the time, simply delivering theoretical computational power at a substantially lower price per computation was not sufficient for CDC to dominate the market. Then, as now, the availability of applications software, the availability of specialized peripherals and storage devices tailored for specific applications, and the availability of tools to assist in programming new software were just as important to many customers.

The needs of the government users were different. Because the specific applications and codes they ran for defense applications were often secret, frequently were tied to special-purpose custom hardware and peripherals built in small numbers, and changed quickly over time, the availability of low-cost, commercially available peripherals and software was often unimportant. The defense agencies typically invested in creating the software and computing infrastructure they needed (for example, NASTRAN11 and DYNA12). When some of that software became available to commercial customers after it had been made available to the first government customers, these supercomputers became much more attractive to them.

In 1972, computer designer Seymour Cray left CDC and formed a new company, Cray Research. Although CDC continued to produce high-performance computers through the remainder of the 1970s (e.g., STAR100), Cray quickly became the dominant player in the highest performance U.S. supercomputer arena.13 The Cray-1, first shipped to Los Alamos National Laboratory in 1976, set the standard for contemporary supercomputer design. The Cray-1 supported a vector architecture in which vectors of floating-point numbers could be loaded from memory into vector registers and processed in the arithmetic unit in a pipelined manner at much higher speeds than were possible for scalar operands.14 Vector processing became the cornerstone of supercomputing. Like the CDC 6600, the Cray-1 delivered massive amounts of computing power at a price competitive with the most economical computing systems of the day. Figure 3.2 shows that the cost of sustained computing power on the Cray-1 was roughly comparable to that of the cost/performance champion of the day, the Apple II microcomputer.
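To make the idea concrete, the loop below is an illustrative sketch (the routine name and setting are hypothetical) of the kind of kernel the Cray-1’s vector hardware accelerated: a vectorizing compiler can stream the arrays through vector registers and the pipelined arithmetic unit rather than issue one scalar operation per element.

```c
/* Illustrative sketch only: the classic AXPY kernel, y = a*x + y.
 * On a vector machine, the independent iterations of this loop can be
 * loaded into vector registers and processed in a pipelined fashion,
 * far faster than element-by-element scalar execution.
 */
#include <stddef.h>

void axpy(size_t n, double a, const double *x, double *y)
{
    for (size_t i = 0; i < n; i++)      /* no loop-carried dependence */
        y[i] = a * x[i] + y[i];
}
```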

During this period, IBM retreated from the supercomputer market, instead focusing on its fast-growing and highly profitable commercial computer systems businesses. Apart from a number of larger companies flirting with entry into the supercomputer business by building experimental machines (but never really succeeding) and several smaller companies that successfully pioneered a lower-end, cost-oriented “mini-supercomputer” market niche, U.S. producers CDC and Cray dominated the global supercomputer industry in the 1970s and much of the 1980s.

11  

NASTRAN (NASA Structural Analysis) was originally developed at Goddard Space Flight Center and released in 1971 (see <http://www.sti.nasa.gov/tto/spinoff2002/goddard.html>). There are now several commercial implementations.

12  

DYNA3D was originally developed in the 1970s at the Lawrence Livermore National Laboratory to simulate underground nuclear tests and determine the vulnerability of underground bunkers to strikes by nuclear missiles. Its successor, LS-DYNA, which simulates vehicle crashes, is commercially available.

13  

CDC ultimately exited the supercomputer business in the 1980s, first spinning off its supercomputer operations in a new subsidiary, ETA, and then shutting down ETA a few years later, in 1989.

14  

Vector processing first appeared in the CDC STAR100 and the Texas Instruments ASC, both announced in 1972. Much of the vector processing technology, including vectorizing compilers, originated from the Illiac IV project, developed at Illinois.

Although it was not widely known or documented at the time, in addition to using the systems from CDC and Cray, the defense community built special-purpose, high-performance computers. Most of these computers were used for processing radar and acoustic signals and images. These computers were often “mil-spec’ed” (designed to function in hostile environments). In general, these systems performed arithmetic operations on 16- and 32-bit data. Fast Fourier transforms and digital filters were among the most commonly used algorithms. Many of the commercial array processor companies that emerged in the late 1970s were spin-offs of these efforts.

The commercial array processors, coupled with minicomputers from Digital Equipment Corporation and Data General, were often used as supercomputers. The resulting hybrid system combined a commodity host with a custom component. Unlike most other supercomputers of the period, these systems were air-cooled.

The 1970s also witnessed the shipment of the first simple, single-chip computer processor (or microprocessor) by the Intel Corporation, in November 1971. By the early 1980s, this technology had matured to the point where it was possible to build simple (albeit relatively low-performance) computers capable of “serious” computing tasks. The use of low-cost, mass-produced, high-volume commodity microprocessors was to transform all segments of the computer industry. The highest performance segment of the industry, the supercomputer, was the last to be transformed by this development.

ENTER JAPAN

By the mid-1980s, with assistance from a substantial government-subsidized R&D program launched in the 1970s and from a history of trade and industrial policy that effectively excluded foreign competitors from Japanese markets, Japanese semiconductor producers had pushed to the technological frontier in semiconductor manufacturing. Historically, the rationale for Japanese government support in semiconductors had been to serve as a stepping-stone for creating a globally competitive computer industry, since the semiconductor divisions of the large Japanese electronics companies had also produced computers sold in a protected Japanese market. Aided by their new capabilities in semiconductors and a successful campaign to acquire key bits of IBM’s mainframe technology, by the mid-1980s Japanese computer companies were shipping cost-effective commercial computer systems that were competitive with, and often compatible with, IBM’s mainframes.15

Thus it was that the United States viewed with some concern Japan’s announcement of two government-funded computer R&D programs in the early 1980s explicitly intended to put Japanese computer producers at the cutting edge in computer technology. One was the Fifth Generation Computer System project, which was primarily focused on artificial intelligence and logic programming. The other was the High Speed Computing System for Scientific and Technological Uses project, also called the SuperSpeed project, which focused on supercomputing technology.16 At roughly the same time, the three large Japanese electronics companies manufacturing mainframe computers began to sell supercomputers at home and abroad. The Japanese vendors provided good vectorizing compilers with their vector supercomputers. Although the Fifth Generation project ultimately would pose little threat to U.S. computer companies, it stimulated a substantial government effort in the United States to accelerate the pace of high-performance computing innovation. In the 1980s this effort, led by DARPA, funded the large Strategic Computing Initiative (SCI), which transformed the face of the U.S. supercomputer industry.

The prospect of serious competition from Japanese computer companies in mainstream markets also led to a series of trade policy responses by U.S. companies and their supporters in the U.S. government (see the discussion of trade policies in Chapter 8, Box 8.1). By the 1980s, Fujitsu, Hitachi, and NEC were all shipping highly capable supercomputers competitive with Cray’s products, dominating the Japanese market and beginning to make inroads into European and American markets. The vast majority of Japanese supercomputers were sold outside the United States. There were some minimal sales to the United States in areas such as the petroleum industry but few sales to U.S. government organizations. Significant obstacles faced the sales of U.S.-made supercomputers in Japan as well. Responding to these market limitations in the 1980s, U.S. trade negotiators signed agreements with the Japanese government designed to open up government procurement in Japan to U.S. supercomputer producers. (In Japan, as in the United States, the government dominated the market for supercomputers.) In the mid-1990s, the U.S. government also supported U.S. supercomputer makers in bringing an antidumping case against Japanese supercomputer makers in the U.S. market. That case ultimately forced Japanese companies out of the U.S. market until 2003, when a suspension agreement was signed.

15  

A good survey of supercomputer development in Japan is Y. Oyanagi, 1999, “Development of Supercomputers in Japan: Hardware and Software,” Parallel Computing 25:1545-1567.

16  

D.K. Kahaner. 1992. “High Performance Computing in Japan: Supercomputing.” Asian Technology Information Program. June.

INNOVATION IN SUPERCOMPUTING

While one part of the U.S. government reacted by building walls around the U.S. market, DARPA and its Strategic Computing Initiative (SCI), in concert with other government agencies and programs, took the opposite tack, attempting to stimulate a burst of innovation that would qualitatively alter the industry.17 Computing technology was regarded as the cornerstone of qualitative superiority for U.S. weapons systems. It was argued that the United States could not regain a significant qualitative lead in computing technology merely by introducing faster or cheaper computer components, since Japanese producers had clearly achieved technological parity, if not some element of superiority, in manufacturing them. Furthermore, many technologists believed that continued advances in computer capability based on merely increasing the clock rates of traditional computer processor designs were doomed to slow down as inherent physical limits to the size of semiconductor electronic components were approached. In addition, Amdahl’s law was expected to restrict increases in performance due to an increase in the number of processors used in parallel.18

The approach to stimulating innovation was to fund an intense effort to do what had not previously been done—to create a viable new architecture for massively parallel computers, some of them built around commodity processors, and to demonstrate that important applications could benefit from massive parallelism. Even if the individual processors were less efficient in delivering usable computing power, as long as the parallel architecture was sufficiently scalable, interconnecting a sufficient number of processors might potentially provide a great deal of computing capability. Once the hardware architectural details of how to scale up these systems were determined, very large parallel machines could be put to work, and supercomputers that were orders of magnitude faster would give the government agencies charged with national security new qualitative technological advantages. It was assumed that appropriate software technology would follow.

17  

Investments in high-performance computing were only one area funded by the SCI, which funded over $1 billion in R&D from 1983 to 1993. There are no available data that break out this investment by technology area. Other areas were electronic components, artificial intelligence and expert systems, and large-scale prototype development of advanced military systems intended to explore new technology concepts. The committee is not aware of any objective assessment of the success and utility of the program as a whole. An excellent history of the program may be found in Alex Roland and Phillip Shiman, 2002, Strategic Computing: DARPA and the Quest for Machine Intelligence, 1983-1993, Cambridge, Mass.: MIT Press.

18  

Amdahl’s law states that if a fraction of 1/s of an execution is sequential, then parallelism can reduce execution time by at most a factor of s. Conventional wisdom in the early 1980s was that for many applications of interest Amdahl’s law would restrict gains in performance from parallelism to factors of tens or low hundreds.
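To make this bound concrete (a worked illustration using the footnote’s notation), write the sequential fraction as f = 1/s; with p processors, Amdahl’s law gives

\[
  \mathrm{speedup}(p) \;=\; \frac{1}{f + (1-f)/p} \;\le\; \frac{1}{f} \;=\; s .
\]

For example, if 1 percent of the work is sequential (f = 0.01, s = 100), then 1,000 processors yield a speedup of roughly 1/(0.01 + 0.99/1,000) ≈ 91, and no number of processors can push the speedup beyond 100.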

This was the dream that motivated the architects of the U.S. government’s supercomputer technology investments in the late 1980s. Dozens of new industrial flowers bloomed in DARPA’s Strategic Computing hothouse from the mid-1980s through the early 1990s. Old players and new ones received substantial support for experiments with new, parallel architectures.19

It has become commonplace to point to the high mortality rate among U.S. supercomputer manufacturers in the 1990s and the large amount of resources invested by the U.S. government in now defunct massively parallel supercomputer makers. Many critics believe that the DARPA program was a failure that harmed the market.20 Most of these start-up companies went bankrupt or were sold.

Over this period, however, some important lessons were learned. One was the importance of node performance; another was the importance of high-bandwidth, low-latency, scalable interconnects. The evolution of the Thinking Machines products from the CM-1 (with bit-serial processors and a relatively low-performing, single-stage bit-serial network) to the CM-5 (with a powerful SPARC node enhanced with a vector unit and a powerful, scalable multistage network) is a typical example. Over time, DARPA shifted its emphasis from hardware alone to complementary investments in software that would make the newly developed parallel hardware easier to program and use in important applications. These investments included modest support for the port of industrial codes to the new scalable architectures.

19  

Gordon Bell’s list of experiments includes ATT/Columbia (Non Von), BBN Labs, Bell Labs/Columbia (DADO), CMU (Production Systems), CMU Warp (GE and Honeywell), Encore, ESL, GE (like connection machine), Georgia Tech, Hughes (dataflow), IBM (RP3), MIT/Harris, MIT/Motorola (Dataflow), MIT Lincoln Labs, Princeton (MMMP), Schlumberger (FAIM-1), SDC/Burroughs, SRI (Eazyflow), University of Texas, Thinking Machines (Connection Machine). See Gordon Bell, “PACT 98,” a slide presentation available at <http://www.research.microsoft.com/barc/gbell/pact.ppt>.

20  

A list of failed industrial ventures in this area, many inspired by SCI, includes Alliant, American Supercomputer, Ametek, AMT, Astronautics, BBN Supercomputer, Biin, CDC/ETA Systems, Chen Systems, Columbia Homogeneous Parallel Processor, Cogent, Cray Computer, Culler, Cydrome, Denelcor, Elxsi, Encore, E&S Supercomputers, Flexible, Goodyear, Gould/SEL, Intel Supercomputer Division, IPM, iP-Systems, Kendall Square Research, Key, Multiflow, Myrias, Pixar, Prevec, Prisma, Saxpy, SCS, SDSA, Stardent (Stellar and Ardent), Supercomputer Systems Inc., Suprenum, Synapse, Thinking Machines, Trilogy, VItec, Vitesse, Wavetracer (E. Strohmaier, J.J. Dongarra, and H.W. Meuer, 1999, “Marketplace of High-Performance Computing,” Parallel Computing 25(13):1517-1544).

In the commercial supercomputing arena, there continued to be vector architectures as well as the increasing presence of scalable systems based on commodity processors. There were many common attributes among the supercomputers of this period. Among them were these:

  • Device technology shifted to complementary metal oxide semiconductor (CMOS), both for commodity-based systems and for custom systems. As a result, custom systems lost the advantage of faster technology.

  • The increase in clock and memory speeds coincided with Moore’s law.

  • The reduction of the size of the processor resulted in small-scale multiprocessor systems (two to four processors) being used as nodes in scalable systems; larger shared-memory configurations appeared as high-end technical servers.

  • Vendors began supplying vectorizing and (in some cases) parallelizing compilers, programming tools, and operating systems (mostly UNIX-based), which made these systems easier to program.

The common architectural features—vector processing, parallel shared memory, and, later, message passing—also encouraged third parties to develop software for this class of computers. In particular, standard numerical libraries such as the BLAS21 evolved to supply common high-level operations, and important scientific and engineering applications such as NASTRAN appeared in vectorized and parallelized versions. The development of this software base benefited all supercomputer manufacturers by expanding the total market for machines. Similarly, the availability of common software and a shared programming model benefited the entire user community, both government and commercial.
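As a concrete illustration of such a common high-level operation (a sketch only; the header and link flags depend on which BLAS implementation is installed), the AXPY update y = a·x + y is provided by the Level 1 BLAS routine DAXPY and can be called from C through the CBLAS interface:

```c
/* Illustrative sketch: calling the Level 1 BLAS routine DAXPY (y = a*x + y)
 * through the CBLAS interface.  Assumes a CBLAS implementation is installed;
 * the link flag varies by vendor (often -lblas or a tuned vendor library).
 */
#include <cblas.h>
#include <stdio.h>

int main(void)
{
    double x[4] = {1.0, 2.0, 3.0, 4.0};
    double y[4] = {1.0, 1.0, 1.0, 1.0};

    /* y <- 2.0*x + y, stepping through both vectors with unit stride */
    cblas_daxpy(4, 2.0, x, 1, y, 1);

    for (int i = 0; i < 4; i++)
        printf("%g ", y[i]);            /* prints: 3 5 7 9 */
    printf("\n");
    return 0;
}
```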

By accident or by design, the course correction effected by SCI had some important and favorable economic implications for the U.S. supercomputer industry. Suppose that technology were available to permit large numbers of inexpensive, high-volume commodity microprocessors to divide up the work of a given computing task. Then the continuing steep declines in the cost of commodity processors would eventually make such a system a more economic solution for supplying computing capability than a system designed around much smaller numbers of very expensive custom processors that were falling in cost much less rapidly. If a richer and more portable software base became available for these systems, the cost of their adoption would be reduced. If so, the difference in price trends between custom and commodity processors would eventually make a parallel supercomputer built using commodity components a vastly more economically attractive proposition than the traditional approach using custom processors.

21  

C.L. Lawson, R.J. Hanson, D.R. Kincaid, and F.T. Krogh, 1979, “Basic Linear Algebra Subprograms for Fortran Usage,” ACM Transactions on Mathematical Software 5:308-325; J.J. Dongarra, J. Du Croz, S. Hammarling, and R.J. Hanson, 1988, “An Extended Set of Fortran Basic Linear Algebra Subprograms,” ACM Transactions on Mathematical Software 14(1):1-17; J.J. Dongarra, J. Du Croz, S. Hammarling, and I.S. Duff, 1990, “A Set of Level 3 Basic Linear Algebra Subprograms,” ACM Transactions on Mathematical Software 16(1):1-17.

In the late 1980s and early 1990s, DARPA shifted more of its supercomputer investments into systems based on commercially available processors, at Thinking Machines (what was to become the CM-5, using SPARC processors), at Intel (what was to become its Paragon supercomputer line, using Intel’s i860 processor), and at Cray (its T3D system, using DEC’s Alpha processor).22 The net impact of this shift benefited the development and sales of commodity-based systems. This outcome was particularly important given the increasing and highly competent Japanese competition in the market for traditional vector supercomputers. Rather than backing an effort to stay ahead of the competition in an established market in which competitors had seized the momentum, research- and experience-rich U.S. companies threw the entire competition onto a whole new battlefield, where they had a substantial advantage over their competitors.

Some of these hardware and software characteristics also found their way into a new generation of supercomputers, called “mini-supercomputers” (e.g., Convex, Alliant, Multiflow). Unlike the products from Cray and CDC, the mini-supercomputers were air-cooled, had virtual memory operating systems, and sold for under $1 million. The mini-supercomputer systems included UNIX operating systems and automatic vectorizing/parallelizing compilers. This new generation of software systems was based on prior academic research. With UNIX came a wealth of development tools and software components (editors, file systems, etc.). The systems also made extensive use of open standards used for I/O busses, peripheral devices, and networking (e.g., TCP/IP). This standardization made it easier for users and independent software vendors (ISVs) to move from one platform to another. Additionally, the client/server model evolved through the use of Ethernet and TCP/IP. The NSF-funded supercomputer centers helped promote the adoption of UNIX for supercomputers; the Cray systems at those centers were required to run UNIX. Also, DARPA required the use of UNIX as a standard operating system for many of the supercomputing projects it funded. Ultimately every supercomputer platform supported UNIX. That in turn increased the use of the programming language C, which became widely used to write numerically intense applications. The newer generation of compilers enabled applications written in standard Fortran and C to be optimized and tuned to the contemporary supercomputers. This led to the widespread conversion of ISV-developed software and consequently the widespread adoption of supercomputing by the commercial (non-government-sponsored) marketplace.

22  

The Myrinet commodity interconnects used in a number of commodity supercomputer systems were also developed with DARPA support at about this time (Alex Roland and Philip Shiman, 2002, Strategic Computing: DARPA and the Quest for Machine Intelligence, 1983-1993, Cambridge, Mass.: MIT Press, pp. 308-317; DARPA, Technology Transition, 1997, pp. 42, 45).

Moore’s law continued to hold, and to a large degree it changed the face of supercomputing. The systems of the 1980s were all built from CMOS or from ECL gate arrays. As the density of CMOS increased, it became possible to put an entire processor on one die, creating a microprocessor. This led to the attack of “killer micros.”23 The killer micro permitted multiple microprocessors to be coupled together and run in parallel. For applications that could be parallelized (both algorithmically and by localizing data to a particular processor/memory system), a coupled system of killer micros could outperform a custom-designed supercomputer. Just as important, the single-processor scalar performance of a killer micro often exceeded that of a supercomputer. This next generation of supercomputers brought a change of architectures: high-performance vector systems began to be replaced by parallel processing, often massive—hundreds or thousands of microprocessors.
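In today’s notation (the OpenMP parallel loops mentioned later in this chapter), the shared-memory side of this approach looks like the sketch below; the example is illustrative only, with independent loop iterations divided among the processors and a reduction combining their partial sums.

```c
/* Illustrative sketch of shared-memory parallelism across coupled
 * microprocessors: each thread (typically one per processor) sums a chunk
 * of the array, and the reduction clause combines the partial sums.
 * Build with OpenMP enabled, e.g.:  cc -fopenmp sum.c -o sum
 */
#include <omp.h>
#include <stdio.h>

#define N 1000000

static double a[N];

int main(void)
{
    double sum = 0.0;

    for (int i = 0; i < N; i++)          /* set up some data */
        a[i] = 1.0 / (i + 1.0);

    /* Iterations are independent, so they can be divided among processors. */
    #pragma omp parallel for reduction(+:sum)
    for (int i = 0; i < N; i++)
        sum += a[i];

    printf("sum = %f (up to %d threads)\n", sum, omp_get_max_threads());
    return 0;
}
```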

Thus, although it is true that there was an extraordinarily high mortality rate among the companies that developed parallel computer architectures in the 1980s and early 1990s, much was learned from the technical failures as well as the successes. Important architectural and conceptual problems were confronted, parallel systems were made to work at a much larger scale than in the past, and the lessons learned were absorbed by other U.S. companies, which typically hired key technical staff from defunct parallel supercomputer pioneers. Subsequently, there were five major new U.S. entrants into the high-performance computing (HPC) market in the 1990s—IBM, SGI, Sun, DEC/Compaq (recently merged into Hewlett-Packard), and Convex/HP—which today have survived with the lion’s share (as measured in numbers of systems) of the HPC marketplace.

23  

The term “killer micro” was popularized by Eugene Brooks in his presentation to the Teraflop Computing Panel, “Attack of the Killer Micros,” at Supercomputing 1989 in Reno, Nev. (see also <http://jargon.watson-net.com/jargon.asp?w=killer%20micro>).

Though dreams of effortless parallelism seem as distant as ever, the fact is that the supercomputer marketplace today is dominated by a new class of useful, commodity-processor-based parallel systems that—while not necessarily the most powerful high-performance systems available—are the most widely used. The commercial center of gravity of the supercomputer market today lies with U.S. companies marketing commodity-processor parallel systems that capitalize on technology investments made by the U.S. government in large-scale parallel hardware (and to a lesser extent, software) technology in the 1980s and 1990s.

RECENT DEVELOPMENTS IN SUPERCOMPUTING

To some extent, the reasons for the dominance of commodity-processor systems are economic, as illustrated by the hardware costs shown in Figure 3.2. Contemporary distributed-memory supercomputer systems based on commodity processors (like Linux clusters) appear to be substantially more cost effective—by roughly an order of magnitude—in delivering computing power to applications that do not have stringent communication requirements. However, there has been little progress, and perhaps even some regress, in making scalable systems easy to program. Software directions that were started in the early 1990s (such as CM-Fortran and High-Performance Fortran) were largely abandoned. The payoff to finding better ways to program such systems and thus expand the domains in which these systems can be applied would appear to be large.

The move to distributed memory has forced changes in the programming paradigm of supercomputing. The high cost of processor-to-processor synchronization and communication requires new algorithms that minimize those operations. The structuring of an application for vectorization is seldom the best structure for parallelization on these systems. Moreover, despite some research successes in this area, without some guidance from the programmer, compilers are not generally able to detect enough of the necessary parallelism or to reduce sufficiently the interprocessor overheads. The use of distributed memory systems has led to the introduction of new programming models, particularly the message passing paradigm, as realized in MPI, and the use of parallel loops in shared memory subsystems, as supported by OpenMP. It also has forced significant reprogramming of libraries and applications to port onto the new architectures. Debuggers and performance tools for scalable systems have developed slowly, however, and even today most users consider the programming tools on parallel supercomputers to be inadequate.
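The sketch below (illustrative only, and deliberately simple) shows the message passing style described above: each process computes its share of a numerical integration independently, and a single MPI collective operation combines the partial results.

```c
/* Illustrative sketch of the message passing model: ranks independently
 * integrate 4/(1+x^2) over [0,1] by the midpoint rule (approximating pi),
 * then one collective call sums the partial results on rank 0.
 * Build with an MPI compiler wrapper, e.g.:  mpicc pi.c -o pi
 */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, nprocs;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    const int n = 1000000;                 /* number of intervals */
    const double h = 1.0 / n;
    double local = 0.0, pi = 0.0;

    /* Each rank handles every nprocs-th interval; no data is shared. */
    for (int i = rank; i < n; i += nprocs) {
        double x = h * (i + 0.5);
        local += 4.0 / (1.0 + x * x);
    }
    local *= h;

    /* One explicit communication step: sum the partial results on rank 0. */
    MPI_Reduce(&local, &pi, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("pi is approximately %.12f\n", pi);

    MPI_Finalize();
    return 0;
}
```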

THE U.S. HIGH-PERFORMANCE COMPUTING INDUSTRY TODAY

Today, the tension in the industry is between a large number of applications that make acceptable use of relatively inexpensive supercomputers incorporating large numbers of low-cost commodity processors and a small number of highly important applications (predominantly the domain of government customers) in which the required performance can currently be provided only by highly tuned systems making use of expensive custom components. This tension is at the root of many of the policy issues addressed by this report.

Despite the apparent economic weakness of the sole remaining U.S. vector-based supercomputer maker, Cray (which makes supercomputers based on custom processors), available data on the supercomputer marketplace (based on the TOP500 list of June 2004) show it is dominated by U.S. companies today. International Data Corporation (IDC) numbers paint a similar picture: In 2000, U.S. vendors had 93 percent of the high-performance technical computing market (defined to include all technical servers) and 70 percent of the capability market (defined as systems purchased to solve the largest, most performance-demanding problems). In 2002 the numbers were 95 percent and 81 percent and in 2003 they were 98 percent and 88 percent, respectively, showing a continued strengthening of U.S. vendors. Ninety-four percent of technical computing systems selling for more than $1 million in 2003 were U.S. made.24 It may be a legitimate matter of concern to U.S. policymakers that the fastest computer in the world was designed in Japan and has been located there for the last 2 years. But it would be inaccurate to assert that the U.S. supercomputer industry is in trouble. Indeed, the competitive position of U.S. supercomputer producers is as strong as it has been in decades, and all signs point to continued improvement.

To characterize the current dynamics of the U.S. industry, the committee turned to a detailed analysis of the TOP500 data (using the June 2004 list), which are available for the period 1993-2003.25 While the TOP500 lists are the best publicly available source of information on supercomputing trends, it is important to keep in mind the limitations of this source of information.

24  

Source: Earl Joseph, Program Vice President, High-Performance Systems, IDC; email exchanges, phone conversations, and in-person briefings from December 2003 to October 2004.

25  

For details and the data used in the analysis that follows, see <http://www.top500.org>.

The Rmax Linpack metric used by the TOP500 ranking does not correlate well with performance on many real-life workloads; this issue is further discussed in the section on metrics in Chapter 5. While no one number can characterize the performance of a system for diverse workloads, it is likely that the Rmax metric exaggerates by at least a factor of 2 the real performance of commodity platforms relative to custom platforms. Similarly, custom high-performance systems are significantly more expensive than commodity systems relative to their performance as measured by Rmax. Thus, if Rmax is used as a proxy for market share, then the TOP500 list greatly exaggerates the dollar value of the market share of commodity systems. The TOP500 data merit analysis because the changes and the evolution trends identified in the analysis are real. However, one should not attach too much significance to the absolute numbers.

Some large deployed systems are not reported in the TOP500 list. In some cases, organizations may not want to have their computer power known, either for security or competitiveness reasons. Thus, companies that sell mainly to classified organizations may see their sales underreported in the TOP500 lists. In other cases, organizations may not see value in a TOP500 listing or may consider that running a benchmark is too burdensome. This is especially true for companies that assemble clusters on their own and need to provide continuous availability. Although Web search and Web caching companies own the largest clusters, those clusters do not usually appear in the TOP500 lists. Many large clusters used by service companies and some large clusters deployed in academia are also missing from the TOP500 list, even though they could be there. It is reasonable to assume that this biases the TOP500 listing toward underreporting of commercial systems and overreporting of research systems, supporting the argument that use of high-performance computing platforms in industry does not seem to be declining. While custom systems will be underreported because of their heavier use in classified applications, clusters will be underreported because of their use in large service deployments. It is not clear whether these two biases cancel each other out.

TOP500 provides information on the platform but not on its usage. The size of deployed platforms may not be indicative of the size of parallel applications run on these platforms. Industry often uses clusters as capacity systems; large clusters are purchased to consolidate resources in one place, reducing administration costs and providing better security and control. On the other hand, computing tends to be less centralized in academia, and a cluster often serves a small number of top users. Thus, a good penetration of TOP500 platforms in industry does not necessarily indicate that applications in industry have scaled up in proportion to the scaling of TOP500 platforms over the years; the size of academic platforms is a better indicator of the scale of applications running on them.

FIGURE 3.3 TOP500 Linpack performance.

Keeping those caveats in mind, many things can be learned from studying the TOP500 data.

There has been continuing rapid improvement in the capability of high-performance systems over the last decade (see Figure 3.3).26 Mean Linpack performance has improved fairly steadily, by roughly an order of magnitude every 4 years (about 80 percent improvement annually). The performance of the very fastest machines (as measured by the Rmax27 of the machine) has shown much greater unevenness over this period but on average seems roughly comparable. Interestingly, the performance of the least capable machines on the list has been improving more rapidly than mean performance, and the ratio between the least capable and the list mean is substantially smaller now than it was back in 1993. This reflects the fact that performance improvement in low-cost commodity microprocessors (used in lower-end TOP500 systems) in recent years has exceeded the already impressive rates of performance improvement in custom processors used in the higher-end systems; it also reflects the fact that in recent years, the average size of commonly available clusters has increased more rapidly than the size of the most powerful supercomputers.

26  

ASCI White and ASCI Red are two supercomputers installed at DOE sites as part of the ASC strategy. Information on all of the ASC supercomputers is available at <http://www.llnl.gov/asci/platforms/platforms.html>.

27  

The Rmax is the maximal performance achieved on the Linpack benchmark—for any size system of linear equations.

There is no evidence of a long-term trend to widening performance gaps between the least and most capable systems on the TOP500 list (see Figure 3.4). One measure of this gap is the relative standard deviation of Rmax of machines on this list, normalized by dividing by mean Rmax in any given year. There was a significant jump in this gap in early 2002, when the Earth Simulator went operational, but it has since diminished to prior levels as other, somewhat slower machines made the list and as the least capable machines improved faster than the mean capability. Essentially the same story is told if one simply measures the ratio between greatest performance and the mean.

FIGURE 3.4 Rmax dispersion in TOP500.
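For concreteness, the dispersion measure plotted in Figure 3.4 can be computed as sketched below (an illustrative routine, not the committee’s actual code): the standard deviation of a year’s Rmax values divided by their mean.

```c
/* Illustrative sketch of the dispersion measure: the standard deviation of
 * the Rmax values on a given year's list, divided by the mean Rmax
 * (the coefficient of variation).
 */
#include <math.h>
#include <stddef.h>

double rmax_dispersion(const double *rmax, size_t n)
{
    double mean = 0.0, var = 0.0;

    for (size_t i = 0; i < n; i++)
        mean += rmax[i];
    mean /= (double)n;

    for (size_t i = 0; i < n; i++)
        var += (rmax[i] - mean) * (rmax[i] - mean);
    var /= (double)n;                /* population variance */

    return sqrt(var) / mean;         /* normalized by the mean */
}
```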


Increasingly, a larger share of high-end systems is being used by industry and a smaller share by academia. There has been a rapid increase in the share of TOP500 machines installed in industrial locations (see Figure 3.5). In the last several years, roughly 40 to 50 percent of the TOP500 systems (number of machines) have been installed in industry, as compared with about 30 percent in 1993. This contrasts with the situation in academia, which had a substantially smaller share of TOP500 systems in the late 1990s than in the early 1990s. There has been some increase in academic share in the last several years, accounted for mainly by Linux cluster-type systems, often self-built. It is tempting to speculate that the proliferation of relatively inexpensive, commodity-processor-based HPC systems is driving this development. There is one qualification to this picture of a thriving industrial market for high-end systems, however: a growing qualitative gap between the scale and types of systems used by industry and those used by cutting-edge government users, with industry relying less and less on the most highly capable systems. There have been no industrial users in the top 20 systems for the last 3 years, contrasting with at least one industrial user in the top 20 in each of the previous 9 years (see Figure 3.6).

FIGURE 3.5 TOP500 by installation type.

FIGURE 3.6 Top 20 machines by installation type.

U.S. supercomputer makers are performing strongly in global supercomputer markets. Their global market share has steadily increased, from less than 80 percent to more than 90 percent of TOP500 units sold (see Figure 3.7). Measuring market share by share of total computing capability sold (total Rmax) is probably a better proxy for revenues and presents a more irregular picture, but it also suggests a significant increase in market share, by about 10 percentage points (see Figure 3.8). The conclusion also holds at the regional level. U.S. computer makers’ share of European and other (excluding Japan) supercomputer markets also increased significantly, measured by either machines (Figure 3.9) or the capability proxy for revenues (Figure 3.10).

U.S. supercomputer capabilities are strong and competitive in the highest performing segment of the supercomputer marketplace (see Figure 3.11). Even if we consider only the 20 fastest computers in the world every year, the share manufactured by U.S. producers has been increasing steadily since the mid-1990s and is today about where it was in 1993—

Suggested Citation:"3 Brief History of Supercomputing." National Research Council. 2005. Getting Up to Speed: The Future of Supercomputing. Washington, DC: The National Academies Press. doi: 10.17226/11148.
×

FIGURE 3.7 Share of TOP500 machines by country of maker.

FIGURE 3.8 Rmax share of TOP500 machines by maker.

Suggested Citation:"3 Brief History of Supercomputing." National Research Council. 2005. Getting Up to Speed: The Future of Supercomputing. Washington, DC: The National Academies Press. doi: 10.17226/11148.
×

FIGURE 3.9 U.S. maker share of TOP500 installations in each geographical area.

FIGURE 3.10 U.S. maker share of total TOP500 Rmax in each geographical area.

FIGURE 3.11 Top 20 machines by maker and country of installation.

This trend reverses the plunge in U.S. maker share of the fastest machines that took place in 1994, when U.S. producers made only 8 of the top 20. Japanese producer performance is a mirror image of the U.S. picture, rising to 12 of the top 20 in 1994 and then falling steadily to 2 in 2003. The Japanese Earth Simulator was far and away the top machine from 2002 through mid-2004, but most of the computers arrayed behind it were American-made, unlike the situation in 1994.

A similar conclusion holds if we consider access by U.S. users to the fastest computers (Figure 3.11). Of the top 20, 14 or 15 were installed in the United States in the last 3 years, compared with lows of 7 or 8 observed earlier in the 1990s (a sharp drop from 1993, the initial year of the TOP500, when 16 or 17 of the top 20 were on U.S. soil). Again, Japan is a mirror image of the United States, with 1 or 2 of the top 20 machines installed in Japan in 1993, peaking at 10 in 1994 and then dropping fairly steadily to 2 or 3 over the last 3 years.

There are indications that national trade and industrial policies may be affecting behavior in global markets. U.S. supercomputer makers now have effectively 100 percent of their home market, measured by machines (Figure 3.9) or capability (Figure 3.10). No machines on the TOP500 have been sold by Japan (the principal technological competitor of the United States) to the United States since 2000, and only a handful on the list were sold in prior years going back to 1998. This contrasts with between 2 and 5 machines on the lists in years prior to 1998. (About half of the TOP500 systems currently installed in Japan are U.S. made.)

These data coincide with a period in which formal and informal barriers to purchases of Japanese supercomputers were created in the United States. Conversely, U.S. producer market share in Japan, measured in either units or capability, began to fall after the same 1998 watershed in trade frictions. While this analysis does not prove it, one might suspect a degree of retaliation, official or not, in Japan. Given that U.S. producers have been doing so well in global markets for these products, it is hard to argue that policies encouraging the erection of trade barriers in this sector would have any beneficial effect on either U.S. producers or U.S. supercomputer users. This is a subject to which the committee will return.

An Industrial Revolution

From the mid-1960s to the early 1980s, the supercomputer industry was dominated by two U.S. firms: first CDC, then Cray. Their product, the highly capable, very expensive, custom-designed vector supercomputer, with individual models typically produced in quantities well under 100, was easily identified and categorized. This small, largely American world underwent two seismic shifts in the late 1980s.

Figure 3.12 sketches out the first of these changes. As described earlier, capable Japanese supercomputer vendors began, for the first time, to win significant sales in international markets. The Japanese vendors saw their share of vector computer installations roughly double, from just over 20 percent to more than 40 percent, between 1986 and 1992.28

The second development was the entry of new types of products, for example, non-vector supercomputers: typically, massively parallel machines built using large numbers of processors interconnected within a single system.

28  

These data are taken from H.W. Meuer, 1994, “The Mannheim Supercomputer Statistics 1986-1992,” TOP500 Report 1993, J.J. Dongarra, H.W. Meuer, and E. Strohmaier, eds., University of Mannheim, pp. 1-15. See also Erich Strohmaier, Jack J. Dongarra, Hans W. Meuer, and Horst D. Simon, 1999, “The Marketplace of High Performance Computing,” Parallel Computing 25(13-14):1517-1544.

FIGURE 3.12 Share of vector supercomputers installed.

One impetus for the development of these systems was DARPA’s Strategic Computing Initiative in the 1980s, itself in part a reaction to the data depicted in Figure 3.12 and discussed earlier, together with other U.S. government initiatives that coordinated with and followed this initial effort. These new forms of supercomputing systems are not tracked in Figure 3.12.

The new types of supercomputing systems were initially built entirely from custom-designed and custom-manufactured components used only in these proprietary supercomputer architectures. In the early 1990s, however, vendors reacted to the high fixed costs of designing and manufacturing specialized processors destined for machines that would be built, even by the most optimistic estimates, in volumes of only a few hundred. Some of these machines began to use the most capable commercially available microprocessors, confining the proprietary elements of the design to the overall system architecture and the interconnection components. To offer systems with more attractive cost/performance characteristics, designers thus shifted from a purely custom approach to a hybrid approach that made use of COTS processor components.

Over the last 4 years, the high-end computing marketplace has undergone another fairly radical transformation, leaving makers of traditional supercomputers in an increasingly weakened economic position. The impetus for this transformation has been the growing availability of commodity high-performance interconnections, which, coupled with mass-produced, high-volume commodity microprocessors, are now being used to build true commodity supercomputers: systems built entirely from COTS hardware components. Although not commonly appreciated, such commodity supercomputers have, over the last several years, rapidly come to dominate the supercomputer marketplace.

To see this, the committee has categorized high-end systems into three groups. First, there are the so-called commodity systems, systems built using COTS microprocessors and COTS interconnections. The first such commodity system appeared on the TOP500 list in 1997.29 Second, there are machines using custom interconnections linking COTS microprocessors, or machines making use of customized versions of COTS microprocessor chips. These systems are labeled as hybrid systems. Finally, there are machines using both custom processors and custom interconnects. These are labeled as full custom systems. All traditional vector supercomputers fall into this category, as do massively parallel systems using custom processors and interconnects.
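As an illustration only, this three-way taxonomy amounts to a simple decision rule on two attributes of a system. The short Python sketch below restates that rule in code; the function name, the "cots"/"custom"/"customized-cots" labels, and the example calls are hypothetical conveniences for exposition, not elements of the TOP500 database.

# Illustrative restatement of the committee's three-way taxonomy (sketch only).
def classify_system(processor: str, interconnect: str) -> str:
    """Return the category of a high-end system.

    processor:    "cots", "customized-cots", or "custom"  (hypothetical labels)
    interconnect: "cots" or "custom"
    """
    if processor == "cots" and interconnect == "cots":
        return "commodity"    # COTS microprocessors and COTS interconnections
    if processor in ("cots", "customized-cots"):
        return "hybrid"       # COTS or customized-COTS processors with a custom interconnect
    return "full custom"      # custom processors, as in traditional vector machines and custom MPPs

# Hypothetical examples of each category:
print(classify_system("cots", "cots"))      # e.g., a self-built Linux cluster      -> commodity
print(classify_system("cots", "custom"))    # e.g., COTS CPUs, proprietary switch   -> hybrid
print(classify_system("custom", "custom"))  # e.g., a traditional vector machine    -> full custom

Footnote 30 below lists the vendor-by-vendor rules the committee actually applied; the sketch captures only the two-attribute logic behind them.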

Using this taxonomy, all supercomputers on the TOP500 list from June 1993 through June 2004 were categorized.30 The results are summarized in Figure 3.13, which shows changes in mean Rmax for each of these system types from 1993 to 2004. Commodity systems showed the greatest annual growth rates in performance; hybrid systems showed the least growth in Linpack performance.

29  

This was the experimental University of California at Berkeley network of workstations (NOW).

30  

The categorization used the following rules: All AP1000, Convex, Cray, Fujitsu, Hitachi, Hitachi SR8000, IBM 3090, Kendall Square, MasPar, Ncube, NEC, and Thinking Machines CM2 processor-based systems were categorized as custom. All AMD processor-based systems were categorized as commodity. All Alpha processor systems were commodity except those made by Cray, DEC/HP Alphaserver 8400 systems, and Alphaserver 8400, 4100, and 300 clusters, which were categorized as hybrid. All Intel processor-based systems were commodity, except those made by Intel (Sandia ASC Red, Delta, XP, other iPSC 860), Meiko, Cray, HP Superdome Itanium systems, and SGI Altix systems, which were categorized as hybrid. All Power processor systems were categorized as hybrid except IBM pSeries, for which use of commodity connections was noted in the TOP500 database, and the Param Padma cluster, which were categorized as commodity. All SPARC processor systems were hybrid except those that were “self-made,” which were categorized as commodity. All Hewlett-Packard processor systems were categorized as hybrid. All MIPS-based systems were hybrid, except for SGI Origin systems for which the use of Ethernet interconnects was noted; these were categorized as commodity. The IBM Blue Gene system using the Power PC processor was hybrid; self-made and eServer blade systems using this processor were commodity.

FIGURE 3.13 Mean Rmax by system type.

Trend lines fitted to Figure 3.13 have slopes yielding annual growth rates in Rmax of 111 percent for commodity systems, 94 percent for custom systems, and 73 percent for hybrid systems.31 These rates are considerably faster than the annual growth in single-processor floating-point performance shown on other benchmarks, suggesting that, for both commodity and custom systems, increases in the number of processors and improvements in interconnect performance yielded supercomputer performance gains significantly greater than those attributable to component processor improvement alone. Hybrid system performance improvement, on the other hand, roughly tracked single-processor performance gains.

Nonetheless, the economics of using much less expensive COTS microprocessors was compelling. Hybrid supercomputer systems rapidly replaced custom systems in the early 1990s. Custom supercomputer systems were increasingly confined to applications where software solutions using massively parallel hybrid systems were unsatisfactory or unavailable, or where the need for very high performance warranted a price premium.

31  

A regression line of the form ln Rmax = a + b × Time was fit, where Time is a variable incremented by one every half year, corresponding to a new TOP500 list. Annualized trend growth rates were calculated as exp(2b) − 1.
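To make the footnote’s calculation concrete, the short Python sketch below fits ln Rmax = a + b × Time by ordinary least squares and converts the half-yearly slope b into an annualized rate via exp(2b) − 1. The Rmax values in the sketch are invented for illustration and are not taken from the TOP500 lists. Working backward from the reported rates, a 111 percent annual rate corresponds to a half-yearly slope of roughly b = ln(2.11)/2 ≈ 0.37.

import math

# Invented mean Rmax values (Gflops) for successive half-yearly TOP500 lists;
# these are illustrative only, not the report's data.
rmax = [50, 70, 95, 140, 200, 290, 420, 600]
time = list(range(len(rmax)))              # one step per half-yearly list

# Ordinary least-squares fit of ln(Rmax) = a + b * Time.
y = [math.log(v) for v in rmax]
n = len(rmax)
mean_t = sum(time) / n
mean_y = sum(y) / n
b = sum((t - mean_t) * (v - mean_y) for t, v in zip(time, y)) / sum(
    (t - mean_t) ** 2 for t in time
)

# Two lists per year, so the annualized trend growth rate is exp(2b) - 1.
annual_growth = math.exp(2 * b) - 1
print(f"half-yearly slope b = {b:.3f}; implied annual growth = {annual_growth:.0%}")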


Commodity high-performance computing systems first appeared on the TOP500 list in 1997, but it was not until 2001-2002 that they began to show up in large numbers. Since 2002, their numbers have swelled, and today commodity systems account for over 60 percent of the systems on the list (see Figure 3.14). Just as hybrid systems replaced many custom systems in the late 1990s, commodity systems today appear to be displacing hybrid systems in acquisitions. A similar picture is painted by data on Rmax, which, as noted above, is probably a better proxy for systems revenues. Figure 3.15 shows how the distribution of total TOP500 system performance between these classes of systems has changed over time.

Furthermore, the growing marketplace dominance of commodity supercomputer systems is not confined to the low end of the market. A similar pattern has been evident in the very highest performance systems.

FIGURE 3.14 Share of TOP500 by system type.

FIGURE 3.15 Share of TOP500 Rmax by system type.

Figure 3.16 shows how the number of top 20 systems in each of these categories has changed over time. A commodity system did not appear among the 20 highest performing systems until mid-2001, but commodity supercomputers now account for 12 of the 20 systems with the highest Linpack scores. As with the TOP500 list as a whole, custom systems in the top 20 were replaced by hybrid systems in the 1990s, and the hybrid systems have in turn been replaced by commodity systems over the last 3 years.

This rapid restructuring in the type of systems sold in the marketplace has had equally dramatic effects on the companies selling supercomputers. In 1993, the global HPC marketplace (with revenues again proxied by total Rmax) was still dominated by Cray, with about a third of the market, and four other U.S. companies, with about another 40 percent of the market (three of those four companies have since exited the industry). The three Japanese vector supercomputer makers accounted for another 22 percent of TOP500 performance (see Figure 3.17).

FIGURE 3.16 Share of top 20 machines by system type.

Of the five U.S. companies with significant market share on this chart, two (Intel and Thinking Machines, second only to Cray) were building hybrid systems and three (Cray, Hewlett-Packard, and Kendall Square Research) were selling custom systems.32 The makers of traditional custom vector supercomputers (Cray and its Japanese vector competitors) held only about half the market share here that they held when only vector computers are considered (compare Figure 3.12). Clearly, the HPC marketplace was undergoing a profound transformation in the early 1990s.

A decade later, after the advent of hybrid systems and then of commodity high-end systems, the players have changed completely (see Figure 3.18). A company that was not even present on the list in 1993 (IBM, marketing both hybrid and commodity systems) now accounts for over half of the market; Hewlett-Packard (mainly hybrid systems) now has roughly the same market share as all three Japanese producers had back in 1993; and entirely new, pure-commodity U.S. vendors in this product space (Dell, Linux Networx) are now larger than two of the three traditional Japanese supercomputer vendors.

32  

Although some of the Thinking Machines systems counted here were using older proprietary processors, most of the Thinking Machines supercomputers on this chart were newer CM-5 machines using commodity SPARC processors.

FIGURE 3.17 TOP500 market share (Rmax) by company, June 1993.

FIGURE 3.18 TOP500 market share (Rmax) by company, June 2004.

The most successful Japanese producer, NEC, has about half of the TOP500 market share it had in 1993. Cray is a shadow of its former market presence, with only 2 percent of installed capability. Two other U.S. HPC vendors, Sun and SGI, which grew significantly with the flowering of hybrid systems in the late 1990s, have ebbed with the advent of commodity systems and now have market shares comparable to those of the pure-commodity supercomputer vendors and of self-made systems.

Over the last 15 years, extraordinary technological ferment has continuously restructured the economics of this industry and the companies surviving within its boundaries. Any policy designed to keep needed supercomputing capabilities available to U.S. government and industrial users must recognize that the technologies and companies providing these systems are living through a period of extremely rapid technological and industrial change.

IMPACTS

Throughout the computer age, supercomputing has played two important roles. First, it enables new and innovative approaches to scientific and engineering research, allowing scientists to solve previously intractable problems or to obtain superior answers. Often, supercomputers have allowed scientists, engineers, and others to acquire knowledge from simulations. Simulations can replace experiments in situations where experiments are impossible, unethical, hazardous, prohibited, or too expensive; they can support theoretical experiments with systems that cannot be created in reality, in order to test the predictions of theories; and they can enhance experiments by allowing measurements that might not be possible in a real experiment. In recent decades, simulations on high-performance computers have become essential to the design of cars and airplanes, turbines and combustion engines, and silicon chips and magnetic disks; they have been used extensively in support of petroleum exploration and exploitation. Accurate weather prediction would not be possible without supercomputing. According to a report by the Lawrence Berkeley National Laboratory (LBNL) for DOE, “Simulation has gained equal footing to experiments and theory in the triad of scientific process.”33 Indeed, a significant fraction of the articles published in top scientific journals in areas such as physics, chemistry, earth sciences, astrophysics, and biology depend for their results on supercomputer simulations.

33  

LBNL. 2002. DOE Greenbook—Needs and Directions in High-Performance Computing for the Office of Science. Prepared for the U.S. Department of Energy. April, p. 1.


The second major role of supercomputing technology has been its spillover effect on computing in general: today’s desktop computer has the capability of the supercomputers of a decade ago.

Direct Contributions

Supercomputers continue to enable major scientific contributions, and supercomputing is also critical to our national security. Supercomputing applications are discussed in detail in Chapter 4; here the committee highlights a few of the contributions of supercomputing over the years.

The importance of supercomputing has been recognized in many reports. The 1982 Lax report concluded that large-scale computing was vital to science, engineering, and technology.34 It provided several examples. Progress in oil reservoir exploitation, quantum field theory, phase transitions in materials, and the development of turbulence was becoming possible through the combination of supercomputing with renormalization group techniques (p. 5). Aerodynamic design using a supercomputer produced an airfoil with 40 percent less drag than designs obtained with previous experimental techniques (p. 5). Supercomputers were also critical for designing nuclear power plants (p. 6). The Lax report also praised supercomputers for helping to find new phenomena through numerical experiments, such as the discovery of nonergodic behavior in the formation of solitons and the presence of strange attractors and universal features common to a large class of nonlinear systems (p. 6). As supercomputers become more powerful, new applications emerge that leverage their increased performance. Recently, supercomputer simulations have been used to understand the evolution of galaxies, the life cycle of supernovas, and the processes that lead to the formation of planets.35 Such simulations provide invaluable insight into the processes that shaped our universe and inform us of the likelihood that life-friendly planets exist. Simulations have also been used to elucidate various biological mechanisms, such as the selective transfer of ions or water molecules through channels in cellular membranes and the behavior of various enzymes.36

34  

National Science Board. 1982. Report of the Panel on Large Scale Computing in Science and Engineering. Washington, D.C., December 26 (the Lax report).

35  

“Simulation May Reveal the Detailed Mechanics of Exploding Stars,” ASC/Alliances Center for Astrophysical Thermonuclear Flashes, see <http://flash.uchicago.edu/website/home/>; “Planets May Form Faster Than Scientists Thought,” Pittsburgh Supercomputer Center, see <http://www.psc.edu/publicinfo/news/2002/planets_2002-12-11.html>; J. Dubinski, R. Humble, U.-L. Pen, C. Loken, and P. Martin, 2003, “High Performance Commodity Networking in a 512-CPU Teraflops Beowulf Cluster for Computational Astrophysics,” Paper submitted to the SC2003 conference.

Climate simulations have led to an understanding of the long-term effects of human activity on Earth’s atmosphere and have permitted scientists to explore many what-if scenarios to guide policies on global warming. We now have a much better understanding of ocean circulation and of global weather patterns such as El Niño.37 Lattice quantum chromodynamics (QCD) computations have enhanced our basic understanding of matter by exploring the standard model of particle physics.38 Box 3.1 highlights the value of having a strong supercomputing program to solve unexpected, critical national problems.

Codes initially developed for supercomputers have been critical for many applications, such as petroleum exploration and exploitation (three-dimensional analysis and visualization of huge amounts of seismic data and reservoir modeling), aircraft and automobile design (computational fluid mechanics codes, combustion codes), civil engineering design (finite element codes), and finance (creation of a new market in mortgage-backed securities).39

Much of the early research on supercomputers occurred in the laboratories of DOE, NASA, and other agencies. As the need for supercomputing in support of basic science became clear, the NSF supercomputing centers were initiated in 1985, partly as a response to the Lax report. Their mission has expanded over time. The centers have provided essential supercomputing resources in support of scientific research and have driven important research in software, particularly operating systems, compilers, network control, mathematical libraries, and programming languages and environments.40

Supercomputers play a critical role for the national security community, according to a report for the Secretary of Defense.41

36  

Benoit Roux and Klaus Schulten. 2004. “Computational Studies of Membrane Channels.” Structure 12 (August): 1.

37  

National Energy Research Scientific Computing Center. 2002. “NERSC Helps Climate Scientists Complete First-Ever 1,000-Year Run of Nation’s Leading Climate-Change Modeling Application.” See <http://www.lbl.gov/Science-Articles/Archive/NERSC-1000-Year-climate-model.html>.

38  

D. Chen, P. Chen, N.H. Christ, G. Fleming, C. Jung, A. Kahler, S. Kasow, Y. Luo, C. Malureanu, and C.Z. Sui. 1998. “3 Lattice Quantum Chromodynamics Computations.” This paper, submitted to the SC1998 conference, won the Gordon Bell Prize in the category Price-Performance.

39  

NRC. 1995. Evolving the High Performance Computing and Communications Initiative to Support the Nation’s Information Infrastructure. Washington, D.C.: National Academy Press, p. 35.

40  

Ibid., p. 108.

41  

Office of the Secretary of Defense. 2002. Report on High Performance Computing for the National Security Community.

BOX 3.1
Sandia Supercomputers Aid in Analysis of Columbia Disaster

Sandia National Laboratories and Lockheed Martin offered Sandia’s technical support to NASA immediately after the February 1, 2003, breakup of the space shuttle Columbia. Sandia personnel teamed with analysts from four NASA Centers to provide timely analysis and experimental results to NASA Johnson Space Center accident investigators for the purpose of either confirming or closing out the possible accident scenarios being considered by NASA. Although Sandia’s analysis capabilities had been developed in support of DOE’s stockpile stewardship program, they contained physical models appropriate to the accident environment. These models were used where they were unique within the partnership and where Sandia’s massively parallel computers and ASC code infrastructure were needed to accommodate very large and computationally intense simulations. Sandia external aerodynamics and heat transfer calculations were made for both undamaged and damaged orbiter configurations using rarefied direct simulation Monte Carlo (DSMC) codes for configurations flying at altitudes above 270,000 ft and continuum Navier-Stokes codes for altitudes below 250,000 ft. The same computational tools were used to predict jet impingement heating and pressure loads on the internal structure, as well as the heat transfer and flow through postulated damage sites into and through the wing. Navier-Stokes and DSMC predictions of heating rates were input to Sandia’s thermal analysis codes to predict the time required for thermal demise of the internal structure and for wire bundle burn-through. Experiments were conducted to obtain quasi-static and dynamic material response data on the foam, tiles, strain isolation pad, and reinforced carbon-carbon wing leading edge. These data were then used in Sandia finite element calculations of foam impacting the thermal protection tiles and wing leading edge in support of accident scenario definition and foam impact testing at Southwest Research Institute.

The supercomputers at Sandia played a key role in helping NASA determine the cause of the space shuttle Columbia disaster. Sandia researchers’ analyses and experimental studies supported the position that foam debris shed from the fuel tank and impacting the orbiter wing during launch was the most probable cause of the wing damage that led to the breakup of the Columbia.

NOTE: The committee thanks Robert Thomas and the Sandia National Laboratories staff for their assistance in drafting this box.

That report identified at least 10 defense applications that rely on high-performance computing (p. 22): comprehensive aerospace vehicle design, signals intelligence, operational weather/ocean forecasting, stealthy ship design, nuclear weapons stockpile stewardship, signal and image processing, the Army’s future combat system, electromagnetic weapons, geospatial intelligence, and threat weapon systems characterization.

Spillover Effects

Advanced computer research programs have had major payoffs in technologies that enriched the computer and communication industries. For example, the DARPA VLSI program in the 1970s contributed to the development of timesharing, computer networking, workstations, computer graphics, window-and-mouse user interface technology, very large scale integrated circuit design, reduced instruction set computers, redundant arrays of inexpensive disks, parallel computing, and digital libraries.42 Today’s personal computers, e-mail, networking, and data storage all reflect these advances. Many of the benefits were unanticipated.

Closer to home, one can list many technologies that were initially developed for supercomputers and that, over time, migrated to mainstream architectures. For example, vector processing and multithreading, initially developed for supercomputers (the Illiac IV/STAR-100/TI ASC and the CDC 6600, respectively), are now used on PC chips. Instruction pipelining, prefetching, and memory interleaving appeared in early IBM supercomputers and have become universal in today’s microprocessors. In the software area, program analysis techniques such as dependence analysis and instruction scheduling, initially developed for supercomputer compilers, are now used in most mainstream compilers. High-performance I/O needs on supercomputers, particularly parallel machines, were one of the motivations for Redundant Array of Inexpensive Disks (RAID)43 storage, now widely used for servers. Scientific visualization was developed in large part to help scientists interpret the results of their supercomputer calculations; today, even spreadsheets can display three-dimensional data plots. Scientific software libraries such as LAPACK that were originally designed for high-performance platforms are now widely used in commercial packages running on a large range of platforms.

42  

NRC. 1995. Evolving the High Performance Computing and Communications Initiative to Support the Nation’s Information Infrastructure. Washington, D.C.: National Academy Press, pp. 17-18.

43  

RAID is a disk subsystem consisting of many disks that increases performance and/or provides fault tolerance.

In the application areas, many packages that are routinely used in industry (e.g., NASTRAN) were initially developed for supercomputers. These technologies were developed through a complex interaction involving researchers at universities, the national laboratories, and companies. The reasons for such spillover effects are clear and remain valid today: supercomputers are at the cutting edge of performance, so to push performance they must adopt new hardware and software solutions ahead of mainstream computers. Moreover, the high performance of supercomputers enables new applications that can first be developed on capability platforms and then used on an increasingly broad set of cheaper platforms as hardware performance continues to improve.
