shown by the work of Dr. Jorgenson and a number of other economists, the single biggest driver of productivity growth.2


Projecting a satellite photograph of the United States taken during the electrical blackout of August 2003, Dr. Raduchel noted that a software bug had recently been revealed as the mishap’s primary driver. When “one line of code buried in an energy management system from GE failed to work right,” a vast physical infrastructure had been paralyzed, one example among many of how the U.S. economy “is so dependent in ways that we don’t understand.” Recalling the STEP Board’s February 2003 workshop “Deconstructing the Computer,” Dr. Raduchel described an incident related by a speaker from a company involved in systems integration consulting: When the credit card systems went down at a major New York bank, the chief programmer was brought in. After she had spent several minutes at the keyboard and everything had begun working again, “she pushed her chair slowly away from the desk and said, ‘Don’t touch anything.’

“ ‘What did you change?’ the head of operations asked.

“ ‘Nothing,’ she said. ‘I don’t know what happened.’ ”


Throughout the 1970s and 1980s, Americans and American businesses invested regularly in ever cheaper and more powerful computers, software, and communications equipment. They assumed that advances in information technology, by making more information available faster and at lower cost, would yield higher productivity and lead to better business decisions. Yet the expected benefits of these investments did not appear to materialize, at least not in ways that were being measured. Even in the first half of the 1990s, productivity growth remained at the historically low rates that had prevailed since 1973. This phenomenon came to be called “the computer paradox,” after Robert Solow’s casual but often-repeated 1987 remark: “You can see the computer age everywhere but in the productivity statistics.” (See Robert M. Solow, “We’d Better Watch Out,” New York Times Book Review, July 12, 1987.)

Dale Jorgenson resolved this paradox by pointing to new data showing that change at a fundamental level was taking place. While growth rates had not returned to those of the “golden age” of the U.S. economy in the 1960s, he noted that the new data did reveal an acceleration of growth accompanying a transformation of economic activity. This shift in the rate of growth by the mid-1990s, he added, coincided with a sudden and substantial drop in the prices of semiconductors and computers; the rate of price decline accelerated from 15 to 28 percent annually after 1995. (See Dale W. Jorgenson and Kevin J. Stiroh, “Raising the Speed Limit: U.S. Economic Growth in the Information Age,” in National Research Council, Measuring and Sustaining the New Economy, Dale W. Jorgenson and Charles W. Wessner, eds., Washington, D.C.: National Academies Press, 2002, Appendix A.) Relatedly, Paul David has argued that computer networks had to be sufficiently developed before IT productivity gains could be realized and recognized in the statistics. See Paul A. David, Understanding the Digital Economy, Cambridge, MA: MIT Press, 2000.
Also see Erik Brynjolfsson and Lorin M. Hitt, “Computing Productivity: Firm-Level Evidence,” Review of Economics and Statistics 85(4):793-808, 2003, in which the authors argue that much of the benefit of IT comes in the form of improved product quality, time savings, and convenience, which rarely show up in official macroeconomic data. Of course, as Dr. Raduchel noted at this symposium, software is necessary to take advantage of hardware capabilities.


Copyright © National Academy of Sciences. All rights reserved.