I
INTRODUCTION



The National Academies | 500 Fifth St. N.W. | Washington, D.C. 20001
Copyright © National Academy of Sciences. All rights reserved.




Measuring and Sustaining the New Economy: Software, Growth, and the Future of the U.S. Economy - Report of a Symposium


Software and the New Economy

The New Economy refers to a fundamental transformation in the United States economy as businesses and individuals capitalize on new technologies, new opportunities, and national investments in computing, information, and communications technologies. Use of this term reflects a growing conviction that widespread use of these technologies makes possible a sustained increase in the productivity and growth of the U.S. economy.1

1. In the context of this analysis, the New Economy does not refer to the boom economy of the late 1990s. The term is used here to describe the acceleration in U.S. productivity growth that emerged in the mid-1990s, in part as a result of the acceleration of Moore's Law and the resulting expansion in the application of lower-cost, higher-performance information technologies. See Dale W. Jorgenson, Kevin J. Stiroh, Robert J. Gordon, and Daniel E. Sichel, "Raising the Speed Limit: U.S. Economic Growth in the Information Age," Brookings Papers on Economic Activity, (1):125-235, 2000.

Software is an encapsulation of knowledge in an executable form that allows for its repeated and automatic application to new inputs.2 It is the means by which we interact with the hardware underpinning information and communications technologies. Software is increasingly an integral and essential part of most goods and services, whether the product is a handheld device, a consumer appliance, or a retail operation. The U.S. economy today is highly dependent on software, with

2. See the presentation by Monica Lam, summarized in the Proceedings section of this volume.
businesses, public utilities, and consumers among those integrated within complex software systems. Almost every aspect of a modern corporation's operations is embodied in software. According to Anthony Scott of General Motors, a company's software embodies the whole corporation's knowledge in its business processes and methods: "virtually everything we do at General Motors has been reduced in some fashion or another to software."

Much of our public infrastructure relies on the effective operation of software, a dependency that also creates significant vulnerabilities. As William Raduchel observed, the failure of a single line of code, buried in an energy management system from General Electric, appears to have been the initial source of the electrical blackout of August 2003 that paralyzed much of the northeastern and midwestern United States.3

3. "Software Failure Cited in August Blackout Investigation," Computerworld, November 20, 2003.

Software is also redefining the consumer's world. Microprocessors embedded in today's automobiles require software to run, permitting major improvements in performance, safety, and fuel economy. New devices such as the iPod are revolutionizing how we play and manage music, as personal computing continues to extend from the desktop into our daily activities. As software becomes more deeply embedded in most goods and services, creating reliable and robust software is becoming an even more important challenge.

Despite the pervasive use of software, and partly because of its relative immaturity, understanding the economics of software presents an extraordinary challenge. Many of the challenges relate to measurement, econometrics, and industry structure. The rapidly evolving concepts and functions of software, as well as its high complexity and context-dependent value, make software difficult to measure. This frustrates our understanding of the economics of software, both generally and from the standpoint of action and impact, and impedes both policy making and the recognition of technical progress in the field.

While the one-day workshop gathered a variety of perspectives on software, growth, measurement, and the future of the New Economy, it could not (and did not) cover every dimension of this complex topic. For example, workshop participants did not discuss potential future opportunities for leveraging software in particular application domains. This is a major topic in its own right, given software's potential to revolutionize key sectors of the U.S. economy, including the health care industry. The focus of the meeting was instead on developing a better understanding of the economics of software. Indeed, as Dale Jorgenson pointed out in introducing the National Academies conference on Software and the New Economy, "we don't have a very
clear understanding collectively of the economics of software." Accordingly, a key goal of the conference was to expand our understanding of the economic nature of software, to review how it is being measured, and to consider public policies both to improve measurement of this key component of the nation's economy and to ensure that the United States retains its lead in the design and implementation of software.

Introducing the economics of software, Dr. Raduchel noted that software pervades our economy and society.4 As he further pointed out, software is not merely an essential market commodity but in fact embodies the economy's production function itself, providing a platform for innovation in all sectors of the economy. This means that sustaining leadership in information technology (IT) and software is necessary if the United States is to compete internationally in a wide range of leading industries, from financial services, health care, and automobiles to the information technology industry itself.

4. See William Raduchel, "The Economics of Software," in this volume.

MOORE'S LAW AND THE NEW ECONOMY

The National Academies' conference on software in the New Economy follows two others that explored the role of semiconductors and computer components in sustaining the New Economy. The first National Academies conference in the series considered the contributions of semiconductors to the economy and the challenges associated with keeping the industry on the trajectory anticipated by Moore's Law. Moore's Law anticipates the doubling of the number of transistors on a chip every 18 to 24 months. As Figure 1 reveals, Moore's Law set the pace for growth in the capacity of memory chips and logic chips from 1970 to 2002.5

5. For a review of Moore's Law on its fortieth anniversary, see The Economist, "Moore's Law at 40," March 26, 2005.

An economic corollary of Moore's Law is the fall in the relative prices of semiconductors. Data from the Bureau of Economic Analysis (BEA), depicted in Figure 2, show that between 1977 and 2000 semiconductor prices declined by about 50 percent a year for logic chips and about 40 percent a year for memory chips. This is unprecedented for a major industrial input. According to Dale Jorgenson, this increase in chip capacity and the concurrent fall in price—the "faster-cheaper" effect—created powerful incentives for firms to substitute information technology for other forms of capital, leading to the productivity increases that are the hallmark of the New Economy.6

6. Dale W. Jorgenson, "Information Technology and the U.S. Economy," The American Economic Review, 91(1):1-32, 2001.

The second National Academies conference on the New Economy, "Deconstructing the Computer," examined how the impact of Moore's Law and
its price corollary extended from microprocessors and memory chips to high-technology hardware such as computers and communications equipment. While highlighting the challenges of measuring the fast-evolving component industries, that conference also brought to light the impact of computers on economic growth, based on BEA price indexes for computers (see Figure 3). These figures reveal that computer prices declined at about 15 percent per year between 1977 and the new millennium, helping to diffuse modern information technology across a broad spectrum of users and applications.

FIGURE 1 Transistor density on microprocessors and memory chips.

The New Economy is alive and well today. Recent figures indicate that, since the end of the last recession in 2001, information-technology-enhanced productivity growth has been running about two-tenths of a percentage point higher than

FIGURE 2 Relative prices of computers and semiconductors, 1977-2000. NOTE: All price indexes are divided by the output price index.

FIGURE 3 Relative prices of computers, communications, and software, 1977-2000. NOTE: All price indexes are divided by the output price index.

in any recovery of the post-World War II period.7 The current challenge rests in developing evidence-based policies that will enable us to continue to enjoy the fruits of higher productivity in the future. It is with this aim that the Board on Science, Technology, and Economic Policy of the National Academies has undertaken a series of conferences to address the need to measure the parameters of the New Economy as an input to better policy making and to highlight the policy challenges and opportunities that the New Economy offers. This volume reports on the third National Academies conference, on the New Economy and Software.8

7. Dale Jorgenson, Mun Ho, and Kevin Stiroh, "Will the U.S. Productivity Resurgence Continue?" FRBNY Current Issues in Economics and Finance, 10(13), 2004.

8. The National Academies series on the New Economy concluded with a conference on the role of telecommunications equipment (which relies heavily on software). The "faster-better-cheaper" phenomenon associated with Moore's Law is also evident in the telecom sector, which has seen enormous increases in the capacity of telecommunications equipment combined with rapidly declining quality-adjusted prices. See National Research Council, The Telecommunications Challenge: Changing Technologies and Evolving Policies, Dale W. Jorgenson and Charles W. Wessner, eds., Washington, D.C.: National Academies Press, forthcoming.

While software is generally believed to have become more sophisticated and more affordable over the past three decades, the data to back these claims remain incomplete. BEA data show that the price of prepackaged software has declined at rates comparable to those of computer hardware and communications equipment (see Figure 3). Yet prepackaged software makes up only about 25 to 30 percent of the software market. There remain large gaps in our knowledge about custom software (such as that produced by SAP or Oracle for database management, cost accounting, and other business functions) and own-account software (special-purpose software such as that written for airline reservation systems and digital telephone switches).

There is also some uncertainty in classifying software: the distinctions made among prepackaged, custom, and own-account software often have more to do with market relationships and organizational roles than with purely technical attributes.9

9. One pattern, typical for Lotus Notes for example, is a three-way relationship whereby a consumer organization acquires a vendor system and simultaneously hires a consultant organization to configure and manage that system, owing to its complexity and the "capital cost" of the learning curve. Sometimes that consultant organization is a service-focused division of the vendor, and sometimes it is a third party with appropriate licenses and training certifications from the vendor.

In all, as Dale Jorgenson points out, there is a large gap in our understanding of the sources of growth in the New Economy.10

10. One source of confusion is the vagueness of the definition of the software industry. For example, some believe that the financial sector spends more developing software than the software vendor sector does. This suggests that IT-driven growth is driven by IT adoption generally and not just by the products provided specifically by the IT industry.

Consequently, a major purpose of the third National Academies conference was to draw attention to the need to
address this gap in our ability to understand and measure the trends and contribution of software to the operation of the American economy.

THE NATURE OF SOFTWARE

To develop a better economic understanding of software, we first need to understand the nature of software itself. Software, comprising millions of lines of code, operates within a stack. The stack begins with the kernel, a small piece of code that talks to and manages the hardware. The kernel is usually included in the operating system, which provides the basic services and to which all programs are written. Above the operating system is middleware, which "hides" both the operating system and the window manager. On a desktop computer, for example, the operating system runs other small programs, called services, as well as specific applications such as Microsoft Word and PowerPoint. Thus, when a desktop computer functions, the entire stack is in operation. This means that the value of any part of a software stack depends on how it operates within the rest of the stack.11

11. Other IT areas have their own idiomatic "stack" architectures. For example, there are more CPUs in industrial control systems than on desktops, and these embedded systems do not have "window managers." A similar point can be made for mainframe systems, distributed systems, and other non-desktop computing configurations.

The stack itself is highly complex. According to Monica Lam of Stanford University, software may be the most intricate thing that humans have learned to build. Moreover, it is not static: software grows more complex as more and more lines of code accrue to the stack, making software engineering much more difficult than other fields of engineering. With hundreds of millions of lines of code making up the applications that run a big company, and with those applications resting on middleware and operating systems that in turn comprise tens of millions of lines of code, the average corporate IT system today is far more complicated than the Space Shuttle, says William Raduchel.

The way software is built also adds to its complexity and cost. As Anthony Scott of GM noted, the process by which corporations build software is "somewhat analogous to the Winchester Mystery House," where accretions to the stack over time create a complex maze that is difficult to fix or change.12 This complexity means that when a failure appears after a new piece of software is added to the stack, the fault may not lie in the new software per se; the addition may instead trigger the failure of some other piece of the stack that is being exercised for the first time in conjunction with the new addition.

12. The Winchester Mystery House, in San Jose, California, was built by the gun manufacturer's heiress, who believed that she would die if she stopped construction on her house. Ad hoc construction, starting in 1886 and continuing over nearly four decades with no master architectural plan, created an unwieldy mansion with a warren of corridors and staircases that often lead nowhere.
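The layered dependence described above can be sketched in a few lines of code. This is a toy illustration, not anything presented at the symposium: the layer names and the fault are invented. The point it makes is the one in the text: every application call traverses middleware, the operating system, and the kernel, so a defect introduced at one layer can surface as a failure reported by a different layer.

```python
# Toy model of a software "stack": application -> middleware -> OS -> kernel.
# All names and behavior are invented for illustration.

class Kernel:
    def read(self, block):
        if block < 0:                       # fault condition at the bottom layer
            raise IOError("bad block")
        return b"data"

class OperatingSystem:
    def __init__(self, kernel):
        self.kernel = kernel
    def open_file(self, offset):
        return self.kernel.read(offset)     # delegates downward to the kernel

class Middleware:
    def __init__(self, os_):
        self.os = os_
    def fetch(self, record):
        return self.os.open_file(record - 1)  # off-by-one bug lives HERE

class Application:
    def __init__(self, mw):
        self.mw = mw
    def show(self, record):
        return self.mw.fetch(record)

app = Application(Middleware(OperatingSystem(Kernel())))
app.show(1)   # works: the whole stack executes on every call
# app.show(0) would raise IOError from the *kernel*, even though the bug is
# in the middleware layer: the failure manifests far from its cause.
```

Because a call exercises the entire stack, the error report points at the kernel while the defect sits in the middleware, which is exactly why adding one piece of software can appear to break another.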

Box A: The Software Challenge

According to Dr. Raduchel, a major challenge in creating a piece of code lies in figuring out how to make it "error free, robust against change, and capable of scaling reliably to incredibly high volumes while integrating seamlessly and reliably to many other software systems in real time." Other challenges in a software engineering process include cost, schedule, capability/features, quality (dependability, reliability, security), performance, scalability/flexibility, and many others. These attributes often involve trade-offs against one another, which means that priorities must be set. In commercial software, for example, market release deadlines may be a primary driver, while for aerospace and embedded health devices, software quality may be the overriding priority.

Writing and Testing Code

Dr. Lam described the software development process as one comprising various iterative stages. (She delineated these stages for analytical clarity, although they are often executed simultaneously in modern commercial software production.) After getting an idea of the requirements, software engineers develop the needed architecture and algorithms. Once this high-level design is established, focus shifts to coding and testing the software. She noted that those who can write software at the kernel level are a very limited group, perhaps numbering only in the hundreds worldwide. This reflects a larger qualitative difference among software developers: the very best software developers are orders of magnitude—up to 20 to 100 times—better than the average software developer.13 This means that a disproportionate amount of the field's creative work is done by a surprisingly small number of people.

13. This is widely accepted folklore in the software industry, but it is difficult to substantiate because software engineering productivity is very hard to measure.

As a rule of thumb, producing software calls for a ratio of one designer to 10 coders to 100 testers, according to Dr. Raduchel.14 Configuring, testing, and tuning the software account for 95 to 99 percent of the cost of all software in operation. These non-linear complementarities in the production of software mean that simply adding workers to one part of the production process is not likely to make a software project finish faster.15 Further, since a majority of the time in developing a software program goes to handling exceptions and fixing bugs, it is often hard to estimate software development time.16

14. This represents Dr. Raduchel's estimate. Estimates vary within the software industry.

15. See Fred Brooks, The Mythical Man-Month, New York: Addison-Wesley, 1975.

16. There are econometric models for cost estimation in specific domains. See, for example, Barry Boehm, Chris Abts, A. Brown, Sunita Chulani, Bradford Clark, and Ellis Horowitz, Software Cost Estimation with COCOMO II, Pearson Education, 2005.

The Economics of Open-Source Software

Software is often developed in terms of a stack, and basic elements of this stack can be developed on a proprietary basis or on an open or shared basis.17 According to Hal Varian, open-source software is, in general, software whose source code is freely available for use or modification by users and developers (and even hackers). By definition, open-source software differs from proprietary software, whose makers do not make the source code available. The real world is always more complex: there is a wide range of access models for open-source software, and many proprietary software makers provide "open"-source access to their products, but with proprietary markings.18

17. Not all software is developed in terms of a stack. Indeed, modern e-commerce frameworks have very different structures.

18. Sun, for example, provides source access to its Java Development Kit and now Solaris, but the code is not "open source." Microsoft provides Windows source access to some foreign governments and enterprise customers.

While open-source software is a public good, there are many motivations for writing it, he added, including (at the edge) scratching a creative itch and demonstrating skill to one's peers. Indeed, while ideology and altruism provide some of the motivation, many firms, including IBM, make major investments in Linux and other open-source projects for solid market reasons. While the popular image of distributed open-source development is one in which spontaneous contributions from around the world are merged into a functioning product, most successful distributed open-source development takes place within pre-established or highly precedented architectures. It should thus not come as a surprise that open source has proven to be a significant and successful way of creating robust software. Linux provides a major instance in which both a powerful standard and a working reference implementation appeared at the same time. Major companies, including Amazon.com and Google, have chosen Linux as the kernel for their software systems. Based on this kernel, these companies customize software applications to meet their particular business needs.

Dr. Varian added that a major challenge in developing open-source software is the threat of "forking" or "splintering": different branches of software can arise from modifications made by diverse developers in different parts of the world.
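The cost-estimation models cited in footnote 16 (the COCOMO family) can be illustrated with the simplest member of that family. The constants below are the classic basic-COCOMO "organic mode" values from Boehm's 1981 formulation, shown for illustration only; COCOMO II, the model actually cited above, replaces them with calibrated cost drivers and scale factors, and the 32 KLOC project is hypothetical.

```python
# Basic COCOMO (Boehm, 1981), "organic mode": an illustrative effort and
# schedule estimate from code size alone.

def basic_cocomo_organic(kloc):
    """Return (effort in person-months, schedule in calendar months)."""
    effort = 2.4 * kloc ** 1.05       # person-months
    schedule = 2.5 * effort ** 0.38   # elapsed calendar months
    return effort, schedule

effort, schedule = basic_cocomo_organic(32.0)   # a hypothetical 32 KLOC project
average_staff = effort / schedule

# Note the shape of the model: effort grows slightly faster than linearly in
# code size, while schedule grows much more slowly than effort.
```

Doubling the code size in this model roughly doubles the effort but stretches the calendar schedule by only about 30 percent, one formal echo of the Brooks observation quoted above that adding workers to a late project does not shrink its schedule proportionally.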

Box B: The Economist's Challenge: Software as a Production Function

Software is "the medium through which information technology expresses itself," says William Raduchel. Most economic models miscast software as a machine, a perception dating to the period 40 years ago when software was a minor portion of the total cost of a computer system. The economist's challenge, according to Dr. Raduchel, is that software is not a factor of production like capital and labor; it actually embodies the production function, for which no good measurement system exists.

…which create new concepts, categories, and measurement challenges.30 Characterizing attempts made so far to deal with the issue of measuring the New Economy as "piecemeal"—"we are trying to get the best price index for software, the best price index for hardware, the best price index for LAN equipment routers, switches, and hubs"—he suggested that a single comprehensive measure might better capture the value of hardware, software, and communications equipment in the national accounts. Indeed, information technology may be thought of as a "package" combining hardware, software, and business-service applications.

30. For example, how is a distinction to be made between service provisioning (sending data to a service outsource) and the creation and use of a local organizational asset (sending data to a service application internally developed or acquired)? The user experience may be identical (e.g., web-based access), and the geographic positioning of the server (e.g., at a secure remote site, with geography unknown to the individual user) may also be identical. In other words, the technology and user experience look almost the same, but the contractual terms of provisioning are very different.

Tracking Software Price Changes

Another challenge in the economics of software is tracking price changes. Incorporating computer science and computer engineering into the economics of software, Alan White and Ernst Berndt presented their work on estimating price changes for prepackaged software, based on their assessment of Microsoft Corporation data.31

31. Jaison R. Abel, Ernst R. Berndt, and Alan G. White, "Price Indexes for Microsoft's Personal Computer Software Products," NBER Working Paper 9966, Cambridge, MA: National Bureau of Economic Research, 2003. The research was originally sponsored by Microsoft Corporation, though the authors are responsible for its analysis.

Dr. White noted several important challenges facing those seeking to construct measures of price and price change. One challenge lies in ascertaining which price to measure, since software products may be sold as full
versions or as upgrades, as stand-alones or in suites. An investigator must also determine what the unit of output is, how many licenses there are, and when the price is actually being measured. Another key issue concerns how the quality of software has changed over time and how that change should be incorporated into price measures. Surveying the types of quality change that might come into consideration, Dr. Berndt gave the examples of improved graphical interfaces and "plug-n-play," as well as increased connectivity between different components of a software suite.

In their study, Drs. White and Berndt compared the average price level (computing the price per operating system as a simple average) with quality-adjusted price levels estimated using hedonic and matched-model econometric techniques. They found that while the average price, which does not correct for quality changes, grew at about 1 percent a year, the matched model showed a price decline of around 6 percent a year, and the hedonic calculation showed a much larger price decline of around 16 percent a year. These quality-adjusted price declines for software operating systems, shown in Figure 5, support the general thesis that improved and cheaper information technologies contributed to greater information technology adoption, leading to the productivity improvements characteristic of the New Economy.32

FIGURE 5 Quality-adjusted prices for operating systems have fallen, 1987-2000. SOURCE: Jaison R. Abel, Ernst R. Berndt, Cory W. Monroe, and Alan G. White, "Hedonic Price Indexes for Operating Systems and Productivity Suite PC Software," draft working paper, 2004.

32. Jorgenson et al., 2000, op. cit.
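The gap between the average-price series and the matched-model series can be reproduced with a toy calculation. The product names and prices below are invented for illustration and are not the Microsoft data used in the study; the point is only the mechanism: a simple average can rise while a matched-model index falls, because new, higher-priced, higher-quality releases enter the sample even as the price of any given product declines.

```python
# Invented two-year price data: every continuing product gets cheaper, but a
# new premium product ("Suite 2.0") enters in year 2 at a high price.
prices_y1 = {"WordPro 1.0": 100.0, "Sheet 1.0": 120.0}
prices_y2 = {"WordPro 1.0": 90.0, "Sheet 1.0": 110.0, "Suite 2.0": 260.0}

# Simple average price, ignoring the changing product mix.
avg_y1 = sum(prices_y1.values()) / len(prices_y1)    # 110.0
avg_y2 = sum(prices_y2.values()) / len(prices_y2)    # ~153.33
avg_change = avg_y2 / avg_y1 - 1.0                   # positive: average "rises"

# Matched-model index: compare only products sold in both years.
matched = set(prices_y1) & set(prices_y2)
matched_ratio = sum(prices_y2[p] / prices_y1[p] for p in matched) / len(matched)
matched_change = matched_ratio - 1.0                 # negative: prices fell
```

Here the simple average rises by roughly 39 percent while the matched-model comparison shows a decline of roughly 9 percent, the same qualitative divergence the study found; a hedonic index goes further by pricing the quality characteristics of the new entrant as well.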

THE CHALLENGES OF THE SOFTWARE LABOR MARKET

A Perspective from Google

Wayne Rosing of Google said that about 40 percent of the company's thousand-plus employees were software engineers, which contributed to a company culture of "designing things." He noted that Google is "working on some of the hardest problems in computer science…and that someday, anyone will be able to ask a question of Google and we'll give a definitive answer with a great deal of background to back up that answer." To meet this goal, Dr. Rosing noted, Google needed to pull together "the best minds on the planet and get them working on these problems."

Google, he noted, is highly selective, hiring around 300 new workers in 2003 out of an initial pool of 35,000 resumes sent in from all over the world. While he attributed this high response to Google's reputation as a good place to work, Google in turn looked for applicants with high "raw intelligence," strong computer algorithm and engineering skills, and the high degree of self-motivation and self-management needed to fit in with Google's corporate culture.

Google's outstanding problem, Dr. Rosing lamented, was that "there aren't enough good people" available. Too few qualified computer science graduates were coming out of American schools, he said. While the United States remained one of the world's top areas for computer-science education and produced very good graduates, not enough people are graduating at the master's or doctoral level to satisfy the needs of the U.S. economy, especially those of innovative firms such as Google. At the same time, Google's continuing leadership requires capable employees from around the world, both to draw on advances in technology and to provide the language-specific skills needed to serve various national markets. As a result, Google hires on a global basis.

A contributing factor to Google's need to hire engineers outside the country, he noted, is the impact of U.S. visa restrictions. Noting that the H1-B quota for 2004 was capped at 65,000, down from approximately 225,000 in previous years, he said that Google was not able to hire people who had been educated in the United States but could not stay on and work for lack of a visa. Dr. Rosing said that such policies limit the growth of companies like Google within the nation's borders, something that did not seem to make policy sense.

The Offshore Outsourcing Phenomenon

Complexity and efficiency are the drivers of offshore outsourcing, according to Jack Harding of eSilicon, a relatively new firm that produces custom-made microchips. Mr. Harding noted that as manufacturing technology grows more complex, a firm is forced either to stay ahead of the efficiency curve through large recapitalization investments or to "step aside and let somebody else do that part of
OCR for page 1
Measuring and Sustaining the New Economy: Software, Growth, and the Future of the U.S. Economy - Report of a Symposium FIGURE 6 The Offshore Outsourcing Matrix. the work.” This decision to move from captive production to outsourced production, he said, can then lead to offshore-outsourcing—or “offshoring”—when a company locates a cheaper supplier in another country of same or better quality. Displaying an outsourcing-offshoring matrix (Figure 6) Mr. Harding noted that it was actually the “Captive-Offshoring” quadrant, where American firms like Google or Oracle open production facilities overseas, that is the locus of a lot of the current “political pushback” about being “un-American” to take jobs abroad.33 Activity that could be placed in the “Outsource-Offshore” box, meanwhile, was marked by a trade-off where diminished corporate control had to be weighed against very low variable costs with adequate technical expertise.34 Saving money by outsourcing production offshore not only provides a compelling business motive, it has rapidly become “best practice” for new companies. Though there might be exceptions to the rule, Mr. Harding noted that a software company seeking venture money in Silicon Valley that did not have a plan to base a development team in India would very likely be disqualified. It would not be seen as competitive if its intention was to hire workers at $125,000 a year in Silicon Valley when comparable workers were available for $25,000 a year in Bangalore. (See Figure 7, cited by William Bonvillian, for a comparison of annual salaries for software programmers.) Heeding this logic, almost every 33   For example, see the summary of remarks by Mr. James Socas, captured in the Proceedings section of this volume. 34   Although Mr. Harding distinguished between “captive offshoring” and “offshore outsourcing,” most speakers used the term “offshore outsourcing” or “offshoring” less precisely to refer to both phenomena in general. 
We follow the general usage in the text here.
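Because Figure 6 itself is not reproduced here, the two-by-two matrix Mr. Harding described can be sketched as a small lookup table. This is an illustrative reconstruction from the discussion above, not the figure itself; the two onshore quadrant descriptions are assumptions filled in for symmetry, since only the offshore quadrants are discussed in the text.

```python
# Sketch of Mr. Harding's outsourcing-offshoring matrix (Figure 6).
# Axes: who does the work (captive vs. outsourced) and where it is
# done (onshore vs. offshore). The onshore entries are assumed.
SOURCING_MATRIX = {
    ("captive", "onshore"): "traditional in-house domestic production",
    ("captive", "offshore"): ("firm-owned overseas facilities, e.g. Google or "
                              "Oracle sites abroad; locus of political pushback"),
    ("outsource", "onshore"): "domestic third-party suppliers",
    ("outsource", "offshore"): ("third-party overseas suppliers; low variable "
                                "cost weighed against diminished control"),
}

def classify(ownership: str, location: str) -> str:
    """Return the matrix description for an (ownership, location) pair."""
    return SOURCING_MATRIX[(ownership.lower(), location.lower())]

print(classify("Captive", "Offshore"))
```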

software firm has moved or is in the process of moving its development work to locations like India, observed Mr. Harding.

FIGURE 7 Averages of base annual salary for a programmer in various countries. SOURCE: Computerworld, April 28, 2003.

The strength of this business logic, he said, made it imperative that policymakers in the United States understand that offshoring is irreversible and learn how to deal with it constructively.

How big is the offshoring phenomenon? Despite much discussion, some of it heated, the scope of the phenomenon is poorly documented. As Ronil Hira of the Rochester Institute of Technology pointed out, the lack of data means that no one could say with precision how much work had actually moved offshore. This is clearly a major problem from a policy perspective.35 He noted, however, that the effects of these shifts were palpable from the viewpoint of computer hardware engineers and electrical and electronics engineers, whose ranks had faced record levels of unemployment in 2003.

SUSTAINING THE NEW ECONOMY: THE IMPACT OF OFFSHORING

What is the impact of the offshoring phenomenon on the United States, and what policy conclusions can we draw from this assessment? Conference participants offered differing, often impassioned views on this question. Presenters did

35   Nevertheless, Dr. Hira implied that job loss and downward wage pressures in the United States information technology sector were related to the employment of hardware and software engineers abroad. In his presentation, Dr. Hira noted that he is the chair of the Career and Workforce Committee of IEEE-USA, a professional society that represents 235,000 U.S. engineers.

not agree with one another, even on seemingly simple issues such as whether the H-1B quota is too high or too low, whether the level of H-1B visas is causing U.S. companies to go abroad for talent, whether there is a shortage of talent within U.S. borders, or whether there is a short-term oversupply of programmers in the present labor market.

Conference participants, including Ronil Hira and William Bonvillian, highlighted two schools of thought on the impact of offshore outsourcing, both of which share the disadvantage of inadequate data support. Whereas some who take a macroeconomic perspective believe that offshoring will yield lower product and service costs and create new markets abroad, fueled by improved local living standards, others, including some leading industrialists who understand the micro implications, have taken the unusual step of arguing that offshoring can erode the United States’ technological competitive advantage, and have urged constructive policy countermeasures.

Among those with a more macro outlook, noted Dr. Hira, is Catherine Mann of the Institute for International Economics, who has argued that “just as for IT hardware, globally integrated production of IT software and services will reduce these prices and make tailoring of business-specific packages affordable, which will promote further diffusion of IT use and transformation throughout the U.S.

Box C: Two Contrasting Views on Offshore Outsourcing

“Outsourcing is just a new way of doing international trade. More things are tradable than were in the past and that’s a good thing…. I think that outsourcing is a growing phenomenon, but it’s something that we should realize is probably a plus for the economy in the long run.” N. Gregory Mankiwa

“When you look at the software industry, the market share trend of the U.S.-based companies is heading down and the market share of the leading foreign companies is heading up. This x-curve mirrors the development and evolution of so many industries that it would be a miracle if it didn’t happen in the same way in the IT service industry. That miracle may not be there.” Andy Grove

a   Dr. Mankiw made this remark in February 2004, while Chairman of the President’s Council of Economic Advisers. Dr. Mankiw drew a chorus of criticism from Congress and quickly backpedaled, although other leading economists supported him. See “Election Campaign Hit More Sour Notes,” The Washington Post, p. F-02, February 22, 2004.

economy.”36 Cheaper information technologies will lead to wider diffusion of information technologies, she notes, sustaining productivity enhancement and economic growth.37 Dr. Mann acknowledges that some jobs will go abroad as production of software and services moves offshore, but believes that broader diffusion of information technologies throughout the economy will lead to even greater demand for workers with information technology skills.38 Observing that Dr. Mann had based her optimism in part on unrevised Bureau of Labor Statistics (BLS) occupation projection data, Dr. Hira called for reinterpreting her study in light of more recent data. He also stated his disagreement with Dr. Mann’s contention that lower IT services costs provided the only explanation for either the rising demand for IT products or the high demand for IT labor witnessed in the 1990s, citing as contributing factors the technological paradigm shifts represented by such major developments as the growth of the Internet, object-oriented programming, and the move from mainframe to client-server architecture. Dr. Hira also cited a recent study by McKinsey and Company finding, with similar optimism, that offshoring can be a “win-win” proposition for the United States and countries like India that are major loci of offshore outsourcing for software and services production.39 Dr. Hira noted, however, that the McKinsey projections relied on optimistic assumptions that have not held up to recent job-market realities. McKinsey found that India gains a net benefit of at least 33 cents from every dollar the United States sends offshore, while America achieves a net benefit of at least $1.13 for every dollar spent, although the model apparently assumes that India buys the related products from the United States. 
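The McKinsey claim reduces to simple per-dollar accounting: benefit components on each side are summed and compared to the dollar sent offshore. The sketch below illustrates that arithmetic; only the headline floors ($1.13 for the U.S., $0.33 for India) come from the text, and the component names and their split are hypothetical placeholders, not the study’s actual line items.

```python
# Back-of-the-envelope view of the McKinsey "win-win" figures cited by
# Dr. Hira: for every $1.00 of U.S. spending sent offshore, the study
# put the U.S. net benefit at >= $1.13 and India's at >= $0.33.
def net_benefit(components: dict) -> float:
    """Sum per-dollar benefit components into a net benefit figure."""
    return round(sum(components.values()), 2)

# Hypothetical decomposition (placeholder values, not McKinsey's):
us_components = {
    "cost_savings_to_buyers": 0.58,       # hypothetical split
    "repatriated_supplier_profit": 0.10,  # hypothetical split
    "exports_bought_by_india": 0.05,      # the model assumes India buys
                                          # related U.S. products
    "labor_redeployed_elsewhere": 0.40,   # hypothetical split
}

print(f"U.S. net benefit per offshored dollar: ${net_benefit(us_components):.2f}")
```

The point of the exercise is the one Dr. Hira makes: the total is only as good as its components, and the “labor redeployed” term in particular rests on job-market assumptions that he argued had not held up.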
These more sanguine economic scenarios must be balanced against the lessons of modern growth theorists, warned William Bonvillian in his conference presentation. Alluding to Clayton Christensen’s observation of how successful companies tend to swim upstream, pursuing higher-end, higher-margin customers

36   Catherine Mann, “Globalization of IT Services and White Collar Jobs: The Next Wave of Productivity Growth,” International Economics Policy Briefs, PB03-11, December 2003.
37   Lael Brainard and Robert Litan have further underlined the benefits to the United States economy in this regard, noting that lower inflation and higher productivity, made possible through offshore outsourcing, can allow the Federal Reserve to run a more accommodative monetary policy, “meaning that overall and over time the [U.S.] economy will grow faster, creating the conditions for higher overall employment.” See Lael Brainard and Robert E. Litan, “ ‘Offshoring’ Service Jobs: Bane or Boon and What to Do?” Brookings Institution Policy Brief 132, April 2004.
38   Challenging the mainstream economics consensus about the benefits of offshore outsourcing, Prof. Paul Samuelson asserts that the assumption that the laws of economics dictate that the U.S. economy will benefit from all forms of international trade is a “popular polemical untruth.” See Paul Samuelson, “Where Ricardo and Mill Rebut and Confirm Arguments of Mainstream Economists Supporting Globalization,” Journal of Economic Perspectives, 18(3), 2004.
39   McKinsey Global Institute, “Offshoring: Is it a win-win game?” San Francisco: McKinsey Global Institute, 2003.

Box D: Software, Public Policy, and National Competitiveness

Information technology and software production are not commodities that the United States can afford to cede to overseas suppliers; they are, as William Raduchel noted in his workshop presentation, part of the economy’s production function (see Box B). This characteristic means that a loss of U.S. leadership in information technology and software would damage, in an ongoing way, the nation’s future ability to compete in diverse industries, not least the information technology industry itself. The collateral consequences of a failure to develop adequate policies to sustain national leadership in information technology are likely to extend to a wide variety of sectors, from financial services and health care to telecom and automobiles, with critical implications for our nation’s security and the well-being of Americans.

with better technology and better products, Mr. Bonvillian noted that nations can follow a similar path up the value chain.40 Low-end entry and capability, made possible by outsourcing these functions abroad, he noted, can fuel the desire and capacity of other nations to move to higher-end markets. Acknowledging that a lack of data makes it impossible to track the activity of many companies engaging in offshore outsourcing with any precision, Mr. Bonvillian noted that a major shift was underway. The types of jobs subject to offshoring are increasingly moving from low-end services, such as call centers, help desks, data entry, accounting, telemarketing, and processing work on insurance claims, credit cards, and home loans, towards higher-technology services, such as software and microchip design, business consulting, engineering, architecture, statistical analysis, radiology, and health care, where the United States currently enjoys a comparative advantage. 
Another concern associated with the current trend in offshore outsourcing is the future of innovation and manufacturing in the United States. Citing Michael Porter and reflecting on Intel Chairman Andy Grove’s concerns, Mr. Bonvillian noted that business leaders look for locations that gather industry-specific resources together in one “cluster.”41 Since there is a tremendous skill set involved in

40   Clayton Christensen, The Innovator’s Dilemma: When New Technologies Cause Great Firms to Fail, Boston: Harvard Business School Press, 1997.
41   Michael Porter, “Building the Microeconomic Foundations of Prosperity: Findings from the Business Competitiveness Index,” The Global Competitiveness Report 2003-2004, Xavier Sala-i-Martin, ed., New York: Oxford University Press, 2004.

advanced technology, argued Mr. Bonvillian, losing part of that manufacturing to a foreign country will help develop technology clusters abroad while hampering the ability of such clusters to thrive in the United States. These effects are already observable in semiconductor manufacturing, he added, where research and development is moving abroad to be close to the locus of manufacturing.42 This trend in hardware, now followed by software, will erode the United States’ comparative advantage in high-technology innovation and manufacture, he concluded. The impact of these migrations is likely to be amplified: yielding market leadership in software capability can lead to a loss of U.S. software advantage, which would give foreign nations the opportunity to leverage their relative strength in software into leadership in sectors such as financial services, health care, and telecom, with potentially adverse impacts on national security and economic growth. Citing John Zysman, Mr. Bonvillian pointed out that “manufacturing matters,” even in the Information Age. According to Dr. Zysman, advanced mechanisms for production and the accompanying jobs are a strategic asset, and their location makes the difference as to whether or not a country is an attractive place to innovate, invest, and manufacture.43 For the United States, the economic and strategic risks associated with offshoring, noted Mr. Bonvillian, include a loss of in-house expertise and future talent, dependency on other countries for key technologies, and increased vulnerability to political and financial instabilities abroad. With data scarce and concern “enormous” at the time of this conference, Mr. Bonvillian reminded the group that political concerns can easily outstrip economic analysis. 
He added that a multitude of bills introduced in Congress seemed to reflect a move towards a protectionist outlook.44 He noted that, after taking the initial step of collecting data, lawmakers would be obliged to address widespread public concerns on this issue. Near-term responses, he noted, include programs to retrain workers, provide job-loss insurance, make available additional venture financing for innovative startups, and undertake a more aggressive trade policy. Longer-term responses, he added, must focus on improving the nation’s innovative capacity by investing in science and engineering education and improving the broadband infrastructure.

42   National Research Council, Securing the Future: Regional and National Programs to Support the Semiconductor Industry, Charles W. Wessner, ed., Washington, D.C.: National Academies Press, 2003.
43   Stephen S. Cohen and John Zysman, Manufacturing Matters: The Myth of the Post-Industrial Economy, New York: Basic Books, 1988.
44   Among several bills introduced in Congress in the 2004 election year was one offered by Senators Kennedy and Daschle, which would have required companies that send jobs abroad to report how many, where, and why, giving 90 days’ notice to employees, state social service agencies, and the U.S. Labor Department. Senator John Kerry had also introduced legislation in 2004 requiring call center workers to identify the country they were phoning from.

Box E: Key Issues from the Participants’ Roundtable

Given the understanding generated at the symposium about the uniqueness and complexity of software and of the ecosystem that builds, maintains, and manages it, Dr. Raduchel asked each member of the final Participants’ Roundtable to identify key policy issues that need to be pursued. Drawing on the experience of the semiconductor industry, Dr. Flamm noted that it is best to look ahead to the future of the industry rather than look back, and to “invest in the things that our new competitors invest in,” especially education. Dr. Rosing likewise pointed out the importance of lifelong learning, observing that the failure of many individuals to stay current was a major problem facing the United States labor force. What the country needed, he said, was a system that created extraordinary incentives for people to take charge of their own careers and their own marketability. Mr. Socas noted that the debate over software offshoring was not the same as a debate over the merits of free trade, since the factors that give one country a relative competitive advantage over another are no longer tied to a physical locus. Calling it the central policy question of our time, he wondered whether models of international trade and systems of national accounting, which are based on the idea of a nation-state, continue to be valid in a world where companies increasingly take a global perspective. The current policy issue, he concluded, concerns giving American workers the skills that allow them to continue to command high wages and opportunities. Also observing that the offshoring issue was not an ideological debate between free trade and protectionism, Dr. 
Hira observed that “we need to think about how to go about making software a viable profession and career for people in America.” What is required, in the final analysis, is a constructive policy approach rather than name-calling, noted Dr. Hira. He pointed out that it was important to think through and debate all possible options concerning offshoring rather than tarring some with a “protectionist” or other unacceptable label and “squelching them before they come up for discussion.” Progress on better data is needed if such constructive policy approaches are to be pursued.

PROGRESS ON BETTER DATA

Drawing the conference to a close, Dr. Jorgenson remarked that while the subject of measuring and sustaining the New Economy had been discovered by

economists only in 1999, much progress had already been made towards developing the knowledge and data needed to inform policy making.

“Wait a minute! We discovered this problem in 1999, and only five years later, we’re getting the data.” Dale Jorgenson

This conference, he noted, had advanced our understanding of the nature of software and the role it plays in the economy. It had also highlighted pathbreaking work by economists like Dr. Varian on the economics of open-source software, and by Drs. Berndt and White on how to measure prepackaged software prices while taking quality changes into account. Presentations by Mr. Beams and Ms. Luisi had also revealed that measurement issues concerning software installation, business reorganization, and process engineering had been thought through, with agreement on new accounting rules. As Dr. Jorgenson further noted, the Bureau of Economic Analysis had led the way in developing new methodologies and would soon receive new survey data from the Census Bureau on how much software was being produced in the United States, how much was being imported, and how much the country was exporting. As national accountants around the world adopt these standards, international comparisons will become possible, he added, and we will be able to ascertain what is moving where, providing the missing link to the offshore outsourcing puzzle.