Measuring and Sustaining the New Economy: Software, Growth, and the Future of the U.S. Economy - Report of a Symposium

Panel III: Software Measurement—What Do We Track Today?

INTRODUCTION

Kenneth Flamm
University of Texas at Austin

Dr. Flamm, welcoming the audience back from lunch, said that the discussion of software’s technical aspects would now give way to an examination of economic and accounting issues. He introduced Prof. Ernst Berndt of MIT’s Sloan School of Management, a pioneer in developing price indexes for high-tech goods, and Dr. Alan White of Boston’s Analysis Group, Inc., who would jointly report on their analysis of prepackaged software prices.32

32 See Jaison R. Abel, Ernst R. Berndt, and Alan G. White, “Price Indexes for Microsoft’s Personal Computer Software Products,” NBER Working Paper No. 9966, Cambridge, MA: National Bureau of Economic Research, 2003; and Jaison R. Abel, Ernst R. Berndt, and Cory W. Monroe, “Hedonic Price Indexes for Personal Computer Operating Systems and Productivity Suites,” NBER Working Paper No. 10427, Cambridge, MA: National Bureau of Economic Research, 2004.
MEASURING PRICES OF PREPACKAGED SOFTWARE

Alan G. White
Analysis Group, Inc.

Dr. White began by disclosing that Microsoft had funded most of the research on which he and Dr. Berndt had been collaborating, a pair of studies conducted over the previous 4 years. Counsel for Microsoft had retained Dr. Berndt in 2000 to work on price-measurement issues in the software industry, his essential tasks having been to demonstrate how to measure software prices and to explain how they had been changing over time. Dr. White said that he and Dr. Berndt would not be speaking about the merits or otherwise of Microsoft’s actions, but rather would describe their own work in estimating price changes for prepackaged software over time.

Although better estimates of price change existed for prepackaged than for own-account or custom software, Dr. White said, many of those studies were old, dating to the late 1980s or early 1990s. And, in any event, important challenges remained for those constructing measures of price and price change, even when their activity focused on prepackaged software. One such challenge, at the fundamental level, was ascertaining which price to measure, since software products may be sold as full versions or upgrades, as stand-alone applications or suites. Evoking Windows to demonstrate the complexity of this issue, Dr. White ran down a variety of options: buying a full version of Windows 98; upgrading to Windows 98 from Windows 95; or, in the case of a student, buying an academic version of Windows 98. Other product forms existed as well: an enterprise agreement differed somewhat from a standard full version or an upgrade in that it gave the user rights to upgrades over a certain period of time. The investigators had to determine what the unit of output was, how many licenses there were, and which price was actually being measured.
Adding to the challenge was the fact that Microsoft sold its products through diverse channels of distribution. It was selling through original equipment manufacturers (OEMs) like Compaq, Dell, and Gateway, which bundled the software with the hardware, but also through distributors like Ingram and Merisel. Prices varied by channel, which also needed to be taken into account. Another issue, to be discussed by Dr. Berndt, was how the quality of software had changed over time and how that should be incorporated into price measures. These issues had to be confronted because measuring prices matters for producing an accurate measure of inflation, which is used to deflate measures of GDP both at an aggregate level and by sector.

Prices Received by Microsoft Declined Between 1993 and 2001

Dr. White said he would discuss one of two studies he and Dr. Berndt had done, both of which showed that software prices had been declining. The study
Dr. White would summarize used internal Microsoft transaction data and thus was situated “essentially at the first line of distribution,” taking into account both primary channels through which Microsoft was selling its products: the OEM channel and the finished-goods or distributor-wholesale channel. The prices he would be referring to would thus be those that Microsoft had received, and their levels had declined between 1993 and 2001.

In constructing measures of price change, Drs. White and Berndt needed to take into account not only such issues as full versions and upgrades, or academic and non-academic licenses, but also volume license agreements and the shift, which had begun in the 1990s, to selling word processors and spreadsheets as part of a suite rather than as stand-alone applications. In the early 1990s, about 50 percent of word processors were sold as stand-alone components, a percentage that had since decreased considerably. Excel and Word were now more commonly sold through the Office suite, with stand-alone sales of Word dropping from over 50 percent in 1993 to less than 10 percent in 2001. Volume licensing sales, representing sales to large organizations in the form of a 500-site license or a 1,000-site license, for example, had grown for Microsoft over time. As to the two channels of distribution through which Microsoft sold, operating systems were sold predominantly through the OEM channel, whereas applications were sold predominantly through distributors.

The study employed matched-model price indexes generally consistent with Bureau of Labor Statistics (BLS) procedures that treated full versions and upgrades as separate products, or separate elementary units, in constructing measures of price change. Dr.
White posted a chart demonstrating that price changes varied quite a bit depending on the product, although all Microsoft software product categories posted declines in 1993-2001, for an overall average annual growth rate of minus 4.26 percent during that period (See Figure 10). The rate of decline also varied somewhat within the period studied (See Figure 11). He stressed that the study, based exclusively on prices received by Microsoft, did not necessarily say anything directly about changes in the prices paid by final consumers. In addition, quality change was not explicitly incorporated into its measures of price change, a subject Dr. Berndt was about to address in his talk.

Ernst R. Berndt
MIT Sloan School of Management

Addressing quality change and price measurement in the mail-order channel, Dr. Berndt stated that since the mail-order channel included prices of products that competed with those of Microsoft, a study of it had its advantages over a study limited to the Microsoft transactions data. The disadvantage, however, was that the mail-order channel was becoming increasingly less important, as most current sales were going through the OEM channel and through the resellers or
FIGURE 10 Microsoft’s prepackaged software prices have declined at varying rates.

FIGURE 11 Microsoft’s prepackaged software prices have declined, 1993-2001.
NOTE: AAGRs are -7.79% (1993-1997), -0.60% (1997-2001), -4.26% (1993-2001).
SOURCE: Jaison R. Abel, Ernst R. Berndt, and Alan G. White, “Price Indexes for Microsoft’s Personal Computer Software Products,” NBER Working Paper No. 9966, Cambridge, MA: National Bureau of Economic Research, 2003.
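The AAGRs in the NOTE to Figure 11 can be cross-checked by compounding the sub-period rates; the sketch below reconstructs the overall rate from the two sub-periods (the underlying index values are not reproduced in the text).

```python
# Sanity-check the AAGRs reported in the NOTE to Figure 11.
# An average annual growth rate (AAGR) is the geometric-mean annual
# growth implied by the index; compounding the two sub-period AAGRs
# should reproduce the overall 1993-2001 figure.

def compound(aagr_pct, years):
    """Index ratio implied by compounding an AAGR over `years` years."""
    return (1 + aagr_pct / 100) ** years

# Sub-period AAGRs from the NOTE: 1993-1997 and 1997-2001 (4 years each).
ratio_1993_2001 = compound(-7.79, 4) * compound(-0.60, 4)

# Overall AAGR over the full 8-year span.
aagr_overall = (ratio_1993_2001 ** (1 / 8) - 1) * 100
print(round(aagr_overall, 2))  # close to the reported -4.26
```

The two sub-period rates are internally consistent with the overall figure, which is a useful check that all three numbers survived transcription intact.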
distributors channel. Drs. Berndt and White had conducted this part of their study for two reasons: (1) because there had been a lot of previous work on the retail channel; and (2) because they had wanted to construct some measures of quality change, or hedonics, for operating systems and productivity suites that, to the best of their knowledge, had not been done before.

Surveying the types of quality changes that might come into consideration, Dr. Berndt pointed to improved graphical user interfaces and plug-and-play, as well as increased connectivity between, for example, different components of the suite. Greater word length, embedded objects, and other sorts of quality change should be taken into account as well. Hedonic price indexes attempt to adjust for improvements in product quality over time using multivariate regression techniques in which the left-hand variables are prices and the right-hand variables are various measures of quality, and into which time is also incorporated. The product attributes for operating systems had been taken from various documents over the 13-year period between 1987 and 2000; a sample done for productivity suites, using prices taken from mail-order ads in the magazine PC World, covered a longer period, 1984-2000, and also included quality attributes and price measures.

Different Computations, Different Curves

Posting a graph showing the basic results for operating systems, Dr.
Berndt explained the three curves plotted on it: “Average Price Level,” representing the price per operating system computed as a simple average, which showed an average annual growth rate of roughly 1 percent; “Matched-model,” mimicking BLS procedures by using a matched-model price-index methodology, which showed a decline of around 6 percent a year, “a considerably different picture”; and “Hedonic,” using a traditional approach of multivariate regressions, which showed a much larger rate of price decline, around 16 percent a year (See Figure 12). Splitting the sample into two periods, 1987-1993 and 1993-2000, highlighted considerable variability in price declines, with some acceleration in the more recent period.

For productivity suites, the story was slightly different (See Figure 13). The “Average Price Level” had fallen very sharply in the final few years of the study, in part because prices for WordPerfect and Lotus suites were slashed beginning around 1997. The “Matched-model” index showed a decline of not quite 15 percent per year, with a marked difference between the first and second halves of the sample: zero and minus 27 percent, respectively. “Hedonics” in this case had shown a rate of price decline that was on average a bit larger than that shown by “Matched-model” over the same period.

Recapping the two studies, Dr. Berndt expressed regret at not being able to procure data on the rest of the market, saying that it “remains a big hole,” but noted that even Microsoft was unable to get data on its competitors’ prices. He also pointed to an interesting methodological question arising from the studies: How
can software price changes be measured and related to consumer-demand theory when software is sold bundled with hardware? The economic theory of bundling was well worked out only for cases in which consumers are very heterogeneous, he stated, adding, “And that’s why you bundle.” But a price index grounded in an economic theory of heterogeneous consumers raises a number of very difficult measurement issues, as well as theoretical issues.

FIGURE 12 Quality-adjusted prices for operating systems have fallen, 1987-2000. SOURCE: Alan White, Jaison R. Abel, Ernst R. Berndt, and Cory W. Monroe, “Hedonic Price Indexes for Operating Systems and Productivity Suite PC Software,” NBER Working Paper No. 10427, Cambridge, MA: National Bureau of Economic Research, 2004.

DISCUSSION

Hugh McElrath of the Office of Naval Intelligence asked Dr. White whether Microsoft had shared its per-unit prices with him or whether the data had become public in conjunction with a court case. Dr. White said that he and Dr. Berndt had had access to Microsoft’s internal transactions data because it was part of litigation proceedings. He emphasized, however, that their study presented an index based on the per-unit prices Microsoft had received but did not disclose actual price levels.
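The hedonic approach described in Dr. Berndt’s talk, regressing log price on quality attributes plus time dummies, can be illustrated with a toy dataset; the attribute, prices, and years below are invented for illustration and are not drawn from the NBER studies.

```python
# Toy hedonic regression: log price on a quality attribute and a year
# dummy. The exponentiated year-dummy coefficient gives the
# quality-adjusted price change between the two years.

import numpy as np

# One illustrative quality attribute (e.g., memory-like capacity) and
# two years (0 = base year); same three models observed in both years.
quality = np.array([1.0, 1.0, 2.0, 2.0, 4.0, 4.0])
year    = np.array([0,   1,   0,   1,   0,   1])
price   = np.array([100, 90, 130, 115, 170, 150.0])

# Design matrix: intercept, log(quality), year-1 dummy.
X = np.column_stack([np.ones(6), np.log(quality), year == 1])
beta, *_ = np.linalg.lstsq(X, np.log(price), rcond=None)

# Quality-adjusted price change from year 0 to year 1.
print(f"{(np.exp(beta[2]) - 1) * 100:.1f}%")  # → -11.1%
```

Unadjusted average prices can rise while the hedonic index falls, because later models embody more quality per dollar; that is exactly the gap between the “Average Price Level” and “Hedonic” curves described above.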
FIGURE 13 Quality-adjusted prices for productivity suites have fallen, 1984-2000. SOURCE: Alan White, Jaison R. Abel, Ernst R. Berndt, and Cory W. Monroe, “Hedonic Price Indexes for Operating Systems and Productivity Suite PC Software,” NBER Working Paper No. 10427, Cambridge, MA: National Bureau of Economic Research, 2004.

Dr. Flamm pointed to the discrepancies between the Microsoft-sales matched-model and hedonic price index results, and the reasons that might lie behind them, as an interesting aspect of these two presentations. He asked whether a decline in mail-order margins over time, perhaps with greater competition in the field, could account for them. Second, he wondered whether a matched-model price index could fully capture pricing points between generations of products, and speculated that a hedonic index might be able to do so, offering as an example the movement downward of Office-suite prices from one generation to the next. Third, he asked whether it was correct that bundling was mandatory for most U.S. OEMs and, as such, not a decision point, saying he recalled that Microsoft had threatened to sue computer manufacturers if they did not license Windows when they shipped the box. While Drs. Berndt and White admitted that they could not answer Dr. Flamm’s last question with total certainty, Dr. Berndt said that he had been looking at a different question: how to put together a price index that is consistent with consumer-demand theory when bundling is occurring. And he reiterated that the pricing theory on bundling usually put forward was based on heterogeneous consumers.
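For concreteness, the matched-model procedure that both presentations reference can be sketched as follows; the product names and prices are invented for illustration, not the study’s data. Only models observed in both adjacent periods are matched, so a newly introduced product (or a discontinued one) drops out of the comparison — which is one reason a matched-model index can miss pricing points between product generations.

```python
# Minimal matched-model price index: full versions and upgrades are
# separate "elementary units", and only products present in both
# adjacent periods contribute a price relative.

from math import prod

prices = {  # period -> {elementary unit: price}; illustrative values
    1993: {"Word full": 300.0, "Word upgrade": 120.0, "Excel full": 320.0},
    1994: {"Word full": 280.0, "Word upgrade": 110.0, "Excel full": 310.0,
           "Office suite": 500.0},  # new model: unmatched, so excluded
}

def matched_model_link(p0, p1):
    """Unweighted geometric mean of price relatives for matched models."""
    matched = p0.keys() & p1.keys()
    rels = [p1[m] / p0[m] for m in matched]
    return prod(rels) ** (1 / len(rels))

link = matched_model_link(prices[1993], prices[1994])
print(round(100 * link, 1))  # 1994 index with 1993 = 100 → 93.9
```

Production indexes such as those at BLS weight the relatives by expenditure shares rather than using an unweighted geometric mean; the matching logic, however, is the same.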
Dr. Flamm responded that he was only commenting that, in this case, bundling might not have been entirely voluntary on the part of the manufacturers. He then introduced Shelly Luisi, the Senior Associate Chief Accountant in the Office of the Chief Accountant of the U.S. Securities and Exchange Commission (SEC), who was to talk in tandem with Greg Beams of Ernst & Young about accounting rules for software.

ACCOUNTING RULES: WHAT DO THEY CAPTURE AND WHAT ARE THE PROBLEMS?

Shelly C. Luisi
Securities and Exchange Commission

Ms. Luisi said that while she and Mr. Beams would both be addressing the financial reporting that affects software, she would speak more on a conceptual level, while Mr. Beams would speak more about the financial standards specific to software companies and to recognizing software development. Before beginning her talk, she offered the disclaimer that all SEC employees must give when they speak in public: that the views she would express were her own and did not necessarily represent those of the commissioners or other staff at the Commission.

Beginning with a general rundown of the objectives of financial reporting, Ms. Luisi explained that the Financial Accounting Standards Board (FASB) has a set of concept statements that underlie all of its accounting standards, and that the Board refers back to these statements and tries to comply with them when promulgating new standards. The Board had determined three objectives for financial reporting: furnishing information useful in investment and credit decisions; furnishing information useful in assessing cash flow prospects; and furnishing information about enterprise resources, claims to those resources, and changes in them.
These objectives stem primarily from the needs of the users of the financial statements, which FASB has defined as investors, whether they are debt investors or equity investors.33 In light of a general sentiment that financial statements

33 Debt investment is investment in the financing of property or of some endeavor, in which the investor loaning funds does not own the property or endeavor, nor share in its profits. If property is pledged, or mortgaged, as security for the loan, the investor may claim the property to repay the debt if the borrower defaults on payments. Equity investment is investment in the ownership of property, in which the investor shares in gains or losses on the property. Definitions from the U.S. Department of the Treasury can be accessed at <http://www.ots.treas.gov/glossary/gloss-d.html>.
should be all things to all people, it is important to realize when looking at financial statements that the accounting standards used to create them are developed with only one user in mind: the investor. “They are not made for regulators, as much as we might like them to be made for us,” Ms. Luisi observed. “They are not made for economists to study. They are not even made for management.” It is the goal of providing this one user, the investor, with unbiased, neutral information that shapes accounting standards. The goal is not to influence investors in a given direction or to get public policy implemented in a certain way by making a company appear a certain way; it is purely to present unbiased, neutral information on the basis of which investors can do their own research and determine what decisions they want to make regarding an investment.

Financial statements are part of financial reporting. Disclosures are also part of financial reporting, and they are very important. When promulgating standards, FASB uses disclosures extensively; that a number is not to be found in a financial statement does not mean that the Board has decided it was unimportant. Disclosures are very important from the SEC perspective as well, noted Ms. Luisi, adding, “We obviously have our own requirements in MD&A [Management’s Discussion and Analysis] and various other places—in 10-Ks (a type of SEC filing) and registration statements—requiring disclosures that we think are important.”

Qualifications for Recognition vs. Disclosure

There are three primary qualifications distinguishing information that must be recognized in a financial statement from information that merely needs to be disclosed. Information that must be recognized:

- must meet the definition of an element; assets, liabilities, equity, revenue, expenses, gains, and losses are in this category.
- must trip a recognition criterion; an example of an asset that meets the definition of an element but doesn’t trip a criterion for recognition is a brand name. “Surely [Coca-Cola’s] brand name is an asset, surely it has probable future economic benefits that they control,” acknowledged Ms. Luisi, “but, in our current financial accounting framework, they haven’t tripped a recognition criterion that would allow them to recognize that asset on their balance sheet.”

- must have a relevant attribute that is capable of reasonably reliable measurement or estimate. While historical cost was considered to be such an attribute in the past, the world has been moving more and more toward fair value, defined as “the amount at which an asset (or liability) could be bought (or incurred) or sold (or settled) in a current transaction … other than a forced sale or liquidation.”

Moving to the terms “asset” and “liability,” Ms. Luisi stressed that their definitions and uses in accounting are not the same as in common English or,
perhaps, in economics. In its concept statements, the FASB has defined “asset” and “liability” as follows:

Asset: probable future economic benefits obtained or controlled by a particular entity as a result of past transactions or events.

Liability: probable future sacrifice of economic benefits arising from present obligations of a particular entity to transfer assets or provide services to other entities in the future as a result of past transactions or events.

She stressed that a future economic benefit must be probable, not merely expected, in order to be recorded on a balance sheet as an asset. Additionally, that probable future benefit must be controlled as part of a past transaction; it cannot depend on the action of another party. “You can’t say, ‘This company has put out a press release and so we know that it is probable that they are going to do something that will result in value to us,’ ” she explained. “You don’t control that benefit—you can’t make them follow through.”

Tracing how the capitalization of software (the recording of its costs as an asset rather than an expense) on the balance sheet arrived at its current form, Ms. Luisi recounted that in October 1974 the FASB put out Statement of Financial Accounting Standards No. 2 (FAS 2), Accounting for Research and Development Costs. The Board’s move to issue this statement the year following its creation indicates that, from the very beginning, it placed a high priority on the matter. This impression is strengthened by the fact that the Board held public hearings in 1973 while deliberating on FAS 2, and by the fact that it cited National Science Foundation statistics on R&D in its Basis for Conclusion on the standard.
The Board’s decision—which predates even its putting in place a definition for an asset—was that R&D was an expense, with the Basis for Conclusion stating that R&D lacks a requisite high degree of certainty about the future benefits to accrue from it.

FASB Rules Software Development to Be R&D

Four months after FAS 2 came out, an interpretation of it, FASB Interpretation No. 6 (FIN 6), was issued. FIN 6, Applicability of FASB Statement No. 2 to Computer Software, essentially said that the development of software is R&D also. FIN 6 drew an interesting line between software for sale and software for operations, which is why different models apply today to (a) software developed to be sold or for use in a process or a product to be sold and (b) software developed for internal use, such as in payroll or administrative systems. Ten years thereafter, in 1985, the Board promulgated FAS 86, Accounting for the Costs of Computer Software to be Sold, Leased, or Otherwise Marketed, which Ms. Luisi characterized as “a companion to FAS 2.” From FAS 86 came the concept in the accounting literature of “technological feasibility,” the point at which a project under development breaks the probability threshold and qualifies as an asset.
FAS 86 thereby gives a little more indication of how to determine when the cost of software development can be capitalized on the balance sheet rather than having to be expensed as R&D. But 13 more years passed before the promulgation of Statement of Position 98-1 (SOP 98-1), Accounting for Costs of Computer Software Developed or Obtained for Internal Use, by the Accounting Standards Executive Committee (AcSEC) of the American Institute of Certified Public Accountants. It was at the recommendation, or request, of the Securities and Exchange Commission’s chief accountant in 1994 that SOP 98-1 was added to AcSEC’s agenda and created. During the intervening time, practice had become very diverse: some companies, analogizing to FAS 86, were reporting their software-design costs as R&D expenses; others, regarding software used internally more as a fixed asset, were capitalizing the costs. SOP 98-1 set a different threshold for capitalization of the cost of software for internal use, one that allows capitalization to begin in the design phase, once the preliminary project stage is completed and a company commits to the project. AcSEC was agreeing, in essence, with companies that thought reaching technological feasibility was not a prerequisite to their being in a position to declare the probability that they would realize value from this type of software.

It is worth noting that AcSEC’s debate on SOP 98-1 extended to the issue of whether software is a tangible or an intangible asset. Unable to come to a decision on this point, the committee wrote in its Basis for Conclusion that the question was not important and simply said how to account for it. Ms. Luisi said she believed that, in most financial statements, software is included in property, plant, and equipment rather than in the intangible-assets line and is thus, from an accountant’s perspective, a tangible rather than an intangible asset.
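The two thresholds just described (FAS 86 for software to be sold, SOP 98-1 for internal-use software) can be sketched as a toy decision rule. The function and stage names below are illustrative simplifications of the chapter’s summary, not accounting guidance.

```python
# Toy encoding of the expense-vs-capitalize logic summarized above:
# FAS 86 expenses development costs as R&D until "technological
# feasibility"; SOP 98-1 lets internal-use software be capitalized
# once the preliminary project stage is complete.

def treatment(for_sale: bool, stage: str) -> str:
    """Return the accounting treatment of a software development cost."""
    if for_sale:
        # FAS 86: R&D expense until technological feasibility is reached.
        stages_after_feasibility = {"post-feasibility", "production"}
        return "capitalize" if stage in stages_after_feasibility else "expense (R&D)"
    # SOP 98-1: capitalization may begin in the design phase, after the
    # preliminary project stage, once the company commits to the project.
    return "capitalize" if stage != "preliminary" else "expense"

print(treatment(True, "design"))    # → expense (R&D)
print(treatment(False, "design"))   # → capitalize
```

The asymmetry visible here is the chapter’s point: the same design-phase cost is R&D expense for marketed software but a capitalizable asset for internal-use software.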
Further FASB Projects May Affect Software

At that time, the FASB was working on a number of projects with the potential to affect how software is recognized on the balance sheet:

Elements. With regard to the three qualifications for recognition in financial statements, the Board was moving increasingly to an asset/liability model for everything. She noted that “the concepts of an earnings process to recognize revenue are going away,” and “the concepts of ‘this is a period expense, it needs to be on the income statement’ are going away.” This represented an important change to the accounting model that most contemporary accountants were taught in school and had been applying, and one that required an adjustment. Internally developed software was recognized on a balance sheet, unlike such intangible assets as a brand name; and, while it was recognized at a historical cost, it had tripped a recognition criterion.

Recognition. With the Internet bubble of the previous decade, when there was a huge gap between market capitalization and equity, the FASB had been
tion, he would point to various factors that affect the comparability of such measures across countries and would discuss both attempts to improve these measures and the impact improved measures might have on the analysis of economic growth and productivity. Many of the problems that are inherent in making comparisons across countries had already come up during the symposium:

- Software is intangible and, as such, can be somewhat harder to measure than other products.
- Markets for software are different from those for other goods, which means that, particularly as ownership and licensing arrangements are so common, software is a bit more complicated to deal with.
- Duplication of software is easy and often low cost, raising the question of whether a copy is an asset. In the current view, the answer was, basically, “yes.”
- The service life of software can be hard to measure, at least in the way that it is traditionally employed by national accountants.
- Software’s physical characteristics are not always clear.

These special problems do not invalidate the system of national accounts that is used to measure investment, but figuring out how to apply the system’s rules to software does require special effort. Dr. Pilat displayed a chart based on official OECD data for 18 countries’ software investment in 1985, represented by crude estimates; in 1995; and in 2001, the last year for which figures were available (See Figure 17). While observing that, over the period depicted, investment had gone up in all countries, he remarked that there were very large differences among the countries.
In Denmark, the United States, and Sweden, about 15 percent of total 2001 nonresidential investment had gone to software, but for the UK, a country that might be expected to devote a similar share of investment to software, official estimates put the total at only about 1.5 percent. “There may be a problem there,” he said. The picture of the computer services industry, which is basically the main producer of software, is somewhat different (See Figure 18). The UK is among the countries with a large industry that produces software services, and Ireland, which was just above the UK at the bottom of the software-investment chart, actually seems to have a large computer services industry. This result again suggests complications that might merit looking at in more detail.

Use of Deflators Also Varies Country to Country

Moving on to the issue of deflators, Dr. Pilat pointed to “very different treatment across countries in how software is looked at,” offering as evidence the fact that official statistics for Australia and Denmark showed a very rapid price decline
FIGURE 17 The data: software investment as a percentage of non-residential gross fixed capital formation. SOURCE: Organisation for Economic Co-operation and Development, Database on Capital Services.

FIGURE 18 Computer services industry as a percentage of total business services value added, 2000. SOURCE: Organisation for Economic Co-operation and Development, Technology and Industry Scoreboard, 2003.
over time, while those for Greece and Sweden showed prices increasing strongly (See Figure 19).

FIGURE 19 Deflators for investment in software, 1995=100. SOURCE: Ahmad, 2003.

Factors Accounting for the Difference

As one problem contributing to the variation in measures of software investment, Dr. Pilat noted that businesses and business surveys—which, he said, generally use “very prudent criteria” when counting software as investment—do not treat software as national accountants might like it to be treated. The consequence is a big difference between business survey data on software investment, which currently exist for only a few countries, and official measures of software investment as they show up in national accounts. Own-account software would not normally be picked up as investment in the business surveys, he remarked. If business surveys do not reveal much, national accountants must attempt to measure supply using the commodity-flow method described earlier by Mr. Wasshausen. But after ascertaining the total supply of computer services, national accountants make very different decisions on how much of that supply to treat as investment. Investment ratios therefore differ greatly from country to country, making it quite unlikely that the data are comparable. For example, about 65 or 70 percent of the total supply of software was being treated as investment by Spain
and Greece, whereas the corresponding number for the UK was only about 4 percent (See Figure 20).

FIGURE 20 Investment ratios for software differ (share of total supply of computer services capitalized). SOURCE: Ahmad, 2003.

What accounts for this difference? It arises in part because the computer services industry represents a fairly heterogeneous range of activities, including not only software production but also such things as consulting services, and national accountants would not want to treat all of them as investment. The main problem is that the criteria determining what to capitalize differ across countries. There are also small differences in the definitions of computer services that may exist across countries, although not within the European Union. And there are problems with accounting for imports, because the trade data do not provide much information on software, as well as with several technical adjustments, which can also differ across countries.

Harmonizing the World Investment Picture Using a Standard Ratio

To some extent, it is possible to tell what would happen if all countries used exactly the same investment ratio. On the basis of an investment ratio for all countries of 0.4—that is, one treating 40 percent of all supply as investment—a very large increase would show up in software investment and GDP levels for the United Kingdom (See Figure 21).

FIGURE 21 Impact on GDP if investment ratios were the same. SOURCE: Ahmad, 2003.

Meanwhile, there would be substantial changes for France and Italy as well, and some decline in a few other countries. Returning to the problem of own-account software, Dr. Pilat traced the differential treatment it receives across countries:

- Japan excludes own-account from its national accounts altogether.
- Some countries that do include it ignore intermediate costs, looking only at wages and salaries of those who produce it—and then use widely divergent methods of estimating those wages and salaries, especially in regard to the time computer programmers spend on own-account vs. custom production.
- Among countries that take intermediate costs into account, the adjustments used for them vary.
- Own-account production of original software designed for reproduction is not capitalized everywhere.

Harmonized estimates reflecting identical treatment of own-account across countries, similar to those for the investment ratio already discussed, would show a
significant change in levels of software investment (See Figure 22). The portion of Japanese GDP accounted for by own-account software would rise from zero to 0.6 percent, and most other countries would post fairly big increases, with the exception of Denmark, which would register a decrease. Dr. Pilat cautioned that these estimates, which he characterized as “very rough,” were not the product of a careful process such as the BEA had undertaken for the U.S. economy but had been put together at the OECD solely to illustrate what the problems are.

FIGURE 22 Impact of “harmonized” treatment of own-account software (percent of GDP). SOURCE: Ahmad, 2003.

OECD Task Force’s Software Accounting Recommendations

In an effort to improve current-price estimates, an OECD-Eurostat Task Force was established in 2001, and it published a report in 2002 that included a range of recommendations on how to use the commodity-flow method, how to use the supply-based method, and how to treat own-account software in different countries. Most of these recommendations had been accepted by OECD countries and were being implemented. Work was also under way to improve business surveys in the hope of deriving more evidence from them over time. If all Task Force recommendations were implemented, Dr. Pilat predicted, the UK would be most significantly affected, but other countries’ software-investment data would show rises as well (See Figure 23).
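The ratio-harmonization exercise described above amounts to simple arithmetic: replace each country’s own capitalization ratio with a common one and recompute software investment and its effect on GDP. A minimal sketch of that calculation follows; the function name and all figures are illustrative placeholders, not the OECD’s actual data or method.

```python
# Illustrative sketch of harmonizing software-investment ratios across
# countries. All numbers are hypothetical placeholders, not OECD data.

def harmonize(total_supply, gdp, national_ratio, common_ratio=0.4):
    """Recompute software investment under a common capitalization ratio."""
    official = national_ratio * total_supply    # what national accounts report
    harmonized = common_ratio * total_supply    # common-ratio counterfactual
    delta_gdp_pct = 100 * (harmonized - official) / gdp
    return official, harmonized, delta_gdp_pct

# Hypothetical example: a country capitalizing only 4 percent of supply
# (as the UK was said to) shows a large jump under a 40 percent common ratio.
official, harmonized, delta = harmonize(
    total_supply=50.0,   # total supply of computer services (currency units)
    gdp=1500.0,
    national_ratio=0.04,
)
print(f"official={official:.1f}, harmonized={harmonized:.1f}, "
      f"GDP impact={delta:+.2f} percentage points")
```

In this toy case measured software investment rises tenfold and GDP by 1.2 percentage points, which is the flavor of the UK result shown in Figure 21.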
FIGURE 23 Estimated investment in software as a percentage of GDP if Task Force recommendations were implemented. SOURCE: Ahmad, 2003.

The use of deflators also varied widely from country to country. While some countries used the U.S. deflator for prepackaged software and adjusted it somewhat, many others used such proxies as a general producer-price index, a price index for office machinery, or input methods. For own-account and customized software, earnings indexes were often being used. The reigning recommendation in these areas, on which the Task Force did not focus to any great extent, was to use the U.S. price index for prepackaged software, or to adjust it slightly, while using earnings indexes for own-account and custom software.

Harmonized Measures’ Potential Effect on GDP Levels

As current OECD estimates for investment in information and communication technologies (ICT) remained very much based on official measures, adoption of harmonized measures would have the most significant impact on the level of GDP in the UK, France, and Japan (See Figure 24). “While there might be a small change in the growth rate of GDP,” Dr. Pilat said, “some [factors] might actually wash out, so it is not entirely clear what that would do to different countries.” Software’s role in total capital input would definitely increase, which would mean
FIGURE 24 Impact of “harmonized” measures on the level of GDP, percentage of GDP. SOURCE: Ahmad, 2003.

that estimates of multifactor productivity would change quite a bit as well. There would probably also be some reallocation of other types of capital to software. “Sometimes a country will say, ‘we’re pretty happy with our total investment expenditure, but we don’t quite know where to put it: It may be software, but sometimes we treat it as other types of hardware,’” he explained. Dr. Pilat then displayed a graph demonstrating that the UK would experience a slight uptick in cumulative growth rates for the second half of the 1990s if software measurement were changed (See Figure 25). Another graph, showing the contribution of software investment to GDP growth according to growth-accounting analyses for the two halves of the 1990s (See Figure 26), indicated that revised estimates would produce a marked increase in that contribution for countries such as Japan and the UK, where the measured contribution of software investment to total GDP growth had been very small. The contributions of software to total capital in countries like Ireland, the UK, Japan, and Portugal are very small compared to those of other types of IT, which suggests that something isn’t entirely right and that revised estimates would yield a different contribution in the total growth accounting (See Figure 27). Concluding, Dr. Pilat observed that measures of software investment varied quite a lot among countries, but that OECD countries had more or less reached agreement on the treatment of software in their national accounts. Steps were under way in most countries to move closer to one another in statistical practices,
FIGURE 25 Sensitivity of GDP growth to different investment ratios for purchased software. SOURCE: Ahmad, 2003.

FIGURE 26 Contribution of software investment to GDP growth, 1990-1995 and 1995-2001 (in percentage points). SOURCE: Organisation for Economic Co-operation and Development, Database on Capital Services, 2004.
FIGURE 27 Contribution of software investment and other capital to GDP growth, 1995-2001 (in percentage points). SOURCE: Organisation for Economic Co-operation and Development, Database on Capital Services, 2004.

and there would be some effort by the OECD to monitor the implementation of the Task Force recommendations via a survey due in 2004. But some data issues remained outstanding, price indexes being one of them. Software trade also remained a significant problem, as did detail on software supply to many important industries, which was unavailable in many countries. He commended to the audience’s attention a working paper entitled “Measuring Investment in Software” by his colleague Nadim Ahmad, which served as the basis for his presentation and could be found online at <http://www.oecd.org/sti/working-papers>.

DISCUSSION

Dr. Jorgenson asked whether it was correct to conclude from Dr. Pilat’s presentation that most of the OECD’s effort at standardization had been focused on the nominal value of investment, and that the radical differences in prices revealed in one of Dr. Pilat’s graphs had yet to be addressed. Acknowledging that this was indeed correct, Dr. Pilat said that the only place where the OECD had attempted some harmonization was in the hedonic price indexes for investment in IT hardware. None had yet been done for any software estimates, although he suggested that the OECD could try it to see what resulted. He
was also considering using both his rough and his revised estimates of software investment to see what effect they might have on the OECD’s growth-accounting estimates. Dr. Flamm commented that a lot of interesting work was in progress in all the spheres that had been discussed, and that much work apparently remained to be done at the OECD. He regarded it as “good news,” however, that improved measures of software prices seemed to be coming down the road, with BEA, university, and other researchers working on the subject.
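The hedonic indexes raised in the discussion work by regressing log price on product characteristics plus time dummies; the exponentiated dummy coefficients give a quality-adjusted price index. A minimal time-dummy sketch is below; the characteristic, prices, and observations are made up for illustration and are not data from the studies discussed.

```python
# Minimal time-dummy hedonic price index sketch (illustrative data only).
# Regress log(price) on a quality characteristic plus a year dummy; the
# exponentiated dummy coefficient is the quality-adjusted index (year 0 = 1.0).
import numpy as np

# Hypothetical observations: (year, quality characteristic, price)
data = [
    (0, 1.0, 100.0),
    (0, 2.0, 180.0),
    (1, 2.0, 150.0),
    (1, 3.0, 220.0),
]

# Design matrix: intercept, characteristic, dummy for year 1
X = np.array([[1.0, x, 1.0 if t == 1 else 0.0] for t, x, _ in data])
y = np.array([np.log(p) for _, _, p in data])

beta, *_ = np.linalg.lstsq(X, y, rcond=None)
index_year1 = np.exp(beta[2])
print(f"Quality-adjusted index for year 1: {index_year1:.3f}")  # ~0.833 here
```

In this toy example raw prices at a given quality level look roughly flat, but holding the characteristic constant the regression reveals a quality-adjusted price decline of about 17 percent, which is the kind of effect hedonic methods are designed to capture.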