5. Potential Metrics for Addressing Study Objectives

In keeping with the definitions and concepts in the previous section, the NRC study will identify the desired measures for expressing results related to each of the objectives defined in section 3, Clarifying Study Objectives. It is important to note that the metrics ultimately used in the study will be selected partly on the basis of their theoretical importance in answering critical questions and partly on the basis of practicalities. Here we list a set of draft metrics of clear utility to the study; not all will ultimately be adopted, and others will undoubtedly be developed as additional elements emerge as the study moves forward.

Research quality44

• Internal measures of research quality—These will be based on comparative survey results from agency managers with respect to the quality of SBIR-funded research versus the quality of other agency research. It is important here to recognize that standards and reviewer biases in the selection of SBIR awards may differ from those in the selection of other awards.
• External measures of research quality:
  o Peer-reviewed publications
  o Citations
  o Technology awards from organizations outside the SBIR agency
  o Patents
  o Patent citations

Agency mission

Agency missions vary; for example, procurement will not be relevant to the NSF and NIH (and some DoE) SBIR programs.
The value of SBIR to the agency mission can best be addressed through surveys at the sub-unit manager level, similar to the approach demonstrated by Archibald and Finifter's (2000) Fast Track study, which provides a useful model in this area.45 These surveys will seek to address:
• The alignment between agency SBIR objectives and agency mission
• Agency-specific metrics (to be determined)
• Procurement:
  o The rate at which agency procurement from small firms has changed since inception of SBIR
  o The change in the time elapsed between a proposal arriving on an agency's desk and the contract arriving at the small business
  o The rate at which SBIR firm involvement in procurement has changed over time
  o Comparison of SBIR-related procurement with other procurement emerging from extramural agency R&D
  o Technology procurement in the agency as a whole
• Agency success metrics: how does the agency assess and reward management performance? Issues include:
  o Time elapsed between a proposal arriving on an agency's desk and the contract arriving at the small business

44 See also parameters of non-economic benefits, especially Knowledge Benefits, p. 11.
45 See National Research Council, The Small Business Innovation Research Program: An Assessment of the Department of Defense Fast Track Initiative, op. cit., pp. 211-250.
  o Minimization of lags in converting from SBIR Phase I to Phase II

Parallel data collection across the five agency SBIR programs will compile year-by-year program demographics for approximately the last decade. Data compilation requests will include the number of applications, the number of awards, the ratio of awards to applications, and the total dollars awarded for each phase of the multi-phase program. The compilation will also cover the geographical distribution of applicants, awards, and success rates; statistics on applications and awards by women-owned and minority-owned companies; statistics on commercialization strategies and outcomes; results of agency-initiated data collection and analysis; and uniform data from a set of case studies for each agency.

The Committee plans to draw on the following data collection instruments:
• Phase I recipient survey
• Phase II recipient survey
• SBIR program manager survey
• COTAR (technical point of contact) survey
• Case data from selected cases

Data collected from these surveys and case studies will be added to existing public sources of data that will be used in the study, such as:
• All agency data covering award applications, awards, outcomes, and program management
• Patent and citation data
• Venture capital data
• Census data

Additional data may be collected as a follow-up based on an analysis of responses. The study will examine agency rates of transition between phases, pending receipt of the agency databases of Phase I and Phase II applications and awards. The Phase II survey will gather information on all Phase III activity, including commercial sales, sales to the federal government, export sales, follow-on federal R&D contracts, further investment in the technology from various sources, marketing activities, and identification of commercial products or federal programs that incorporate the products.
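Once the agency databases arrive, the program demographics described above reduce to a few simple rates computed per agency and per year. The following sketch illustrates the kind of tabulation involved; the column names and all figures are hypothetical placeholders, not actual agency data.

```python
import pandas as pd

# Hypothetical year-by-year program demographics (illustrative numbers only;
# real data would come from the agency award databases).
df = pd.DataFrame({
    "agency":        ["DoD", "DoD", "NIH", "NIH"],
    "year":          [1998, 1999, 1998, 1999],
    "phase1_apps":   [3000, 3200, 1500, 1600],
    "phase1_awards": [ 900,  950,  500,  520],
    "phase2_apps":   [ 600,  640,  350,  360],
    "phase2_awards": [ 300,  330,  180,  190],
})

# Ratio of awards to applications, by phase
df["phase1_award_rate"] = df["phase1_awards"] / df["phase1_apps"]
df["phase2_award_rate"] = df["phase2_awards"] / df["phase2_apps"]

# Rough Phase I -> Phase II transition rate: Phase II awards per Phase I award
df["transition_rate"] = df["phase2_awards"] / df["phase1_awards"]

# Agency-level averages across years
summary = df.groupby("agency")[["phase1_award_rate", "transition_rate"]].mean()
print(summary)
```

The same grouping logic extends directly to the other dimensions the study plans to compile, such as geographical distribution or awards to women-owned and minority-owned firms.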
SBIR program manager surveys and interviews will address federal efforts to carry the results of Phase II SBIR awards into Phase III federal programs.

Commercialization

First-order metrics for commercialization revolve around these basic areas:
• Sales (firm revenues)
  o Direct sales in the open market as a percentage of total sales
  o Indirect sales (e.g., bundled with other products and services) as a percentage of total sales
• Licensing or sale of technology
  o Contracts relating to products
  o Contracts relating to the means of production or delivery (processes)
• SBIR-related products, services, and processes procured by government agencies
• Spin-off of firms

The issue of commercial success goes beyond whether project awards go to firms that then succeed in the market. These firms may well have succeeded anyway, or they may simply have displaced other firms that would have succeeded had their rival not received a subsidy. The issue is whether SBIR increases the number of small businesses that succeed in the market. If the data permit, the study team may try to emulate the research of Feldman and Kelley to test the hypothesis that SBIR increases (or does not increase) the number of small businesses that pursue their research projects or achieve other goals.46

Broad economic benefits

For firms
• Support for firm development, which may include:

46 Maryann P. Feldman and Maryellen R. Kelley, "Leveraging Research and Development: The Impact of the Advanced Technology Program," in National Research Council, The Advanced Technology Program: Assessing Outcomes, 2001, op. cit.
  o Creation of a firm (i.e., has SBIR led to the creation of a firm that otherwise would not have been founded?)
  o Survival
  o Growth in size (employment, revenues)
  o Merger activity
  o Reputation
  o Increase in stock value, IPO, etc.47
  o Formation of collaborative arrangements to pursue commercialization, including pre-competitive R&D or a place in the supply chain
  o Investment in plant (production capacity)
  o Other pre-revenue activities aimed at commercialization, such as entry into the regulatory pipeline and development of prototypes
• Access to capital
  o Private capital
    - From angel investors
    - From venture capitalists
    - From banks and commercial lenders
    - Capital contributions from other firms
    - Stock issues of the SBIR-recipient firm, e.g., initial public offerings (IPOs)
  o Subsequent (non-SBIR) funding/procurement from government agencies

For agencies (aside from mission support and procurement)
• Enhanced research efficiency
  o Outcomes from SBIR vs. non-SBIR research
  o Agency manager attitudes toward SBIR

For society at large

Social returns include private returns, agency returns, and spillover effects from research, development, and commercialization of new products, processes, and services associated with SBIR projects. It is difficult, if not impossible, to capture social returns fully, but an attempt will be made to capture at least part of the effects beyond those identified above, including the following:
• Evidence of spillover effects
• Small business support:
  o Small business share of agency R&D funding
  o Survival rates for SBIR-supported firms
  o Growth and success measures for SBIR vs. non-SBIR firms
• Training:
  o SBIR impact on entrepreneurial activity among scientists and engineers
  o Management advice from venture capital firms
  o Other training effects

47 The web site inknowvation.com has a data set on publicly traded SBIR firms.
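Several of the metrics above, such as survival rates for SBIR-supported firms versus comparison firms, invite a straightforward difference-in-proportions test. A minimal sketch follows; the counts are hypothetical and the test says nothing about selection effects, which are treated separately in the discussion of bias.

```python
from math import sqrt, erfc

def two_proportion_ztest(success_a, n_a, success_b, n_b):
    """Two-sided z-test for equality of two proportions (pooled variance)."""
    p_a, p_b = success_a / n_a, success_b / n_b
    p_pool = (success_a + success_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    p_value = erfc(abs(z) / sqrt(2))  # two-sided tail probability of N(0, 1)
    return z, p_value

# Hypothetical counts: 80 of 100 SBIR-supported firms surviving,
# versus 65 of 100 comparison firms.
z, p = two_proportion_ztest(80, 100, 65, 100)
print(f"z = {z:.3f}, p = {p:.4f}")
```

A significant difference here would still be only suggestive: as the bias discussion below notes, firms selected for awards may differ systematically from comparison firms before the award is made.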
Non-economic benefits

Knowledge benefits
• Intellectual property
  o Patents filed and granted
  o Patent citations
  o Litigation
• Non-intellectual property
  o Journal articles and citations
  o Human capital measures

Other non-economic benefits

Given the complexity of the NRC study, the Committee is unlikely to devote substantial resources to this area. However, some evidence about other non-economic benefits (e.g., environmental or safety impacts) may emerge from the case studies and interviews.

Trends in agency funding for small business
• Absolute SBIR funding levels
• SBIR vs. other agency extramural research funding received by small businesses
• Agency funding for small business relative to overall sources of funding in the US economy

Best practices in SBIR funding

It will be important to analyze the categories below with respect to the size of the firm.
• Recipient views on process
• Management views on process
• Flexibility of process, e.g., award size
• Timeliness of application decision process
• Management actions on troubled projects

Possible independent variables: demographic characteristics

For all of the outcome metrics listed above, it will be important to capture a range of demographic variables that could become independent variables in empirical analyses.

Bias

What is the best way of assessing SBIR? One approach—utilized by many agencies when examining their SBIR programs—has been to highlight successful firms. Another approach has been to survey firms that have been funded under the SBIR program, asking such questions as whether the technologies funded were ever commercialized, the extent to which their development would have occurred without the public award, and how firms assessed their experiences with the program more generally. It is important to recognize and account for the biases that arise with these and other approaches.
Some possible sources of bias are noted below:48

Response bias—1: Many awardees may have a stake in the programs that have funded them, and consequently may feel inclined to give favorable answers (i.e., that they have received benefits from the program and that commercialization would not have taken place without the awards). This may be a

48 See Joshua Lerner and Colin Kegler, "Evaluating the Small Business Innovation Research Program: A Literature Review," in SBIR: An Assessment of the Department of Defense Fast Track Initiative, C. Wessner, ed., op. cit.
particular problem in the case of the SBIR initiative, since many small high-technology company executives have organized to lobby for its renewal.

Response bias—2: Some firms may be unwilling to acknowledge that they received important benefits from participating in public programs, lest they attract unwelcome attention.

Measurement bias: It may simply be very difficult to identify the marginal contribution of an SBIR award, which may be one of many sources of financing that a firm employed to develop a given technology.

Selection bias: This source of bias concerns whether SBIR awards go to firms that already have the characteristics needed for higher growth and survival, although the extent of this bias is likely overdrawn, since an important role of SBIR is to telegraph information about firms to markets operating under conditions of imperfect information.49

Management bias: Information from agency managers, who must defend their SBIR management before the Congress, may be subject to bias in different ways.

Size bias: The relationship between firm size and innovative activity is not clear from the academic literature.50 It is possible that some indexes will show large firms as more successful (publications and total patents, for example) while others will show small firms as more successful (patents per employee, for example).

A set of complementary approaches will be used to address the issue of bias. In addition to a survey of program managers, we intend to interview firms as well as agency officials, employ a range of metrics, and use a variety of methodologies. The Committee is aware of the multiple challenges in reviewing "the value to the Federal research agencies of the research projects being conducted under the SBIR program..." [H.R. 5667, sec. 108].
These challenges stem from the fact that the agencies differ significantly in mission, in R&D management structure (e.g., degree of centralization), and in the manner in which SBIR is employed (e.g., administration as grants vs. contracts), and that different individuals within agencies have different perspectives regarding both the goals and the merits of SBIR-funded research. In light of these complicating factors, the Committee proposes multiple approaches to assessing the contributions of the program to agency mission:

• A planned survey of all individuals within the studied agencies having SBIR program management responsibilities (that is, going beyond the single "Program Manager" in a given agency). The survey will be designed and implemented with the objective of minimizing framing bias. We will reduce sampling bias by soliciting responses from R&D managers without direct SBIR responsibilities as well as from those who have them. Important areas of inquiry include the process by which topics are defined, solicitations developed, projects scored, and award selections made.
• Systematic gathering and critical analysis of the agencies' own data concerning take-up of the products of SBIR-funded research.
• Study of the role of multiple-award-winning firms in performing agency-relevant research.

49 See Adam B. Jaffe, "Building Program Evaluation into the Design of Public Research Support Programs," Oxford Review of Economic Policy, forthcoming.
50 Many empirical studies suggest that small firms are more innovative than large firms or, at a minimum, that the difference between large-firm and small-firm innovative activity is statistically insignificant. See Zoltan Acs and David B. Audretsch (1991), Innovation and Small Firms (Cambridge: MIT Press); Ricardo J. Caballero and Adam B. Jaffe (1993), "How High Are the Giants' Shoulders: An Empirical Assessment of Knowledge Spillovers and Creative Destruction in a Model of Economic Growth," in O.J. Blanchard and S. Fischer (eds.)
NBER Macroeconomics Annual 1993 (Cambridge, MA: MIT Press); and Jaffe and Trajtenberg (2002), Patents, Citations, and Innovations: A Window on the Knowledge Economy (Cambridge: MIT Press). Concerns relating to size-dependent bias can be addressed by employing James Heckman's well-known techniques for controlling for the effects of sample selection bias. John T. Scott has recently employed such methods in a survey he conducted on environmental research. See John T. Scott, Environmental Research and Development: US Industrial Research, the Clean Air Act and Environmental Damage (Cheltenham, UK; Northampton, MA: Edward Elgar Publishing, 2003). It will be interesting to see whether including such controls for firm size in our econometric analysis confirms or rejects the hypothesis of a size-dependent bias arising from the selection of indicators on which large firms will score more strongly than small ones. Another pragmatic step we will take to address this issue is to ensure that any metrics we use are normalized for firm size (e.g., patents per employee).
• A possible study comparing funded and nearly funded projects at NIH (possibly extended to other agencies).