1 Introduction

1.1 SMALL BUSINESS INNOVATION RESEARCH PROGRAM CREATION AND ASSESSMENT

Created in 1982 by the Small Business Innovation Development Act, the Small Business Innovation Research (SBIR) program was designed to stimulate technological innovation among small private-sector businesses while providing the government cost-effective new technical and scientific solutions to challenging mission problems. SBIR was also designed to help to stimulate the U.S. economy by encouraging small businesses to market innovative technologies in the private sector.1

As the SBIR program approached its twentieth year of existence, the U.S. Congress requested that the National Research Council (NRC) of the National Academies conduct a “comprehensive study of how the SBIR program has stimulated technological innovation and used small businesses to meet Federal research and development needs,” and make recommendations on improvements to the program.2 Mandated as a part of SBIR’s renewal in 2000, the NRC study has assessed the SBIR program as administered at the five federal agencies that together make up 96 percent of SBIR program expenditures. The agencies are, in

1 The SBIR legislation drew from a growing body of evidence, starting in the late 1970s and accelerating in the 1980s, which indicated that small businesses were assuming an increasingly important role in both innovation and job creation. This evidence gained new credibility with the Phase I empirical analysis by Zoltan Acs and David Audretsch of the U.S. Small Business Innovation Database, which confirmed the increased importance of small firms in generating technological innovations and their growing contribution to the U.S. economy. See Zoltan Acs and David Audretsch, Innovation and Small Firms, Cambridge, MA: The MIT Press, 1990.

2 See Public Law 106-554, Appendix I—H.R. 5667, Section 108.



The National Academies | 500 Fifth St. N.W. | Washington, D.C. 20001
Copyright © National Academy of Sciences. All rights reserved.




decreasing order of program size: the Department of Defense (DoD), the National Institutes of Health (NIH), the National Aeronautics and Space Administration (NASA), the Department of Energy (DoE), and the National Science Foundation (NSF).

The NRC Committee assessing the SBIR program was not asked to consider if SBIR should exist or not—Congress has affirmatively decided this question on three occasions.3 Rather, the Committee was charged with providing assessment-based findings to improve public understanding of the program as well as recommendations to improve the program’s effectiveness.

1.2 SBIR PROGRAM STRUCTURE

Eleven federal agencies are currently required to set aside 2.5 percent of their extramural research and development budget exclusively for SBIR awards. Each year these agencies identify various R&D topics, representing scientific and technical problems requiring innovative solutions, for pursuit by small businesses under the SBIR program. These topics are bundled together into individual agency “solicitations”—publicly announced requests for SBIR proposals from interested small businesses. A small business can identify an appropriate topic it wants to pursue from these solicitations and, in response, propose a project for an SBIR award. The required format for submitting a proposal is different for each agency. Proposal selection also varies, though peer review of proposals on a competitive basis by experts in the field is typical. Each agency then selects the proposals that are found best to meet program selection criteria, and awards contracts or grants to the proposing small businesses.

As conceived in the 1982 Act, SBIR’s award-making process is structured in three phases at all agencies:

• Phase I awards essentially fund feasibility studies in which award winners undertake a limited amount of research aimed at establishing an idea’s scientific and commercial promise. Today, the legislation anticipates Phase I awards as high as $100,000.4

• Phase II awards are larger—typically about $750,000—and fund more extensive R&D to further develop the scientific and commercial promise of research ideas.

• Phase III. During this phase, companies do not receive additional funding from the SBIR program. Instead, award recipients should be obtaining additional funds from a procurement program at the agency that made the award, from private investors, or from the capital markets. The objective of this phase is to move the technology from the prototype stage to the marketplace.

Obtaining Phase III support is often the most difficult challenge for new firms to overcome. In practice, agencies have developed different approaches to facilitate SBIR grantees’ transition to commercial viability; not least among them are additional SBIR awards.

Previous NRC research has shown that firms have different objectives in applying to the program. Some want to demonstrate the potential of promising research but may not seek to commercialize it themselves. Others think they can fulfill agency research requirements more cost-effectively through the SBIR program than through the traditional procurement process. Still others seek a certification of quality (and the investments that can come from such recognition) as they push science-based products towards commercialization.5

1.3 SBIR REAUTHORIZATIONS

The SBIR program approached reauthorization in 1992 amidst continued concerns about the U.S. economy’s capacity to commercialize inventions. Finding that “U.S. technological performance is challenged less in the creation of new technologies than in their commercialization and adoption,” the National Academy of Sciences at the time recommended an increase in SBIR funding as a means to improve the economy’s ability to adopt and commercialize new technologies.6

Following this report, the Small Business Research and Development Enhancement Act (P.L. 102-564), which reauthorized the SBIR program until September 30, 2000, doubled the set-aside rate to 2.5 percent.7 This increase in the percentage of R&D funds allocated to the program was accompanied by a stronger emphasis on encouraging the commercialization of SBIR-funded technologies.8 Legislative language explicitly highlighted commercial potential as a criterion for awarding SBIR awards. For Phase I awards, Congress directed program administrators to assess whether projects have “commercial potential,” in addition to scientific and technical merit, when evaluating SBIR applications. The 1992 legislation mandated that program administrators consider the existence of second-phase funding commitments from the private sector or other non-SBIR sources when judging Phase II applications. Evidence of third-phase follow-on commitments, along with other indicators of commercial potential, was also to be sought. Moreover, the 1992 reauthorization directed that a small business’ record of commercialization be taken into account when evaluating its Phase II application.9

The Small Business Reauthorization Act of 2000 (P.L. 106-554) extended SBIR until September 30, 2008. It called for this assessment by the National Research Council of the broader impacts of the program, including those on employment, health, national security, and national competitiveness.10

1.4 STRUCTURE OF THE NRC STUDY

This NRC assessment of SBIR has been conducted in two phases. In the first phase, at the request of the agencies, a research methodology was developed by the NRC. This methodology was then reviewed and approved by an independent National Academies panel of experts.11 Information about the program was also gathered through interviews with SBIR program administrators and during two major conferences where SBIR officials were invited to describe program

3 These are the 1982 Small Business Innovation Development Act and the subsequent multi-year reauthorizations of the SBIR program in 1992 and 2000.

4 With the agreement of the Small Business Administration, which plays an oversight role for the program, this amount can be substantially higher in certain circumstances, e.g., drug development at NIH, and is often lower with smaller SBIR programs, e.g., EPA or the Department of Agriculture.

5 See Reid Cramer, “Patterns of Firm Participation in the Small Business Innovation Research Program in Southwestern and Mountain States,” in National Research Council, The Small Business Innovation Research Program: An Assessment of the Department of Defense Fast Track Initiative, Charles W. Wessner, ed., Washington, DC: National Academy Press, 2000.

6 See National Research Council, The Government Role in Civilian Technology: Building a New Alliance, Washington, DC: National Academy Press, 1992, p. 29.

7 For FY2003, this has resulted in a program budget of approximately $1.6 billion across all federal agencies, with the Department of Defense having the largest SBIR program at $834 million, followed by the National Institutes of Health at $525 million. The DoD SBIR program is made up of 10 participating components: Army, Navy, Air Force, Missile Defense Agency (MDA), Defense Advanced Research Projects Agency (DARPA), Chemical Biological Defense (CBD), Special Operations Command (SOCOM), Defense Threat Reduction Agency (DTRA), National Imagery and Mapping Agency (NIMA), and the Office of the Secretary of Defense (OSD). NIH counts 23 separate institutes and agencies making SBIR awards, many with multiple programs.

8 See Robert Archibald and David Finifter, “Evaluation of the Department of Defense Small Business Innovation Research Program and the Fast Track Initiative: A Balanced Approach,” in National Research Council, The Small Business Innovation Research Program: An Assessment of the Department of Defense Fast Track Initiative, op. cit., pp. 211-250.
9 A GAO report had found that agencies had not adopted a uniform method for weighing commercial potential in SBIR applications. See U.S. General Accounting Office, Federal Research: Evaluations of Small Business Innovation Research Can Be Strengthened, GAO/RCED-99-114, Washington, DC: U.S. General Accounting Office, 1999.

10 The current assessment is congruent with the Government Performance and Results Act (GPRA) of 1993. As characterized by the GAO, GPRA seeks to shift the focus of government decisionmaking and accountability away from a preoccupation with the activities that are undertaken—such as grants dispensed or inspections made—to a focus on the results of those activities.

11 National Research Council, An Assessment of the Small Business Innovation Research Program: Project Methodology, Washington, DC: The National Academies Press, 2004. The methodology report is available on the Web.

operations, challenges, and accomplishments.12 These conferences highlighted the important differences in each agency’s SBIR program’s goals, practices, and evaluations. The conferences also explored the challenges of assessing such a diverse range of program objectives and practices using common metrics.

The second phase of the NRC study implemented the approved research methodology. The Committee deployed multiple survey instruments and its researchers conducted case studies of a wide profile of SBIR firms. The Committee then evaluated the results and developed both agency-specific and overall findings and recommendations for improving the effectiveness of the SBIR program. The final report includes complete assessments for each of the five agencies and an overview of the program as a whole.

1.5 SBIR ASSESSMENT CHALLENGES

At its outset, the NRC’s SBIR study identified a series of assessment challenges that must be addressed. As discussed at the October 2002 conference that launched the study, the administrative flexibility found in the SBIR program makes it difficult to make cross-agency assessments. Although each agency’s SBIR program shares the common three-phase structure, the SBIR concept is interpreted uniquely at each agency. This flexibility is a positive attribute in that it permits each agency to adapt its SBIR program to the agency’s particular mission, scale, and working culture. For example, NSF operates its SBIR program differently than DoD because “research” is often coupled with procurement of goods and services at DoD but rarely at NSF. Programmatic diversity means that each agency’s SBIR activities must be understood in terms of their separate missions and operating procedures. This commendable diversity makes an assessment of the program as a whole more challenging.
A second challenge concerns the linear process of commercialization implied by the design of SBIR’s three-phase structure.13 In the linear model, illustrated in Figure 1-1, innovation begins with basic research supplying a steady stream of fresh and new ideas. Among these ideas, those that show technical feasibility become innovations. Such innovations, when further developed by firms, become marketable products driving economic growth.

As NSF’s Joseph Bordogna observed at the study’s initial conference, innovation almost never takes place through a protracted linear progression from

12 The opening conference on October 24, 2002, examined the program’s diversity and assessment challenges. For a published report of this conference, see National Research Council, SBIR: Program Diversity and Assessment Challenges, Charles W. Wessner, ed., Washington, DC: The National Academies Press, 2004. The second conference, held on March 28, 2003, was titled “Identifying Best Practice.” The conference provided a forum for the SBIR Program Managers from each of the five agencies in the study’s purview to describe their administrative innovations and best practices.

13 This view was echoed by Duncan Moore: “Innovation does not follow a linear model. It stops and starts.” National Research Council, SBIR: Program Diversity and Assessment Challenges, op. cit.

FIGURE 1-1 The linear model of innovation: basic research, applied research, development, commercialization.

research to development to market. Research and development drives technological innovation, which, in turn, opens up new frontiers in R&D. True innovation, Bordogna noted, can spur the search for new knowledge and create the context in which the next generation of research identifies new frontiers. This nonlinearity, illustrated in Figure 1-2, makes it difficult to rate the efficiency of the SBIR program. Inputs do not match up with outputs according to a simple function.

A third assessment challenge relates to the measurement of outputs and outcomes. Program realities can and often do complicate the task of data gathering. In some cases, for example, SBIR recipients receive a Phase I award from one agency and a Phase II award from another. In other cases, multiple SBIR awards may have been used to help a particular technology become sufficiently mature to reach the market. Also complicating matters is the possibility that for any particular grantee, an SBIR award may be only one among other federal and nonfederal sources of funding. Causality can thus be difficult, if not impossible, to establish. The task of measuring outcomes is made harder because companies that have garnered SBIR awards can also merge, fail, or change their name before a product reaches the market. In addition, principal investigators or other key individuals can change firms, carrying their knowledge of an SBIR project with them.
FIGURE 1-2 A feedback model of innovation. Basic research (quest for basic understanding: new knowledge, fundamental ideas) feeds applied research (application of knowledge to a specific subject), development (search for solutions to longer-term issues), and commercialization (goods and services), with feedback loops for unanticipated applications, basic and applied research needed for discovery and design, and market signals such as desired product alterations and cost/design trade-offs.

A technology developed using SBIR funds may eventually achieve commercial success at an entirely different company than that which received the initial SBIR award.

Complications plague even the apparently straightforward task of assessing commercial success. For example, research enabled by a particular SBIR award may take on commercial relevance in new, unanticipated contexts. At the launch conference, Duncan Moore, former Associate Director of Technology at the White House Office of Science and Technology Policy (OSTP), cited the case of SBIR-funded research in gradient index optics that was initially considered a commercial failure when an anticipated market for its application did not emerge. Years later, however, products derived from the research turned out to be a major commercial success.14 Today’s apparent dead end can be a lead to a major achievement tomorrow. Lacking clairvoyance, analysts cannot anticipate or measure such potential SBIR benefits.

Gauging commercialization is also difficult when the product in question is destined for public procurement. The challenge is to develop a satisfactory measure of how useful an SBIR-funded innovation has been to an agency mission. A related challenge is determining how central (or even useful) SBIR awards have proved in developing a particular technology or product. In some cases, the Phase I award can meet the agency’s need—completing the research with no further action required. In other cases, surrogate measures are often required. For example, one way of measuring commercialization success is to count the products developed using SBIR funds that are procured by an agency such as DoD. In practice, however, large procurements from major suppliers are typically easier to track than products from small suppliers such as SBIR firms. Moreover, successful development of a technology or product does not always translate into successful “uptake” by the procuring agency.
Often, the absence of procurement may have little to do with the product’s quality or the potential contribution of SBIR.

Understanding failure is equally challenging. By its very nature, an early-stage program such as SBIR should anticipate a high failure rate. The causes of failure are many. The most straightforward, of course, is technical failure, where the research objectives of the award are not achieved. In some cases, the project can be technically successful but a commercial failure. This can occur when a procuring agency changes its mission objectives and hence its procurement priorities. NASA’s new Mars Mission is one example of a mission shift that may result in the cancellation of programs involving SBIR awards to make room for new agency priorities. Cancelled weapons system programs at the Department of Defense can have similar effects. Technologies procured through SBIR may also fail in the transition to acquisition. Some technology developments by small businesses do not survive the long lead times created by complex testing and certification procedures required by the Department of Defense. Indeed, small firms encounter considerable difficulty in penetrating the “procurement thicket” that characterizes defense acquisition.15 In addition to complex federal acquisition procedures, there are strong disincentives for high-profile projects to adopt untried technologies. Technology transfer in commercial markets can be equally difficult. A failure to transfer to commercial markets can occur even when a technology is technically successful if the market is smaller than anticipated, competing technologies emerge or are more competitive than expected, or the product is not adequately marketed. Understanding and accepting the varied sources of project failure in the high-risk, high-reward environment of cutting-edge R&D is a challenge for analysts and policy makers alike.

This raises the issue concerning the standard on which SBIR programs should be evaluated. An assessment of SBIR must take into account the expected distribution of successes and failures in early-stage finance. As a point of comparison, Gail Cassell, Vice President for Scientific Affairs at Eli Lilly, has noted that only 1 in 10 innovative products in the biotechnology industry will turn out to be a commercial success.16 Similarly, venture capital funds often achieve considerable commercial success on only two or three out of twenty or more investments.17

In setting metrics for SBIR projects, therefore, it is important to have a realistic expectation of the success rate for competitive awards to small firms investing in promising but unproven technologies. Similarly, it is important to have some understanding of what can be reasonably expected—that is, what constitutes “success” for an SBIR award, and some understanding of the constraints and opportunities successful SBIR awardees face in bringing new products to market.

14 Duncan Moore, “Turning Failure into Success,” in National Research Council, SBIR: Program Diversity and Assessment Challenges, op. cit., p. 94.
15 For a description of the challenges small businesses face in defense procurement, the subject of a June 14, 2005, NRC conference and one element of the congressionally requested assessment of SBIR, see National Research Council, SBIR and the Phase III Challenge of Commercialization, Charles W. Wessner, ed., Washington, DC: The National Academies Press, 2007. Relatedly, see remarks by Kenneth Flamm on procurement barriers, including contracting overhead and small-firm disadvantages in lobbying, in National Research Council, SBIR: Program Diversity and Assessment Challenges, op. cit., pp. 63-67.

16 Gail Cassell, “Setting Realistic Expectations for Success,” in National Research Council, SBIR: Program Diversity and Assessment Challenges, op. cit., p. 86.

17 See John H. Cochrane, “The Risk and Return of Venture Capital,” Journal of Financial Economics, 75(1), 2005:3-52. Drawing on the VentureOne database, Cochrane plots a histogram of net venture capital returns on investments that “shows an extraordinary skewness of returns. Most returns are modest, but there is a long right tail of extraordinary good returns. 15% of the firms that go public or are acquired give a return greater than 1,000%! It is also interesting how many modest returns there are. About 15% of returns are less than 0, and 35% are less than 100%. An IPO or acquisition is not a guarantee of a huge return. In fact, the modal or ‘most probable’ outcome is about a 25% return.” See also Paul A. Gompers and Josh Lerner, “Risk and Reward in Private Equity Investments: The Challenge of Performance Assessment,” Journal of Private Equity, 1 (Winter 1997):5-12. Steven D. Carden and Olive Darragh, “A Halo for Angel Investors,” The McKinsey Quarterly, 1, 2004, also show a similar skew in the distribution of returns for venture capital portfolios.

From the management perspective, the rate of success also raises the question of appropriate expectations and desired levels of risk-taking. A portfolio that always succeeds would not be investing in high-risk, high-payoff projects that push the technology envelope. A very high rate of “success” would, thus, paradoxically suggest an inappropriate use of the program. Understanding the nature of success and the appropriate benchmarks for a program with this focus is therefore important to understanding the SBIR program and the approach of this study.

1.6 STRUCTURE OF THIS REPORT

This report sets out the Committee’s assessment of the SBIR program at the National Institutes of Health. The Committee’s detailed findings and recommendations are presented in the next chapter. The Committee finds that the NIH SBIR program largely meets its legislative objectives and makes recommendations to improve program outcomes. Chapter 3 reviews awards made by NIH. It analyzes data supplied by NIH, reflecting on both the advantages and disadvantages of NIH data-gathering methods. Chapter 4 looks at the outcomes of the NIH SBIR program, including commercial sales and employment effects. Chapter 5 examines how the SBIR program at NIH is managed, including an explanation of the NIH award cycle, outreach efforts to attract the best applicants, and initiatives to support the commercialization of SBIR-funded technologies. Appendix A presents program data collected by NIH, DoD, and the NRC. Appendixes B and C provide the template and results of the NRC Firm Survey and surveys of SBIR Phase I and Phase II projects. Appendix D presents illustrative case studies of firms participating in the NIH SBIR program. Finally, Appendix E provides a reference bibliography.