
1 Introduction

1.1 SBIR Creation and Assessment

Created in 1982 by the Small Business Innovation Development Act, the Small Business Innovation Research (SBIR) program was designed to stimulate technological innovation among small private-sector businesses while providing the government cost-effective new technical and scientific solutions to challenging mission problems. SBIR was also designed to help stimulate the U.S. economy by encouraging small businesses to market innovative technologies in the private sector.

[Note: The SBIR legislation drew from a growing body of evidence, starting in the late 1970s and accelerating in the 1980s, which indicated that small businesses were assuming an increasingly important role in both innovation and job creation. This evidence gained new credibility with empirical analysis by Zoltan Acs and David Audretsch of the U.S. Small Business Innovation Database, which confirmed the increased importance of small firms in generating technological innovations and their growing contribution to the U.S. economy. See Zoltan Acs and David Audretsch, Innovation and Small Firms, Cambridge, MA: The MIT Press, 1990.]

As the SBIR program approached its twentieth year of existence, the U.S. Congress requested that the National Research Council (NRC) of the National Academies conduct a "comprehensive study of how the SBIR program has stimulated technological innovation and used small businesses to meet Federal research and development needs," and make recommendations on improvements to the program (see Public Law 106-554, Appendix I—H.R. 5667, Section 108). Mandated as a part of SBIR's renewal in late 2000, the NRC study has assessed the SBIR program as administered at the five federal agencies that together account for 96 percent of SBIR program expenditures. The agencies are, in decreasing order of program size: the Department of Defense (DoD), the National Institutes of Health (NIH), the National Aeronautics and Space Administration (NASA), the Department of Energy (DoE), and the National Science Foundation (NSF).

The NRC Committee assessing the SBIR program was not asked to consider whether SBIR should exist; Congress has affirmatively decided this question on three occasions (in the 1982 Small Business Innovation Development Act and the subsequent multiyear reauthorizations of the SBIR program in 1992 and 2000). Rather, the Committee was charged with providing assessment-based findings to improve public understanding of the operations, achievements, and challenges of the program, as well as recommendations to improve the program's effectiveness.

1.2 SBIR Program Structure

Eleven federal agencies are currently required to set aside 2.5 percent of their extramural research and development budgets exclusively for SBIR contracts. Each year these agencies identify various R&D topics, representing scientific and technical problems requiring innovative solutions, for pursuit by small businesses under the SBIR program. These topics are bundled together into individual agency "solicitations"—publicly announced requests for SBIR proposals from interested small businesses. A small business can identify an appropriate topic it wants to pursue from these solicitations and, in response, propose a project for an SBIR grant. The required format for submitting a proposal is different for each agency. Proposal selection also varies, though peer review of proposals on a competitive basis by experts in the field is typical. Each agency then selects the proposals that best meet its program selection criteria and awards contracts or grants to the proposing small businesses.

As conceived in the 1982 Act, SBIR's grant-making process is structured in three phases:

• Phase I grants essentially fund feasibility studies in which award winners undertake a limited amount of research aimed at establishing an idea's scientific and commercial promise. Today, the legislative guidance anticipates normal Phase I grants of around $100,000.

• Phase II grants are larger—typically about $750,000—and fund more extensive R&D to develop the scientific and commercial promise of research ideas.

• Phase III. During this phase, companies do not receive further SBIR awards. Instead, grant recipients are expected to obtain additional funds from a procurement program at the agency that made the award, from private investors, or from the capital markets. The objective of this phase is to move the technology from the prototype stage to the marketplace.

[Note: With the agreement of the Small Business Administration, which plays an oversight role for the program, award amounts can be higher in certain circumstances (e.g., drug development at NIH) and are often lower in smaller SBIR programs (e.g., at EPA or the Department of Agriculture).]

Obtaining Phase III support is often the most difficult challenge for new firms to overcome. In practice, agencies have developed different approaches to facilitate SBIR grantees' transition to commercial viability, not least among them additional SBIR grants.

Previous NRC research has shown that firms have different objectives in applying to the program. Some want to demonstrate the potential of promising research but may not seek to commercialize it themselves. Others think they can fulfill agency research requirements more cost-effectively through the SBIR program than through the traditional procurement process. Still others seek a certification of quality (and the private investments that can come from such recognition) as they push science-based products toward commercialization. (See Reid Cramer, "Patterns of Firm Participation in the Small Business Innovation Research Program in Southwestern and Mountain States," in National Research Council, The Small Business Innovation Research Program: An Assessment of the Department of Defense Fast Track Initiative, Charles W. Wessner, ed., Washington, D.C.: National Academy Press, 2000.)

1.3 SBIR Reauthorizations

The SBIR program approached reauthorization in 1992 amidst continued concerns about the U.S. economy's capacity to commercialize inventions. Finding that "U.S. technological performance is challenged less in the creation of new technologies than in their commercialization and adoption," the National Academy of Sciences at the time recommended an increase in SBIR funding as a means to improve the economy's ability to adopt and commercialize new technologies. (See National Research Council, The Government Role in Civilian Technology: Building a New Alliance, Washington, D.C.: National Academy Press, 1992, p. 29.)

Following this report, the Small Business Research and Development Enhancement Act (P.L. 102-564), which reauthorized the SBIR program until September 30, 2000, doubled the set-aside rate to 2.5 percent. [Note: For fiscal year 2005, this resulted in a program budget of approximately $1.85 billion across all federal agencies, with the Department of Defense having the largest SBIR program at $943 million, followed by the National Institutes of Health (NIH) at $562 million. The DoD SBIR program is made up of ten participating components: Army, Navy, Air Force, Missile Defense Agency (MDA), Defense Advanced Research Projects Agency (DARPA), Chemical Biological Defense (CBD), Special Operations Command (SOCOM), Defense Threat Reduction Agency (DTRA), National Imagery and Mapping Agency (NIMA), and the Office of the Secretary of Defense (OSD). NIH counts 23 separate institutes and agencies making SBIR awards, many with multiple programs.] This increase in the percentage of R&D funds allocated to the program was accompanied by a stronger emphasis on encouraging the commercialization of SBIR-funded technologies.

Legislative language explicitly highlighted commercial potential as a criterion for awarding SBIR grants. For Phase I awards, Congress directed program administrators to assess whether projects have "commercial potential," in addition to scientific and technical merit, when evaluating SBIR applications. (A GAO report had found that agencies had not adopted a uniform method for weighing commercial potential in SBIR applications. See U.S. General Accounting Office, Federal Research: Evaluations of Small Business Innovation Research Can Be Strengthened, GAO/RCED-99-114, Washington, D.C.: U.S. General Accounting Office, 1999.) The 1992 legislation mandated that program administrators consider the existence of second-phase funding commitments from the private sector or other non-SBIR sources when judging Phase II applications. Evidence of third-phase follow-on commitments, along with other indicators of commercial potential, was also to be sought. Moreover, the 1992 reauthorization directed that a small business's record of commercialization be taken into account when evaluating its Phase II application. (See Robert Archibald and David Finifter, "Evaluation of the Department of Defense Small Business Innovation Research Program and the Fast Track Initiative: A Balanced Approach," in National Research Council, The Small Business Innovation Research Program: An Assessment of the Department of Defense Fast Track Initiative, op. cit., pp. 211-250.)

The Small Business Reauthorization Act of 2000 (P.L. 106-554) extended SBIR until September 30, 2008. It called for this assessment by the National Research Council of the broader impacts of the program, including those on employment, health, national security, and national competitiveness.[10]

[10] The current assessment is congruent with the Government Performance and Results Act (GPRA) of 1993: <http://govinfo.library.unt.edu/npr/library/misc/s20.html>. As characterized by the GAO, GPRA seeks to shift the focus of government decision-making and accountability away from a preoccupation with the activities that are undertaken—such as grants dispensed or inspections made—toward the results of those activities. See <http://www.gao.gov/new.items/gpra/gpra.htm>.

1.4 Structure of the NRC Study

This NRC assessment of SBIR has been conducted in two phases. In the first phase, at the request of the agencies, a formal report on research methodology was developed by the NRC. This methodology was then reviewed and approved by an independent National Academies panel of experts.[11] Information about the program was also gathered through interviews with SBIR program administrators and during four major conferences where SBIR officials were invited to describe program operations, challenges, and accomplishments.[12] These conferences highlighted the important differences in the goals, practices, and evaluations of each agency's SBIR program.

[11] The SBIR methodology report is available on the Web at <http://www7.nationalacademies.org/sbir/SBIR_Methodology_Report.pdf>.

[12] The opening conference on October 24, 2002, examined the program's diversity and assessment challenges. For a published report of this conference, see National Research Council, SBIR: Program Diversity and Assessment Challenges, Charles W. Wessner, ed., Washington, D.C.: The National Academies Press, 2004. A second conference, held on March 28, 2003, was titled "Identifying Best Practice" and provided a forum for the SBIR Program Managers from each of the five agencies in the study's purview to describe their administrative innovations and best practices. A conference on June 14, 2005, focused on the commercialization of SBIR-funded innovations at DoD and NASA; see National Research Council, SBIR and the Phase III Challenge of Commercialization, Charles W. Wessner, ed., Washington, D.C.: The National Academies Press, 2007. A final conference, held on April 7, 2006, examined the role of state programs in leveraging SBIR to advance local and regional economic growth.

The conferences also explored the challenges of assessing such a diverse range of program objectives and practices using common metrics.

The second phase of the NRC study implemented the approved research methodology. The Committee deployed multiple survey instruments, and its researchers conducted case studies of a wide profile of SBIR firms. The Committee then evaluated the results and developed both agency-specific and overall findings and recommendations for improving the effectiveness of the SBIR program. The final report includes complete assessments for each of the five agencies and an overview of the program as a whole.

1.5 SBIR Assessment Challenges

At its outset, the NRC's SBIR study identified a series of assessment challenges that must be addressed. As discussed at the October 2002 conference that launched the study, the administrative flexibility found in the SBIR program makes cross-agency comparisons difficult. Although each agency's SBIR program shares the common three-phase structure, the SBIR concept is interpreted uniquely at each agency. This flexibility is a positive attribute in that it permits each agency to adapt its SBIR program to the agency's particular mission, scale, and working culture. For example, NSF operates its SBIR program differently than DoD because "research" is often coupled with procurement of goods and services at DoD but rarely at NSF. Programmatic diversity means that each agency's SBIR activities must be understood in terms of its separate mission and operating procedures. This diversity is commendable but, operationally, makes the task of assessing the program more challenging.

A second challenge concerns the linear process of commercialization implied by the design of SBIR's three-phase structure.[13] In the linear model, illustrated in Figure 1-1, innovation begins with basic research supplying a steady stream of fresh ideas. Among these ideas, those that show technical feasibility become innovations. Such innovations, when further developed by firms, become marketable products driving economic growth. As NSF's Joseph Bordogna observed at the launch conference, innovation almost never takes place through a protracted linear progression from research to development to market.[14]

[13] This view was echoed by Duncan Moore: "Innovation does not follow a linear model. It stops and starts." National Research Council, SBIR: Program Diversity and Assessment Challenges, op. cit.

[14] While few hold this process of linear innovation to be literally true, the concept nonetheless survives—for example, in retrospective accounts of the path taken by a particular innovation.

[FIGURE 1-1 The linear model of innovation: Basic Research → Applied Research → Development → Commercialization.]

[FIGURE 1-2 A feedback model of innovation. Basic research (the quest for fundamental understanding and new knowledge), applied research (application of knowledge to a specific subject, prototyping), development (search for solutions to longer-term issues, development of goods and services), and commercialization are linked by feedback loops: unanticipated applications of discoveries prompt new basic research, new product characteristics call for further applied research, and market signals and technical challenges shape desired product characteristics and cost/design trade-offs.]

Research and development drives technological innovation, which, in turn, opens up new frontiers in R&D. True innovation, Bordogna noted, can spur the search for new knowledge and create the context in which the next generation of research identifies new frontiers. This nonlinearity, illustrated in Figure 1-2, makes it difficult to rate the efficiency of the SBIR program: inputs do not match up with outputs according to a simple function. Figure 1-2, while more complex than Figure 1-1, is itself a highly simplified model; feedback loops can, for example, stretch backwards or forwards by more than one level.

A third assessment challenge relates to the measurement of outputs and outcomes. Program realities can and often do complicate the task of data gathering. In some cases, for example, SBIR recipients receive a Phase I award from one agency and a Phase II award from another. In other cases, multiple SBIR awards may have been used to help a particular technology become sufficiently mature to reach the market. Also complicating matters is the possibility that, for any particular grantee, an SBIR award may be only one among several federal and non-federal sources of funding. Causality can thus be difficult, if not impossible, to establish. The task of measuring outcomes is made harder because companies that have garnered SBIR awards can also merge, fail, or change their names before a product reaches the market.

In addition, principal investigators or other key individuals can change firms, carrying their knowledge of an SBIR project with them. A technology developed using SBIR funds may eventually achieve commercial success at an entirely different company than the one that received the initial SBIR award.

Complications plague even the apparently straightforward task of assessing commercial success. For example, research enabled by a particular SBIR award may take on commercial relevance in new, unanticipated contexts. At the launch conference, Duncan Moore, former Associate Director of Technology at the White House Office of Science and Technology Policy (OSTP), cited the case of SBIR-funded research in gradient index optics that was initially considered a commercial failure when an anticipated market for its application did not emerge. Years later, however, products derived from the research turned out to be a major commercial success.[15] Today's apparent dead end can be a lead to a major achievement tomorrow. Lacking clairvoyance, analysts cannot anticipate or measure such potential SBIR benefits.

[15] Duncan Moore, "Turning Failure into Success," in National Research Council, SBIR: Program Diversity and Assessment Challenges, op. cit., p. 94.

Gauging commercialization is also difficult when the product in question is destined for public procurement. The challenge is to develop a satisfactory measure of how useful an SBIR-funded innovation has been to an agency mission. A related challenge is determining how central (or even useful) SBIR awards have proved in developing a particular technology or product. In some cases, the Phase I award can meet the agency's need, completing the research with no further action required. In other cases, surrogate measures are required. For example, one way of measuring commercialization success is to count the products developed using SBIR funds that are procured by an agency such as DoD. In practice, however, large procurements from major suppliers are typically easier to track than products from small suppliers such as SBIR firms. Moreover, successful development of a technology or product does not always translate into successful "uptake" by the procuring agency. Often, the absence of procurement may have little to do with the product's quality or the potential contribution of SBIR.

Understanding failure is equally challenging. By its very nature, an early-stage program such as SBIR should anticipate a high failure rate. The causes of failure are many. The most straightforward, of course, is technical failure, where the research objectives of the award are not achieved. In some cases, the project can be a technical success but a commercial failure. This can occur when a procuring agency changes its mission objectives and hence its procurement priorities. NASA's new Mars Mission is one example of a mission shift that may result in the cancellation of programs involving SBIR awards to make room for new agency priorities.

Cancelled weapons system programs at the Department of Defense can have similar effects. Technologies procured through SBIR may also fail in the transition to acquisition. Some technology developments by small businesses do not survive the long lead times created by the complex testing and certification procedures required by the Department of Defense. Indeed, small firms encounter considerable difficulty in penetrating the "procurement thicket" that characterizes defense acquisition.[16] In addition to complex federal acquisition procedures, there are strong disincentives for high-profile projects to adopt untried technologies.

Technology transfer in commercial markets can be equally difficult. A failure to transfer to commercial markets can occur even when a technology is technically successful: the market may be smaller than anticipated, competing technologies may emerge or prove more competitive than expected, the technology may not be cost competitive, or the product may not be adequately marketed. Understanding and accepting the varied sources of project failure in the high-risk, high-reward environment of cutting-edge R&D is a challenge for analysts and policy makers alike.

This raises the issue of the standard against which SBIR programs should be evaluated. An assessment of SBIR must take into account the expected distribution of successes and failures in early-stage finance. As a point of comparison, Gail Cassell, Vice President for Scientific Affairs at Eli Lilly, has noted that only one in ten innovative products in the biotechnology industry will turn out to be a commercial success.[17] Similarly, venture capital funds often achieve considerable commercial success on only two or three out of twenty or more investments.[18]

[16] For a description of the challenges small businesses face in defense procurement, the subject of a June 14, 2005, NRC conference and one element of the congressionally requested assessment of SBIR, see National Research Council, SBIR and the Phase III Challenge of Commercialization, op. cit. Relatedly, see remarks by Kenneth Flamm on procurement barriers, including contracting overhead and small firms' disadvantages in lobbying, in National Research Council, SBIR: Program Diversity and Assessment Challenges, op. cit., pp. 63-67.

[17] Gail Cassell, "Setting Realistic Expectations for Success," ibid., p. 86.

[18] See John H. Cochrane, "The Risk and Return of Venture Capital," Journal of Financial Economics, 75(1):3-52, 2005. Drawing on the VentureOne database, Cochrane plots a histogram of net venture capital returns on investments that "shows an extraordinary skewness of returns. Most returns are modest, but there is a long right tail of extraordinary good returns. Fifteen percent of the firms that go public or are acquired give a return greater than 1,000 percent! It is also interesting how many modest returns there are. About 15 percent of returns are less than 0, and 35 percent are less than 100 percent. An IPO or acquisition is not a guarantee of a huge return. In fact, the modal or 'most probable' outcome is about a 25% return." See also Paul A. Gompers and Josh Lerner, "Risk and Reward in Private Equity Investments: The Challenge of Performance Assessment," Journal of Private Equity, 1(Winter 1997):5-12. Steven D. Carden and Olive Darragh, "A Halo for Angel Investors," The McKinsey Quarterly, 1, 2004, also show a similar skew in the distribution of returns for venture capital portfolios.
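The portfolio arithmetic behind that observation is worth making concrete. The following is a minimal sketch, with invented illustrative numbers rather than data from the report or the studies cited above, of how a fund can earn a healthy overall return even when most of its individual investments fail:

```python
# Hypothetical 20-investment early-stage portfolio (illustrative numbers only,
# not data from the report or from the studies cited above). Each entry is the
# total return multiple on one investment: 1.0 = money back, 0.0 = total loss.
returns = [0.0] * 10 + [0.5] * 5 + [1.2, 1.5] + [4.0, 8.0, 15.0]

invested_per_deal = 1.0  # normalize each investment to one unit of capital
total_invested = invested_per_deal * len(returns)
total_returned = sum(r * invested_per_deal for r in returns)

losers = sum(1 for r in returns if r < 1.0)  # deals that failed to return capital
print(f"deals losing money: {losers} of {len(returns)}")
print(f"portfolio multiple: {total_returned / total_invested:.2f}x")
# Three outsized winners (4x, 8x, 15x) pull the portfolio to about 1.6x overall
# even though 15 of the 20 deals failed to return their capital.
```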

In setting metrics for SBIR projects, therefore, it is important to have a realistic expectation of the success rate for competitive awards to small firms investing in promising but unproven technologies. Similarly, it is important to have some understanding of what can reasonably be expected—that is, what constitutes "success" for an SBIR award—and some understanding of the constraints and opportunities successful SBIR awardees face in bringing new products to market. From the management perspective, the rate of success also raises the question of appropriate expectations and desired levels of risk taking. A portfolio that always succeeds would not be investing in high-risk, high-payoff projects that push the technology envelope. A very high rate of "success" would thus, paradoxically, suggest an inappropriate use of the program. Understanding the nature of success and the appropriate benchmarks for a program with this focus is therefore important to understanding the SBIR program and the approach of this study.

1.6 Assessing SBIR at the Department of Energy (DoE)

In gathering and analyzing the data to assess the SBIR program at the Department of Energy, the Committee drew on the following set of research questions:

• How successful has the DoE SBIR program been in commercializing technologies supported by Phase I and Phase II awards (and what factors have contributed to or inhibited this level of commercialization)?

• To what extent has the DoE SBIR program supported DoE's mission (and what factors have contributed to or inhibited this level of support)?

• To what extent has the DoE SBIR program stimulated innovation?

• How well has the DoE SBIR program encouraged small firms and supported the growth and development of woman- and minority-owned businesses?

• How effective has DoE's management of the SBIR program been (and how might this management be improved)?

1.6.1 Surveys of DoE SBIR Award-recipient Companies

Original data gathered by the research team in support of the NRC study of the DoE SBIR program included: a survey of DoE Phase II award-recipient firms; a survey of DoE Phase I award-recipient firms that did not also receive a Phase II award; a survey of DoE technical staff involved in the SBIR program; numerous interviews with DoE personnel directly involved in administering the SBIR program; the assessment and analysis of data provided by DoE's SBIR staff; and ten company case studies.

The NRC Phase II Survey

In spring 2005, the NRC administered a survey of Phase II SBIR projects across agencies as part of its congressionally mandated evaluation of the SBIR program.

[FIGURE 1-3 Project start dates for respondents in the NRC study: percentage distributions of year founded and year of first Phase II award, grouped as 1900-1984, 1985-1992, 1993-1997, and 1998-present.]

The survey targeted a sample of Phase II awards made through 2001. A large majority of Phase II awards would have been completed by the 2005 survey date, and at least some commercialization efforts could have been initiated.

There may be some biases in these data. Projects from firms with multiple awards were underrepresented in the sample, because such firms could not be expected to complete a questionnaire for each of possibly numerous awards received; but they may have been overrepresented in the responses, because they might be more committed to the SBIR program. Nearly 40 percent of respondents began Phase I efforts after 1998, partly because the number of Phase I awards increased starting in the late 1990s, and partly because winners from more distant years are harder to reach: small businesses regularly cease operations, staff with knowledge of SBIR awards leave, and institutional knowledge erodes.

For DoE, the sample size of Phase II projects targeted was 439. One hundred fifty-seven respondents provided information on Phase II projects awarded by DoE, a response rate of approximately 36 percent. The response was considered adequate for drawing inferences.

The NRC Phase I Survey

The Committee conducted a second recipient survey in an attempt to determine the impact of Phase I awards that did not go on to Phase II. The original sample for this Phase I study was the 2,005 DoE Phase I awards made from 1992 through 2001. Valid responses were received from 155 DoE Phase I projects that did not advance to Phase II.
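The two response rates quoted for the Phase II survey here and in Box 1-1 below (approximately 36 percent and 47 percent) use different bases: the full target sample and the contactable firms, respectively. A minimal sketch of that arithmetic, using only the counts given in the text:

```python
# Response-rate arithmetic for the NRC Phase II Survey of DoE projects.
# Counts are taken from the text; the two quoted rates differ only in the base.
targeted = 439      # Phase II projects in the DoE survey sample
contactable = 335   # firms that could actually be reached (see Box 1-1)
responded = 157     # projects for which responses were received

raw_rate = responded / targeted          # ~= 0.358, "approximately 36 percent"
adjusted_rate = responded / contactable  # ~= 0.469, the "47 percent" figure

print(f"rate over full sample:       {raw_rate:.1%}")
print(f"rate over contactable firms: {adjusted_rate:.1%}")
```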

Survey of DoE Project Managers

The technical project managers of individual SBIR projects can provide unique perspectives on the SBIR program. Project managers were surveyed electronically at three agencies: DoD, DoE, and NASA. The Project Manager Survey was based on Phase II projects awarded during the study period (1992-2001 inclusive). Project managers for these projects were identified with the help of the agencies. As expected, there was significant attrition (because of the absence of email addresses, the inability to identify the project manager, the project manager having left the agency or died, etc.). The three agencies were able to locate the names and email addresses of project managers for 2,584 projects. Of these, responses were received for 513 projects (a 20 percent response rate), of which 84 were for DoE projects (a 16 percent response rate). The number of individuals responding was smaller than the number of projects because some project managers had oversight of multiple projects.

1.6.2 Case Studies

Case studies can provide valuable insights into the viewpoints and concerns of the small businesses that participate in SBIR, insights that cannot be derived from statistical analysis. While all of the companies selected for case study won SBIR awards from the Department of Energy, most also won awards from other agencies. The interviews concerned their SBIR experience as a whole and were not limited to the DoE program.

Candidate case study firms were selected from four lists: top recipients of SBIR awards from the Department of Energy; DoE SBIR awardees who received R&D 100 awards; DoE-identified "success stories"; and firms with large commercial sales as reported to the DoE SBIR program. From a list of 34 candidate firms, 10 were selected, including firms from a variety of locations, across a range of founding dates, having received different numbers of SBIR awards, and representing different technological domains.

The case study interviews focused on learning how the companies use the SBIR program: the extent to which SBIR is important to the company's survival and growth; whether and how they intend to commercialize SBIR technology; whether and how the receipt of multiple awards influences their ability to commercialize; what challenges they have faced in the commercialization process; in what ways they see the SBIR program serving the needs of technology entrepreneurs; and how they believe the program can be improved. In addition, we sought to learn how the companies were affected by the agencies' administration of the program and what suggestions the companies would have for improving program administration.

The case study companies range in age from 7 to 44 years, in number of employees from 5 to 105, and in number of SBIR awards from one to over a hundred. They cover seven states, both rural and urban areas, and present a variety of approaches to the SBIR program and to commercialization. The case study reports can be found in Appendix D.

Box 1-1 Multiple Sources of Bias in Survey Response

Large innovation surveys involve multiple sources of bias that can skew the results in both directions. Some common survey biases are noted below.[a]

• Successful and more recently funded firms are more likely to respond. Research by Link and Scott demonstrates that the probability of obtaining research project information by survey decreases for less recently funded projects and increases with the size of the award.[b] Nearly 40 percent of respondents in the NRC Phase II Survey began Phase I efforts after 1998, partly because the number of Phase I awards increased starting in the mid-1990s, and partly because winners from more distant years are harder to reach. They are harder to reach as time goes on because small businesses regularly cease operations, are acquired, merge, or lose staff with knowledge of SBIR awards.

• Success is self-reported. Self-reporting can be a source of bias, although the dimensions and direction of that bias are not necessarily clear. In any case, policy analysis has a long history of relying on self-reported performance measures to represent market-based performance measures. Participants in such retrospective analyses are believed to be able to consider a broader set of allocation options, thus making the evaluation more realistic than data based on third-party observation.[c] In short, company founders and/or principal investigators are in many cases simply the best source of information available.

• The survey sampled projects at firms with multiple awards. Projects from firms with multiple awards were underrepresented in the sample, because such firms could not be expected to complete a questionnaire for each of dozens or more awards.

• Failed firms are difficult to contact. Survey experts point to an "asymmetry" in their ability to include failed firms in follow-up surveys when the firms no longer exist.[d] It is worth noting that one cannot necessarily infer that the SBIR project failed; what is known is only that the firm no longer exists.

• Not all successful projects are captured. For similar reasons, the NRC Phase II Survey could not include ongoing results from successful projects in firms that merged or were acquired before and/or after commercialization of the project's technology. The survey also did not capture projects of firms that did not respond to the NRC invitation to participate in the assessment.

• Some firms may not want to fully acknowledge SBIR's contribution to project success. Some firms may be unwilling to acknowledge that they received important benefits from participating in public programs, for a variety of reasons. For example, some may understandably attribute success exclusively to their own efforts.

• Commercialization lag. While the NRC Phase II Survey broke new ground in data collection, the amount of sales made—and indeed the number of projects that generate sales—is inevitably undercounted in a snapshot survey taken at a single point in time. Based on successive data sets collected from NIH SBIR award recipients, it is estimated that total sales from all responding projects may be on the order of 50 percent greater than can be captured in a single survey.[e] This underscores the importance of follow-on research based on the now-established survey methodology.

[FIGURE 1-B-1 Survey bias due to commercialization lag: project sales (millions of dollars) plotted against years after the Phase II award, with the survey snapshot taken before the sales trajectory has fully played out.]

These sources of bias provide a context for understanding the response rates to the NRC Phase I and Phase II Surveys conducted for this study. For the NRC Phase II Survey, 157 of the 335 DoE firms that could be contacted (out of a sample size of 439) responded, representing a 47 percent response rate. The NRC Phase I Survey captured 14 percent of the 2,005 awards made by all five agencies over the period 1992 to 2001. See Appendixes B and C for additional information on the surveys.

[a] For a technical explanation of the sample approaches and issues related to the NRC surveys, see Appendix B.

[b] Albert N. Link and John T. Scott, Evaluating Public Research Institutions: The U.S. Advanced Technology Program's Intramural Research Initiative, London: Routledge, 2005.

[c] While economic theory is formulated on what are called "revealed preferences," meaning that individuals and firms reveal how they value scarce resources by how they allocate those resources within a market framework, quite often expressed preferences are a better source of information, especially from an evaluation perspective. Strict adherence to a revealed preference paradigm could lead to misguided policy conclusions, because the paradigm assumes that all policy choices are known and understood at the time that an individual or firm reveals its preferences and that all relevant markets for such preferences are operational. See (1) Gregory G. Dess and Donald W. Beard, "Dimensions of Organizational Task Environments," Administrative Science Quarterly, 29:52-73, 1984; and (2) Albert N. Link and John T. Scott, Public Accountability: Evaluating Technology-Based Institutions, Norwell, MA: Kluwer Academic Publishers, 1998.

[d] Albert N. Link and John T. Scott, Evaluating Public Research Institutions: The U.S. Advanced Technology Program's Intramural Research Initiative, op. cit.

[e] Data from NIH indicate that a subsequent survey taken two years later would reveal very substantial increases in both the percentage of firms reaching the market and the amount of sales per project. See National Research Council, An Assessment of the Small Business Innovation Research Program at the National Institutes of Health, Charles W. Wessner, ed., Washington, D.C.: The National Academies Press, 2008.
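A minimal sketch of the commercialization-lag effect pictured in Figure 1-B-1 follows. The sales trajectory below is invented purely for illustration; only the qualitative point, that a snapshot survey misses sales arriving after it is taken, comes from the box above:

```python
# Hypothetical annual sales (in millions of dollars) for one project in the
# years after its Phase II award. The ramp shape echoes Figure 1-B-1, but the
# numbers are invented for illustration, not taken from the report.
annual_sales = [0, 1, 2, 4, 6, 8, 8, 7, 5, 3]  # years 1 through 10

survey_year = 7  # a one-time snapshot survey taken seven years after the award
captured = sum(annual_sales[:survey_year])  # sales visible to the survey
eventual = sum(annual_sales)                # sales over the full trajectory

print(f"captured by snapshot: ${captured}M of ${eventual}M eventually realized")
print(f"eventual sales exceed the snapshot by {eventual / captured - 1:.0%}")
# Here the snapshot misses roughly half of eventual sales, the same order as
# the NIH-based estimate (~50 percent) cited in the commercialization-lag
# bullet of Box 1-1.
```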

1.7 Structure of the Report

The report is presented in eight chapters. Following this introduction, Chapter 2 lists the Committee's assessment findings and recommendations for improving the operation of the SBIR program at the Department of Energy.

Chapter 3 provides some basic statistics concerning energy technology development and the SBIR program at DoE. The chapter begins with a brief summary of national trends in the funding of energy research and development before presenting SBIR Phase I and Phase II data pertaining to the number of awards granted, award size, and the geographical location of award-recipient firms over the period of study.

Chapter 4 examines actions taken by the Department of Energy to encourage commercialization efforts by SBIR awardees, how commercialization outcomes are measured, and what the various measures described indicate with respect to the commercialization of SBIR-supported technologies.

Chapter 5 focuses on the manner in which the program supports the mission of the Department of Energy "to advance the national, economic, and energy security of the United States; to promote scientific and technological innovation in support of that mission; and to ensure the environmental cleanup of the national nuclear weapons complex." The chapter highlights the tension that exists between the commercialization and agency-mission objectives of the program.

Chapter 6 describes the DoE program's support of woman- and minority-owned technology companies.

Chapter 7 addresses the manner in which the program spurs knowledge creation and the development of new technologies, including the strengthening of knowledge networks involving small business and the creation of codified knowledge in the form of patents and publications.

Finally, Chapter 8 assesses the management of the program, focusing on program outreach, the application process, award management, and program structure. It describes some developments in the administration of the program that took place after the study period but before the completion of this report.
