MEASURING THE IMPACTS OF FEDERAL INVESTMENTS IN RESEARCH

7 INTERNATIONAL PERSPECTIVES ON MEASURING RESEARCH IMPACTS

Measuring the impacts of research ranges from studying broad changes in public policy to tracking the influence of a single research paper on subsequent publications in its field. Some of the newest techniques marry online data collection and databases with analytic tools, yielding a nuanced picture of research outcomes and the influence of funding dollars. At the workshop, speakers from the United Kingdom (UK), the European Union (EU), and Brazil shared their thoughts on recent evaluation methods and future goals. Measuring the effectiveness of research is a growing field precisely because resources are scarce and policy makers around the world need to demonstrate returns on investment.

MEDICAL RESEARCH COUNCIL EVALUATION SYSTEM

The United Kingdom Medical Research Council (MRC) provides government funding for public, private, and university research in the United Kingdom. Science funding in the UK comes from the government, the private sector, and charities. Universities operate on a dual support system: money for staff and infrastructure comes from the higher education funding councils, while the research councils allocate funds on a project and program basis. Ian Viney, Head of Evaluation for the London-based MRC, outlined the council’s efforts to measure and influence research impacts. The MRC is focused on collecting comprehensive evidence regarding the progress, productivity, and quality of research output; supporting studies along the lines of those funded by the National Science Foundation’s Science of Science and Innovation Policy Program (SciSIP); encouraging researchers to maximize their “pathways to impact”; and adding the assessment of impact as a factor in allocating new funds to UK universities.

In 2006 the MRC started using an online system called e-Val. The system, which replaces end-of-grant reporting, requires grant recipients to file online reports each year, resulting in structured feedback over the lifetime of a grant rather than a long report at the end summarizing years of progress. The evaluation is designed to track how scientists are influencing policy development and contributing to new products and interventions. In building the evaluation, the MRC asked questions intended to yield hard evidence of impacts, outcomes, and outputs, in addition to traditional tracking of papers and patents. In two years of data gathering, more than 3,000 researchers have participated. The system has collected 70,000 reports representing feedback on £2 billion of MRC funding, or 92 percent of MRC expenditures in the last four years. In 2010 the evaluation provided details on 5,000 active collaborations. Since 2006, MRC researchers have reported over 130 citations in policy documents, 360 new products and interventions in development, 200 published patents, and 37,500 publications.

The online evaluation system helps the MRC link research outputs with the social, economic, and academic impacts of research. For example, one study by the Health Economics Research Group, the Office of Health Economics, and RAND Europe (2008) focused on the return on investment for research on cardiovascular disease and mental health. Combined with data from e-Val, the study built a strong quantitative argument for investment in medical research in time for the change of party control of government in 2010 and the subsequent review of all government spending.
Monitoring policy citations and the influence of scientists on policy helps track progress over time and demonstrates how research translates into clinical practice, said Viney. The evaluations have also given context to case studies, which the council often uses to illustrate to the government the benefits of MRC funding. But it is not easy to encourage researchers to think about the ultimate objectives of their work and how to maximize their impact. Viney pointed out that the medical community is somewhat more accustomed to this, while other disciplines are more resistant.

The Research Councils UK (RCUK), which is made up of seven UK research councils that together allocate £3 billion each year to research, is keen to maximize the economic, academic, and societal impacts of research, and the councils are including information on these impacts in all of their funding applications. They ask researchers not to predict what the impact will be but simply to consider how to enhance the potential influence of their research. A peer review process, “Pathways to Impact,” is also designed to further this goal.

The Higher Education Funding Councils in the UK, which allocate £2 billion to university research every year, have moved in a similar direction. Partly in response to pressure to look more closely at impacts, they implemented the Research Excellence Framework (REF), which assesses research outputs, impacts, and the research environment at each university. The framework splits disciplines into units of assessment, defined as substantive bodies of research in coherent discipline groups; there are roughly 30 such units. A pilot using expert panels to assess impact at 29 universities, with each university submitting case studies for two units of assessment, was considered quite successful. The panels found ways to assess the validity and significance of impacts across diverse disciplines, including clinical medicine, physics, earth systems, social work, and English literature. Impact assessments will contribute 20 percent to the overall REF assessment, with the goal of increasing that contribution after 2014.

In the government’s 2010 Comprehensive Spending Review, the MRC’s evaluation helped protect the medical research budget in real terms until 2014, while the overall science budget received no inflation increase. For the MRC, this is a tangible example of evaluation influencing policy, Viney said. Other funding agencies are now looking at ways to imitate e-Val, and discussions are under way to harmonize and rationalize the data collection process with the aim of generating a UK-wide view of research output.
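The weighted structure of an assessment such as the Research Excellence Framework can be sketched in a few lines of code. Only the 20 percent impact weighting is taken from the discussion above; the split between outputs and environment and the grade profile below are hypothetical values chosen for illustration, not figures from the workshop:

```python
# Sketch of a weighted assessment in the spirit of the Research Excellence
# Framework. The 20 percent impact weighting is from the chapter; the
# outputs/environment split and the scores below are assumed for illustration.
WEIGHTS = {"outputs": 0.65, "impact": 0.20, "environment": 0.15}

def overall_quality(profile):
    """Combine per-element quality scores into a single weighted score."""
    return sum(WEIGHTS[element] * score for element, score in profile.items())

# A hypothetical university submission for one unit of assessment,
# scored on a 0-4 quality scale.
submission = {"outputs": 3.1, "impact": 3.4, "environment": 2.8}
print(overall_quality(submission))
```

Whether a weighted profile of this kind captures "impact" meaningfully across disciplines was, of course, exactly what the pilot panels were testing.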
Plans are also under way to commission more work on estimates of spillover benefits in the UK, rather than borrowing from U.S. estimates. Viney concluded that the government is focused on economic growth and leveraging investment, and that the importance of describing, understanding, and assessing impact is becoming more widely accepted in the UK.

MEASURING IMPACTS OF RESEARCH FUNDING IN THE EUROPEAN UNION

When evaluating research, it is important to compare old and new approaches. Brian Sloan, of the Directorate-General for Research and Innovation at the European Commission, discussed various forms of evaluation in the context of the European Union Framework Program, which supports European science, technology, and competitiveness. The program is designed to complement national programs, focusing on areas that national funding may not reach, and to encourage cooperation and coordination between countries. It allocates funding for transnational research projects and also for mobility, so that researchers are able to travel from one country to another.

The current Framework Program, the seventh since 1984, has a budget of €50 billion, or approximately $70 billion, which is 7 to 8 percent of European R&D funding. It has four components: Cooperation, Ideas, People, and Capacities. Cooperation funds transnational research consortia. Ideas funds national teams that compete across the European Union. People funds mobility. Capacities provides funding for infrastructure. Within each of these divisions is a range of different science and technology fields.

Traditional methods the Framework Program has used to evaluate the impacts of research include interviews, surveys of program participants, and expert panels. But Sloan pointed out several challenges inherent in these methods. Surveys can be a burden to participants, especially when long and detailed answers are required, which can affect the quality of their responses. Response bias and partial responses are also a concern. In addition, because most research projects have multiple funding sources, it can be difficult to attribute specific findings directly to EU funding. While these methods remain quite valuable, it is worthwhile to look at new approaches, including what are called linking and ex-ante modeling.
Until recently, it was difficult to identify recipients of EU funding by linking into bibliometric databases, but in 2009 it became possible to search grant activity and funding acknowledgements in the Web of Science database and therefore accurately identify not only program participants but also their affiliations. Using the database in this way allows for assessment of research output and comparison with other projects, national averages, and world averages. There is also a built-in control group, which is lacking in surveys or participant interviews. Using bibliometric data, it is possible to map co-publication or track which disciplines publish most within the various programs. This type of evaluation is particularly relevant for the Framework Program, as one of its goals is to measure the results of funding against other transnational endeavors. It is also possible to measure the effects of distance or language on collaboration and to evaluate whether the program is succeeding at connecting people and regions that would not otherwise be brought together.

Another approach the program took was linking with the Community Innovation Survey, a harmonized questionnaire administered to 40,000 firms across 30 European countries. The survey looks at innovative outputs and activity, R&D spending, patents, cooperation, and new products. Included in the survey were questions asking whether firms had received any EU funding from 2002 to 2004 and whether they had participated in the Framework Program. The responses provided crucial data that could be used to compare Framework Program participants with other researchers, controlling for variables such as company size and sector, and to discern whether the program increases collaboration and productivity.

The commission also found ex-ante evaluation to be a useful tool when applied to the Framework Program. The European Commission produces an ex-ante impact assessment report each time it develops a new funding program, explaining what problem is being addressed, why government and in particular the EU must intervene, the objectives of the program, and what policy options have already been considered. For each option, the assessment also includes predictions of economic, social, and environmental impacts. Using an econometric model, the commission took a similar approach to assess the macroeconomic impacts of the seventh Framework Program up to 2030 under various scenarios. The model predicted effects of the program on exports, imports, research, GDP, employment, and a range of other indicators. Like bibliometric data, this approach allowed for comparisons and manipulation of data, as well as surfacing potentially interesting and important developments that might not otherwise have been recognized.
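The co-publication mapping described above reduces, at its simplest, to counting how often pairs of countries appear together on funded papers. A minimal sketch, with invented author-country data standing in for Web of Science records:

```python
from collections import Counter
from itertools import combinations

# Hypothetical records: each set is the countries of one funded paper's
# co-authors. The data are invented for illustration only.
papers = [
    {"DE", "FR", "PL"},
    {"DE", "FR"},
    {"PT", "FI"},
    {"DE", "PL"},
]

def co_publication_counts(papers):
    """Count how often each pair of countries co-authors a funded paper."""
    edges = Counter()
    for countries in papers:
        for pair in combinations(sorted(countries), 2):
            edges[pair] += 1
    return edges

edges = co_publication_counts(papers)
print(edges.most_common(3))
```

The resulting edge counts form a weighted collaboration network, on which questions about the effects of distance or language on collaboration can then be posed.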
Ex-ante evaluation and linking provide another angle on measuring research outcomes and impacts. Because official statistical surveys provide such a large amount of reliable data, sophisticated analyses can be done of networking effects that cannot be captured from participant surveys. Sloan emphasized the potential of such approaches to yield further progress in the future.
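The participant-versus-control comparison that survey linking enables can be sketched as a within-cell mean comparison. The firm records below are invented; a real Community Innovation Survey link-up would involve many more firms and a proper econometric specification:

```python
from collections import defaultdict
from statistics import mean

# Invented firm records in the spirit of a Community Innovation Survey
# link-up: "fp" flags Framework Program participation, and the outcome is
# a count of new products. Size and sector define the matching cells.
firms = [
    {"size": "small", "sector": "biotech", "fp": True,  "new_products": 3},
    {"size": "small", "sector": "biotech", "fp": False, "new_products": 1},
    {"size": "small", "sector": "biotech", "fp": False, "new_products": 2},
    {"size": "large", "sector": "ict",     "fp": True,  "new_products": 5},
    {"size": "large", "sector": "ict",     "fp": False, "new_products": 4},
]

def cell_gaps(firms):
    """Mean participant-minus-control outcome gap within each size/sector cell."""
    cells = defaultdict(lambda: {True: [], False: []})
    for f in firms:
        cells[(f["size"], f["sector"])][f["fp"]].append(f["new_products"])
    return {
        cell: mean(groups[True]) - mean(groups[False])
        for cell, groups in cells.items()
        if groups[True] and groups[False]  # need both groups to compare
    }

print(cell_gaps(firms))
```

Controlling for size and sector in this way is what distinguishes the linked-survey approach from a raw comparison of participants and non-participants.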
MEASURING IMPACTS OF SCIENCE, TECHNOLOGY, AND INNOVATION INVESTMENTS IN BRAZIL

Brazil’s Marcio de Miranda Santos, Executive Director of the Center for Strategic Management and Studies in Science, Technology, and Innovation, explained why quality data and a good information-gathering system are invaluable for evaluating research impacts and outcomes. A comprehensive information infrastructure that facilitates evaluation of research is difficult to build, since a thorough evaluation requires many types of information, including data on individual researchers, projects, collaborations, R&D networks, research institutions, and public agencies. The system has to be adaptable and able to handle the complexity of a range of inputs. Santos described Brazil’s strategy for building such a system.

Several principles are guiding the center’s work. One is to expand on what is already available. In Brazil, this means linking data from sources such as the National Council for Science and Technology, the National Agency for Industrial Development, various innovation agencies, projects, and dissertations. The data requirements must be designed not just for government needs but to provide access and functionality for science, technology, and innovation participants as well. An effective program will rely on traditional software engineering methods as well as knowledge engineering and e-government approaches.

The Lattes platform, which Brazil has been using since 1999, holds program information in a database that currently contains over 2 million curricula vitae (CVs) and is updated every three months on average. In 2008 the Center used it successfully to do an ex-ante evaluation of networks that had submitted proposals to the National Institutes of Science and Technology (INCT) program.
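The ex-ante network evaluation drawn from Lattes rests on comparing collaboration snapshots extracted from CVs at different points in time. A minimal sketch of that idea, with invented researchers and papers in place of real CV records:

```python
from itertools import combinations

# A minimal sketch of the snapshot comparison a CV database such as Lattes
# makes possible. Researcher labels and author lists are invented.
def links(papers):
    """Researcher pairs that have co-authored at least one paper."""
    return {pair for authors in papers for pair in combinations(sorted(authors), 2)}

# Hypothetical co-authorship snapshots: each set is one paper's author list.
snapshot_2008 = [{"A", "B"}, {"C", "D"}]
snapshot_2011 = [{"A", "B"}, {"C", "D"}, {"A", "C"}, {"B", "E"}]

# Collaborations that appeared between the proposal and the later snapshot.
new_links = links(snapshot_2011) - links(snapshot_2008)
print(sorted(new_links))  # [('A', 'C'), ('B', 'E')]
```

Differencing the two link sets surfaces collaborations that formed after the proposal was submitted, which is the shift-in-networks signal the evaluation looks for.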
The INCT program aims to promote networks among research groups and individuals, internationally competitive research, high-quality S&T development, and joint use of laboratories by universities and companies. The program will also contribute to improving education standards at all levels. Using Lattes, the Center took snapshots of information from individual CVs and from the INCT program as a whole and analyzed that information to determine the success of the program. A snapshot of one project from 2008, with 25 people in the network, provided data on co-authorship of papers, researchers who shared advisors, and participation in other projects and committees. The Center then used Innovation Portal, an electronic service designed to link information from different data sources, to follow shifts in project networks and collaboration. For example, three scientists working on the first project were not co-authors at the time the proposal was submitted, but by 2011 they had begun to produce papers with other project participants.

Another example comes from the Brazilian Academy of Mechanical Sciences and Engineering, which was interested in identifying the weaker departments in mechanical engineering in Brazil. Researchers used Lattes to examine the distribution of knowledge within mechanical engineering, based on the number of publications produced by each scientific domain. They broke the field down into smaller subdomains and pinpointed weaker areas where reinforcement would be useful. This methodology allowed public decision makers not only to identify weak spots but also to track improvement, measure the impact of research investments, and make decisions on how to further improve the system.

The advantages of an integrated national platform such as Lattes are substantial, said Santos. It allows efficiency in both ex-ante and ex-post evaluation processes, increased transparency, and increased community participation. Research institutions, individuals, and firms are able to access the Lattes platform as well, so it is an open system not limited to the government, and groups become aware of their own progress and that of other teams and programs. Some areas are still weak, and the center is currently developing a system to incorporate more information, from the private sector in particular, which is one of the largest gaps. “[The platform] facilitates the participation of the scientific community,” said Santos.
“If the scientific community knows what’s going on, it will be better for national federal agencies to interact and allow for the community to participate, because they know they have access to information.”

DISCUSSION

During the question period, a participant asked Viney how the UK Medical Research Council convinces grantees to participate in the e-Val system, since it is more time consuming than end-of-grant reporting. Viney explained that the MRC has been successful at getting increased government funding using data from e-Val, which it can use to encourage participation, since the research community is able to see the impact of providing such detailed reports. The e-Val is also mandatory for new grants, so participants must comply if they want to receive MRC funding in the future.

Responding to questions about how the impact on policy is measured, Sloan explained that the Framework Program has attempted to study the impact of its projects on policymaking by questioning participants but has not done citation analysis of policy documents. Viney said that the MRC has looked at where MRC research is cited, paying particular attention to which documents are more influential and tracking any resulting policy changes.

A workshop participant asked about the European Commission’s guarantee fund, where some money is held back until participants fulfill the requirements of the grant, and whether surveys must be completed in order to receive that money. Sloan said that it depends on how strongly the requirements are enforced, but that much of what is asked for is voluntary.

In response to a question about whether a clear policy is in place requiring researchers to acknowledge their funding when they publish a paper or develop a patent, Viney said that analysis of citations and publications is based on the most reliable data possible. Research councils in the UK do require a standard type of acknowledgement in publications, but the MRC could potentially do a better job working with publishers and checking compliance. Santos added that in Brazil there are policies for federal agency funding and some state funding, but there is room for improvement so that their system is able to capture exactly who funded what.