RESPONSIBLE SCIENCE: Ensuring the Integrity of the Research Process

4 Misconduct in Science—Incidence and Significance

Estimates reported in government summaries, research studies, and anecdotal accounts of cases of confirmed misconduct in science in the United States range between 40 and 100 cases during the period from 1980 to 1990.1 The range reflects differences in the definitions of misconduct in science, uncertainties about the basis for “confirmed” cases, the time lag between the occurrence and disclosure of some cases, and potential overlap between government summaries (which are anonymous) and cases identified by name in the research literature.

When measured against the denominator of the number of research awards or research investigators, the range of misconduct-in-science cases cited above is small.2 Furthermore, less than half of the allegations of misconduct received by government agencies have resulted in confirmed findings of misconduct in science. For example, after examining 174 case files of misconduct in science in the period from March 1989 through March 1991, the Office of Scientific Integrity in the Public Health Service found evidence of misconduct in fewer than 20 cases, although 56 investigations, mostly conducted by universities, were still under way (Wheeler, 1991). However, even infrequent incidents of misconduct in science raise serious questions among scientists, research sponsors, and the public about the integrity of the research process and the stewardship of federal research funds.
INCIDENCE OF MISCONDUCT IN SCIENCE—PUBLISHED EVIDENCE AND INFORMATION

The incidence of misconduct in science and the significance of several confirmed cases have been topics of extensive discussion. Measures of the incidence of misconduct in science include (1) the number of allegations and confirmations of misconduct-in-science cases recorded and reviewed by government agencies and research institutions and (2) data and information presented in analyses, surveys, other studies, and anecdotal reports.

Some observers have suggested that incidents of misconduct in science are underreported. It may be difficult for co-workers and junior scientists, for example, graduate students and postdoctoral fellows, to make allegations of misconduct in science because of lack of supporting evidence and/or fear of retribution. The significant professional discrimination and economic loss experienced by whistle-blowers as a result of reporting misconduct are well known and may deter others from disclosing wrongdoing in the research environment.

Government Statistics on Misconduct in Science

Owing to differing perspectives on the role of government and research institutions in addressing misconduct in science, and to discrepancies in the number of allegations received by government offices, the number of open cases, and the cases of misconduct in science confirmed by research institutions or government agencies, many questions remain to be answered.
These areas of uncertainty and disagreement inhibit the resolution of issues such as identifying the specific practices that fit legal definitions of misconduct in science; agreeing on standards for the evidence necessary to substantiate a finding of misconduct in science; clarifying the extent to which investigating panels can or should consider the intentions of the accused person in reaching a finding of misconduct in science; assessing the ability of research institutions and government agencies to discharge their responsibilities effectively and handle misconduct investigations appropriately; determining the frequency with which misconduct occurs; achieving consensus on the penalties that are likely to be imposed by diverse institutions for similar types of offenses; and evaluating the utility of allocating substantial amounts of public and private resources to handle allegations, only a few of which may result in confirmed findings of misconduct.

The absence of publicly available summaries of the investigation and adjudication of incidents of misconduct in science inhibits scholarly efforts to examine how prevalent misconduct in science is and to evaluate the effectiveness of governmental and institutional treatment and prevention programs. As a result, analyses of and policies related to misconduct in science are often influenced by information derived from a small number of cases that have received extensive publicity. The panel has not seen evidence that would help determine whether these highly publicized cases are representative of the broader sample of allegations or confirmed incidents of misconduct in science.

One trend should be emphasized, however. The highly publicized cases often involve charges of falsification and fabrication of data, but the large majority of cases of confirmed misconduct in science have involved plagiarism (NSF, 1991a; Wheeler, 1991). Possible explanations for this trend are that plagiarism is more clearly identifiable by the complainants and more easily proved by those who investigate the complaint.

Five semiannual reports prepared by the National Science Foundation's Office of Inspector General (NSF, 1989c; 1990a,b; 1991a,c) and a 1991 annual report prepared by the Office of Scientific Integrity Review of the Department of Health and Human Services (DHHS, 1991b) are the first systematic governmental efforts to analyze characteristics of a specific set of cases of misconduct in science. Although the treatment of some individual cases reported in these summaries has been the subject of debate and controversy, the panel commends these analyses as initial efforts and suggests that they receive professional review and revision, if warranted.
National Science Foundation

The National Science Foundation's (NSF's) Office of Inspector General (OIG) received 41 allegations of misconduct in science in FY 1990 and reviewed another group of 6 allegations received by NSF prior to 1990 (NSF, 1990b).3 From this group of 47 allegations, OIG closed 21 cases by the end of FY 1990. In three cases NSF made findings of misconduct in science; in another four cases, NSF accepted institutional findings of misconduct in science. NSF officials caution that, in their view, future cases may result in a larger percentage of confirmed findings of misconduct because many of the open cases raise complicated issues that require more time to resolve.4

The panel matched the 41 allegations reviewed by NSF in FY 1990 against the definitions of misconduct in science used by NSF at that time (Table 4.1). The NSF's Office of the Director recommended the most serious penalty (debarment for 5 years) in a case involving charges of repeated incidents of sexual harassment, sexual assault, and threats of professional and academic blackmail by a co-principal investigator on NSF-funded research (NSF, 1990b, p. 21). Following an investigation that involved extensive interviews and affidavits, NSF's OIG determined that “no federal criminal statutes were violated [but that] the pattern and effect of the co-principal investigator's actions constituted a serious deviation from accepted research practices” (NSF, 1990b, p. 21). NSF's OIG further determined that these incidents were “an integral part of this individual's performance as a researcher and research mentor and represented a serious deviation from accepted research practices” (p. 27). However, reports of this particular case have caused some scientists to express concern that the scope of the definition of misconduct in science may be inappropriately broadened into areas designated by the panel as “other misconduct,” such as sexual harassment.

TABLE 4.1 Allegations of Misconduct in Science Reviewed in FY 1990 by the National Science Foundation

Category                                                    Number of Allegations
Fabrication or falsification                                9
Plagiarism                                                  20
Other deviant research practicesa                           8
Violations of other research conduct regulationsb           1
Violations of other legal requirements governing researchc  4
TOTAL                                                       41d

NOTE: The table represents the categories assigned by the panel to the allegations themselves. NSF's OIG does not necessarily endorse these categories, nor does it necessarily regard all these cases as exemplifying misconduct in science.
a Allegations of deviant practices included unauthorized use of research preparations, failure to identify original authors of a proposal, tampering with others' experiments, discrimination by a reviewer or research investigator, and exploitation of a subordinate.
b Alleged violation of recombinant DNA regulations.
c Alleged violations included financial conflict of interest under an award by an investigator or reviewer, NSF staff mishandling of a proposal or award, and violation of a sanction against a principal investigator.
d Some allegations involved more than one form of misconduct.
SOURCE: Based on data from Office of Inspector General, National Science Foundation (personal communications on December 27, 1990, and February 22, 1991).
Department of Health and Human Services

In FY 1989 and FY 1990, following the creation of the Office of Scientific Integrity (OSI), the Department of Health and Human Services (DHHS) received a total of 155 allegations of misconduct in science, many of which had been under review from earlier years by various offices within the Public Health Service (PHS).5 In April 1991, OSI reported that since its formation it had closed about 110 cases, most of which did not result in findings of misconduct in science.

The Office of Scientific Integrity Review (OSIR), in the office of the assistant secretary for health, reviewed 21 reports of investigations of misconduct in science in the period from March 1989 to December 1990, some of which involved multiple charges.6 The cases reviewed by OSIR had been forwarded to that office by OSI and had completed both an inquiry and an investigation stage. Findings of misconduct in science, engaged in by 16 individuals, were made in 15 of the reports of investigations reviewed by OSIR. The OSIR's summary of findings is given in Table 4.2. The OSIR recommended debarment in six cases, the most extreme administrative sanction available short of referral to the Justice Department for criminal prosecution. Actions to recover PHS grant funds were undertaken in two cases.

Consequences of Confirmed Misconduct

Confirmed findings of misconduct in science can result in governmental penalties, such as dismissal or debarment, whereby individuals or institutions can be prohibited from receiving government grants or contracts on a temporary or permanent basis (42 C.F.R. 50). An individual who presents false information to the government in any form, including a research proposal, employment application, research report, or publication, may be subject to prosecution under the federal false statements statute (18 U.S.C. 1001).
At least one case of criminal prosecution against a research scientist, for example, rested on evidence that the scientist had provided false research information in research proposals and progress reports to a sponsoring agency.7 Similar prosecutions have occurred in connection with some pharmaceutical firms or contract laboratories that provided false test data in connection with licensing or government testing requirements (O'Reilly, 1990). Government regulations on misconduct in science provide a separate mechanism through which individuals and institutions can be subjected to government penalties and criminal prosecution if they misrepresent information from research that is supported by federal funds, even if the information is not presented directly to government officials. Research institutions and scientific investigators who apply for and receive federal funds are thus expected to comply with high standards of honesty and integrity in the performance of their research activities.

TABLE 4.2 Findings of Misconduct in Science in Cases Reviewed by the Office of Scientific Integrity Review, Department of Health and Human Services, March 1989 to December 1990

Type of Allegation                Findings of Misconduct (15 investigations)
Fabrication or falsification      6
Plagiarism                        5
Other deviant research practices  7
TOTAL                             18a

a The total of findings of misconduct is larger than the number of investigations because some cases had multiple findings.
SOURCE: Department of Health and Human Services (1991b).

Government Definitions of Misconduct in Science—Ambiguity in Categories

The PHS's misconduct-in-science regulations apply to research sponsored by all PHS agencies, including the National Institutes of Health; the Alcohol, Drug Abuse, and Mental Health Administration; the Centers for Disease Control; the Food and Drug Administration; and the Agency for Health Care Policy and Research. The PHS defines misconduct in science as “fabrication, falsification, plagiarism, or other practices that seriously deviate from those that are commonly accepted within the scientific community for proposing, conducting, or reporting research. It does not include honest error or honest differences in interpretations or judgments of data” (DHHS, 1989a, p. 32447).8 The PHS's definition does not further define fabrication, falsification, plagiarism, or other serious deviations from commonly accepted research practices.
The ambiguous scope of this last category is a topic of major concern to the research community because of the perception that it could be applied inappropriately in cases of disputed scientific judgment. The first annual report of the DHHS's OSIR suggests the types of alleged misconduct in science that might fall within the scope of this category (DHHS, 1991b):

- Misuse by a journal referee of privileged information contained in a manuscript,
- Fabrication of entries or misrepresentation of the publication status of manuscripts referenced in a research bibliography,
- Failure to perform research supported by a PHS grant while stating in progress reports that active progress has been made,
- Improper reporting of the status of subjects in clinical research (e.g., reporting the same subjects as controls in one study and as experimental subjects in another),
- Preparation and publication of a book chapter listing co-authors who were unaware of being named as co-authors,
- Selective reporting of primary data,
- Unauthorized use of data from another investigator's laboratory,
- Engaging in inappropriate authorship practices on a publication and failure to acknowledge that data used in a grant application were developed by another scientist, and
- Inappropriate data analysis and use of faulty statistical methodology.

The panel points out that most of the behaviors described above, such as the fabrication of bibliographic material or falsely reporting research progress, are behaviors that fall within the panel's definition of misconduct in science proposed in Chapter 1.

The NSF's definition (NSF, 1991b) is broader than that used by the PHS9 and extends to nonresearch activities supported by the agency, such as science education. NSF also includes in its definition of misconduct in science acts of retaliation against any person who provides information about suspected misconduct and who has not acted in bad faith.
The panel believes that behaviors such as repeated incidents of sexual harassment, sexual assault, or professional intimidation should be regarded as other misconduct, not as misconduct in science, because these actions (1) do not require expert knowledge to resolve complaints and (2) should be governed by mechanisms that apply to all institutional members, not just those who receive government research awards. Practices such as inappropriate authorship, in the panel's view, should be regarded as questionable research practices, because they do not fit within the rationale for misconduct in science as defined by the panel in Chapter 1. The investigation of questionable research practices as incidents of alleged misconduct in science, in the absence of consensus about the nature, acceptability, and damage that questionable practices cause, can do serious harm to individuals and to the research enterprise. Institutional or regulatory efforts to determine “correct” research methods or analytical practices, without sustained participation by the research community, could encourage orthodoxy and rigidity in research practice and cause scientists to avoid novel or unconventional research paradigms.10

Reports from Local Institutional Officials

Investigatory Reports

Government regulations currently require local institutions to notify the sponsoring agency if they intend to initiate an investigation of an allegation of misconduct in science. The institutions are also required to submit a report of the investigation when it is completed. These reports, in the aggregate, may provide a future source of evidence regarding the frequency with which misconduct-in-science cases are handled by local institutions. Although some investigatory reports have been released on an ad hoc basis, research scientists generally do not have access to comprehensive summaries of the investigatory reports prepared or reviewed by government agencies. The absence of such summaries impedes informed analysis of misconduct in science and inhibits the exchange of information and experience among institutions about factors that can contribute to or prevent misconduct in science.

Other Institutional Reports

The perspectives and experiences of institutional officials in handling allegations of misconduct in science are likely in the future to be important sources of information about the incidence of misconduct.
This body of experience is largely undocumented, and most institutions do not maintain accessible records on their misconduct cases because of concerns about individual privacy and confidentiality, as well as concerns about possible institutional embarrassment, loss of prestige, and lawsuits. The DHHS's regulations now require grantee institutions to provide annual reports of aggregate information on allegations, inquiries, and investigations, along with annual assurances that the institutions have an appropriate administrative process for handling allegations of misconduct in science (DHHS, 1989a). The institutional reports filed in early 1991 were not available for this study.

These institutional summaries could eventually provide an additional source of evidence regarding how frequently misconduct in science addressed at the local level involves biomedical or behavioral research. If the reports incorporate standard terms of reference, are prepared in a manner that facilitates analysis and interpretation, and are accessible to research scientists, they could provide a basis for making independent judgments about the effectiveness of research institutions in handling allegations of misconduct in science. The NSF's regulations do not require an annual report from grantee institutions.

International Studies

Cases of misconduct in science have been reported and confirmed in other countries. The editor of the British Medical Journal reported in 1988 that in the 1980s at least five cases of misconduct by scientists had been documented in Britain and five cases had been publicly disclosed in Australia (Lock, 1988b, 1990). As a result of a “nonsystematic” survey of British medical institutions, scientists, physicians, and editors of medical journals, Lock cited at least another 40 unreported cases. There has been at least one prominent case of misconduct in science in India recently (Jayaraman, 1991). Several cases of misconduct in science and academic plagiarism have been recorded in Germany (Foelsing, 1984; Eisenhut, 1990).

Analyses, Surveys, and Other Reports

Hundreds of articles on misconduct in science have been published in the popular and scholarly literature over the past decade. The study panel's own working bibliography included over 1,100 such items.
Although highly publicized reports about individual misconduct cases have appeared with some frequency, systematic efforts to analyze data on cases of misconduct in science have not attracted significant interest or support within the research community until very recently. Research studies have been hampered by the absence of information and statistical data, lack of rigorous definitions of misconduct in science, the heterogeneous and decentralized nature of the research environment, the complexity of misconduct cases, and the confidential and increasingly litigious nature of misconduct cases (U.S. Congress, 1990b; AAAS-ABA, 1989). As a result, only a small number of confirmed misconduct cases have been the subject of scholarly examination. The results of these studies are acknowledged by their authors to be subject to statistical bias; the sample, which is drawn primarily from public records, may or may not be representative of the larger pool of cases or allegations.

Preliminary studies have focused primarily on questions of range, prevalence, incidence, and frequency of misconduct in science. There has been little effort to identify patterns of misconduct or questionable practices in science. Beyond speculation, very little is known about the etiology, dynamics, and consequences of misconduct in science. The relationship of misconduct in science to factors in the contemporary research environment, such as the size of research teams, financial incentives, or collaborative research efforts, has not been systematically evaluated and is not known.

Woolf Analysis

Patricia Woolf of Princeton University, a member of this panel, has analyzed incidents of alleged misconduct publicly reported from 1950 to 1987 (Woolf, 1981, 1986, 1988a). Woolf examined 26 cases of misconduct identified as having occurred or been detected in the period from 1980 to 1987, the majority of which (22 cases) were in biomedical research. Her analysis indicated that 11 of the institutions associated with the 26 cases were prestigious schools and hospitals, ranked in the top 20 in the Cole and Lipton (1977) evaluation of reputation. Woolf found that a “notable percentage” of the individuals accused of misconduct were from highly regarded institutions: “seven graduated from the top twenty schools” (Woolf, 1988a, p. 79), as ranked by reputation, an important finding that deserves further analysis.
She also suggested that because cases of misconduct are often handled locally, the total number of cases is likely to be larger than reported in the public record (Woolf, 1988a). The types of alleged misconduct reported in the cases analyzed by Woolf, some of which involved more than one type, included plagiarism (4 cases); falsification, fabrication, and forgery of data (12 cases); and misrepresentation and other misconduct (12 cases). She suggested that “plagiarism is almost certainly under-represented in this survey, as it appears to be handled locally and without publicity whenever possible” (Woolf, 1988a, p. 83).

Woolf identified several important caveats, noted below, that still apply to all systematic efforts to analyze the characteristics and demography of misconduct in science (Woolf, 1988a, p. 76):

- Small number of instances. There are not enough publicly known cases to draw statistically sound conclusions or make significant generalizations, and those that are available are a biased sample of the population of actual cases.
- Blurred categories. It is not possible in all cases to cleanly separate misconduct in science from falsification in drug trials or laboratory tests. Similarly, one person may indulge in plagiarism, fabrication, and falsification.
- Incomplete information. Some information about reported instances is not yet available.
- Variety of sources. The sources of information (for Woolf's analysis) include public accounts, such as newspaper reports, as well as original documents and interview material. They are not all equally reliable with regard to dates and other minor details.
- Unclear resolution. Disputed cases that have nevertheless been “settled” are included (in Woolf's analysis). In some highly publicized cases of alleged misconduct in science, the accused scientist has not admitted, and may have specifically denied, misconduct in science.

OSIR Analysis

The DHHS's OSIR prepared a first annual report in early 1991 that analyzed data associated with investigations of misconduct in science reviewed by that office in the period March 1989 through December 1990 (DHHS, 1991b). The report examined misconduct investigations carried out by research institutions and by the OSI.

Seniority of Subjects of Misconduct Cases in Woolf and OSIR Analyses. Both Woolf and the OSIR examined the rank of individuals who have been the subjects of misconduct-in-science cases.
Although some have speculated that junior scientists might be more likely to engage in misconduct in science, both Woolf's analysis and the OSIR's analysis suggest that misconduct in science “did not occur primarily among junior scientists or trainees” (DHHS, 1991b, p. 7). Their preliminary studies suggest that the incidence of misconduct is likely to be greater among senior scientists (Table 4.3), a finding that deserves further analysis.

Detection of Misconduct in Science in Woolf and OSIR Analyses. Woolf and the OSIR examined processes used to detect incidents of confirmed or suspected misconduct in science and also analyzed the status of individuals who disclosed these incidents (Table 4.4 and Table 4.5). Their analyses indicate that existing channels within the peer review process and research institutions do provide information about misconduct in science. Initial reports were often made by supervisors, collaborators, or subordinates who were in direct contact with the individual suspected of misconduct. These findings contradict opinions that checks such as peer review, replication of research, and journal reviews do not help identify instances of misconduct.

TABLE 4.3 Academic Ranks of Subjects in Confirmed Cases of Misconduct in Science

                                                      Number of Subjects
Rank                                                  1980-1987a   1989-1990b
Full or associate professor, or senior
  scientist/laboratory chief                          13           7
Assistant professor                                   2            4
Research associate/fellow                             3            3
Various posts held                                    5            na
No academic appointment/technicians                   2            2
Unknown                                               1            na
TOTAL                                                 26           16

a Data from Woolf (1988a).
b Department of Health and Human Services (1991b).

However, the panel notes that supervisors, colleagues, and subordinate personnel may report misconduct in science at their peril. The honesty of individuals who hold positions of respect or prestige cannot be easily questioned. It can be particularly deleterious for junior or temporary personnel to make allegations of misconduct by their superiors. Students, research fellows, and technicians can jeopardize current positions, imperil progress on their research projects, and sacrifice future recommendations from their research supervisors by making allegations of misconduct by their co-workers.

The Acadia Institute Survey

One provocative study of university officers' experience with misconduct in science is a 1988 survey of 392 deans of graduate studies from institutions affiliated with the Council of Graduate Schools (CGS).11,12 The survey was conducted with support from NSF and the American
Association for the Advancement of Science. Approximately 75 percent (294) of the graduate deans responded to the survey.

TABLE 4.4 Primary Sources of Detection of Alleged Misconduct (1980 to 1987)

Factor                                                      Number of Cases
Admission                                                   2
Co-worker or former co-worker reported:
  Laboratory suspicions, irregular procedures               13
  Misuse of funds                                           1
  Inability to replicate or continue work                   8
Institutional review board raised questions                 1
Scientists at other institutions reported suspicions
  (including inability to replicate work)                   6
Editorial peer review                                       3
Promotion review of publications                            1
Formal audit                                                1
Protest by original author (plagiarism)                     3
Unknown                                                     2

NOTE: Some instances were or seem to have been suspected or detected at about the same time by more than one factor. From the available record it is difficult to make a clear distinction between factors that enabled detection of misconduct in science and those used to demonstrate or prove it.
SOURCE: Data from Woolf (1988a).

The Acadia Institute survey data indicate that 40 percent (118) of the responding graduate deans had received reports of possible faculty misconduct in science during the previous 5 years. Two percent (6) had received more than five reports. These figures suggest that graduate deans have a significant chance of becoming involved in handling an allegation of misconduct in science. The survey shows that about 190 allegations of misconduct in science were addressed by CGS institutions over the 5-year period (1983 to 1988) reported in the survey. It is not known whether any or all of these allegations were separately submitted to government offices concerned with misconduct in science during this time period, although overlap is likely. The Acadia Institute survey also suggests, not surprisingly, that allegations of misconduct in science are associated with institutions that receive significant amounts of external research funding.
As noted in the NSF's OIG summary report of the Acadia Institute survey: “Of the institutions receiving more than $50 million in external research funding annually, 69 percent had been notified of possible faculty misconduct. Among institutions receiving less than $5 million, only 19 percent had been so notified” (NSF, 1990d, pp. 2-3).

TABLE 4.5 Status of Individual Bringing Allegations

Status                                                      Number of Cases
Supervisor (e.g., department chair, laboratory chief)       4
Colleague (scientific associate of about the same
  seniority or status)                                      4
Collaborator                                                4
Junior scientific associate                                 2
Graduate student or postdoctoral trainee                    5
Laboratory technician                                       3
Chair of a department at another institution                1
Self (self-report of misconduct by the subject)             1

SOURCE: Department of Health and Human Services (1991b).

When asked about cases of verified misconduct by their faculties during the previous 5 years, 20 percent (59) of all the responding graduate deans indicated such instances. Among universities with over $50 million per year in external funding (about 55 institutions fell within this category in 1988), 41 percent (20) had some verified misconduct, according to responses of graduate deans participating in the Acadia Institute survey. The actual number of cases associated with these percentages, which is small, is consistent with the panel's observation that the total number of confirmed cases of misconduct in science is very small. Nevertheless, reports indicating that prestigious research institutions consistently receive, and confirm, allegations of misconduct in science are disturbing.

Other Reports

Bechtel and Pearson. Bechtel and Pearson (1985) examined both the question of prevalence of misconduct in science and the concept of deviant behavior by scientists as part of a larger exploration of “elite occupational deviance” that included white collar crime. The authors reviewed 12 cases of misconduct in science, drawn from reports in the popular and scientific press in the 1970s and early 1980s.
They found that available evidence was inadequate to support accurate generalizations about how widespread misconduct in science might be. As to the causes of deviant behavior, the authors concluded that “in the debate between those who favor individualistic explanations based on psychological notions of emotional disturbance, and the critics of big science who blame the increased pressures for promotion, tenure, and recognition through publications, one tends to see greater merit in the latter” (p. 244). They suggested that further systematic examination is required to determine the appropriate balance between individual and structural sources of deviant behavior.

Sigma Xi Study. As part of a broader survey it conducted in 1988, Sigma Xi, the honor society for scientific researchers in North America, asked its members to respond to the following statement: “Excluding gross stupidities and/or minor slip ups that can be charitably dismissed (but not condoned), I have direct knowledge of fraud (e.g., falsifying data, misreporting results, plagiarism) on the part of a professional scientist.”13 Respondents were asked to rank their agreement or disagreement with the statement on a five-point scale. The survey was mailed to 9,998 members of the society; only about 38 percent responded, a response rate low enough to be a possible source of bias. Although 19 percent of the Sigma Xi respondents indicated that they had direct knowledge of fraud by a scientist, it is not certain from the survey whether direct knowledge meant personal experience with or simply awareness of scientific fraud. It is also possible that some respondents were referring to identical cases, and respondents may have reported knowledge of cases gained secondhand. Furthermore, it is not clear what information can be gained by having respondents rank “direct knowledge” on a five-point scale of agreement and disagreement.

Additional Information.
Estimates of the incidence of misconduct in science have ranged from editorial statements that the scientific literature is “99.9999 percent pure” to reader surveys published in scientific journals indicating that significant numbers of the respondents have had direct experience with misconduct of some sort in science.14 The broad variance in these estimates has not resolved uncertainties about the frequency with which individuals or institutions actually encounter incidents of misconduct in science. In March 1990, the NSF's OIG reported that, based on a comprehensive review of the results from past surveys that attempted to measure the incidence of misconduct in science, “the full extent of misconduct is not yet known” (NSF, 1990d, p. 9). The NSF reports found that only a few quantitative studies have examined the extent of misconduct in science and that prior survey efforts had poor response rates, asked substantively different questions, and employed varying definitions of misconduct. These efforts have not yielded a database that would provide an appropriate foundation for findings and conclusions about the extent of misconduct in science and engineering.15

FINDINGS AND CONCLUSIONS

The panel found that existing data are inadequate to support accurate conclusions about the incidence of misconduct in science or of questionable research practices. The panel points out that the number of confirmed cases of misconduct in science is low compared to the level of research activity in the United States. However, as with all forms of misconduct, underreporting may be significant; federal agencies have only recently imposed procedural and reporting requirements that may yield larger numbers of reported cases. The possibility of underreporting can neither be dismissed nor confirmed at this time. More research is necessary to determine the full extent of misconduct in science.

Regardless of the incidence, the panel emphasizes that even infrequent cases of misconduct in science are serious matters. The number of confirmed incidents of misconduct in science, together with the possibility of underreporting and the results presented in some preliminary studies, indicates that misconduct in science is a problem that cannot be ignored. The consequences of even infrequent cases of misconduct in science require that attention be given to appropriate methods of treatment and prevention.

NOTES

1. Reports of cases involving findings of misconduct in science were provided to the panel by DHHS and NSF. These reports indicate a total of 15 cases of findings of misconduct in science by DHHS in the period from March 1989 to December 1990 and 3 cases of findings of misconduct in science by NSF in the period from July 1989 to September 1990. See NSF (1990b) and DHHS (1991b).
Information was also provided in a personal communication from Donald Buzzelli, staff associate, OIG, NSF, February 1, 1991. Congressional testimony by and telephone interviews with NIH and ADAMHA officials indicated that in the period from 1980 to 1987, roughly 17 misconduct cases handled by these agencies resulted in institutional findings of research misconduct, some of which are included in the Woolf analysis discussed below. During this same period, NSF made findings of misconduct in science in seven cases. See the testimony of Katherine Bick and Mary Miers in U.S. Congress (1989a); see also Woolf (1988a). The report by Woolf (1988a) identified 40 publicly reported cases of alleged misconduct in science in the period from 1950 to 1987, many of which involved confirmed findings of misconduct. Another two dozen or so cases of alleged misconduct in science were reported in congressional hearings in the 1980s. Some of the cases discussed in congressional hearings and in the Woolf analysis are included in the NSF and DHHS reports mentioned above. Some cases discussed in congressional hearings are still open, and the remainder have been closed without an institutional finding of misconduct in science. The estimate of confirmed cases of misconduct in science does not include cases in which research institutions have made findings of misconduct, unless these cases are included in the Woolf analysis or the congressional hearings mentioned above. During the time of this study, there were no central records for institutional reports on misconduct in science that would indicate the frequency with which these organizations found allegations to have merit. Finally, several authors have reviewed selected cases of misconduct in science, both contemporary and historical. The most popular accounts are a book by Broad and Wade (1982), who cite 34 cases of “known or suspected cases of scientific fraud” ranging from “ancient Greece to the present day”; a book by Klotz (1985); and one by Kohn (1986), who cites 24 cases of “known or suspected misconduct.” These texts, and the government reports, congressional hearings, and Woolf analysis cited above, discuss many of the same cases.

2. The preamble to the PHS's 1989 regulations for scientific misconduct notes that “reported instances of scientific misconduct appear to represent only a small fraction of the total number of research and research training awards funded by the PHS” (DHHS, 1989a, p. 32446). The preamble to the NSF's 1987 misconduct regulations states that “NSF has received relatively few allegations of misconduct or fraud occurring in NSF-supported research or proposals” (NSF, 1987, p. 24466).
Furthermore, according to the National Library of Medicine, during the 10-year period from 1977 to 1986, about 2.8 million articles were published in the world's biomedical literature. The number of articles retracted because of the discovery of fraud or falsification of data was 41, less than 0.002 percent of the total. See Holton (1988), p. 457.

3. Analyses of the NSF's experience are complicated by the fact that different offices have held authority for handling research misconduct cases. Prior to the creation of the OIG in March 1989, this authority was assigned to the NSF's Division of Audit and Oversight. The OIG “inherited” approximately 19 case files, and it received 6 new allegations of research misconduct during FY 1989. NSF officials reported in 1987 that NSF had examined 12 charges of research misconduct, 7 of which were found to be warranted, of which 3 were considered minor violations. See Woolf (1988a).

4. Personal communication, OIG, NSF, February 1, 1991.

5. Personal communication, Jules Hallum, director, OSI, February 27, 1991.

6. Four of these investigations were conducted by the PHS. Sixteen were conducted by outside, primarily grantee, institutions. One additional investigation was an intramural case within the PHS.

7. See the documentation regarding the case of psychologist Stephen Breuning as detailed in the DHHS's Report and Recommendations of a Panel to Investigate Allegations of Scientific Misconduct under Grants MH-32206 and MH-37449, April 20, 1987.

8. The definition excludes violations of regulations that govern human or animal experimentation, financial or other record-keeping requirements, or the use of toxic or hazardous substances. It applies to individuals or institutions that apply for as well as those that receive extramural research, research-training, or research-related grants or cooperative agreements under the PHS, and to all intramural PHS research.
In the proposed rule, the PHS's definition of misconduct included a second clause referring to “material failure to comply with federal requirements that uniquely relate to the conduct of research.” This clause was eliminated from the misconduct definition adopted in the final rule (DHHS, 1989a) to avoid duplicate reporting of violations of research regulations involving animal and human subjects, since these areas are covered by existing regulations and policies.

9. In the commentary accompanying its final rule, NSF (1987) noted that several letters on the proposed rule had commented that the proposed definition was too vague or overreaching. The NSF's 1987 definition originally included two clauses in addition to those in the PHS misconduct definition: “material failure to comply with federal requirements for protection of researchers, human subjects, or the public or for ensuring the welfare of laboratory animals” and “failure to meet other material legal requirements governing research” (NSF, 1987, p. 24468). These categories were removed in 1991 when the regulations were amended.

10. In a “Dear Colleague Letter on Misconduct” issued on August 16, 1991, the NSF's OIG stated, “The definition is not intended to elevate ordinary disputes in research to the level of misconduct and does not contemplate that NSF will act as an arbitrator of mere personality clashes or technical disputes between researchers.”

11. K. Louis, J. Swazey, and M. Anderson, University Policies and Ethical Issues in Research and Graduate Education: Results of a Survey of Graduate School Deans, preliminary report (Bar Harbor, Me.: Acadia Institute, November 1988). The survey was published as Swazey et al. (1989).

12. It should be noted that the survey instrument used by the Acadia Institute did not define “research misconduct,” but instead left that term open to the interpretation of the respondents. In some parts of the survey, “plagiarism” was distinguished from “research misconduct.”

13.
Sigma Xi (1989), as summarized in NSF (1990d), pp. 4-5.

14. Cited in Woolf (1988a), p. 71. She quotes an editorial by Koshland (1987) for the first figure and a survey by St. James-Roberts (1976b) for the latter.

15. See Tangney (1987) and Davis (1989). See also St. James-Roberts (1976a). The reader survey reported in St. James-Roberts (1976b) received 204 questionnaire replies. Ninety-two percent of the respondents reported direct or indirect experience with “intentional bias” in research findings. The source of knowledge of bias was primarily direct contact (52 percent). Forty percent reported secondary sources (information from colleagues, the scientific grapevine, the media) as the basis for their knowledge. See also Industrial Chemist (1987a,b). The editors expressed surprise at the high level of responses: 28.4 percent of the 290 respondents indicated that they faked a research result often or occasionally.
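The incidence figures quoted in these notes are easy to misread because the denominators differ so widely. The short sketch below is purely illustrative, not part of the original report; it takes only the numbers cited in notes 2 and 15 and makes the implied arithmetic explicit.

```python
# Back-of-the-envelope checks of figures cited in the chapter notes.
# All inputs come directly from the text; the script only verifies
# the arithmetic the text implies.

# Note 2: 41 articles retracted for fraud or falsification out of
# roughly 2.8 million biomedical articles published 1977-1986
# (Holton, 1988). The text says this is less than 0.002 percent.
retraction_rate_pct = 41 / 2_800_000 * 100
assert retraction_rate_pct < 0.002  # consistent with the text's claim

# Note 15: 28.4 percent of the 290 respondents to the Industrial
# Chemist reader survey said they faked a result often or occasionally.
self_reported_fakers = round(0.284 * 290)  # roughly 82 respondents

print(f"retraction rate: {retraction_rate_pct:.5f}% of articles")
print(f"self-reported fakers: about {self_reported_fakers} of 290")
```

The contrast is the chapter's point in miniature: measured against the literature as a whole, confirmed retractions are vanishingly rare, while self-report surveys with small, self-selected samples yield far higher apparent rates.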