PROTECTING PARTICIPANTS AND FACILITATING SOCIAL AND BEHAVIORAL SCIENCES RESEARCH

2
Basic Concepts

IN THIS CHAPTER we introduce two overarching themes that are critical for our findings and recommendations. First is the need for continued vigilance by all those involved in the U.S. human research participant protection system (researchers, institutional review boards (IRBs), research institutions, funding agencies, and the Office for Human Research Protections (OHRP)) to maintain the principles for participant protection that were articulated in the Belmont Report produced by the National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research (1979). Second is the need to maintain that vigilance in a way that is commensurate with the risk of each research protocol. Following a summary of the Belmont Report principles and the practices that follow from them, the chapter considers more fully issues of harm, benefit, risk, and minimal risk. It then considers the current mismatch between the risks of research projects and the type of review afforded them by many IRBs. Finally, as context, the chapter discusses examples of social, behavioral, and economic sciences (SBES) research and issues for participant protection.

PRINCIPLES AND PRACTICES FOR ETHICAL RESEARCH

General Principles

Although U.S. policies and regulations for protection of human research participants date back to the 1960s (see Chapter 3), the basic ethical principles underlying and informing such regulations were not articulated until 1979, when the national commission issued the Belmont Report. That report identified three major ethical principles for the conduct of research on humans: respect for persons, beneficence, and justice.

Respect for Persons: the obligation to treat individuals as autonomous agents whose decisions on whether or not to participate in research are to be respected and not overridden by a researcher. From this principle follows the requirement for researchers to obtain voluntary informed consent from participants. Special recognition must be given to issues of respect when dealing with people who are immature, incapacitated, or whose autonomy is constrained. Those with limited capacities need to be protected from harm by providing for consent by authorized proxies and by taking extra care to minimize research risks or, in some cases, by precluding their participation in research.

Beneficence: the obligation to secure participants' well-being by protecting them from harm to the extent possible and by maximizing the benefits (to them especially, but also to society) that are expected to result from the research. From this principle follows the requirement for researchers and IRBs to assess risks of harm and probability of benefits in a systematic manner.

Justice: the obligation to show fairness in the selection of research participants with regard to the distribution of the burdens and benefits of the research. From this principle follows the requirement for researchers to select participants in an equitable manner for particular studies and for funding agencies to consider the distribution of burdens and benefits across society (e.g., to ensure that certain groups are not systematically excluded from or included in research).

Applying Principles to the Conduct of Research

Everyone concerned with research on humans should be fully cognizant of the Belmont principles in designing and reviewing protocols and monitoring ongoing research. Resolving conflicts among principles, however, can prove challenging in practice and underscores the necessity of the ethical review processes that are in place for research with humans. In practice, the three principles translate into consideration of three requirements: informed consent, assessment and appropriate balancing of risks and benefits, and fair procedures for selection of research participants.
In addition, although not explicitly articulated in the Belmont Report, the principles support the protection of confidentiality. (See also Box 1-1 in Chapter 1, which lists the criteria that IRBs must consider in reviewing research protocols.)

Informed Consent: providing an individual with comprehensible information regarding known risks of harm, possible benefits, and other details of the proposed study prior to the point at which the person freely chooses to participate. By providing full information to prospective participants, researchers assure that each of them can decide whether he or she is willing to participate given his or her situation and personal tolerance for risk. Consider, for example, a test of an experimental drug for the treatment of a mental illness when the drug is known to have a number of potentially serious side effects. A less invasive example would be a psychological experiment in which a lengthy series of mental tests is administered to elderly persons over the course of a few hours. In the second case some temporary fatigue or distress is likely, which may be regarded as harmful to some people. Regardless of an experimenter's belief in the potential benefits to the participant or the long-term benefits from the research, it would be unethical for the experimenter to subject the person to these kinds of risks without consent. The right to decide about participation on the basis of full information is not limited to studies that pose significant risks of harm. It exists for studies that are as inconsequential as stating color preferences for automobiles in market research, as well as for studies probing the effect of grieving on the emotional health of a surviving spouse. Under carefully considered circumstances, however, it can be appropriate to use less than fully informed consent; for example, keeping information about a particular feature of a study from a prospective participant until the study is completed when such information would likely alter the participant's behavior, the knowledge to be gained is important, and the risk to the participant from omitting the particular information is minimal.
Assessment of Harms, Risks, and Benefits: weighing and appropriately balancing the risks of harm and the potential for benefits from participation in the proposed study. Although there is little disagreement about the desirability of minimizing harm and maximizing benefits from participation in research, determining for a specific research protocol the type and extent of harm, the probability or likelihood of harm, and the benefits likely to be obtained from participation is, at best, inexact. Such assessments are almost always subjective and often involve issues on which reasonable people disagree. Yet such judgments cannot be avoided (see "Harms, Risks, and Benefits" below).

Fair Selection of Research Participants: assuring fair procedures and outcomes in the selection of research participants. Achieving fairness requires consideration of those who are included in research and those who are excluded. If participation is believed to be beneficial to either the participants or the populations represented by them, then excluding some people raises an issue of fairness. For example, early studies of cardiovascular disease rarely included women, leading to knowledge with potential limitations for understanding cardiovascular diseases in women. If participation is believed to carry significant risks of harm, then restricting research to particular population groups is also an issue of fairness, particularly if those groups are subject to coercion (e.g., prisoners who are denied privileges or offered added privileges to participate).

Confidentiality Protection: keeping the participant's identity confidential. Confidentiality is another means of showing respect for a person. A person has the right to expect that, if he or she participates in research under conditions of confidentiality, the researcher will respect and assure that confidentiality. Confidentiality may also address beneficence. In some cases, making research information public could put a participant at risk. For example, if sensitive personal information became known to the person's employer, it could put his or her job or benefits in jeopardy.

HARMS, RISKS, AND BENEFITS

In this section we briefly discuss some of the critical factors surrounding the judgments about harms, risks, and benefits that are necessary to address the ethical principle of beneficence.
Types of Harm

Drawing on the final report of the National Bioethics Advisory Commission (NBAC) (2001:71-72) and adding examples from SBES research, below we discuss six types of harm that can occur to research participants: physical, psychological, social, economic, legal, and dignitary.1

• Physical harm from research can include death, injury, pain, suffering, or discomfort. Examples in biomedical research range from death due to an experimental drug administered in a cancer study to discomfort from having to keep still for a long time during an MRI (magnetic resonance imaging) study. Examples in SBES research range from death or injury due to the failure of an alternative automated method of helping blind people cross at traffic signals to discomfort from being subjected to loud noises or bright lights during a stimulus-response study. Physical harm, including injury and death, can also result from a breach of confidentiality that discloses sensitive information (e.g., that one is participating in a study of gang violence, which could lead to retaliation by gang members).

• Psychological harm from research can include negative self-perception, emotional suffering (e.g., anxiety or shame), or aberrations in thought or behavior (e.g., agreeing to a hateful statement under pressure from the research environment). In both biomedical and SBES research, psychological harm from the research procedure can range from momentary anxiety or embarrassment to long-lasting, intense psychological distress and fear, which could in extreme cases result in suicide. A biomedical example is when a participant in a genetics study learns that he or she is likely to develop a disease for which there is no treatment or cure. An SBES example is when a participant in a study on traumatic events recalls memories that are intensely distressing. Psychological harm, such as distress, anger, or guilt, can also result from disclosure of sensitive or embarrassing information collected in the research.

• Social harm can involve negative effects on relationships or interactions with other people.
Such effects are most likely to result from a breach of confidentiality, in which a participant's answers become known to others. Examples of social harm include discriminatory behavior resulting in loss of insurance or employment from knowledge of study results (e.g., that one has or is likely to contract a specific disease). Stigmatization is another social harm that can result from knowledge about a person's participation in a study or particular findings.

• Economic harm usually involves financial loss, which can result from study participation (e.g., the need to pay for transportation or child care in order to participate), from disclosure of study findings or participation (e.g., loss of health insurance or employment), or as a side effect of other harms (e.g., having to pay court costs in a lawsuit that results from a breach of confidentiality).

• Legal harm can include arrest, conviction, incarceration, and civil lawsuits. Such harm can result, for example, from a breach of confidentiality in studies of possession or use of illegal drugs, sexual abuse, or shoplifting behavior, or in situations in which state law requires that certain types of researchers report particular activities, such as child abuse.

• Dignitary harm can result when individuals are treated as means to an end and not as people deserving respect for their own values and preferences. Such harm can happen in studies that do not appropriately obtain informed consent.

Research projects can pose risks of more than one type of harm (e.g., stigmatization, psychological stress, and financial loss from disclosure of confidential information). Research projects can also result in harm to people not directly involved in the research (see National Bioethics Advisory Commission, 2001:72). For example, family members could be stigmatized or otherwise harmed by a breach of confidentiality that disclosed information about a family from an individual's participation in genetic research. Figure 2-1 shows a distribution of the kind(s) of harm that a sample of investigators of biomedical and SBES research projects anticipated could potentially result to participants in their projects, with a slightly different categorization than we use.

1 Recent guidance from the National Science Foundation (2002:17-18) is similar but omits dignitary harm and includes "moral harm when participation in research strengthens the subjects' inclinations to behave unethically."
Differences in methods used in SBES research relate to the appropriate focus of IRBs in determining the kinds of potential harm to human participants. For research involving interventions, such as a laboratory experiment in which the participant is subjected to a stimulus, a primary focus must be on the potential harm to the participant from the intervention itself. The potential harm from a breach of confidentiality is also of concern. For research in which the participant is answering questions from a researcher, the primary focus is on the harm that could result from a breach of confidentiality. The psychological harm from asking sensitive questions is also of concern and is affected by whether the researcher assures the participant that any such question can be skipped. For research that involves no contact between the researcher (or research team) and the participant, the primary concern is the potential harm from a breach of confidentiality.

[Figure 2-1 here: a bar chart of the percentage of protocols (0 to 60 percent) anticipating each type of possible harm (psychological, social, economic, educational, medical, legal, other), shown separately for behavioral/social and clinical/biomedical protocols.]

Figure 2-1 Types of Possible Harm Anticipated by Investigators for Protocols, by Type of Research
NOTE: Classification by investigators (n = 632): behavioral/social research includes social science, behavioral science, educational research, and health services research; clinical/biomedical research includes clinical research, biomedical science, and epidemiology.
SOURCE: 1995 survey of IRBs in Bell, Whiton, and Connelly (1998:Figure 11a).

Procedures to encourage participation also raise the potential for harm. For experiments, one problem may arise when volunteers become so motivated by direct incentives to participate, such as the possibility that they or a close relative or friend will benefit from an experimental treatment, that they fail to take adequate account of the risks of participation. Another problem can occur if volunteers are so coerced (overtly or in subtle ways) that their right to voluntarily participate is not respected (e.g., students who perceive that participating in an experiment is necessary to remain on good terms with the instructor). Yet another problem can arise for surveys for which achieving high response rates among randomly selected (presumably disinterested) people is a key issue for the quality of the research results: What are the appropriate procedures to ensure participation without harming participants by the recruitment procedure (e.g., by making their identity known to others)?

Estimating Risks of Harm

It is generally not difficult to imagine types of harm that particular research projects may pose. What is often difficult to estimate accurately is the severity of the harm and the likelihood that it will occur; that is, to estimate risk. It is particularly difficult to estimate risk for many types of nonphysical harm given the absence of a good base of evidence. As the National Bioethics Advisory Commission (2001:72) notes:

Determinations concerning the probability of physical harms are often easier to make than those involving the probability of nonphysical harms. For example, the magnitude and probability of harms associated with a blood draw are well known and can be objectively quantified. This is generally not the case for psychological, social, economic, and legal harms. . . . IRBs, therefore, can err in either direction [italics added], by assuming a higher probability and recommending unnecessary protections or preventing research from being conducted or by assuming a lower probability and allowing research to occur without all the appropriate protections. . . . [Also,] although a good deal of information has been gathered about some nonphysical harms (for example, the risks from disclosures associated with transmitting or storing certain types of information), the possibility of such harms is not widely appreciated.

Assessments of the extent to which IRBs overestimate (or underestimate) risks of different types of harm are limited (see "Role of IRBs" below).
Moreover, even if IRBs and researchers agree on the risks of a particular research study, it may still be a matter of judgment as to whether the study meets the Common Rule definition of posing no more than "minimal risk" to participants (see "Minimal Risk" below).

Benefits

Benefits can be as difficult to identify and quantify as the risks of harm. Balancing risks of harm against likely benefits, particularly when the benefits are indirect, is also far from easy. For experimental biomedical research, benefits are often thought of as improved medical treatments for illnesses or disabilities. Yet a major issue for clinical trials of experimental drugs or devices is that participants may confuse research with medical care and expect an immediate benefit to themselves when such benefit may not be likely even if the participant receives the experimental treatment and not a placebo.

For most, if not all, SBES research, there is usually little direct benefit to participants in the sense that the results of the research will be of immediate help to them, but SBES and biomedical research can provide two other kinds of benefits. The first type of benefit is when knowledge about humans and human societies helps decision making in the public and private sectors by individuals, households, businesses, organizations, and governments. For example, from psychological research much has been learned about the human brain and the kinds of stimuli that are essential to the development of cognitive, social, and emotional skills. This knowledge has been used by parents, educators, and others to help children grow. From economic decision-making research, knowledge has been gained about how people respond to financial incentives. This knowledge has been used to craft policies to encourage saving. From survey research have come indicators of consumer spending and confidence in the economy that are important forecasters of economic growth or recession. A second type of benefit in much SBES and biomedical research comes from the study procedure.
This type includes such benefits as the opportunity for education and gaining access to information (e.g., information about nutrient contents of foods in a study of food-buying patterns of low-income families or resources for child-rearing advice in a study of mother-child interactions) and the opportunity to earn the esteem of other participants and the research team. These kinds of benefits can be meaningful to participants and help build positive long-term relationships with a research program (see Sieber, 1992:Ch. 9).

MINIMAL RISK

Driven primarily by the nature of the IRB process, a normative "minimal-risk" construct has evolved. It plays a central role in the sequential decisions by IRBs regarding the type of review for each protocol. If the protocol involves research with human participants (see Chapter 6), the first decision is whether the IRB will exempt the protocol from review. If the first decision is to review the protocol, the next decision is whether the IRB will conduct an expedited review or a full committee review. The latter is required when the protocol is not eligible for exemption and the IRB determines that it involves more than minimal risk. Having determined the type of review, the IRB then must conduct that review to evaluate the research practices and procedures of the protocol as they relate to the ethical treatment of human participants, including judgments about the key practices discussed earlier (informed consent, balancing of risks and benefits, selection, and confidentiality), considering both the vulnerability of the population of interest and who is being invited to participate in the study. IRBs must impose stringent requirements for informed consent when the IRB judges a protocol to be more than minimal risk.

The Common Rule (45 CFR 46.102(i)) defines "minimal risk" to mean that "the probability and magnitude of harm or discomfort anticipated in the research are not greater in and of themselves than those ordinarily encountered in daily life or during the performance of routine physical or psychological examinations or tests." Beyond that definition, little concrete guidance is available to IRBs for determining minimal risk. Moreover, the definition itself is ambiguous in several respects. For example, a "routine" psychological test may be of more than minimal risk when it is performed on severely depressed people. Furthermore, different populations experience different risks in daily life; for example, the risks that combat soldiers willingly accept as part of training are much greater than the risks that white-collar workers would accept as part of their jobs. Also, some populations (e.g., poor children in bad neighborhoods) experience high levels of risk in their daily lives through no choice of their own. Not surprisingly, views differ on what constitutes minimal risk.
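The sequential decision process described above can be sketched as a small decision procedure. This is an illustrative sketch only: the function name, parameters, and return labels are our own shorthand for the regulatory categories, not regulatory language, and a real determination rests on IRB judgment, not boolean flags.

```python
def review_type(is_human_subjects_research: bool,
                qualifies_for_exemption: bool,
                more_than_minimal_risk: bool) -> str:
    """Illustrative sketch of an IRB's sequential choice of review type."""
    if not is_human_subjects_research:
        # Not research with human participants: outside IRB review entirely.
        return "no IRB review required"
    if qualifies_for_exemption:
        # First decision: exempt the protocol from review.
        return "exempt"
    if more_than_minimal_risk:
        # Full committee review is required when the protocol is not
        # eligible for exemption and involves more than minimal risk.
        return "full committee review"
    # Non-exempt protocols judged to be minimal risk may be expedited.
    return "expedited review"

# Example: a non-exempt survey protocol judged to be minimal risk.
print(review_type(True, False, False))   # expedited review
```

The ordering of the checks mirrors the text: the exemption decision comes first, and the minimal-risk determination matters only for protocols that reach the second decision.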
The National Human Research Protections Advisory Committee Social and Behavioral Science Working Group recently attempted to define minimal risk as meaning "that the worst harm that could occur in a study should not be very serious, even if many subjects experience it, and, if the harm is serious, then the probability of any given subject experiencing it should be quite low."2 This formulation suggests not only that projects posing no or minor harm to participants and having a low probability that harm will occur are minimal risk, but also that projects posing no or minor harm to participants and having a high probability that harm will occur are minimal risk. Recent guidance from the National Science Foundation (2002:9) agrees, noting, in particular, that a high-probability harm can be minimal risk provided that the magnitude of the harm is very low. An example is an innocuous survey that annoys the respondent by taking longer than he or she would like. Even if most or all respondents are annoyed, an innocuous survey is still minimal risk because the harm to any one respondent is minor and fleeting, and people experience similar transitory annoyances every day.

In addition, the working group formulation suggests that projects posing serious harms to participants can be minimal risk if the probability of such harm occurring to any given participant is extremely low. Barnbaum (2002), however, argues that such projects should not be treated as minimal risk because serious harm could occur for one or more participants. For example, a police officer who participated in a study of police officers' views on police corruption and violence could lose his or her job if confidentiality were breached and his or her participation disclosed. We agree that the example cited by Barnbaum should not be treated as minimal risk.

However, just because a serious harm can be imagined does not mean that a project must be treated as more than minimal risk. In a survey of the general population, it is almost always possible to imagine that some respondent somewhere could have a negative reaction to being questioned that could, theoretically, result in a serious harm, such as a relapse for a person suffering from depression. However, such relapses may occur for many reasons in the course of daily life. If adequate measures are taken to inform prospective respondents of the nature of the survey and their right not to answer some or all questions, then the mere possibility that a random respondent might have an adverse reaction should not be sufficient reason to take the project out of the minimal-risk category.

2 This is a draft statement; see http://www.asanet.org/public/humanresearch/riskharm02.html [4/10/03].
For that to occur, there should be evidence that particular questions have had significant adverse effects, or there should be a direct link of the possible harm to the type of respondent, as in the case of the police officer example.3

We further believe that, when determining the level of risk, it is important to consider not only the possible intensity of the harm, but also its likely duration. For example, the occurrence of psychological harm in a research project could result in one of three situations: (1) a minimal and fleeting annoyance or other emotion; (2) a sharp but short-lived feeling of anxiety, embarrassment, anger, or other emotion; or (3) an intense and long-lasting feeling of anxiety, anger, guilt, or other strong emotion. Of these three situations, we argue that the second as well as the first is most often minimal risk. Only the third situation seems a situation of greater than minimal risk.

Another issue in connection with minimal risk is the standard of comparison when evaluating the risks of the research against the risks of daily life: Whose daily life is to be the comparison? Federal regulations use a high standard for research on prisoners, namely, that minimal risk is that of nonincarcerated healthy individuals (45 CFR 46, subpart C). The Office for Protection from Research Risks of the National Institutes of Health (NIH) endorsed that same high standard for research with the general population in 1993. However, NBAC argued that such a high standard for the Common Rule (45 CFR 46, subpart A) goes against the history of human participant protection regulation. For example, the preamble to the 1981 version of 45 CFR 46 stated that "the risks of harm ordinarily encountered in daily life means those risks encountered in the daily lives of the subjects of the research" (46 Federal Register 8366; see also Appendix A).

However, NBAC does support an interpretation in which the standard for minimal risk is the general population. Such a standard, while not as restrictive as one using healthy individuals, is more restrictive than a relative standard, in which risks are defined relative to the particular research population. For example, a relative standard might say that bone marrow aspiration is minimal risk for people with acute leukemia, but a general population standard would classify such a procedure as more than minimal risk (National Bioethics Advisory Commission, 2001:83).4 We are not prepared to reach a conclusion about the appropriate population standard for minimal risk. We believe that the issue merits wide debate that will, hopefully, lead to useful guidance for IRBs and researchers.

3 In Chapter 6 we discuss the need for SBES researchers to document harm to participants as a means to build an evidence base; see also Chapter 7 for a discussion of the desirability of systematic research on risks and harm of different kinds of research.
Such debate should involve not only the small circle of ethicists who have considered the matter, but also the broader community of IRB members, researchers, and representatives of participants.

We argue in subsequent chapters that much more concrete guidance is needed for IRBs and researchers on the kinds of research protocols that qualify as minimal risk. We also acknowledge that there will always be a role for judgment on the part of IRB members in appropriately applying the Common Rule regulations and guidance regarding minimal risk to individual research populations and settings.

4 It is not clear whether a "general population" standard would refer only to the U.S. population or how an evaluation of minimal risk should be applied to research that involves participants from other countries.
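The reasoning in this section, combining the working group's magnitude-and-probability formulation with the duration criterion we add, can be summarized as a rough rule of thumb. The sketch below is illustrative only: the parameter names are our own shorthand, the inputs are themselves matters of judgment, and the sketch deliberately omits the unresolved question of which population's daily life sets the comparison standard.

```python
def is_minimal_risk(worst_harm_serious: bool,
                    probability_high: bool,
                    harm_long_lasting: bool,
                    harm_linked_to_respondent_type: bool) -> bool:
    """Rough sketch of the minimal-risk reasoning in this section."""
    # Situation (3): intense, long-lasting harm is more than minimal risk.
    if harm_long_lasting:
        return False
    # Minor or short-lived harm is minimal risk even if many participants
    # experience it (e.g., an innocuous survey that annoys respondents by
    # running long).
    if not worst_harm_serious:
        return True
    # Serious harm: not minimal risk if it is likely, or if the possible
    # harm is directly linked to the type of respondent (the police
    # officer example).
    if probability_high or harm_linked_to_respondent_type:
        return False
    # A merely imaginable serious harm with an extremely low probability
    # does not by itself take a project out of the minimal-risk category.
    return True

# The overlong but innocuous survey: minimal risk despite wide annoyance.
print(is_minimal_risk(False, True, False, False))   # True
# The police corruption study: serious harm tied to who the respondent is.
print(is_minimal_risk(True, False, False, True))    # False
```

Note that the rule treats magnitude, probability, duration, and the link to respondent type as separate tests, which is why the two examples come out differently even though both involve harms that are possible rather than certain.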
ROLE OF IRBS

Consideration of minimal risk leads to a consideration of the functioning of IRBs because the minimal-risk construct plays such a prominent role in the decision making of individual IRBs as they deal with individual research protocols. When considering the overall decision making represented by the total set of IRB judgments on all protocols, two major criticisms of the current IRB system arise: (1) that IRBs are overloaded, underfunded, and, consequently, hard pressed to fully carry out their responsibilities for protecting human participants in more-than-minimal-risk research; and (2) that IRBs are spending too much time scrutinizing minimal-risk research (perhaps as a reaction to heightened scrutiny of IRB operations by the federal government and the media in light of highly publicized deaths of research participants; see Chapter 3). To the extent that overreview of minimal-risk research is interfering with the ability of IRBs to properly review higher risk research, these two criticisms are two sides of the same coin, namely, the problem of determining the risk in a research protocol and acting appropriately on that determination.

Comparative, reliable data on the operations of IRBs are scarce, an issue that we address in Chapter 6. However, we believe the available information is sufficient to warrant three conclusions: (1) IRBs are indeed overburdened; (2) IRB practices regarding the type of review vary considerably across IRBs; and (3) this variability is much more likely to affect the type and nature of review afforded minimal-risk research compared with research that is of more than minimal risk. These findings imply that the resources of many IRBs are not being used as effectively as they could be and that standards for reviewing research have a sizable idiosyncratic element across IRBs.

The Institute of Medicine (IOM) committee recommends increased resources for IRBs (see Chapter 7).
We agree but add that using these resources simply to devote more time and energy to reviewing pro- tocols may not be sufï¬cient. Such resources should also be invested in aiding the development and application of consistent guidelines for types of review that are commensurate with risk. Having such guide- lines is likely to reduce workloads that result from using inappropriate procedures for review of minimal-risk research. To help develop guid- ance for risk determination and the application of types of review, it is incumbent upon researchers to develop a knowledge base about the risks and harms that are likely for different kinds of research and about appropriate informed consent procedures and related topics. Such knowledge can inform OHRP guidance, assist IRB decision making,
36 PROTECTING PARTICIPANTS AND FACILITATING SOCIAL AND BEHAVIORAL SCIENCES RESEARCH

and contribute to improved understanding among researchers about ethically responsible research designs.

Heavy Workload

There has been significant growth in IRB workloads over time. It appears that at least half of IRBs at academic research institutions have heavy workloads, with the number of reviews per year (including initial reviews of new projects, continuing reviews, and reviews of proposed changes to previously approved projects) totaling more than the number of calendar days. More specifically, the available evidence shows the following:

• In 1975, IRBs averaged 43 initial reviews per year; by 1983, the average number of initial reviews had increased to 133 per year. In 1995 (the latest available data), the average number of initial reviews had increased to 214 per year. In 1995, the average number of all reviews (initial, continuing, and changes) totaled 578 per IRB.5

• The average number of all reviews in 1995 varied from 87 reviews for IRBs in the lowest 10 percent of IRB workloads to 2,144 reviews for IRBs in the highest 10 percent of IRB workloads. IRBs in the top half of the distribution averaged more reviews than days in the year (see Figure 2-2; computed from Bell, Whiton, and Connelly, 1998:7,9).

• Because high-volume IRBs had such heavy workloads, they accounted for a disproportionate share of reviews: those IRBs in the highest decile of IRB workloads accounted for 37 percent of the total estimated number of reviews; those IRBs in the highest 50 percent of IRB workloads accounted for 88 percent of the total estimated number of reviews.

To handle their heavy workloads, high-volume IRBs (those in the highest 10 percent of IRB workloads) function differently than low-volume IRBs (those in the lowest 10 percent of IRB workloads).
Based on average data from the 1995 survey (Bell, Whiton, and Connelly, 1998:Ch.IV):

• High-volume IRBs met more often than did low-volume IRBs (21 meetings and 5 meetings per year, respectively).

• Chairs of high-volume IRBs spent more time on all IRB activities, including meetings, preparation, and other activities, than did chairs of low-volume IRBs (386 hours and 72 hours per year, respectively).

• Members of high-volume IRBs spent more time on all IRB activities than did members of low-volume IRBs (108 hours and 28 hours per year, respectively).

• High-volume IRBs had more members than low-volume IRBs (20 members and 10 members, respectively).

[Figure: bar chart of the average number of reviews (0 to 2,000) by workload-volume decile, from the 1st (lowest) to the 10th (highest).]

Figure 2-2 Average Reviews by IRBs in Each Decile of Workload Volume, 1995

NOTE: Reviews include initial reviews, continuing or annual reviews, and amendments to approved protocols; data provided by IRB chairs (n = 394) for 1995 or the most recently completed year of record. Workload volume deciles computed by Bell, Whiton, and Connelly on the basis of initial reviews only.

SOURCE: Computed from Bell, Whiton, and Connelly (1998:7,9).

5 The 1975 data are from a study by the University of Michigan for the National Commission for the Protection of Human Subjects in Biomedical and Behavioral Research (Cooke, Tannenbaum, and Gray, 1978; Gray, Cooke, and Tannenbaum, 1978; hereafter the 1975 survey); the 1983 data are from Grundner (1983); the 1995 data are from a study commissioned by the NIH Office for Extramural Research (Bell, Whiton, and Connelly, 1998; hereafter the 1995 survey). The 1975 and 1995 surveys represent IRBs at research institutions with multiple project assurances or the equivalent (see Appendix D); very little information is available about IRBs in other settings, such as community-based hospitals (see "IRBs with Very Low Volume" below).
• All high-volume IRBs had administrative staff support (122 hours per month); only 15 percent of low-volume IRBs had such support (12 hours per month over all low-volume IRBs).

• High-volume IRBs were more likely to use consultants to review individual proposals than were low-volume IRBs (a median of 33 times per 100 initial reviews and 1.4 times per 100 initial reviews, respectively).

Yet despite these coping strategies, high-volume IRBs spent less time than low-volume IRBs on review of individual projects. The average times from the 1995 survey show striking differences:

• High-volume IRBs spent less full board meeting time (3 minutes) on initial proposal reviews than did low-volume IRBs (21 minutes). (Full board meeting time covered more-than-minimal-risk projects and such minimal-risk projects as the IRB decided not to exempt or review with an expedited procedure.)

• High-volume IRBs spent less total time on review of all initial proposals (7 hours) than did low-volume IRBs (15 hours), including time spent on meetings, preparation, recording and review of minutes, etc., by IRB chairs, members, administrators, and staff.

Over time, the trend is toward substantially less time spent by IRBs on initial proposal review:

• Full board meeting time averaged only 8 minutes per initial proposal review in 1995, compared with 1 hour in 1975.

• Total time spent per review of all initial proposals averaged 7 hours for high-volume IRBs in 1995 and 15 hours for low-volume IRBs in 1995, compared with 38 hours for all IRBs in 1975 (Bell, Whiton, and Connelly, 1998:48,51; Gray, Cooke, and Tannenbaum, 1978:1095).6

Clearly, IRBs are stretched thin. Whether that situation adversely affects human participants is not an easy matter to assess. Reports of harm to research participants are not compiled and made available in ways that would help answer the question, and some harm may not be reported.
The media have been assiduous in publicizing unexpected deaths of research participants (see Chapter 3), but the numbers, severity, and duration of injuries or other types of harm have not been documented in any systematic way (see Institute of Medicine, 2002:Ch.5).

6 There was no provision for exemption or expedited review in 1975.
Even with good data on harm, it would not necessarily be clear to what extent deficiencies of IRB review played a causal role, as opposed to other factors, such as failure by researchers to carry out the research as proposed or simple misfortune. Yet the data on the limited amount of time spent per initial review among high-volume IRBs, and the very limited time such IRBs spend on full board review, do suggest that the IRBs that handle most of the research workload may not be well positioned to identify potential risks of harm to human participants. This may be true even if high-volume IRBs operate more efficiently than low-volume IRBs. Perhaps supporting such a conclusion is a finding in the 1995 Bell survey that most of the changes IRBs required investigators to make to their proposals in order to gain approval dealt with the form used to document consent (Bell, Whiton, and Connelly, 1998:Figure 41). Whereas the consent form had to be modified for 78 percent of proposals, investigators were much less often asked to change other aspects of their studies: consent procedure, 21 percent; privacy or confidentiality protection, 14 percent; participant recruitment, 11 percent; scientific design, 6 percent; all other areas combined, 27 percent.7

Inappropriate Level of Review

In the perception of the SBES research community, IRBs often overestimate the risks of SBES protocols, resulting in the application of Common Rule provisions that were developed for more-than-minimal-risk research to minimal-risk research. Moreover, because some but not all IRBs appear to use more stringent review standards than the regulations or the risk level of many protocols require, researchers face varying standards for review when they are involved in multi-institution projects or move from one research institution to another.
In spite of anecdotal concerns raised about IRB behavior, there is little hard evidence about the extent to which IRBs may be overestimating the risks of the protocols they review, particularly with regard to the minimal-risk versus more-than-minimal-risk distinction in the Common Rule. We found only one study of human research participant protection that included an independent assessment of risks. In that study, members and staff of the Advisory Committee on Human Radiation Experiments (1996:443) reviewed 125 biomedical research projects funded in the early 1990s (mainly radiation studies).

7 The 1975 Michigan survey found that the attention that IRBs focused on consent forms was not productive in that it did not result in more complete or readable forms (see Chapter 4).

They
classified 60 percent as minimal risk or perhaps minimal risk and the remaining 40 percent as clearly more than minimal risk. The study did not report on the extent to which its risk assessments agreed with the assessments of either the IRBs or the researchers involved. While reports by investigators are consistent with a conclusion that significant percentages of research are minimal risk, consistency is not strong support.8 Investigators' reports may be biased toward underestimating types of harms and levels of risk, and IRBs may not agree with an investigator's viewpoint. Given the obvious advantage of accurate risk assessment for human participant protection, the effective functioning of IRBs, and the credibility of the oversight process in the eyes of participants and researchers, further research on risk determination is certainly warranted.

There is more evidence about the extent to which IRBs are not using the flexibility in the Common Rule that permits less than full board review for research that the IRB itself agrees is minimal risk. This flexibility dates back to 1981, when the Common Rule (then a regulation only of the U.S. Department of Health and Human Services) specified several categories of research that IRBs could exempt from review and additional categories of minimal-risk research that IRBs could review by an expedited procedure, in which the chair or a subcommittee of board members would conduct the review instead of the full board.
The explicit intent of these provisions, which were implemented after a major battle involving the SBES research community (see Chapter 3), was to exempt a large proportion of minimal-risk research (much of it SBES research) and to allow IRBs to use an expedited procedure for review of many other projects that were deemed to be minimal risk.9

8 The 1995 Bell survey reported that three-fourths of projects were judged by investigators to have less than a 10 percent likelihood of a "low" degree of harm (Bell, Whiton, and Connelly, 1998:20). Similarly, the 1975 Michigan survey reported that one-half of projects were judged by investigators to be without risk or to have a "very low" probability of "minor" medical or psychological complications. Risk assessments were obtained for over 2,000 projects (Gray, Cooke, and Tannenbaum, 1978:1096-1097).

9 Exemption does not require an explicit determination of minimal risk, but the categories are designed to exempt SBES (and biomedical) research that is minimal risk (e.g., because no identifying information is obtained), as well as SBES research that involves public officials or programs.

Exemption

A 1983 study found reluctance among IRBs to avail themselves of the new Common Rule exemption provisions: almost all IRBs at that time had decided not to exempt from review research projects that fell under one of the four eligible categories of educational, social, and behavioral research (Grundner, 1983). Even by 1995, when six categories of research could be exempted, 48-63 percent of IRB chairs reported that their standard practice was not to exempt research that fell into one of the categories (e.g., 60 percent of chairs reported not exempting research using tests, surveys, or observations as standard practice).10 Furthermore, 35 percent of IRBs reported that they never exempted any research from review (Bell, Whiton, and Connelly, 1998:9,29). We do not know how these percentages may have changed across all IRBs since 1995. Our review of the IRB websites of 47 major research universities in late 2002 found that relatively few IRBs at these institutions (9 percent) did not offer an option to exempt research from review. We also do not know how many IRBs operated at the other extreme, that is, always granting an exemption requested by an investigator.

Expedited Review

As of 1995, many IRBs did not use the option to expedite the review of minimal-risk projects that fell under one of the specified categories but, instead, gave many such projects full board review. Thus, for three SBES-related categories (existing data, voice recordings, and individual or group behavior), 42 percent, 50 percent, and 51 percent of IRB chairs, respectively, reported that their standard practice was full board review.11 Moreover, 15 percent of IRBs reported that they never expedited any initial reviews (Bell, Whiton, and Connelly, 1998:10,30). Our review of the IRB websites of 47 major research universities in late 2002 produced a similar finding: 13 percent of IRBs at these institutions did not offer an option for expedited review. At the other extreme, 2 percent of IRBs in the 1995 survey conducted no full board reviews; that is, all new protocols were reviewed by an expedited procedure or were exempted from review.
Variability

At present, there appears to be wide variability in the extent to which IRBs avail themselves of the option for either exemption or expedited review. Moreover, such variability is not linked to IRB workload: high-volume IRBs are not significantly more likely than low-volume IRBs to use the provisions for exemption and expedited review.

10 These percentages are averaged across high-volume and low-volume IRBs and are very similar for the two groups.

11 The classification of research categories eligible for expedited review differed somewhat at the time of the 1995 survey from the current list (see Box A-5 in Appendix A). These percentages are averaged across high-volume and low-volume IRBs and are very similar for the two groups.
Thus, in 1995 the distribution of exempt protocols, initial expedited reviews, and full board reviews as percentages of all initial reviews (15 percent, 26 percent, and 59 percent, respectively, over all IRBs) hardly varied across workload-volume deciles (Bell, Whiton, and Connelly, 1998:9-10). However, IRBs within each workload-volume decile exhibited extreme variability: every decile included one or more IRBs for which more than 95 percent, or less than 5 percent, of their workload comprised full board reviews, with the percentages of full board reviews for the remaining IRBs spread out fairly evenly between these two extremes.

Burden

From these data, it is clear that many IRBs are not exempting or expediting as much research as they could under the Common Rule provisions. Because of substantial differences in estimated time spent by type of review, these IRBs are therefore adding to their review burden and that of investigators in preparing for review. Investigators in the 1995 Bell survey needed only 7 hours, on average, to prepare for and complete an initial expedited review, half the time (14 hours) spent on preparing for and completing a full board initial review. IRB meeting time averaged 2 minutes per expedited initial review, compared with 8 minutes per full board initial review.

Expedited reviews are also completed in less elapsed time than are full board reviews: 18 percent of expedited reviews in the 1995 Bell survey were completed in 1 week or less, compared with only 5 percent of full board reviews; 84 percent of expedited reviews were completed in 1 month or less, compared with only 49 percent of full board reviews. (By 3 months' time, over 90 percent of all reviews had been completed; Bell, Whiton, and Connelly, 1998:Figure 33.)
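Back-of-the-envelope arithmetic with the survey's averages illustrates the scale of the time at stake; the annual caseload below is hypothetical, but the per-review figures are those reported above:

```python
# Average times per initial review reported in the 1995 Bell survey.
expedited_prep_hours = 7    # investigator preparation and completion, expedited
full_board_prep_hours = 14  # investigator preparation and completion, full board
expedited_meeting_min = 2   # IRB meeting time, expedited
full_board_meeting_min = 8  # IRB meeting time, full board

minimal_risk_reviews = 100  # hypothetical: minimal-risk initial reviews per year

# Time freed if these reviews were expedited rather than sent to the full board.
investigator_hours_saved = (full_board_prep_hours - expedited_prep_hours) * minimal_risk_reviews
meeting_hours_saved = (full_board_meeting_min - expedited_meeting_min) * minimal_risk_reviews / 60

print(investigator_hours_saved)       # 700
print(round(meeting_hours_saved, 1))  # 10.0
```

On these figures, routing 100 minimal-risk protocols to expedited review would free roughly 700 investigator-hours and 10 hours of full board meeting time per year.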
The savings in elapsed time from expedited review facilitate more timely initiation of research, which can be important for many reasons, including the ability to recruit participants, reduce recall errors in interviews, and meet contractual deadlines. Such savings also conserve the scarce time of IRB members.

A cautionary note is that IRBs that rarely or never conduct full board reviews (apparently a small group, from the available data) may create too casual an atmosphere regarding human research participant protection and undermine trust in the protection system. Similarly, researchers who always seek exemption or expedited review, even when there is reasonable doubt that the research is less than minimal risk, may undermine the protection system.
IRBs with Very Low Volume

The results presented above on IRB burden, and other findings in our report, are based primarily on the experience of IRBs housed at research institutions that perform significant amounts of research. Almost no information is available on isolated, very-low-volume IRBs, which likely represent a substantial proportion of IRBs but a relatively low proportion of research protocol reviews. One study examined 12 such IRBs associated with community-based hospitals (Office of Inspector General, 1998a). These IRBs conducted a median of 44 initial reviews per year, with a range of 5 to 124 reviews. The study found that the 12 IRBs experienced workload pressures because of lack of resources. Their members lacked experience with human research participant protection issues and tended to raise fewer questions in review and to require fewer modifications to research protocols than IRBs at academic research centers (Office of Inspector General, 1998a, 1998b). These IRBs may be more at risk of insufficient review than of excessive review.

SBES RESEARCH

To this point, we have discussed issues of harm, risk, and benefit, and IRB operations with respect to minimal-risk research, from the perspective of the SBES research community without clarifying what we mean by SBES research. To provide context, we conclude this chapter by briefly considering SBES research fields, questions of interest, and commonly used methods. We offer examples of SBES research. Five characteristics of SBES research are important to keep in mind:

1. SBES research is extremely diverse, including classical laboratory experiments, ethnographic research, oral histories, large-scale field experiments, small-scale surveys, large-scale surveys, secondary analysis, other types of methods, and combinations of methods.
This diversity can pose challenges for overworked IRBs, particularly in the absence of detailed guidance about how to handle particular situations.12 Clearly, a "one size fits all" approach is not appropriate, whether the issue is protecting confidentiality, evaluating harms and risks, minimizing risk of harm, or ensuring informed consent.

12 Biomedical research also covers a wide range of topics and methods that pose challenges for the adequacy of IRB review.

2. SBES research often does not lead to direct benefits to the participants themselves, such as are possible from medical research, but
much SBES research is minimal risk, especially when an appropriate standard of assessment is applied (e.g., not exaggerating the harm from transitory psychological effects).13

3. An important source of risk that must be addressed for many SBES research projects is protecting the confidentiality of individual information. For many research studies in this domain, disclosure risk is the only or the primary form of risk.

4. SBES research at times uses deception as a necessary design element in order to obtain valid results. The Common Rule specifies conditions under which deception is appropriate (45 CFR 46.116d), which include that the project is judged to be minimal risk and that the research could not produce valid results without the use of deception.

5. SBES research is often embedded in life events (e.g., an ethnographic study of job-seeking behavior in a community, which is carried out over a period of several months or years). Such research usually necessitates that interview protocols and study procedures be modified as the study proceeds. How to accommodate such changes in ways that do not put participants at risk and do not disrupt or delay the progress of the research is a challenge for IRBs. How to maintain informed consent as the study proceeds is also an issue in such research, as well as in longitudinal surveys that follow individuals or families over many years.

SBES Research Fields and Questions

The social, behavioral, and economic sciences encompass a wide range of academic disciplines.
While there is no agreement on the precise boundaries, SBES under most definitions includes such disparate fields as cultural anthropology, cognitive science, economics, education research, health services research, history (some fields), political science, psychology, sociology, and survey research.14

13 See Chapter 3 for some examples of SBES research conducted several decades ago that were more than minimal risk and violated one or more of the Belmont principles.

14 See, for example, the International Encyclopedia of the Social and Behavioral Sciences (Smelser and Baltes, 2001:Table 2), which lists these fields as SBES research or as "related fields." Other related fields listed include archaeology, demography, geography, law, linguistics, and philosophy.

In terms of questions asked, SBES research is concerned with understanding an ever wider range of attitudes, abilities, behaviors, characteristics, experiences, interactions, moods, perceptions, and statuses of individuals, groups, organizations, and governments. Examples
range from political science studies of the determinants and consequences of voting behavior; to anthropological studies of the roles of men and women in family, religious, and civic life; to social psychological studies of how people stereotype others and of the kinds of behaviors that are linked to positive or negative stereotypes.

Individual research projects, of course, are not necessarily, nor often, limited to a single discipline or research question. A growing number of research projects are interdisciplinary in nature. Moreover, some SBES research interests increasingly overlap with biomedical research, which has traditionally focused on human physiology, human diseases and their treatment, and human health.15 For example, a contemporary study of effective regimens for controlling blood sugar would likely involve a multidisciplinary team of biomedical and SBES researchers to examine psychological, social, and cultural factors that might mediate the strictly biochemical effects of different diets or drug dosages. Conversely, for studies of social behaviors, such as the decision to retire, there is growing interest in augmenting traditional social and economic measures with biological health measures for use in explanatory models. Yet another example of merging interests is the collaboration of behavioral psychologists and neurologists using advanced brain scan techniques to understand the mechanisms by which various stimuli evoke feelings, perceptions, and actions.

SBES Research Methods

SBES research uses a wide variety of research methods. Traditionally, some methods have been used more frequently by some disciplines than by others; for example, laboratory experiments in psychology, and observations and unstructured interviews in anthropology. However, most disciplines today encompass multiple methods, and individual research projects often use two or more types of measurement.
Biomedical research also uses a wide range of methods, including surveys and other measurement types that are commonly associated with SBES, but the two domains differ in the frequency of use of particular methods. Both the overlap and the differences are evident from the 1995 survey, in which a sample of biomedical and SBES research protocols were categorized by 16 research methods (more than one could be reported per protocol). Thus, while both domains used self-administered questionnaires, 59 percent of SBES protocols did so, compared with only 21 percent of biomedical protocols. Conversely, 25 percent of biomedical protocols used double-blind experiments, compared with only 3 percent of SBES protocols (Bell, Whiton, and Connelly, 1998:16); see Figure 2-3.

15 The IOM committee (Institute of Medicine, 2002) implicitly uses this definition of biomedical research in its report, although it is nowhere stated.

[Figure: paired bar chart comparing the percentages of behavioral/social and clinical/biomedical protocols (0 to 50 percent) reporting use of each of 16 research methods, including self-administered questionnaires, interviews, observation, use of existing data, invasive and noninvasive testing, manipulation, treatment, placebo, single-blind, double-blind, and cross-over designs.]

Figure 2-3 SBES and Biomedical Protocols by Type of Method Used

NOTES: Classification by investigators (n = 632): behavioral/social research includes social science, behavioral science, educational research, and health services research; clinical/biomedical research includes clinical research, biomedical science, and epidemiology.
a. Denotes statistically significant difference at the 0.05 level.
b. Examples of invasive testing include blood sample, biopsy, or spinal tap; examples of noninvasive testing include electrocardiogram or psychological testing.
c. "Manipulation" means a technique designed to elicit or provoke a response.
d. In a single-blind design, the subject does not know which treatment is being used but the investigator does. In a double-blind design, neither the subject nor the investigator knows which treatment is being used.
e. In a cross-over design, treatments are switched between groups during the study.

SOURCE: Bell, Whiton, and Connelly (1998:Figure 8).
Below we briefly describe and provide illustrative examples of some commonly used methods in SBES research: laboratory experiments, field experiments, observations of natural behaviors, unstructured interviews with participants, structured interviews in sample surveys, and analyses of existing data on individuals. For each example, we
provide an assessment of the risk of harm and identify other issues of concern for human research participant protection.

Laboratory Experiments

In laboratory experiments an investigator manipulates social or physical conditions in some fashion, and human participants respond to these manipulations (also called treatments or interventions), to which participants are assigned on a random basis. The key purpose of an experiment is to draw inferences about the effect of the intervention on some dependent variable. Participants are not randomly selected; they are recruited in various ways (e.g., newspaper advertisements, students in classrooms) that tend to attract those interested in the purpose of the experiment. See Box 2-1 for two examples: one a typical economic decision-making experiment and the other a social psychology experiment with deception, both of which we judge to be minimal-risk protocols.

Field Experiments

In field experiments an investigator manipulates social or physical conditions in a "real-world" setting to determine the effects on some behavior(s) of human participants. Field experiments are more difficult to carry out than laboratory experiments for at least two reasons: the environment is more difficult to control in the field than in the laboratory, and field experiments are conducted on a larger scale. They may have hundreds or thousands of participants in one or more locations and record measurements at intervals over months or years. The first difficulty makes it desirable to use a control group in addition to one or more treatment groups, with participants assigned randomly to a group.16 The second difficulty usually necessitates a large, multisite team of investigators and elaborate project management. See Box 2-2 for a large-scale welfare policy experiment that is clearly more than minimal risk and a minimal-risk employment discrimination experiment involving deception.
Observations of Natural Behaviors

Observational studies range widely in subject matter. They include the videotaping of interactions among shoppers and store clerks; the

16 Alternatively, a less powerful comparison method may be used (a quasi-experiment or natural experiment), such as comparing outcomes for individuals who experienced an environmental change with those for a comparison group considered to be similar to the treatment group except for experiencing the change.
BOX 2-1 Laboratory Experiment Examples

Economic Decision-Making Experiment

The research question is how differing rewards (usually monetary) and rules of behavior affect decision making (e.g., the decision to join a coalition and increase the likelihood of a smaller reward or to stand apart and hope to receive a larger reward that is less certain). A small number of participants (less than, say, 50) are brought together in a laboratory or classroom or on the Internet. They are given precise, detailed instructions on how they are to interact and how they will be rewarded on the basis of their decisions and the decisions of other participants. They are informed that they may leave at any time. Their decisions are recorded, and they are rewarded accordingly (in private, anonymously, and after the experiment). Reward amounts are small, usually $50 or less. No personal identifiers are kept from the experiment, and no other data (or very limited data, e.g., gender) are collected from participants.

Commentary

This type of experiment is minimal risk: it attempts to replicate commonly encountered decisions (e.g., bargaining over merchandise or votes); the rewards offered are not large enough to be an undue incentive to participate or to cause participants more than momentary dismay (or glee) at the outcome; and identifying information is not retained, so the risk that participants' identities could be linked with their decisions is minimal. Such experiments could be exempted from IRB review or reviewed by an expedited procedure. Written consent to participate may not be needed. When students are involved (as is typically the case), care is needed to make sure they understand that deciding not to participate will not affect their grades in the course.

Social Psychology Experiment with Deception

The research question is the extent to which people engage in ethnic stereotyping.
A small number of participants (less than, say, 50) are brought together in a laboratory or classroom. They are told that the purpose of the experiment is to determine how fast people can associate characteristics (e.g., good, bad) with lists of names (which differ in cues about ethnic origin). Their results are recorded, and at the conclusion of the experiment they are told about its true purpose. No personal identifiers are kept from the experiment, nor are other data obtained about participants except their ethnic origin and perhaps their age and gender.

Commentary

This type of experiment is minimal risk: it attempts to replicate common behavior; there are no incentives to participate; the procedure itself is not stressful; identifying information is not retained; and the inadvertent release of participants' identities and results would cause them at most only momentary embarrassment. The deception covers only the purpose of the experiment, and participants are fully debriefed. Such experiments could be reviewed by an expedited procedure; they could not be exempted, given the need to consider the deception involved. Another issue to consider would be fair selection of participants; ideally, a set of experiments would include a range of ethnic groups.
BOX 2-2 Field Experiment Examples

Welfare Policy Experiment

The research question of interest is whether welfare recipients are more likely to find a job lasting at least 6 months if they receive training in specific work skills or coaching in job-related behavior skills (e.g., putting together a resume, interviewing for a job, punctuality). In, say, three cities in which coaching is standard practice, the design randomly assigns 2,000 current welfare recipients each to treatment A (skill training) and treatment B (job behavior coaching). Participants are fully informed about the nature of the experiment and that it will not affect their welfare benefits, and they are promised confidentiality. They are given detailed interviews about their work and welfare history before assignment and are reinterviewed at 3-month intervals over a 15-month period (to allow for training and job search time and to observe whether the job lasts at least 6 months). Data from administrative records (time on welfare, benefit amounts, other income) are also obtained. Microdata files containing data for each participant are prepared, stripped of identifiers, and processed to minimize the risk that individual participants could be re-identified. The files are deposited with a university archive for secondary analysis.

Commentary

This type of experiment is of more than minimal risk, principally because of the extensive amount of data collected, some of which could be sensitive (e.g., if a respondent reports illicit income). The treatment poses little added risk because skill training is (or has been) an expected part of welfare assistance in many jurisdictions. Protecting confidentiality, particularly of the public-use microdata files, is the key concern. Fair selection of participants is also a concern--that is, how the jurisdictions are chosen and whether any are included that offer neither job behavior coaching nor work skill training as standard practice.
If the experiment is evaluating a federal benefits program for a federal program agency, then it is eligible for exemption under the Common Rule (see Box 1-1 in Chapter 1). However, every involved IRB that does not grant an exemption must approve the experiment, or the IRBs may delegate review to one institution that has an approved federalwide assurance (with OHRP's approval of the delegation).

Employment Discrimination Experiment

The research question is whether people with felony records experience discrimination in job hiring and whether the effect varies by race. The investigators sample want ads in the Sunday newspapers for factory jobs in a city. They send a pair of candidates to respond to each ad by filling out applications in person at the employer's site. Pairs are assigned to employers randomly. Pairs vary by race (some pairs are two black men, some are two white men); the members of each pair have similar characteristics (age, education, etc.), except that one member of each pair purports to have a felony conviction. Candidates report whether they are called back with a job offer. Employers are not debriefed because it is illegal in the state to discriminate against job applicants who are convicted felons. Data are reported in aggregate form only, and employer identifiers (name, address) are destroyed.

Commentary

This experiment is minimal risk because there is no way to link call-back responses to employers and, therefore, no way to embarrass them or subject them to the risk of prosecution. Deception is essential to obtain unbiased responses, and the absence of debriefing about the deception is essential to making the study minimal risk.
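The random assignment step that field experiments like these depend on can be sketched in a few lines. This is a minimal illustration, not part of either study design above: the function name, the fixed seed, and the equal-sized arms are assumptions made for the example.

```python
import random

def assign_treatments(participant_ids, arms=("A", "B"), seed=0):
    """Randomly assign participants to treatment arms in equal numbers.

    Shuffling a balanced list of arm labels guarantees that each arm
    receives the same number of participants (up to a remainder when
    the count does not divide evenly among the arms).
    """
    rng = random.Random(seed)  # fixed seed keeps the assignment reproducible for audit
    labels = [arms[i % len(arms)] for i in range(len(participant_ids))]
    rng.shuffle(labels)
    return dict(zip(participant_ids, labels))

# e.g., 4,000 recipients split evenly between skill training (A) and coaching (B)
assignments = assign_treatments(range(4000))
```

Balanced randomization of this kind is one of several workable schemes; independent coin flips per participant would also be a valid randomization but would not guarantee exactly 2,000 per arm.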
visual observation and documentation of conversations within a small group; the audiotaping of conversations of family members; and the visual observation and documentation of neighborhood characteristics. In some observational studies, the researcher's presence and study methods are known to the participants; in others, the researcher attempts to record activities unaffected by knowledge on the part of participants that they are being observed. The purpose of the research is to describe the nature of the activities of those observed; secondarily, the research may argue that the findings would be replicated in other settings. See Box 2-3 for two examples--a minimal-risk study of street-crossing behavior, with no interaction of the researcher and participants, and a more-than-minimal-risk study of the behavior of patrons of a bar, with interaction between the researcher and participants.

Unstructured or Semistructured Interviews with Participants

Unstructured or partially structured interviews are used in a wide range of social and behavioral investigations to elicit an individual's interpretation of events, beliefs, and behavior. Oral histories of public figures or individuals involved in a public event might include unstructured or semistructured interviews. Ethnographers may use informal unstructured interviews during the initial phase of their work.[17] Informal unstructured interviews also may be used throughout a study to build rapport or explore newly emerging topics of interest.

Unstructured interviews have a topical focus but are marked by minimal control over the informants' responses. An investigator working in the area of HIV prevention, for example, may use unstructured interviews with injection drug users to explore beliefs and practices associated with accessing treatment programs.
While the general topic has been defined, there is no attempt to follow a predetermined line of inquiry using an interview guide of questions to be asked. In an unstructured interview, the conversation follows the direction taken by the interviewer based on the responses of the interviewee.

In contrast, semistructured interviews involve the use of an interview guide to assist the investigator in systematically studying a partic-

[17] Hammersley and Atkinson (1995:248) define ethnographic research to include the following features: "a strong emphasis on exploring the nature of particular social phenomena, rather than setting out to test hypotheses about them; a tendency to work primarily with 'unstructured' data, that is data that have not been coded at the point of data collection in terms of a closed set of analytic categories; investigation of a small number of cases, perhaps just one case, in detail; analysis of data that involves explicit interpretation of the meanings and functions of human actions, the product of which mainly takes the form of verbal descriptions and explanations, with quantification and statistical analysis playing a subordinate role at most."
BOX 2-3 Natural Behavior Observation Examples

Street-Crossing Observation, No Interaction

The purpose of the study is to observe street-crossing behavior of pedestrians (e.g., whether they obey walk signs) in relation to such factors as time of day, number of people crossing, and number of cars in the street. The investigator(s) stands at the crossing, makes notes of what occurs (time, number and gender of pedestrians, etc.), and takes still photographs. The investigator makes no effort to interact with pedestrians or to obtain identifiers. Faces are blanked out on the photographs, and the photographs are not published.

Commentary

The project is minimal risk and eligible for exemption. The setting is a public place in which there is no expectation of privacy. The only concern is keeping the photographic material confidential and unpublished (since consent was not obtained).

Observation of Behavior at a Bar by a Regular Customer (Participant Observation)

The purpose of the research is to document the social interactions at a neighborhood bar and compare the results with similar studies that have been conducted in other places or at other times. The investigator becomes a regular customer, informing the bartender and customers that he or she is writing a book about the bar, will not publish anything in the book about an individual without his or her permission, and will not use real names. The investigator conducts research over a 6-month period (typing notes on a Palm Pilot during restroom breaks and after leaving the bar each evening), writes a book, and carefully reviews applicable sections and statements with each person prior to publication, obtaining signed releases.

Commentary

This project is of more than minimal risk, given that sensitive material may be discussed by participants with the investigator and that others may recognize participants despite the use of pseudonyms. A key concern is informed consent and when it must be obtained.
ular set of issues. Questions are listed in a specific order and are often followed by leads for exploring the topic in greater detail. Semistructured interviews might be implemented in situations in which the researcher needs to be sure that the same data are collected from all participants. Semistructured interview guides also may be used in focus groups with small numbers of individuals selected on the basis of specific criteria to discuss a particular topic (e.g., a sample of women with known risk factors for breast cancer discussing genetic testing; family members caring for an elderly parent with Alzheimer's disease). A moderator facilitates discussion using a flexible interview guide, and the discussion may continue for 1-2 hours; the conversation is audiotaped and transcribed.

Data collected from participants using unstructured or semistructured interviews may not be summarized in a statistical fashion. The inference from the data collection is often targeted to a relatively small group (a village, a network of friends, a work group). In these in-
stances, information is valuable because it provides richly elaborated data about the question under investigation. In other cases, researchers may use unstructured and semistructured interviews in addition to quantitative methods to develop an in-depth and robust understanding of a problem. For example, a study of advance care planning in a hospice program might include a survey of a random sample of patients and their families and health providers, along with semistructured interviews with some study participants.

See Box 2-4 for three examples of research using unstructured or semistructured interviews; two of the examples are minimal risk, one of which involves changes in the interview protocol as the study proceeds.

Structured Interviews in Sample Surveys

Much quantitative SBES research involves collection of data in sample surveys, which identify a target population and draw a subset with probability methods to assure that each member of the population has a known, nonzero chance of selection. Once the sample is identified, the researcher cannot substitute other cases without threatening the ability of the research to describe the target population. Participation is sought by having interviewers visit or telephone sample persons or by contacting them by mail or the Internet. Surveys are designed to describe the attributes of large populations with measurable levels of uncertainty from sampling. See Box 2-5 for two examples--one a minimal-risk telephone survey and the other a complex, more-than-minimal-risk longitudinal survey with linkages to administrative records.

Secondary Analyses of Existing Survey and Records Data

Secondary analyses perform no new data collection.
Instead, survey data collected by another researcher are reanalyzed on a topic not previously researched with the data, or records collected for some other purpose are statistically analyzed to study the attributes of a population covered by the record system. Such records may include program agency records, medical records, academic records, and criminal and civil justice records.

Examples of secondary analyses with survey data include studies of labor force behavior with the public-use microdata files from the Census Bureau's Current Population Survey or the University of Michigan's Panel Study of Income Dynamics. These files are preprocessed by the
issuing agency to ensure the confidentiality of individuals in the sample. Examples of secondary analyses with records include studies of income dynamics using Social Security Administration earnings data and studies of occupational mobility using personnel records of a firm. Records data are most often anonymized prior to the analysis. The subjects of the research sometimes cannot be reached by the holder of the record system and do not know of the analysis; other times, they can be reached. See Box 2-6 for two examples--an analysis of public-use files from a large government survey and a study of school transcript records, both minimal risk.

CONCLUSION

We have discussed a broad array of issues related to the determination of harm, risk, benefit, and minimal risk in SBES research, along with evidence that many IRBs--despite punishing workloads--do not appear to be using the flexibility in the Common Rule regulations to exempt eligible research or to use an expedited procedure to review minimal-risk SBES and biomedical research. Our discussion of types of SBES research illustrates the diversity of the field and the challenge of developing examples to include in guidance. We argue in subsequent chapters for detailed guidance for researchers and IRBs that will improve understanding and encourage the use of the flexibility that is currently possible under the Common Rule to protect participants in ways that are commensurate with risk. Such guidance will also help researchers design studies that appropriately balance risks and benefits and that incorporate good practices for human participant protection.
BOX 2-4 Unstructured or Semistructured Interview Examples

Epidemiological and Ethnographic Study of Injection Drug Users

The goal of this study is to examine the diffusion of benefits associated with injection drug users' participation in needle exchange programs. This multisite project involves more than 500 injection drug users in three cities. Participants are interviewed several times over a 4-year period and agree to have researchers observe them while they are engaged in drug-related activities in order to determine risky behaviors for HIV transmission. Participants also agree to show researchers their drug paraphernalia, including needles used to administer drugs. Both unstructured and semistructured ethnographic interviews are conducted, and a detailed survey is completed. Information on sensitive topics such as drug use history and sexual behavior is obtained. The survey does not include personally identifiable information. Transcripts of interviews do not include names of participants. All data are kept in locked files. Given the vulnerability of the population because of their involvement in illicit activities, two of the sites allow verbal informed consent; however, the third study site requires written informed consent from all study participants.

Commentary

The primary risk to participants in this study is the potential breach of confidentiality that could result in stigmatization, physical or emotional harm, or possibly incarceration. This study requires full board approval from an IRB because of the vulnerability of the population and the potential risks involved. However, this example calls attention to the inconsistent application of federal requirements for written informed consent in behavioral studies.
The IRBs at the study sites requiring only verbal consent have considerable experience reviewing research proposals addressing social and health behavior of drug users; the IRB requiring written consent has little experience with research on drug users. Given the sensitive nature of the study and the importance of protecting the names of participants, written informed consent should not be necessary to conduct this study. However, investigators should be required to document carefully the procedures used to obtain informed consent and the methods for recording the consent discussion.

Case Study of Informed Consent Practices in International Genetic Research

A semistructured interview is administered to 20 health professionals in a Nigerian town to explore challenges associated with obtaining informed consent in community-based genetic research being conducted in their area. The study participants are invited to participate in this case study because of their involvement in international scientific investigations. Interviews last approximately 1 1/2 hours. Verbal consent is obtained. Identifying information is removed from the interview transcripts. Transcripts are kept in a locked file.

Commentary

This study is minimal risk. Verbal consent is appropriate. The IRB could implement expedited review. The primary risk is the potential for breach of confidentiality regarding sensitive information that may be communicated during the interview. The IRB should require investigators to indicate procedures for obtaining consent, strategies for protecting confidentiality, and, if audiotapes are used during the interview, when they will be destroyed or erased or how they will be protected if they are stored permanently in a research archive.
BOX 2-4 (continued)

Ethnographic Study of Communication about Death and Dying Among Hospice Staff

The goal of this study is to explore patterns of communication about death and the dying process among health professionals working in a hospice. The investigator has discussed the study goals and procedures with the hospice administration. After the study has been approved, the staff is informed about the study objectives. Patients and family members are advised of the study. The ethnographer conducts field observations over a 6-month period and conducts unstructured and semistructured interviews with hospice staff. Verbal consent is obtained from all individuals interviewed. If interviews are recorded, consent is recorded at the beginning of the interview. The semistructured interview guide used initially is changed on the basis of responses of the staff to specific questions; some questions are deleted, and new questions are added. Field notes are recorded using code names for individuals observed. Interview data and audiotapes are locked.

Commentary

This type of study involves minimal risk. The primary risk to participants is the potential for breach of confidentiality, particularly concerning sensitive information regarding communication about death and dying. An IRB would be justified in requiring full board approval because of the sensitive nature of the topic and because patients and families are implicated in the research, even though they will not be interviewed directly. Verbal consent is appropriate when semistructured interviews are conducted. The IRB should ask investigators to outline procedures for obtaining informed consent and strategies for protecting confidentiality, including the disposition of audiotapes if they are used for interviews. The change in the semistructured interview guide should not require full board approval by the IRB.
The modified semistructured interview guide should be submitted to the IRB when the study is scheduled for annual review. However, the IRB should be notified if there are any substantive changes in the research design involving major alterations of the methods or the study population (e.g., if the investigator decides to include interviews with patients and family members who are hospice clients after the study has begun).
BOX 2-5 Structured Interview (Sample Survey) Examples

Consumer Telephone Survey

To estimate consumer confidence in the economy, a 7-minute telephone interview is conducted with a sample of 1,000 adults in households whose phone numbers are randomly generated by computer software. One adult is selected to report on household plans for the purchase of major appliances, savings plans, and opinions about economic prospects for the household and the nation as a whole over the next 6 months. Sample households are called repeatedly until contact is made; interviewers inform respondents that the survey is completely voluntary and address concerns that reluctant respondents may have about participating; no incentive is offered; respondents who initially refuse are called again and asked to reconsider participating. No names or addresses are collected, and only basic background characteristics are obtained (number of household members, type of household, household income in broad categories). Data are deposited with an archive for secondary use.

Commentary

This type of survey is minimal risk. It could be exempted from review or reviewed by an expedited procedure. Consent is tacit, as is usual in telephone surveys with content that is not stressful and in which respondents are informed that they may terminate the interview at any time.

Longitudinal In-Person Health and Retirement Survey

To study the retirement behavior and health of older adults, a sample of 12,000 adults aged 51-62 in the base year is drawn and interviewed at 2-year intervals (spouses are also interviewed); a new sample is drawn periodically.
The interviews are conducted in person; advance letters inform respondents about the survey; the interviews are 1 hour in length each; topics include detailed work history, income, benefits, health status and history, retirement plans and expectations, and other characteristics; incentives are used to promote participation. Data are linked with administrative records, including Social Security earnings records and descriptions of employer pension and health benefit plans. Some of the data are provided for public use; access to the full microdata requires special arrangements.

Commentary

This survey is large, complex, and clearly of more than minimal risk because of the sensitivity of the questionnaire content and the risk of breach of confidentiality. Key issues are protecting the confidentiality of the data for secondary use, developing an effective consent process that does not unnecessarily discourage response, and determining an appropriate incentive level to encourage participation.
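The defining property of a probability sample, as described in the text above, is that every member of the target population has a known, nonzero chance of selection. A minimal sketch of the simplest such design follows; the function name and the population size are illustrative assumptions, and real surveys like the ones in Box 2-5 typically use more elaborate stratified or multistage designs.

```python
import random

def simple_random_sample(population, n, seed=1):
    """Draw a simple random sample of size n without replacement.

    Under this design every member of the population has the same
    known, nonzero inclusion probability, n / N, which is what lets
    survey estimates generalize to the target population with
    measurable sampling uncertainty.
    """
    population = list(population)
    rng = random.Random(seed)  # fixed seed documents exactly which sample was drawn
    return rng.sample(population, n), n / len(population)

# e.g., 1,000 numbers drawn from a frame of 250,000 randomly generated phone numbers
sample, inclusion_prob = simple_random_sample(range(250000), 1000)
```

The known inclusion probability is also why, as the text notes, sampled cases cannot simply be replaced by volunteers: substitution would make the selection probabilities unknown.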
BOX 2-6 Secondary Analysis Examples

Analysis of Changes in Poverty Levels with the Survey of Income and Program Participation (SIPP)

The survey is collected by the Census Bureau, which processes the data on public-use files to minimize the risk of re-identification of respondents.

Commentary

Even though SIPP obtains highly detailed information on sensitive topics (e.g., detailed sources of income), this is a minimal-risk study that can be exempted from review. The Census Bureau is known to be a leader in confidentiality protection; also, the survey is voluntary, and the Bureau collects the data in an ethical manner. There is no more protection that an IRB can provide for the respondents than the Bureau has already provided in preparing the public-use file.

Analysis of School Transcript Records

The purpose of the study is to correlate SAT scores with college grades for recent graduates. The researcher obtains the data, stripped of identifiers, from university registrars; conducts the analysis; and returns the data to the universities.

Commentary

This type of analysis is minimal risk given that the researcher has no way of linking student records to individual students. The principal concern is whether the students gave consent for their records to be used for research; another concern is whether students might be identifiable by inference.
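The "stripped of identifiers" step that both secondary-analysis examples rely on can be sketched as follows. The field names and the age-banding rule are illustrative assumptions only; real disclosure-limitation pipelines, such as those agencies apply to public-use files, add further techniques like top-coding, data swapping, and noise addition.

```python
def deidentify(records, direct_identifiers=("name", "address", "student_id")):
    """Remove direct identifiers and coarsen one quasi-identifier.

    Dropping names alone is not enough: combinations of exact values
    (age, location, income) can re-identify people by inference, so
    quasi-identifiers are released only in broad categories.
    """
    released = []
    for record in records:
        # drop the fields that identify a person directly
        clean = {k: v for k, v in record.items() if k not in direct_identifiers}
        # replace exact age with a 10-year band
        if "age" in clean:
            decade = clean.pop("age") // 10 * 10
            clean["age_band"] = f"{decade}-{decade + 9}"
        released.append(clean)
    return released

records = [{"name": "J. Doe", "age": 57, "income": 41000}]
public = deidentify(records)  # name removed, exact age replaced by a "50-59" band
```

This mirrors the concern raised in the school-transcript commentary: even de-identified records can leave students "identifiable by inference," which is why coarsening quasi-identifiers matters as much as deleting names.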