
4
Paying Respondents for Survey Participation

Eleanor Singer and Richard A. Kulka

THE PROBLEM: SURVEYING LOW-INCOME POPULATIONS

To evaluate the effects of recent changes in welfare policy on the lives of people living at or below the poverty level, it is often necessary to survey a representative sample. As the chapter in this volume by Groves and Couper makes clear, achieving such a representative sample can be problematic both because members of low-income groups are hard to locate—they are more mobile, more likely to live in multifamily households, and less likely than the more affluent to have telephones—and because they may not be highly motivated to participate in surveys. Incentives—especially monetary incentives—are particularly useful in countering the second difficulty, as a supplement or complement to other efforts at persuasion. In this paper, we briefly consider why people participate in surveys (or fail to do so) and then review the use of incentives in counteracting certain kinds of nonresponse. We also review separately those findings that appear to be particularly relevant for low-income populations. Finally, we consider two special issues: the potential consequences of refusal conversion payments for respondents and interviewers, and the cost effectiveness of prepaid incentives.

Why Do People Participate in Surveys?

Porst and von Briel (1995) point out that although a great deal is known about survey respondents—their demographic characteristics, as well as their answers to thousands of different survey questions—little is known about why they choose to participate.


Based on a content analysis of open-ended responses, their study of 140 participants in five waves of a German Methods Panel identifies three pure types of participants: (1) those who respond for altruistic reasons (e.g., the survey is useful for some purpose important to the respondent, or the respondent is fulfilling a social obligation—31 percent of respondents); (2) those who respond for survey-related reasons (e.g., they are interested in the survey topic, or find the interviewer appealing—38 percent); and (3) those who cite what the authors call personal reasons (e.g., they promised to do it—30 percent). In reality, of course, most people participate for a variety of reasons.

More recently, Groves et al. (2000) outlined a theory describing the decision to participate in a survey as resulting from a series of factors—some survey specific, such as topic and sponsorship, others person specific, such as concerns about privacy, still others specific to the respondent’s social and physical environment—each of which may move a particular person toward or away from cooperation with a specific survey request. Furthermore, these factors assume different weights for different persons, and they become salient for a specific individual—the potential respondent—when an interviewer calls to introduce the survey and request participation.

From this perspective, monetary as well as nonmonetary incentives are an inducement offered by the survey designer to compensate for the relative absence of factors that might otherwise stimulate cooperation—for example, interest in the survey topic or a sense of civic obligation. Although other theoretical frameworks such as social exchange theory (cf. Dillman, 1978), the norm of reciprocity (Gouldner, 1960), and economic exchange (e.g., Biner and Kidd, 1994) also can be used to explain the effectiveness of incentives, the present perspective is able to account for the differential effects of incentives under different conditions (e.g., for respondents with differing interest in the survey topic or with different degrees of community activism) in a way that other theories cannot easily do.

Incentives and Hard-to-Reach Populations

As indicated above, members of a group may be hard to interview both because they are difficult to locate or to find at home and because they have little motivation to participate in a survey. There is no empirical evidence that incentives are helpful in overcoming the first problem in a random digit dial (RDD) survey, nor any theoretical justification for believing that they would or should be. Thus, if the primary problem is one of finding people at home for such a survey, incentives may not be very useful. However, an experiment by Kerachsky and Mallar (1981) with a sample of economically disadvantaged youth suggests that prepayment may be helpful in locating members of a list sample, especially in later waves of a longitudinal survey. One reason, apparently, is that prepayment (and perhaps promised incentives from a trusted source) may be useful in persuading friends or relatives to forward the survey organization’s advance letter or to provide interviewers with a current telephone number for the designated respondent.



The remainder of this chapter is devoted to reviewing the evidence pertaining to the second reason for survey nonresponse—namely, the situation in which the respondent has little intrinsic motivation to respond to the survey request. This situation is likely to characterize many low-income respondents, especially those who no longer receive welfare payments because of changes in federal and state legislation. Hence, the findings reported in this chapter about the effectiveness of prepaid monetary incentives are especially likely to apply to this population.

WHAT DO WE KNOW ABOUT THE EFFECTS OF INCENTIVES?

In this section we review what is known about the intended effects of incentives on response rates in mail as well as interviewer-mediated surveys, drawing on two existing meta-analyses (Church, 1993; Singer et al., 1999a) as well as subsequent work by the same and other authors. We specifically consider the usefulness of lotteries as an incentive and the use of incentives in panel studies. We also review what is known about unintended consequences of incentives such as effects on item nonresponse and response bias.

Effects on Response Rates

In an effort to counter increasing tendencies toward noncooperation, survey organizations are offering incentives to respondents with increasing frequency, some at the outset of the survey, as has been done traditionally in mail surveys, and some only after the person has refused, in an attempt to convert the refusal.

The use of incentives has a long history in mail surveys (for reviews, see Armstrong, 1975; Church, 1993; Cox, 1976; Fox et al., 1988; Heberlein and Baumgartner, 1978; Kanuk and Berenson, 1975; Levine and Gordon, 1958; Linsky, 1975; Yu and Cooper, 1983). In such surveys, incentives are one of two factors, the other being number of contacts, that have been found to increase response rates consistently.

A meta-analysis of the experimental literature on the effects of incentives in mail surveys by Church (1993) classifies incentives along two dimensions: whether the incentive is a monetary or nonmonetary reward, and whether it is offered with the initial mailing or made contingent on the return of the questionnaire. Analyzing 38 mail surveys, Church concluded that:

  • Prepaid incentives yield higher response rates than promised incentives;

  • The offer of contingent (promised) money and gifts does not significantly increase response rates;

  • Prepaid monetary incentives yield higher response rates than gifts offered with the initial mailing; and

  • Response rates increase with increasing amounts of money.

Studies using prepaid monetary incentives yielded an average increase in response rates of 19.1 percentage points, representing a 65-percent average increase in response. Gifts, on the other hand, yielded an average increase of only 7.9 percentage points. The average value of the monetary incentive in the mail surveys analyzed by Church was $1.38 (in 1989 dollars); the average value of the gift could not be computed, given the great diversity of gifts offered and the absence of information on their cost. Similar results are reported by Hopkins and Gullikson (1992).
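Taken together, Church's two summary figures imply an average baseline: if a 19.1-percentage-point absolute gain corresponds to a 65-percent relative increase in response, the average no-incentive response rate across these studies must have been near 29 percent. A back-of-envelope check in Python follows; it is heuristic only, because these are averages over heterogeneous studies, and a mean of ratios need not equal the ratio of means.

```python
# Implied average baseline in the Church meta-analysis: a 19.1-point
# absolute gain that equals a 65% relative gain requires 0.65 * p0 = 19.1,
# where p0 is the average response rate without an incentive.
p0 = 19.1 / 0.65   # ~29.4% average response rate, no incentive
p1 = p0 + 19.1     # ~48.5% with a prepaid monetary incentive
print(round(p0, 1), round(p1, 1))  # 29.4 48.5
```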

Incentives are also used increasingly in telephone and face-to-face surveys, and the question arises as to whether their effects differ from those found consistently in mail surveys. A meta-analysis of 39 experiments by Singer et al. (1999a) indicates that they do not, although the percentage point gains per dollar expended are much smaller, on average (and the levels of incentives paid significantly higher), than those reported by Church. Their main findings are as follows:

  • Incentives improve response rates in telephone and face-to-face surveys, and their effect does not differ by mode of interviewing. Each dollar of incentive paid results in about a third of a percentage point difference between the incentive and the zero-incentive condition (a rough linearization of this per-dollar estimate appears after this list). As in the analyses by Church (1993) and Yu and Cooper (1983), the effects of incentives are linear: within the range of incentives used, the greater the incentive, the greater the difference in response rates between the lowest and the higher incentive conditions.

  • Prepaid incentives result in higher response rates than promised incentives, but the difference is not statistically significant. However, prepaid monetary incentives resulted in significantly higher response rates in the four studies in which it was possible to compare prepaid and promised incentives within the same study.

  • Money is more effective than a gift, even controlling for the value of the incentive.

  • Increasing the burden of the interview increases the difference in response rates between an incentive and a zero-incentive condition. However, incentives have a significant effect even in low-burden studies.

  • Incentives have significantly greater effects in surveys where the response rate without an incentive is low. That is, they are especially useful in compensating for the absence of other motives to participate. They are also most effective in the absence of other persuasion efforts. A number of studies have found that the difference in response rates between the group that received the incentive and the group that did not receive an incentive diminished after repeated follow-up attempts.
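The per-dollar estimate in the first bullet invites a simple linearization. The sketch below treats the Singer et al. (1999a) slope, roughly one-third of a percentage point per prepaid dollar, as a constant; this is an illustrative simplification that holds, at best, only within the range of incentive amounts the meta-analysis covered.

```python
# Rough linearization of the Singer et al. (1999a) meta-analytic finding.
# The slope is approximate and applies only within the observed range of
# incentives; it is not a forecasting model.
POINTS_PER_DOLLAR = 1 / 3

def predicted_gain(incentive_dollars: float) -> float:
    """Predicted response-rate difference (percentage points) vs. a $0 condition."""
    return POINTS_PER_DOLLAR * incentive_dollars

for amount in (1, 5, 10, 20):
    print(f"${amount}: about {predicted_gain(amount):.1f} points above the $0 condition")
```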

Lotteries as Incentives

Some researchers, convinced of the value of incentives but reluctant to use prepaid incentives for all respondents, have advocated the use of lotteries as an incentive for stimulating response. This might be thought desirable, for example, in surveys of women on welfare in those states where incentives are counted against the value of the benefits they receive. The studies reported in the literature—all mail surveys or self-administered questionnaires distributed in person—have yielded inconsistent findings (e.g., positive effects by Balakrishnan et al., 1992; Hubbard and Little, 1988; Kim et al., 1995; and McCool, 1991; no effects in four studies reviewed by Hubbard and Little, 1988, or in the experiment by Warriner et al., 1996). A reasonable hypothesis would seem to be that lotteries function as cash incentives with an expected value per respondent (e.g., a $500 prize divided by 10,000 respondents would amount to an incentive of 5 cents per respondent), and that their effect on response rates would be predicted by this value. Thus, the effect of lotteries would generally be small, both because the expected value per respondent is small, and because they are essentially promised, rather than prepaid, incentives.
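The expected-value hypothesis is easy to make concrete. A minimal sketch, using the chapter's own example of a $500 prize spread over 10,000 eligible respondents:

```python
# Expected cash value per respondent of a lottery incentive, under the
# hypothesis that lotteries act like promised cash incentives worth
# their expected value.
def lottery_expected_value(prize_dollars: float, eligible_respondents: int) -> float:
    """Expected payout per respondent, assuming equal chances of winning."""
    return prize_dollars / eligible_respondents

# The chapter's example: a $500 prize over 10,000 respondents is worth
# about a nickel per person, far below the $1 to $5 prepayments that
# reliably raise response rates.
print(lottery_expected_value(500, 10_000))  # 0.05
```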

Incentives in Panel Studies

Many studies of welfare leavers are panel studies—that is, they reinterview the same household, or the same respondent, more than once over a period of time. Assuring participation is especially important for panel studies because participation at baseline usually sets a ceiling for the retention rate over the life of the panel.1 For this reason, investigators often advocate using sizable incentives at the first wave of a panel study. An incentive experiment was carried out at Wave 1 of the 1996 Survey of Income and Program Participation (SIPP), a longitudinal survey carried out by the U.S. Census Bureau to provide national estimates of sources, amounts, and determinants of income for households, families, and persons. SIPP primary sample units were divided into three groups to receive $0, $10, and $20. James (1997) found that the $20 incentive significantly lowered nonresponse rates in Waves 1 to 3 compared with both the $10 and the $0 conditions, but the $10 incentive showed no effect relative to the zero-incentive group. Mack et al. (1998) reported on the results through Wave 6 using cumulative response rates, including an analysis of the effects of incentives on households differing by race, poverty status, and education in Wave 1. They found that an incentive of $20 reduced household, person, and item (gross wages) nonresponse rates in the initial interview and that household nonresponse rates remained significantly lower, with a cumulative 27.6 percent nonresponse rate in the $0 incentive group, 26.7 percent in the $10 group, and 24.8 percent in the $20 group at Wave 6, even though no further incentive payments were made. (The SIPP does not attempt to reinterview households that do not respond in Wave 1 or that have two consecutive noninterviews.) Differences between the $10 incentive and the no-incentive group were not statistically significant. A subsequent experiment with paying incentives in Waves 8 and 9 of the 1996 SIPP to all Wave 7 and 8 nonrespondents (Martin et al., 2001) found that both a $20 and a $40 prepayment significantly increased the response rate above that in the $0 group; there was no significant difference between the two incentive groups. (Differential responsiveness to incentives by respondents differing in economic status is discussed in the later section on effects in surveys of low-income populations.)

1. Some investigators (see, e.g., Presser, 1989) recommend attempting to interview in later waves the nonrespondents to an earlier wave, but often this is not done. Even when it is, cooperation on a subsequent wave is generally predicted by prior cooperation.



Research on the Health and Retirement Survey (HRS) suggests that respondents who are paid a refusal conversion incentive during one wave do not refuse at a higher rate than other converted refusers when reinterviewed during the next wave (Lengacher et al., 1995). Unlike in the SIPP, all respondents to the HRS receive an incentive at each wave, but these routine payments are much smaller than the refusal conversion payments.

In sum, although the evidence currently available is still quite limited, it suggests that the use of incentives in panel studies to increase initial response rates, convert refusals, and reduce subsequent attrition can be quite effective. Moreover, although it is often assumed that once incentives are paid one must continue to offer them in all subsequent waves of data collection, these studies suggest that the effects of incentives on nonresponse and attrition in panel surveys can be sustained even when incentives are not paid in subsequent waves of the study.

Effects on Respondents or Effects on Interviewers?

Are the consistent effects of incentives in telephone and face-to-face interviews attributable to their effect on respondents, or are they, perhaps, mediated by their effect on interviewers? Clearly this question does not arise with respect to mail surveys, where incentives also have been consistently effective, but it seems important to try to answer it with respect to interviewer-mediated surveys. It is possible, for example, that interviewers expect respondents who have received an incentive to be more cooperative, and that they behave in such a way as to fulfill their expectations.2 Or they may feel more confident about approaching a household that has received an incentive in the mail, and therefore be more effective in their interaction with the potential respondent.

2. For evidence concerning interviewer expectation effects, see Hyman (1954); Sudman et al. (1977); Singer and Kohnke-Aguirre (1979); Singer et al. (1983); and Hox (1999). Lynn (1999) reports an experiment in which interviewers believed respondents who had received an incentive responded at a lower rate, whereas their response rate was in fact significantly higher than that of those who received no incentive. However, these interviewer beliefs were measured after, rather than before, the survey.



To separate the effects of incentives on interviewers from their effects on respondents, Singer et al. (2000) randomly divided the sample numbers in an RDD survey that could be linked to addresses into three groups. One third was sent an advance letter and $5; interviewers were kept blind to this condition. Another third also received the letter plus $5, and the final third received the letter only; interviewers were made aware of these last two conditions by information presented on their Computer-Assisted Telephone Interview (CATI) screens.

The results of this experiment are shown in Table 4–1. Large differences were observed between the letter-only and the letter-plus-incentive conditions, but there is no evidence that this is due to the effect of incentives on interviewers. Only one of the differences between the conditions in which interviewers were aware of the incentive and those in which they were not reaches statistical significance, and there the results are in the direction opposite to that hypothesized. Thus, prepayment of a $5 incentive substantially increases cooperation with an RDD survey, and the incentive appears to exert its effect directly on the respondent rather than being mediated through interviewer expectations. This conclusion is in accordance with research by Stanley Presser and Johnny Blair at the University of Maryland, who also found substantial increases in response rates as a result of small prepayments to which interviewers were blind (personal communication, n.d.).

UNINTENDED CONSEQUENCES OF INCENTIVES

Effects on Item Nonresponse

One question often raised about the use of incentives in surveys is whether they bring about an increase in the response rate at the expense of response quality. This does not appear to be the case. On the contrary, what evidence there is suggests that the quality of responses given by respondents who receive a prepaid or a refusal conversion incentive does not differ from responses given by those who do not receive an incentive. They may, in fact, give better quality answers, in the sense that they have less item-missing data and provide longer open-ended responses (Baumgartner et al., 1998; Singer et al., 2000; Shettle and Mooney, 1999; but cf. Wiese, 1998). Experiments reported by Singer et al. (2000) indicate that promised and prepaid incentives reduce the tendency of older people and nonwhites to have more item-missing data, resulting in a net reduction in item nonresponse.

Findings reported by Mason and Traugott (1999) suggest that persistent efforts to persuade reluctant respondents to participate may produce more respondents at the price of more missing data. But these authors did not use incentives, and motivational theory suggests that people who are rewarded for their participation would continue to give good information, whereas those who feel harassed into participation may well retaliate by not putting much effort into their answers. However, there is no evidence about the effect of incentives on validity or reliability, and this is an important research question.


TABLE 4–1 Response and Cooperation Rates by Advance Letters and Letters Plus Prepaid Incentive, Controlling for Interviewer Expectations

                                        Response Rate(a,b)            Cooperation Rate(b,c)
                                        Interviewed  Not Interviewed  Interviewed  Not Interviewed
                                        %            % (n)            %            % (n)

May 1998
  Letter only                           62.9         37.1 (62)        68.4         31.6 (57)
  Letter+$5, interviewers blind         75.4         24.6 (69)        86.7         13.3 (60)
  Letter+$5, interviewers not blind     78.7         21.3 (61)        82.8         17.2 (58)
  Ltr only vs. ltr+$5                   χ²=4.13, df=1, p<.05          χ²=6.27, df=1, p<.05
  Blind vs. not blind                   n.s.                          n.s.

June 1998
  Letter only                           58.2         41.8 (55)        62.8         37.2 (51)
  Letter+$5, interviewers blind         73.8         26.2 (61)        86.5         13.5 (52)
  Letter+$5, interviewers not blind     74.6         25.4 (59)        83.0         17.0 (53)
  Ltr only vs. ltr+$5                   χ²=4.52, df=1, p<.05          χ²=9.56, df=1, p<.01
  Blind vs. not blind                   n.s.                          n.s.

July 1998
  Letter only                           61.8         38.2 (55)        72.3         27.7 (47)
  Letter+$5, interviewers blind         81.3         18.6 (59)        87.3         12.7 (55)
  Letter+$5, interviewers not blind     69.6         30.4 (56)        72.2         27.8 (54)
  Ltr only vs. ltr+$5                   χ²=3.47, df=1, p=.06          n.s.
  Blind vs. not blind                   n.s.                          χ²=5.83, df=1, p<.10

August 1998
  Letter only                           63.8         36.2 (58)        69.8         30.2 (53)
  Letter+$5, interviewers blind         75.0         25.0 (68)        81.0         19.0 (63)
  Letter+$5, interviewers not blind     76.7         23.3 (60)        85.2         14.8 (54)
  Ltr only vs. ltr+$5                   χ²=2.85, df=1, p=.09          χ²=3.75, df=1, p=.05
  Blind vs. not blind                   n.s.                          n.s.

SOURCE: Singer et al. (2000).

(a) Includes noncontacts in the denominator.
(b) After refusal conversion.
(c) Excludes noncontacts from the denominator.
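The chi-square statistics in Table 4–1 can be reconstructed from the percentages and group sizes shown, which is a useful check on how to read the table. A sketch for the May 1998 response-rate comparison, assuming the parenthesized (n) is the total number of cases in each condition and pooling the two $5 conditions:

```python
from scipy.stats import chi2_contingency

# May 1998 response rates, back-calculated from Table 4-1:
# letter only: 62.9% of 62 cases interviewed -> 39 interviewed, 23 not;
# letter+$5:   75.4% of 69 -> 52 interviewed, plus 78.7% of 61 -> 48,
#              i.e., 100 interviewed and 30 not interviewed out of 130.
counts = [[39, 23],    # letter only
          [100, 30]]   # letter + $5, both interviewer conditions pooled

chi2, p, df, _ = chi2_contingency(counts, correction=False)  # Pearson, no Yates correction
print(f"chi2 = {chi2:.2f}, df = {df}, p = {p:.3f}")  # chi2 = 4.13, df = 1, p = .042
```

The result matches the table's reported χ²=4.13, df=1, p<.05 for that comparison.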



Effects on Response Distributions

Even more troubling, potentially, than an effect on item missing data is the effect of incentives on the distribution of responses. Does offering or paying incentives to people who might otherwise refuse affect their answers to the survey questions?

It is useful to think about the reasons why effects on response distributions might occur. One is that the use of incentives brings into the sample people whose characteristics differ from those who otherwise would be included, and their answers differ because of those differing characteristics. If that is the case, the apparent effect on response distributions is really due to a change in the composition of the sample, and should disappear once the appropriate characteristics are controlled. An example of this first process is presented by Berlin et al. (1992), who demonstrate that the apparent effect of a monetary incentive on literacy scores can be accounted for by the disproportionate recruitment of respondents with higher educational levels into the zero-incentive group. There was no significant relationship between incentive level and the proportion of items attempted, indicating that the incentive influenced the decision to participate, but not performance on the test. Another example is presented by Merkle et al. (1998) in their report of an experimental effort to increase the response rate to exit polls by having interviewers in a random sample of precincts carry clipboards and folders clearly identifying them as associated with the major media and hand out pens with the same logo. Although the response rate was increased by these methods (not necessarily by the incentive alone), the responses were actually distorted because a greater number of Democratic voters were brought into the sample—apparently as a result of the clearer identification of the poll with the media. Effects of incentives on sample composition are discussed further in the following section.

A second reason incentives might influence responses is if they influence people’s opinions directly, or at any rate the expression of those opinions. A striking example of such influence (not, however, involving an incentive) is reported by Bischoping and Schuman (1992) in their analysis of discrepancies among Nicaraguan preelection polls in the 1990 election and the failure of many to predict the outcome of the election accurately. Bischoping and Schuman speculate that suspicions that preelection polls had partisan aims may have prevented many Nicaraguans from candidly expressing their voting intentions to interviewers.


They tested this hypothesis by having interviewers alternate the use of three different pens to record responses: one carried the slogan of the Sandinista party; another, that of the opposition party; the third pen was neutral. The expected distortions of responses were observed in the two conditions that clearly identified the interviewers as partisan. Even in the third, neutral, condition, distortion occurred; the authors conclude that polls apparently were not perceived as neutral by many respondents. In the Nicaraguan setting, after a decade of Sandinista rule, a poll lacking partisan identification was evidently regarded as likely to have an FSLN (Sandinista) connection (p. 346). The result was to bias the reporting of vote intentions, and therefore the results of the preelection polls, which predicted an overwhelming Sandinista victory when in fact the opposition candidate won by a large majority.

Still a third way in which incentives might affect responses is suggested by theory and experimental findings about the effects of mood (Schwarz and Clore, 1996). If incentives put respondents in a more optimistic mood, then some of their responses may be influenced as a result. Using 17 key variables included in the Survey of Consumer Attitudes, Singer et al. (2000) looked at whether the response distributions varied significantly by (1) the initial incentive or (2) refusal conversion payments, controlling for demographic characteristics.3

The offer of an initial incentive was associated with significantly different response distributions (at the .05 level) on 4 of the 17 variables; a refusal conversion payment also was associated with significantly different response distributions on 4 of them. One variable was affected significantly by both types of incentives.4 In five of these cases, the responses given with an incentive were more optimistic than those given without an incentive; in two cases, they were more pessimistic. In the remaining case, respondents who received an incentive were somewhat more likely to respond good and bad, and somewhat less likely to give an equivocal reply. Thus, there is a suggestion that respondents to the Survey of Consumer Attitudes who receive an incentive may give somewhat more optimistic responses than those who do not. Similar findings have been reported by Brehm (1994) and James and Bolstein (1990). However, such effects were not observed by Shettle and Mooney (1999) in their experimental investigation of incentives in a survey of college graduates, which found only 8 significant differences (at the .05 level) in response distributions to 148 questions—a number that does not differ from that expected on the basis of chance.

3. They used the multinomial logit specification in CATMOD, which allows researchers to model data that can be represented by a contingency table. CATMOD fits linear models to functions of response frequencies and can perform linear modeling, log-linear modeling, logistic regression, and repeated measurement analysis. A more complete description can be found in SAS Institute Inc. (1989), SAS/STAT User's Guide, Version 6, Fourth Edition, Volume 1, Cary, NC: SAS Institute Inc.

4. These counts are based on the bivariate distributions, without controls for demographic characteristics. The effects do not disappear with such controls; indeed, three additional variables show such effects with demographic controls.
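CATMOD is specific to SAS. For readers working elsewhere, a roughly analogous check, fitting a multinomial logit to ask whether an incentive indicator shifts a categorical response distribution net of a demographic control, might look like the sketch below; the data and variable names are simulated and purely illustrative, not from the Survey of Consumer Attitudes.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Simulated stand-in for an SCA-style analysis: does receiving an
# incentive shift a three-category response, controlling for age?
rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "incentive": rng.integers(0, 2, n),   # 1 = received a prepayment
    "age": rng.integers(18, 80, n),       # demographic control
})
# Simulate a response mildly shifted toward category 2 ("more optimistic")
# when an incentive was received.
logits = np.column_stack([
    np.zeros(n),
    0.2 + 0.1 * df["incentive"],
    0.1 + 0.3 * df["incentive"],
])
probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
df["response"] = [rng.choice(3, p=p) for p in probs]

model = sm.MNLogit(df["response"], sm.add_constant(df[["incentive", "age"]]))
result = model.fit(disp=False)
print(result.summary())  # inspect the incentive coefficients for each contrast
```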



EFFECTS IN SURVEYS OF LOW-INCOME POPULATIONS

The question of particular interest to this audience is how effective monetary and other incentives are in recruiting and retaining members of low-income populations. In a 1995 paper presented to a Council of Professional Associations on Federal Statistics (COPAFS) workshop, Kulka reported some evidence suggesting that monetary incentives might be especially effective in recruiting into the sample low-income and minority respondents, groups that ordinarily would be underrepresented in a probability sample. Reviewing a number of experimental studies that provided evidence on the issue of sample composition, including the studies discussed by Kulka, Singer et al. (1999a) found that in three studies there was an indication that paying an incentive might be useful in obtaining higher numbers of respondents in demographic categories that otherwise tend to be underrepresented in sample surveys (e.g., low-income or nonwhite respondents).5 Five other studies reported no significant effects of incentives on sample composition, and in one study the results were mixed.

Since then, additional evidence has accumulated suggesting that monetary incentives can be effective in recruiting and retaining minority respondents. Mack et al. (1998) found that the use of a $20 incentive in the first wave of a SIPP panel was much more effective in recruiting and retaining black households and households in poverty than it was in recruiting and retaining nonblack and nonpoverty households.6 Martin et al. (2001) found that $20 was more effective in converting black and “other race” nonrespondents than in converting white nonrespondents. These results agree with findings reported by Juster and Suzman (1995). They report that a special Nonresponse Study, in which a sample of people who had refused even after normal refusal conversion efforts on the Health and Retirement Survey were offered $100 per individual or $200 per couple to participate,7 brought into the sample a group of people distinctly different from other participants: they were more likely to be married, in better health, and, in particular, they had about 25 percent more net worth and a 16 percent higher income than other refusal conversion households or those who never refused. Finally, analyses by Singer et al. (2000) indicate that a $5 incentive paid in advance to a random half of RDD households for which an address could be located brought a disproportionate number of low-education respondents into the sample; there were no significant differences on other demographic characteristics.

5. To our knowledge, however, no high-quality studies are yet available that explore potential differences in the effectiveness of incentives by ethnicity or language per se.

6. However, Sundukchi (1999) reports that an incentive paid in Wave 7 to all low-income households that had received an incentive in Wave 1 reduced the nonresponse rate among nonblack low-income households, but not among black low-income households.

7. In that study, all nonrespondents were sent the incentive offer by FedEx mail; hence, it was not possible to separate the effect of the monetary incentive from that of the special mailing. In a subsequent small-scale experiment, money had a significant effect on converting refusals, whereas a FedEx mailing did not (Daniel Hill, personal communication, n.d.).



In other words, these studies suggest that, while monetary incentives are effective with all respondents, less money is required to recruit and retain low-income (and minority) respondents than higher income respondents, for whom the tradeoff between the time required for the survey and the incentive offered may be less attractive when the incentive is small. It should be noted that few, if any, of these studies (Mack et al., 1998, is a notable exception) have explicitly manipulated both the size of the incentive and the income level of the population; the findings reported here are based on ex post facto analyses for different subgroups, or on analyses of the composition of the sample following the use of incentives.

A number of other studies also have reported on the effects of incentives on sample composition. In some of these, it appears that incentives can be used to compensate for lack of salience of, or interest in, the survey among some groups in the sample. For example, the National Survey of College Graduates, reported on by Shettle and Mooney (1999), is believed to be much more salient to scientists and engineers than to other college graduates, and in the 1980s the latter had a much lower response rate. Although this was also true in the 1992 pretest for the 1993 survey, the bias was smaller in the incentive than in the nonincentive group (7.1 percentage points of underrepresentation, compared with 9.8 percentage points), though not significantly so.8 Similar findings are reported by Baumgartner and Rathbun (1997), who found a significant impact of incentives on response rates in the group for which the survey topic had little salience, but virtually no impact in the high-salience group, and by Martinez-Ebers (1997), whose findings suggest that a $5 incentive, enclosed with a mail questionnaire, was successful in motivating less satisfied parents to continue their participation in a school-sponsored panel survey. Berlin et al. (1992) found that people with higher scores on an assessment of adult literacy, as well as people with higher educational levels, were overrepresented in their zero-incentive group. Groves et al. (2000) reported a similar result; in their study, the impact of incentives on response rates was significantly greater for people low on a measure of community involvement than for those high on community involvement, who tend to participate at a higher rate even without monetary incentives. In these studies, incentives function by raising the response rate of those with little interest or low civic involvement; they do not reduce the level of participation of the highly interested or more altruistic groups.

8. Shettle and Mooney (1999) conclude that the incentive does not reduce nonresponse bias in their study. It is true that after extensive follow-ups there is no difference at all between the incentive and the no-incentive groups. Nevertheless, the trends prior to telephone follow-up are in the expected direction.



In these studies, certain kinds of dependent variables would be seriously mismeasured if incentives had not been used. In the case of Groves et al. (2000), for example, the conclusions one would reach about the distribution of community involvement would be in error if drawn from a survey that did not use incentives. Nevertheless, questions remain about how representative those brought into the sample by incentives are of their group as a whole, and this is true for low-income and minority respondents as well. In other words, low-income respondents brought into the sample by the lure of an incentive may well differ from those who participate for other reasons. But even if prepaid incentives simply add more respondents to the total number interviewed, without reducing the nonresponse bias of the survey, they still may prove to be cost effective if they reduce the effort required to achieve a desired sample size. The theory of survey participation outlined at the beginning of this paper (Groves et al., 2000) suggests that the representativeness of the sample will be increased by using a variety of motivational techniques, rather than relying on a single one.

ISSUES IN THE USE OF DIFFERENTIAL INCENTIVES

Some of the research reported in the previous section suggests that it may make economic sense to offer lower incentives to people with lower incomes and higher incentives to those who are economically better off. Another instance of differential incentives is the use of refusal conversion payments, in which respondents who have expressed reluctance, or who have actually refused, are offered payment for their participation whereas cooperative respondents are not. In both of these situations, the question arises of how respondents who received lower, or no, rewards would feel if they learned of this practice, and how this might affect their future participation in this or another survey.

Effects of Disclosure of Differential Incentives on Perceptions of Fairness

From an economic perspective, the fact that some people refuse to be interviewed may be an indication that the survey is more burdensome for them and that therefore the payment of incentives to such respondents (but not others) is justified. Nevertheless, some researchers are concerned that using incentives in this way will be perceived as inequitable by cooperative respondents, and that if they learn of the practice, this will adversely affect their willingness to cooperate in future surveys (Kulka, 1995).

These unintended consequences were the focus of two studies (Singer et al., 1999b; Groves et al., 1999). The first was conducted as part of the Detroit Area Study (DAS), using face-to-face interviews, and the second was done in the laboratory with community volunteers, using self-administered responses to videotaped vignettes.



In the first study, respondents were asked a series of questions concerning their beliefs about survey organization practices with respect to incentives. Three-quarters believed that such organizations offer monetary incentives to respondents to encourage participation (8.9 percent said they did not know). Those who received a prepaid $5 incentive (a random two-thirds of the survey sample) were significantly more likely than those who received no such payment to say that at least some survey organizations use incentives. Indeed, beliefs about this practice appeared to increase with the total amount ($0, $5, $25, or $30) of the incentive the respondent received or was offered, with 94 percent of those who received $30 expressing the belief that at least some survey organizations use incentives.9

All respondents also were asked the following question: “Some people do not want to be interviewed. However, to get accurate results, everyone chosen for the survey needs to be interviewed. Otherwise, the data may mislead people in the government who use the conclusions to plan important programs that affect everyone. Do you think it’s fair or unfair for people who refuse to be interviewed to receive money if other people don’t?” Despite the extensive justification for differential payment included here, 74 percent said they considered the practice unfair.

Near the end of the survey, in a more stringent test of whether the payment of differential incentives was perceived as fair or unfair, a random half of the respondents were informed that because of the importance of including everyone in the sample, some of those who had expressed reluctance to participate had been offered $25, while others had received nothing; they were asked whether they considered this practice fair or unfair. Again, almost three-quarters (72.4 percent) said they considered the practice unfair.

Effects of Disclosure of Differential Incentives on Willingness to Participate

Singer et al. (1999b) hypothesized that those to whom the payment of differential incentives was disclosed would be less willing to participate in a future survey.

9. The finding that respondent beliefs about survey organization practices are affected by their own experience parallels findings reported elsewhere (Singer et al., 1998c). In that study, 31 percent of respondents to the Survey of Consumer Attitudes who had not been offered any incentive 6 months earlier said, in 1997, that respondents should get paid for participating in that type of survey; 51 percent of those offered $5 said, 6 months later, that they thought respondents should get paid; and 77 percent of respondents who received $20 or $25 as a refusal conversion payment said respondents should get paid.


In the laboratory study described in the previous section, subjects were significantly more likely to say they would not be willing to participate in a survey where some respondents received a payment for participating but others did not. However, the difference was reduced to insignificance when an explanation for the payment was offered by the interviewer.

In the field study, there were no differences in expressed willingness to participate between those to whom differential payments had been disclosed and those to whom they had not. About a quarter of each group said they definitely would be willing to participate in another survey by the same organization. Even those to whom differential incentive payments were disclosed and who perceived these payments as unfair did not differ significantly in their expressed willingness to participate in a subsequent survey by the same organization, although the trend in responses was as predicted: 25.8 percent versus 32.8 percent expressed such willingness.10 The investigators speculated that rapport with the interviewer might have mitigated the deleterious effects of disclosing differential incentives that previously had been observed in the laboratory experiment (Groves et al., 1999).

A little more than a year later, all the original DAS respondents for whom an address could be located were sent a mail questionnaire on the topic of assisted suicide, ostensibly from a different survey organization. There were no significant differences in participation between those to whom differential payments had been disclosed a year earlier and those to whom they had not.

Thus, the data indicate that most respondents believe survey organizations are currently using incentives to encourage survey participation; that these beliefs are affected by personal experience; that only half of those who are aware of the use of incentives believe that payments are distributed equally to all respondents; and that a large majority of respondents perceive the practice of paying differential incentives as unfair. However, disclosure of differential payments had no significant effect on expressed willingness to participate in a future survey, nor were respondents to whom differential incentives had been disclosed significantly less likely to respond to a new survey request, from an ostensibly different organization a year later, although again the differences were in the hypothesized direction.

10. However, as we would expect, the perception of fairness is directly and significantly related to whether or not respondents had themselves received a refusal conversion payment. Among those who did not receive such a payment, 74.5 percent (of 200) considered the practice unfair; among those who did, only 55 percent (of 20) did so. This difference is significant at the .06 level.


ARE PREPAID INCENTIVES COST EFFECTIVE?

For a variety of reasons, including those discussed in the previous section, prepaid incentives to everyone in the sample may be preferable to refusal conversion or other differential payments.

One reason is that interviewers like them. Knowing the household is in receipt of an advance payment, modest though it may be, interviewers feel entitled to ask the respondent to reciprocate with an interview. Furthermore, prepaid incentives are equitable. They reward equally everyone who happens to fall into the sample, and they reward them for the right behavior—that is, for cooperation, rather than refusal. Both of these advantages are likely to make modest prepaid incentives an attractive alternative to refusal conversion payments in many types of surveys. There is also indirect evidence that the use of refusal conversion payments to persuade reluctant respondents leads to increasing reliance on such payments within an organization, in all likelihood because of their effects on interviewer expectations.

Still, the question arises whether such incentives are cost effective. On its face, paying refusal conversion incentives to the small number of reluctant respondents would seem cheaper than paying everyone up front, even if each prepaid incentive is smaller than a conversion payment.

Several studies have concluded that prepaid incentives are cost effective in mail surveys. For such surveys, the comparison ordinarily has been among incentives varying in amount or in kind, or in comparison with no incentive at all, rather than with refusal conversion payments. Two recent investigations of cost effectiveness, by James and Bolstein (1992) and by Warriner et al. (1996), have included information on the relative effectiveness of various incentives. James and Bolstein (1992) found that a prepaid incentive of $1 was the most cost effective, yielding nearly as high a return as larger amounts for about one-quarter of the cost. Warriner et al. (1996:9) conclude that for their study, a $5 prepaid incentive was the optimal amount, resulting in a saving of 40 cents per case (because the same response rate could be achieved as in a no-incentive, two-follow-up condition). The $2 incentive resulted in costs per case only a dollar less than the $5 incentive, while yielding a response rate 10 percentage points lower. Similar findings have been reported by Asch et al. (1998) in a mail survey of physicians.

For interviewer-mediated studies, as noted earlier, the comparison is much more likely to be with refusal conversion payments. The answer is likely to depend on the nature of the study and the importance of a high response rate, on how interesting the study is to respondents (i.e., how many of them are willing to participate even without a prepaid incentive), on whether prepaid incentives reduce the effort required, and on a variety of other factors.

Several face-to-face surveys have reported that promised monetary incentives are cost effective. Berlin et al. (1992), for example, reported that use of a $20 promised incentive in a field-test experiment with the National Adult Literacy Survey, which entails completion of a test booklet by the respondent, resulted in cost savings on a per interview basis when all field costs were taken into account. Similarly, Chromy and Horvitz (1978) reported, in a study of the use of monetary incentives among young adults in the National Assessment of Educational Progress, that when the cost of screening for eligible respondents is high, the use of incentives to increase response rates actually may reduce the cost per unit of data collected.



Singer, Van Hoewyk, and Couper11 investigated this problem in the Survey of Consumer Attitudes (SCA). They found that a $5 incentive included with an advance letter significantly reduced the number of calls required to close out a case (8.75 calls when an incentive was sent, compared with 10.22 when it was not; p=.05), and significantly reduced the number of interim refusals (.282 refusals when an incentive was sent, compared with .459 when it was not). As expected, there was no significant difference between the incentive and the no-incentive condition in calls to first contact. The outcome of the first call indicates that compared with the letter only, the addition of a $5 incentive results in more interviews, more appointments, and fewer contacts in which resistance is encountered.

Given the size of the incentive and the average cost per call aside from the incentive, sending a prepaid incentive to respondents for whom an address could be obtained was cost effective for the SCA. However, as we have tried to indicate, this conclusion depends on the size of the incentive as well as the structure of other costs associated with a study for a given organization, and should not be assumed to be invariant across organizations and incentives.
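Those call counts allow a simple break-even calculation. A sketch follows; the per-call cost is an assumed figure for illustration (the study's actual cost structure is not reported here), and the calculation ignores the value of the additional interviews themselves.

```python
# Break-even sketch for the SCA $5 prepayment, using the call counts
# reported above. The cost per call attempt is an assumed figure.
INCENTIVE = 5.00
CALLS_WITH, CALLS_WITHOUT = 8.75, 10.22   # mean calls to close out a case

calls_saved = CALLS_WITHOUT - CALLS_WITH              # 1.47 calls per case
break_even = INCENTIVE / calls_saved                  # ~$3.40 per call
print(f"incentive pays for itself if a call costs more than ${break_even:.2f}")

cost_per_call = 4.00                                  # assumed for illustration
net_saving = calls_saved * cost_per_call - INCENTIVE  # ~$0.88 per case
print(f"net saving per case at $4.00/call: ${net_saving:.2f}")
```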

An argument that can be raised against the use of prepaid incentives is that they may undermine more altruistic motives for participating in surveys. Indeed, we have found that prepaid incentives have smaller effects on survey participation for people who score high on a measure of community activism (Groves et al., 2000) than on people who score low on this characteristic. But this is because groups high in community activism already respond at a high rate. There is no evidence (because we did not test this hypothesis) that people high on community activism who are offered a prepaid incentive respond at a lower rate than they would have had they not been offered the incentive, nor do we know whether such an effect would appear on a later survey. Although anecdotal evidence shows that some people are offended by the offer of an incentive, going so far as to return the incentive to the survey organization, by all accounts such negative reactions are few.

11. This discussion is based on unpublished analyses by Van Hoewyk, Singer, and Couper of data from the Survey of Consumer Attitudes during 8 months in 1998.


Prepaid incentives have been common in mail surveys for many years, although the amounts used are ordinarily quite modest (see Church, 1993). We suspect that the use of such incentives will increase in interviewer-mediated surveys as well. Such incentives are likely to be especially appropriate when other reasons that might move potential respondents to participate are weak or lacking, and when the names and addresses (or telephone numbers) of such potential respondents are known.

RECOMMENDATIONS AND CONCLUSIONS

The workshop for which this chapter was prepared is focused specifically on collecting better data from low-income and welfare populations, and one of the clear challenges associated with surveying such populations is how to achieve high enough levels of participation to minimize bias due to nonresponse. Increasingly, respondent incentives have been proposed as a valuable tool in achieving this goal. Thus, the basic question addressed in this chapter is whether the payment of respondent incentives is indeed an effective means of reducing nonresponse, both for surveys in general and, especially, in surveys conducted with low-income and welfare populations.

As noted in the paper, a substantial research literature consistently has demonstrated the value of incentive payments to survey respondents for increasing cooperation and improving speed and quality of response in a broad range of data collection efforts, most notably in mail surveys. Because mail surveys are of limited utility in studies of welfare reform or low-income populations, experiments involving the use of incentives in face-to-face or telephone interviews are of greatest relevance to answering this basic question. These experiments are more recent in vintage, sparser in coverage, and not entirely consistent in their findings.12

Thus, although it is tempting to generalize from the findings presented here, it is important to note that many of the results are based on only a few studies and may not apply to other populations or situations, including especially those of particular interest here (i.e., surveys of low-income and welfare populations on questions related to welfare reform). If at all possible, therefore, we urge pretesting of the particular incentive plan proposed with the population targeted by one's survey and the instrumentation and other survey methods to be employed, rather than relying exclusively on this research literature.

12. Such inconsistencies are not largely due to differences in sample sizes, that is, an inability to detect significant differences between incentive and nonincentive groups (or other relevant comparisons) because the sample sizes in these studies were too small. Sample sizes were provided for each of the studies cited in their original reports. Although we have not repeated them here, they were, with very few exceptions, adequate to detect reasonable expected differences between experimental groups.



Nevertheless, with these cautions, a few basic conclusions, guidelines, and recommendations can be gleaned from the evidence accumulated to date:

  1. Consistent with an extensive literature on the use of incentives with mail surveys, prepaid monetary incentives seem to be useful in recruiting low-income and minority respondents into interviewer-mediated surveys, even when the burden imposed on participants is relatively low. The use of incentives probably should be part of the design and strategy for all such surveys. However, they should not be used as substitutes for other best-practice persuasion strategies designed to increase participation, such as explanatory advance letters, endorsements by people or organizations important to the population being surveyed, assurances of confidentiality, and so on.

  2. How much money to offer respondents in these circumstances is not at all clear from the evidence currently available. Less money appears to be needed to recruit lower income respondents into a survey than those with higher incomes, but the optimal amount likely will depend on factors such as the length of the interview and the salience of the topic, and may also change over time. To determine the appropriate incentive amount for a given study, we reiterate our prior admonition that there is no real substitute for a careful pretest of various incentive amounts within the specific population and design context proposed for a given survey.

  3. Although it is tempting to speculate on this issue, and we often have been asked to venture an educated guess about an appropriate range for incentives in studies of welfare and low-income populations, we believe that doing so would not be prudent. As we have noted, the experimental literature bearing directly on this question is relatively sparse, idiosyncratic, and inconsistent, and the dynamics of providing incentives to these populations quite likely are fluid and in large part specific to location, local economic conditions, and even cultural factors.

As a general guideline, the Office of Management and Budget (OMB) has most recently approved respondent incentives in the $20–$30 range on the basis of empirical experimental tests conducted with target populations similar to those of interest here, but amounts both higher and lower than this range also have been approved and successfully implemented.
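To illustrate what such a pretest analysis might look like, the sketch below compares completion rates across incentive arms and applies a chi-square test of independence. The $0, $10, and $20 arms and all counts are invented for the example.

    from scipy.stats import chi2_contingency

    # Invented pretest results: (completed, not completed) per incentive arm.
    arms = {"$0": (110, 90), "$10": (128, 72), "$20": (138, 62)}

    for label, (done, not_done) in arms.items():
        print(f"{label}: {done / (done + not_done):.1%} completion")

    chi2, p_value, dof, _ = chi2_contingency([list(c) for c in arms.values()])
    print(f"chi-square = {chi2:.2f}, df = {dof}, p = {p_value:.4f}")

With these invented counts the completion rates are 55, 64, and 69 percent, and the overall test is significant at the .05 level; in practice one would weigh the response-rate gain against the cost per completed case in choosing among amounts.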

  4. Prepaid respondent incentives are especially important in panel surveys (a design favored by many studies of low-income populations and of welfare reform because of the questions of particular interest in such studies), given the critical need to recruit a high proportion of the eligible population into the initial round of measurement. When it is possible to send payment in advance to at least a portion of the sample, the amount of cash interviewers must carry with them is also reduced. Although concerns about the risk involved have not been validated systematically, either empirically or by anecdotal evidence from survey practitioners (see Kulka, 1995), prepayment thus at least partially offsets the potential for putting either respondents or interviewers at increased risk of crime, while also conferring the well-established benefits of paying in advance.

  5. For a number of practical reasons, including restrictions on the use of state and federal monies to compensate survey participants (especially those receiving state aid), lotteries have considerable appeal as an incentive strategy; a rough cost comparison is sketched below. However, lotteries rather consistently appear to be less effective than individual prepaid incentives in stimulating survey response.
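A back-of-the-envelope comparison shows why lotteries appeal on cost grounds. All figures in this sketch are invented for illustration, including the assumption, consistent with the experimental record, that the lottery yields a lower response rate.

    def cost_per_complete(total_incentive_cost, n_sampled, response_rate):
        """Incentive dollars spent per completed interview."""
        return total_incentive_cost / (n_sampled * response_rate)

    n = 2_000  # hypothetical sample size
    print(f"Lottery ($500 prize):  ${cost_per_complete(500.00, n, 0.55):.2f}")
    print(f"Prepaid ($5 per case): ${cost_per_complete(5.00 * n, n, 0.65):.2f}")

The arithmetic favors the lottery (about $0.45 versus $7.69 per complete here) only because a single prize is spread over the entire sample; what is saved in incentive dollars may be given back in response rates and, ultimately, in nonresponse bias.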

  6. It is possible that the use of prepaid incentives will change responses to at least some questions by affecting a respondent's mood (e.g., by making the respondent more optimistic about the survey's content). Although evidence of this phenomenon is mixed, it is worth evaluating the possibility empirically whenever an experiment is feasible; a minimal sketch of such a check follows.
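Where such an experiment is feasible, the check itself is straightforward. The minimal sketch below uses invented scores on a hypothetical five-point attitude item to compare mean responses between incentive and no-incentive groups.

    from scipy.stats import ttest_ind

    # Invented 1-5 agreement scores on an attitude item, by condition.
    incentive    = [4, 5, 3, 4, 4, 5, 3, 4, 5, 4]
    no_incentive = [3, 4, 3, 3, 4, 2, 3, 4, 3, 3]

    t_stat, p_value = ttest_ind(incentive, no_incentive, equal_var=False)
    print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
    # A significant difference would suggest the incentive shifted answers,
    # not merely participation.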

  7. Although the use of incentives strictly or primarily for refusal conversion is fairly widespread in current survey practice, incentives should be used sparingly as a refusal conversion technique. Respondents regard this practice as unfair or inequitable, although there is no evidence that such differential payments reduce future willingness to participate in surveys, even when payments are terminated in subsequent waves of a panel survey in which an incentive was previously provided. There are suggestions, however, that the routine use of refusal conversion payments may condition interviewers to expect (and depend on) them, and that this may have a negative effect on overall interviewer performance.

  8. Finally, several issues broadly related to the protection of human subjects are sometimes raised in connection with respondent incentives. First, and specific to welfare populations, is the question of whether incentives count against the value of benefits received. Although the legislative and regulatory bases for such restrictions vary by state, and there is at least anecdotal evidence that some states have been reluctant to authorize the use of incentives in their surveys for this reason, such restrictions do not yet appear to be widespread, and researchers and officials in some states have indicated that they can be waived by the state in any case.

Second, OMB has long maintained a policy that strongly discourages the use of incentives in federal statistical surveys. Although this policy is currently under review, recent drafts of OMB's Implementing Guidance for the Paperwork Reduction Act of 1995 provide more specific guidance to federal agencies on the use of incentives, when incentives might be justified, and the types of documentation or evidence required to support a request for them. Specifically, these guidelines make clear that: (1) incentives are not intended to pay respondents for their time; (2) noncash incentives or monetary incentives of modest size ($20–$30) are preferred; and (3) agencies must demonstrate empirically that such payments will significantly increase response rates (and the resulting reliability and validity of the study). At the same time, the potential need for and efficacy of incentives in certain circumstances is clearly acknowledged.

Third, some welfare reform researchers have noted a recent and potentially growing problem with Institutional Review Boards (IRBs), some of which have argued that incentives (especially large ones) may be coercive, particularly for low-income respondents, and thereby pose a credible threat to truly informed consent. That is, having been offered (or paid) an incentive to participate in a study, potential respondents may feel they cannot really refuse, even if they are reluctant to participate for other reasons. Although assessing this potential threat to human subjects is clearly within the purview of IRB review, most incentive payments used to date have been fairly modest in size and are often characterized as tokens of appreciation rather than as compensation for time spent. Most IRBs have determined that such token incentives are not so large as to constitute coercion, provided they are not cited as part of informed consent or as one of the benefits of participating in the study.

REFERENCES

Armstrong, J.S. 1975 Monetary incentives in mail surveys. Public Opinion Quarterly 39:111–116.

Asch, D.A., N.A. Christakis, and P.A. Ubel 1998 Conducting physician mail surveys on a limited budget: A randomized trial comparing $2 vs. $5 incentives. Medical Care 36(1):95–99.

Balakrishnan, P.V., S.K. Chawla, M.F. Smith, and B.P. Micholski 1992 Mail survey response rates using a lottery prize giveaway incentive. Journal of Direct Marketing 6:54–59.

Baumgartner, Robert, and Pamela Rathbun 1997 Prepaid Monetary Incentives and Mail Survey Response Rates. Unpublished paper presented at the Annual Conference of the American Association for Public Opinion Research, Norfolk, VA, May 15–18.

Baumgartner, Robert, Pamela Rathbun, Kevin Boyle, Michael Welsh, and Drew Laughlan 1998 The Effect of Prepaid Monetary Incentives on Mail Survey Response Rates and Response Quality. Unpublished paper presented at the Annual Conference of the American Association for Public Opinion Research, St. Louis, May 14–17.

Berlin, Martha, Leyla Mohadjer, Joseph Waksberg, Andrew Kolstad, Irwin Kirsch, D. Rock, and Kentaro Yamamoto 1992 An experiment in monetary incentives. Pp. 393–398 in Proceedings of the Section on Survey Research Methods. Alexandria, VA: American Statistical Association.

Biner, Paul M., and Heath J. Kidd 1994 The interactive effects of monetary incentive justification and questionnaire length on mail survey response rates. Psychology and Marketing 11:483–492.

Bischoping, Katherine, and Howard Schuman 1992 Pens and polls in Nicaragua: An analysis of the 1990 preelection surveys. American Journal of Political Science 36:331–350.

Brehm, John 1994 Stubbing our toes for a foot in the door? Prior contact, incentives and survey response. International Journal of Public Opinion Research 6(1):45–63.

Chromy, James R., and Daniel G. Horvitz 1978 The use of monetary incentives in National Assessment household surveys. Journal of the American Statistical Association 73(363):473–478.

Church, Allan H. 1993 Estimating the effect of incentives on mail survey response rates: A meta-analysis. Public Opinion Quarterly 57:62–79.

Cox, Eli P. 1976 A cost/benefit view of prepaid monetary incentives in mail questionnaires. Public Opinion Quarterly 40:101–104.

Dillman, Don A. 1978 Mail and Telephone Surveys: The Total Design Method. New York: John Wiley and Sons.

Fox, Richard J., Melvin Crask, and Jonghoon Kim 1988 Mail survey response rate: A meta-analysis of selected techniques for inducing response. Public Opinion Quarterly 52:467–491.

Groves, Robert M., Eleanor Singer, Amy D. Corning, and Ashley Bowers 1999 A laboratory approach to measuring the effects on survey participation of interview length, incentives, differential incentives, and refusal conversion. Journal of Official Statistics 15:251–268.

Groves, Robert M., Eleanor Singer, and Amy D. Corning 2000 Leverage-salience theory of survey participation: Description and an illustration. Public Opinion Quarterly 64:299–308.

Heberlein, Thomas A., and Robert Baumgartner 1978 Factors affecting response rates to mailed questionnaires: A quantitative analysis of the published literature. American Sociological Review 43:447–462.

Hopkins, K.D., and A.R. Gullickson 1992 Response rates in survey research: A meta-analysis of monetary gratuities. Journal of Experimental Education 61:52–56.

Hox, Joop 1999 The Influence of Interviewer's Attitude and Behavior on Household Survey Nonresponse: An International Comparison. Unpublished paper presented at the International Conference on Survey Nonresponse, Portland, OR, October 28–31.

Hubbard, Raymond, and Eldon L. Little 1988 Promised contributions to charity and mail survey responses: Replication with extension. Public Opinion Quarterly 52:223–230.

Hyman, Herbert H. 1954 Interviewing in Social Research. Chicago: University of Chicago Press.

James, Jeannine M., and Richard Bolstein 1990 The effect of monetary incentives and follow-up mailings on the response rate and response quality in mail surveys. Public Opinion Quarterly 54:346–361.

1992 Large monetary incentives and their effect on mail survey response rates. Public Opinion Quarterly 56:442–453.

James, Tracy 1997 Results of the Wave 1 incentive experiment in the 1996 Survey of Income and Program Participation. Pp. 834–839 in Proceedings of the Section on Survey Research Methods. Alexandria, VA: American Statistical Association.

Juster, F. Thomas, and Richard Suzman 1995 An overview of the Health and Retirement Study. The Journal of Human Resources 30(5):S7–S56.

Kanuk, L., and C. Berenson 1975 Mail surveys and response rates: A literature review. Journal of Marketing Research 12:440–453.

Kerachsky, Stuart J., and Charles D. Mallar 1981 The effects of monetary payments on survey responses: Experimental evidence from a longitudinal study of economically disadvantaged youths. Pp. 258–263 in Proceedings of the Section on Survey Research Methods. Alexandria, VA: American Statistical Association.

Kim, K., C. Lee, and Y. Whang 1995 The effect of respondent involvement in sweepstakes on response rates in mail surveys. Pp. 216–220 in Proceedings of the Section on Survey Research Methods. Alexandria, VA: American Statistical Association.

Kulka, Richard A. 1995 The use of incentives to survey hard-to-reach respondents: A brief review of empirical research and current practice. Pp. 256–299 in Seminar on New Directions in Statistical Methodology (Statistical Policy Working Paper 23), Part 2 of 3. Washington, DC: Federal Committee on Statistical Methodology, Statistical Policy Office, Office of Information and Regulatory Affairs, Office of Management and Budget.

Lengacher, Jennie E., Colleen M. Sullivan, Mick P. Couper, and Robert M. Groves 1995 Once Reluctant, Always Reluctant? Effects of Differential Incentives on Later Survey Participation in a Longitudinal Study. Unpublished paper presented at the Annual Conference of the American Association for Public Opinion Research, Fort Lauderdale, FL, May 18–21.

Levine, S., and G. Gordon 1958 Maximizing returns on mail questionnaires. Public Opinion Quarterly 22:568–575.

Linsky, Arnold S. 1975 Stimulating responses to mailed questionnaires: A review. Public Opinion Quarterly 39:82–101.

Lynn, Peter 1999 Is the Impact of Respondent Incentives on Personal Interview Surveys Transmitted via the Interviewers? Unpublished manuscript. Institute for Social and Economic Research, University of Essex, Colchester.

Mack, Stephen, Vicki Huggins, Donald Keathley, and Mahdi Sundukchi 1998 Do monetary incentives improve response rates in the Survey of Income and Program Participation? Pp. 529–534 in Proceedings of the Section on Survey Research Methods. Alexandria, VA: American Statistical Association.

Martin, Elizabeth, Denise Abreu, and Franklin Winters 2001 Money and motive: Effects of incentives on panel attrition in the Survey of Income and Program Participation. Journal of Official Statistics 17:267–284.

Martinez-Ebers, Valerie 1997 Using monetary incentives with hard-to-reach populations in panel surveys. International Journal of Public Opinion Research 9:77–86.

Mason, Robert, Virginia Lesser, and Michael W. Traugott 1999 Impact of Missing Values from Converted Refusals on Nonsampling Error. Unpublished paper presented at the International Conference on Survey Nonresponse, Portland, OR, October 28–31.

McCool, Steven F. 1991 Using probabilistic incentives to increase response rates to mail return highway intercept diaries. Journal of Travel Research 30:17–19.

Merkle, Daniel, Murray Edelman, Kathy Dykeman, and Chris Brogan 1998 An Experimental Study of Ways to Increase Exit Poll Response Rates and Reduce Survey Error. Unpublished paper presented at the Annual Conference of the American Association for Public Opinion Research, St. Louis, May 14–17.

Porst, Rolf, and Christa von Briel 1995 Wären Sie vielleicht bereit, sich gegebenenfalls noch einmal befragen zu lassen? Oder: Gründe für die Teilnahme an Panelbefragungen [Would you perhaps be willing to be surveyed again, if need be? Or: Reasons for participating in panel surveys]. ZUMA-Arbeitsbericht Nr. 95/04. Mannheim, Germany: ZUMA.

Presser, Stanley 1989 Collection and design issues: Discussion. Pp. 75–79 in Panel Surveys, D. Kasprzyk, G. Duncan, G. Kalton, and M. Singh, eds. New York: Wiley.

Schwarz, N., and G.L. Clore 1996 Feelings and phenomenal experiences. Pp. 433–465 in Social Psychology: Handbook of Basic Principles, E.T. Higgins and A. Kruglanski, eds. New York: Guilford.

Shettle, Carolyn, and Geraldine Mooney 1999 Monetary incentives in government surveys. Journal of Official Statistics 15:231–250.

Singer, Eleanor, Martin R. Frankel, and Marc B. Glassman 1983 The effect of interviewers' characteristics and expectations on response. Public Opinion Quarterly 47:68–83.

Singer, Eleanor, Nancy Gebler, Trivellore Raghunathan, John Van Hoewyk, and Katherine McGonagle 1999a The effect of incentives in interviewer-mediated surveys. Journal of Official Statistics 15:217–230.

Singer, Eleanor, Robert M. Groves, and Amy D. Corning 1999b Differential incentives: Beliefs about practices, perceptions of equity, and effects on survey participation. Public Opinion Quarterly 63:251–260.

Singer, Eleanor, and Luane Kohnke-Aguirre 1979 Interviewer expectation effects: A replication and extension. Public Opinion Quarterly 43:245–260.

Singer, Eleanor, John Van Hoewyk, and Mary P. Maher 1998 Does the payment of incentives create expectation effects? Public Opinion Quarterly 62:152–164.

2000 Experiments with incentives in telephone surveys. Public Opinion Quarterly 64:171–188.

Sudman, Seymour, Norman M. Bradburn, Ed Blair, and Carol Stocking 1977 Modest expectations: The effects of interviewers' prior expectations on responses. Sociological Methods and Research 6:171–182.

Sundukchi, M. 1999 SIPP 1996: Some results from the Wave 7 incentive experiment. Unpublished document, January 28, U.S. Census Bureau.

Warriner, Keith, John Goyder, Heidi Gjertsen, Paula Hohner, and Kathleen McSpurren 1996 Charities, No; Lotteries, No; Cash, Yes: Main Effects and Interactions in a Canadian Incentives Experiment. Unpublished paper presented at the Survey Non-Response Session of the Fourth International Social Science Methodology Conference, University of Essex, Institute for the Social Sciences, Colchester, UK, October 6–8.

Wiese, Cheryl J. 1998 Refusal conversions: What is gained? National Network of State Polls Newsletter 32:1–3.

Yu, Julie, and Harris Cooper 1983 A quantitative review of research design effects on response rates to questionnaires. Journal of Marketing Research 20:36–44.
