Reference Guide on Survey Research

Shari Seidman Diamond

Shari Seidman Diamond, J.D., Ph.D., is the Howard J. Trienens Professor of Law and Professor of Psychology, Northwestern University, and a Research Professor, American Bar Foundation, Chicago, Illinois.

CONTENTS

I. Introduction
   A. Use of Surveys in Court
   B. Surveys Used to Help Assess Expert Acceptance in the Wake of Daubert
   C. Surveys Used to Help Assess Community Standards: Atkins v. Virginia
   D. A Comparison of Survey Evidence and Individual Testimony
II. Purpose and Design of the Survey
   A. Was the Survey Designed to Address Relevant Questions?
   B. Was Participation in the Design, Administration, and Interpretation of the Survey Appropriately Controlled to Ensure the Objectivity of the Survey?
   C. Are the Experts Who Designed, Conducted, or Analyzed the Survey Appropriately Skilled and Experienced?
   D. Are the Experts Who Will Testify About Surveys Conducted by Others Appropriately Skilled and Experienced?
III. Population Definition and Sampling
   A. Was an Appropriate Universe or Population Identified?
   B. Did the Sampling Frame Approximate the Population?
   C. Does the Sample Approximate the Relevant Characteristics of the Population?
   D. What Is the Evidence That Nonresponse Did Not Bias the Results of the Survey?
   E. What Procedures Were Used to Reduce the Likelihood of a Biased Sample?
   F. What Precautions Were Taken to Ensure That Only Qualified Respondents Were Included in the Survey?
IV. Survey Questions and Structure
   A. Were Questions on the Survey Framed to Be Clear, Precise, and Unbiased?
   B. Were Some Respondents Likely to Have No Opinion? If So, What Steps Were Taken to Reduce Guessing?
   C. Did the Survey Use Open-Ended or Closed-Ended Questions? How Was the Choice in Each Instance Justified?
   D. If Probes Were Used to Clarify Ambiguous or Incomplete Answers, What Steps Were Taken to Ensure That the Probes Were Not Leading and Were Administered in a Consistent Fashion?
   E. What Approach Was Used to Avoid or Measure Potential Order or Context Effects?
   F. If the Survey Was Designed to Test a Causal Proposition, Did the Survey Include an Appropriate Control Group or Question?
   G. What Limitations Are Associated with the Mode of Data Collection Used in the Survey?
      1. In-person interviews
      2. Telephone interviews
      3. Mail questionnaires
      4. Internet surveys
V. Surveys Involving Interviewers
   A. Were the Interviewers Appropriately Selected and Trained?
   B. What Did the Interviewers Know About the Survey and Its Sponsorship?
   C. What Procedures Were Used to Ensure and Determine That the Survey Was Administered to Minimize Error and Bias?
VI. Data Entry and Grouping of Responses
   A. What Was Done to Ensure That the Data Were Recorded Accurately?
   B. What Was Done to Ensure That the Grouped Data Were Classified Consistently and Accurately?
VII. Disclosure and Reporting
   A. When Was Information About the Survey Methodology and Results Disclosed?
   B. Does the Survey Report Include Complete and Detailed Information on All Relevant Characteristics?
   C. In Surveys of Individuals, What Measures Were Taken to Protect the Identities of Individual Respondents?
VIII. Acknowledgment
Glossary of Terms
References on Survey Research

I. Introduction

Sample surveys are used to describe or enumerate the beliefs, attitudes, or behavior of persons or other social units.1 Surveys typically are offered in legal proceedings to establish or refute claims about the characteristics of those individuals or social units (e.g., whether consumers are likely to be misled by the claims contained in an allegedly deceptive advertisement;2 which qualities purchasers focus on in making decisions about buying new computer systems).3 In a broader sense, a survey can describe or enumerate the attributes of any units, including animals and objects.4 We focus here primarily on sample surveys, which must deal not only with issues of population definition, sampling, and measurement common to all surveys, but also with the specialized issues that arise in obtaining information from human respondents.

In principle, surveys may count or measure every member of the relevant population (e.g., all plaintiffs eligible to join in a suit, all employees currently working for a corporation, all trees in a forest). In practice, surveys typically count or measure only a portion of the individuals or other units that the survey is intended to describe (e.g., a sample of jury-eligible citizens, a sample of potential job applicants). In either case, the goal is to provide information on the relevant population from which the sample was drawn.

Sample surveys can be carried out using probability or nonprobability sampling techniques. Although probability sampling offers important advantages over nonprobability sampling,5 experts in some fields (e.g., marketing) regularly rely on various forms of nonprobability sampling when conducting surveys. Consistent with Federal Rule of Evidence 703, courts generally have accepted such evidence.6 Thus, in this reference guide, both the probability sample and the nonprobability sample are discussed. The strengths of probability sampling and the weaknesses of various types of nonprobability sampling are described.

Notes:
1. Sample surveys conducted by social scientists "consist of (relatively) systematic, (mostly) standardized approaches to collecting information on individuals, households, organizations, or larger organized entities through questioning systematically identified samples." James D. Wright & Peter V. Marsden, Survey Research and Social Science: History, Current Practice, and Future Prospects, in Handbook of Survey Research 1, 3 (James D. Wright & Peter V. Marsden eds., 2d ed. 2010).
2. See Sanderson Farms v. Tyson Foods, 547 F. Supp. 2d 491 (D. Md. 2008).
3. See SMS Sys. Maint. Servs. v. Digital Equip. Corp., 188 F.3d 11, 30 (1st Cir. 1999). For other examples, see notes 19–32 and accompanying text.
4. In J.H. Miles & Co. v. Brown, 910 F. Supp. 1138 (E.D. Va. 1995), clam processors and fishing vessel owners sued the Secretary of Commerce for failing to use the unexpectedly high results from 1994 survey data on the size of the clam population to determine clam fishing quotas for 1995. The estimate of clam abundance is obtained from surveys of the amount of fishing time the research survey vessels require to collect a specified yield of clams in major fishing areas over a period of several weeks. Id. at 1144–45.
5. See infra Section III.C.
6. Fed. R. Evid. 703 recognizes facts or data "of a type reasonably relied upon by experts in the particular field. . . ."
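The practical difference between probability and nonprobability sampling can be made concrete with a short simulation. The following sketch (in Python, using an invented population of 10,000 consumers and an invented 30% rate of confusion) contrasts a simple random sample, in which every unit has a known chance of selection, with a convenience sample, for which selection probabilities are unknown; all figures are hypothetical and illustrative only.

```python
import random

random.seed(1)  # for reproducibility

# Hypothetical population: 10,000 consumers, roughly 30% of whom would be
# "confused" by an advertisement -- the quantity the survey tries to estimate.
population = [random.random() < 0.30 for _ in range(10_000)]

# Probability sample: every unit has a known, nonzero chance of selection,
# so the sampling error of the estimate can be quantified.
prob_sample = random.sample(population, 400)
print("probability sample estimate:", sum(prob_sample) / len(prob_sample))

# Nonprobability (convenience) sample: whoever is easiest to reach -- here,
# simply the first 400 units in the list. Selection probabilities are
# unknown, so the usual margin-of-error machinery does not apply and
# selection bias cannot be ruled out.
conv_sample = population[:400]
print("convenience sample estimate:", sum(conv_sample) / len(conv_sample))
```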

As a method of data collection, surveys have several crucial potential advantages over less systematic approaches.7 When properly designed, executed, and described, surveys (1) economically present the characteristics of a large group of respondents or other units and (2) permit an assessment of the extent to which the measured respondents or other units are likely to adequately represent a relevant group of individuals or other units.8 All questions asked of respondents and all other measuring devices used (e.g., criteria for selecting eligible respondents) can be examined by the court and the opposing party for objectivity, clarity, and relevance, and all answers or other measures obtained can be analyzed for completeness and consistency. The survey questions should not be the only focus of attention. To make it possible for the court and the opposing party to closely scrutinize the survey so that its relevance, objectivity, and representativeness can be evaluated, the party proposing to offer the survey as evidence should also describe in detail the design, execution, and analysis of the survey. This should include (1) a description of the population from which the sample was selected, demonstrating that it was the relevant population for the question at hand; (2) a description of how the sample was drawn and an explanation for why that sample design was appropriate; (3) a report on response rate and the ability of the sample to represent the target population; and (4) an evaluation of any sources of potential bias in respondents' answers.

The questions listed in this reference guide are intended to assist judges in identifying, narrowing, and addressing issues bearing on the adequacy of surveys either offered as evidence or proposed as a method for developing information.9 These questions can be (1) raised from the bench during a pretrial proceeding to determine the admissibility of the survey evidence; (2) presented to the contending experts before trial for their joint identification of disputed and undisputed issues; (3) presented to counsel with the expectation that the issues will be addressed during the examination of the experts at trial; or (4) raised in bench trials when a motion for a preliminary injunction is made to help the judge evaluate what weight, if any, the survey should be given.10 These questions are intended to improve the utility of cross-examination by counsel, where appropriate, not to replace it.

Notes:
7. This does not mean that surveys can be relied on to address all questions. For example, if survey respondents had been asked in the days before the attacks of 9/11 to predict whether they would volunteer for military service if Washington, D.C., were to be bombed, their answers may not have provided accurate predictions. Although respondents might have willingly answered the question, their assessment of what they would actually do in response to an attack simply may have been inaccurate. Even the option of a "do not know" choice would not have prevented an error in prediction if they believed they could accurately predict what they would do. Thus, although such a survey would have been suitable for assessing the predictions of respondents, it might have provided a very inaccurate estimate of what an actual response to the attack would be.
8. The ability to quantitatively assess the limits of the likely margin of error is unique to probability sample surveys, but an expert testifying about any survey should provide enough information to allow the judge to evaluate how potential error, including coverage, measurement, nonresponse, and sampling error, may have affected the obtained pattern of responses.
9. See infra text accompanying note 31.
10. Lanham Act cases involving trademark infringement or deceptive advertising frequently require expedited hearings that request injunctive relief, so judges may need to be more familiar with survey methodology when considering the weight to accord a survey in these cases than when presiding over cases being submitted to a jury. Even in a case being decided by a jury, however, the court must be prepared to evaluate the methodology of the survey evidence in order to rule on admissibility. See Daubert v. Merrell Dow Pharms., Inc., 509 U.S. 579, 589 (1993).
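As note 8 observes, a quantified margin of error is available only for probability samples. For a simple random sample, the conventional 95% confidence interval for a proportion can be computed as in the following sketch; the sample size and observed proportion are invented figures, and the formula assumes simple random sampling.

```python
import math

def moe_95(p_hat: float, n: int) -> float:
    """Half-width of the conventional 95% confidence interval for a
    proportion, assuming a simple random sample (z = 1.96)."""
    return 1.96 * math.sqrt(p_hat * (1 - p_hat) / n)

# Invented example: 33% of 400 respondents report confusion.
p_hat, n = 0.33, 400
half_width = moe_95(p_hat, n)
print(f"{p_hat:.2f} +/- {half_width:.3f}")  # about 0.33 +/- 0.046
```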

All sample surveys, whether they measure individuals or other units, should address the issues concerning purpose and design (Section II), population definition and sampling (Section III), accuracy of data entry (Section VI), and disclosure and reporting (Section VII). Questionnaire and interview surveys, whether conducted in-person, on the telephone, or online, raise methodological issues involving survey questions and structure (Section IV) and confidentiality (Section VII.C). Interview surveys introduce additional issues (e.g., interviewer training and qualifications) (Section V), and online surveys raise some new issues and questions that are currently under study (Section VI). The sections of this reference guide are labeled to direct the reader to those topics that are relevant to the type of survey being considered. The scope of this reference guide is necessarily limited, and additional issues might arise in particular cases.

A. Use of Surveys in Court

Fifty years ago the question of whether surveys constituted acceptable evidence still was unsettled.11 Early doubts about the admissibility of surveys centered on their use of sampling12 and their status as hearsay evidence.13

Notes:
11. Hans Zeisel, The Uniqueness of Survey Evidence, 45 Cornell L.Q. 322, 345 (1960).
12. In an early use of sampling, Sears, Roebuck & Co. claimed a tax refund based on sales made to individuals living outside city limits. Sears randomly sampled 33 of the 826 working days in the relevant working period, computed the proportion of sales to out-of-city individuals during those days, and projected the sample result to the entire period. The court refused to accept the estimate based on the sample. When a complete audit was made, the result was almost identical to that obtained from the sample. Sears, Roebuck & Co. v. City of Inglewood, tried in Los Angeles Superior Court in 1955, is described in R. Clay Sprowls, The Admissibility of Sample Data into a Court of Law: A Case History, 4 UCLA L. Rev. 222, 226–29 (1956–1957).
13. Judge Wilfred Feinberg's thoughtful analysis in Zippo Manufacturing Co. v. Rogers Imports, Inc., 216 F. Supp. 670, 682–83 (S.D.N.Y. 1963), provides two alternative grounds for admitting opinion surveys: (1) Surveys are not hearsay because they are not offered in evidence to prove the truth of the matter asserted; and (2) even if they are hearsay, they fall under one of the exceptions as a "present sense impression." In Schering Corp. v. Pfizer Inc., 189 F.3d 218 (2d Cir. 1999), the Second Circuit distinguished between perception surveys designed to reflect the present sense impressions of respondents and "memory" surveys designed to collect information about a past occurrence based on the recollections of the survey respondents. The court in Schering suggested that if a survey is offered to prove the existence of a specific idea in the public mind, then the survey does constitute hearsay evidence. As the court observed, Federal Rule of Evidence 803(3), creating "an exception to the hearsay rule for such statements [i.e., state-of-mind expressions] rather than excluding the statements from the definition of hearsay, makes sense only in this light." Id. at 230 n.3. See also Playtex Prods. v. Procter & Gamble Co., 2003 U.S. Dist. LEXIS 8913 (S.D.N.Y. May 28, 2003), aff'd, 126 Fed. Appx. 32 (2d Cir. 2005). Note, however, that when survey respondents are shown a stimulus (e.g., a commercial) and then respond to a series of questions about their impressions of what they viewed, those impressions reflect both respondents' initial perceptions and their memory for what they saw and heard. Concerns about the impact of memory on the trustworthiness of survey responses appropriately depend on the passage of time between exposure and testing and on the likelihood that distorting events occurred during that interval. Two additional exceptions to the hearsay exclusion can be applied to surveys. First, surveys may constitute a hearsay exception if the survey data were collected in the normal course of a regularly conducted business activity, unless "the source of information or the method or circumstances of preparation indicate lack of trustworthiness." Fed. R. Evid. 803(6); see also Ortho Pharm. Corp. v. Cosprophar, Inc., 828 F. Supp. 1114, 1119–20 (S.D.N.Y. 1993) (marketing surveys prepared in the course of business were properly excluded because they lacked foundation from a person who saw the original data or knew what steps were taken in preparing the report), aff'd, 32 F.3d 690 (2d Cir. 1994). In addition, if a survey shows guarantees of trustworthiness equivalent to those in other hearsay exceptions, it can be admitted if the court determines that the statement is offered as evidence of a material fact, it is more probative on the point for which it is offered than any other evidence that the proponent can procure through reasonable efforts, and admissibility serves the interests of justice. Fed. R. Evid. 807; e.g., Schering, 189 F.3d at 232. Admissibility as an exception to the hearsay exclusion thus depends on the trustworthiness of the survey. New Colt Holding v. RJG Holdings of Fla., 312 F. Supp. 2d 195, 223 (D. Conn. 2004).

Federal Rule of Evidence 703 settled both matters for surveys by redirecting attention to the "validity of the techniques employed."14 The inquiry under Rule 703 focuses on whether facts or data are "of a type reasonably relied upon by experts in the particular field in forming opinions or inferences upon the subject."15 For a survey, the question becomes, "Was the poll or survey conducted in accordance with generally accepted survey principles, and were the results used in a statistically correct way?"16 This focus on the adequacy of the methodology used in conducting and analyzing results from a survey is also consistent with the Supreme Court's discussion of admissible scientific evidence in Daubert v. Merrell Dow Pharmaceuticals, Inc.17

Because the survey method provides an economical and systematic way to gather information and draw inferences about a large number of individuals or other units, surveys are used widely in business, government, and, increasingly, administrative settings and judicial proceedings.18

Notes:
14. Fed. R. Evid. 703 Advisory Committee Note.
15. Fed. R. Evid. 703.
16. Manual for Complex Litigation § 2.712 (1982). Survey research also is addressed in the Manual for Complex Litigation, Second § 21.484 (1985) [hereinafter MCL 2d]; the Manual for Complex Litigation, Third § 21.493 (1995) [hereinafter MCL 3d]; and the Manual for Complex Litigation, Fourth § 11.493 (2004) [hereinafter MCL 4th]. Note, however, that experts who collect survey data, along with the professions that rely on those surveys, may differ in some of their methodological standards and principles. An assessment of the precision of sample estimates and an evaluation of the sources and magnitude of likely bias are required to distinguish methods that are acceptable from methods that are not.
17. 509 U.S. 579 (1993); see also General Elec. Co. v. Joiner, 522 U.S. 136, 147 (1997).
18. Some sample surveys are so well accepted that they even may not be recognized as surveys. For example, some U.S. Census Bureau data are based on sample surveys. Similarly, the Standard Table of Mortality, which is accepted as proof of the average life expectancy of an individual of a particular age and gender, is based on survey data.

Both federal and state courts have accepted survey evidence on a variety of issues. In a case involving allegations of discrimination in jury panel composition, the defense team surveyed prospective jurors to obtain their age, race, education, ethnicity, and income distribution.19 Surveys of employees or prospective employees are used to support or refute claims of employment discrimination.20 Surveys provide information on the nature and similarity of claims to support motions for or against class certification.21 In ruling on the admissibility of scientific claims, courts have examined surveys of scientific experts to assess the extent to which the theory or technique has received widespread acceptance.22 Some courts have admitted surveys in obscenity cases to provide evidence about community standards.23 Requests for a change of venue on grounds of jury pool bias often are backed by evidence from a survey of jury-eligible respondents in the area of the original venue.24 The plaintiff in an antitrust suit conducted a survey to assess what characteristics, including price, affected consumers' preferences. The survey was offered as one way to estimate damages.25

Notes:
19. United States v. Green, 389 F. Supp. 2d 29 (D. Mass. 2005), rev'd on other grounds, 426 F.3d 1 (1st Cir. 2005) (evaluating minority underrepresentation in the jury pool by comparing racial composition of the voting-age population in the district with the racial breakdown indicated in juror questionnaires returned to court); see also People v. Harris, 36 Cal. 3d 36, 679 P.2d 433 (Cal. 1984).
20. John Johnson v. Big Lots Stores, Inc., No. 04-321, 2008 U.S. Dist. LEXIS 35316, at *20 (E.D. La. Apr. 29, 2008); Stender v. Lucky Stores, Inc., 803 F. Supp. 259, 326 (N.D. Cal. 1992); EEOC v. Sears, Roebuck & Co., 628 F. Supp. 1264, 1308 (N.D. Ill. 1986), aff'd, 839 F.2d 302 (7th Cir. 1988).
21. John Johnson v. Big Lots Stores, Inc., 561 F. Supp. 2d 567 (E.D. La. 2008); Marlo v. United Parcel Service, Inc., 251 F.R.D. 476 (C.D. Cal. 2008).
22. United States v. Scheffer, 523 U.S. 303, 309 (1998); United States v. Bishop, 64 F. Supp. 2d 1149 (D. Utah 1999); United States v. Varoudakis, No. 97-10158, 1998 WL 151238 (D. Mass. Mar. 27, 1998); State v. Shively, 268 Kan. 573 (2000), aff'd, 268 Kan. 589 (2000) (all cases in which courts determined, based on the inconsistent reactions revealed in several surveys, that the polygraph test has failed to achieve general acceptance in the scientific community). Contra, see Lee v. Martinez, 136 N.M. 166, 179–81, 96 P.3d 291, 304–06 (N.M. 2004). People v. Williams, 830 N.Y.S.2d 452 (2006) (expert permitted to testify regarding scientific studies of factors affecting the perceptual ability and memory of eyewitnesses to make identifications based in part on general acceptance demonstrated in survey of experts who study eyewitness identification).
23. E.g., People v. Page Books, Inc., 601 N.E.2d 273, 279–80 (Ill. App. Ct. 1992); State v. Williams, 598 N.E.2d 1250, 1256–58 (Ohio Ct. App. 1991).
24. E.g., United States v. Eagle, 586 F.2d 1193, 1195 (8th Cir. 1978); United States v. Tokars, 839 F. Supp. 1578, 1583 (D. Ga. 1993), aff'd, 95 F.3d 1520 (11th Cir. 1996); State v. Baumruk, 85 S.W.3d 644 (Mo. 2002); People v. Boss, 701 N.Y.S.2d 342 (App. Div. 1999).
25. Dolphin Tours, Inc. v. Pacifico Creative Servs., Inc., 773 F.2d 1506, 1508 (9th Cir. 1985). See also SMS Sys. Maint. Servs., Inc. v. Digital Equip. Corp., 188 F.3d 11 (1st Cir. 1999); Benjamin F. King, Statistics in Antitrust Litigation, in Statistics and the Law 49 (Morris H. DeGroot et al. eds., 1986). Surveys have long been used in antitrust litigation to help define relevant markets. In United States v. E.I. du Pont de Nemours & Co., 118 F. Supp. 41, 60 (D. Del. 1953), aff'd, 351 U.S. 377 (1956), a survey was used to develop the "market setting" for the sale of cellophane. In Mukand, Ltd. v. United States, 937 F. Supp. 910 (Ct. Int'l Trade 1996), a survey of purchasers of stainless steel wire rods was conducted to support a determination of competition and fungibility between domestic and Indian wire rod.

In a Title IX suit based on allegedly discriminatory scheduling of girls' sports, a survey was offered for the purpose of establishing how girls felt about the scheduling of girls' and boys' sports.26

A routine use of surveys in federal courts occurs in Lanham Act27 cases, when the plaintiff alleges trademark infringement28 or claims that false advertising29 has confused or deceived consumers. The pivotal legal question in such cases virtually demands survey research because it centers on consumer perception and memory (i.e., is the consumer likely to be confused about the source of a product, or does the advertisement imply a false or misleading message?).30 In addition, survey methodology has been used creatively to assist federal courts in managing mass torts litigation. Faced with the prospect of conducting discovery concerning 10,000 plaintiffs, the plaintiffs and defendants in Wilhoite v. Olin Corp.31 jointly drafted a discovery survey that was administered in person by neutral third parties, thus replacing interrogatories and depositions. It resulted in substantial savings in both time and cost.

Notes:
26. Alston v. Virginia High Sch. League, Inc., 144 F. Supp. 2d 526, 539–40 (W.D. Va. 1999).
27. Lanham Act § 43(a), 15 U.S.C. § 1125(a) (1946) (amended 2006).
28. E.g., Herman Miller v. Palazzetti Imports & Exports, 270 F.3d 298, 312 (6th Cir. 2001) ("Because the determination of whether a mark has acquired secondary meaning is primarily an empirical inquiry, survey evidence is the most direct and persuasive evidence."); Simon Property Group v. MySimon, 104 F. Supp. 2d 1033, 1038 (S.D. Ind. 2000) ("Consumer surveys are generally accepted by courts as one means of showing the likelihood of consumer confusion."). See also Qualitex Co. v. Jacobson Prods. Co., No. CIV-90-1183HLH, 1991 U.S. Dist. LEXIS 21172 (C.D. Cal. Sept. 3, 1991), aff'd in part & rev'd in part on other grounds, 13 F.3d 1297 (9th Cir. 1994), rev'd on other grounds, 514 U.S. 159 (1995); Union Carbide Corp. v. Ever-Ready, Inc., 531 F.2d 366 (7th Cir.), cert. denied, 429 U.S. 830 (1976). According to Neal Miller, Facts, Expert Facts, and Statistics: Descriptive and Experimental Research Methods in Litigation, 40 Rutgers L. Rev. 101, 137 (1987), trademark law has relied on the institutionalized use of statistical evidence more than any other area of the law.
29. E.g., Southland Sod Farms v. Stover Seed Co., 108 F.3d 1134, 1142–43 (9th Cir. 1997); American Home Prods. Corp. v. Johnson & Johnson, 577 F.2d 160 (2d Cir. 1978); Rexall Sundown, Inc. v. Perrigo Co., 651 F. Supp. 2d 9 (E.D.N.Y. 2009); Mutual Pharm. Co. v. Ivax Pharms. Inc., 459 F. Supp. 2d 925 (C.D. Cal. 2006); Novartis Consumer Health v. Johnson & Johnson-Merck Consumer Pharms., 129 F. Supp. 2d 351 (D.N.J. 2000).
30. Courts have observed that "the court's reaction is at best not determinative and at worst irrelevant. The question in such cases is, what does the person to whom the advertisement is addressed find to be the message?" American Brands, Inc. v. R.J. Reynolds Tobacco Co., 413 F. Supp. 1352, 1357 (S.D.N.Y. 1976). The wide use of surveys in recent years was foreshadowed in Triangle Publications, Inc. v. Rohrlich, 167 F.2d 969, 974 (2d Cir. 1948) (Frank, J., dissenting). Called on to determine whether a manufacturer of girdles labeled "Miss Seventeen" infringed the trademark of the magazine Seventeen, Judge Frank suggested that, in the absence of a test of the reactions of "numerous girls and women," the trial court judge's finding as to what was likely to confuse was "nothing but a surmise, a conjecture, a guess," noting that "neither the trial judge nor any member of this court is (or resembles) a teen-age girl or the mother or sister of such a girl." Id. at 976–77.
31. No. CV-83-C-5021-NE (N.D. Ala. filed Jan. 11, 1983). The case ultimately settled before trial. See Francis E. McGovern & E. Allan Lind, The Discovery Survey, Law & Contemp. Probs., Autumn 1988, at 41.

B. Surveys Used to Help Assess Expert Acceptance in the Wake of Daubert

Scientists who offer expert testimony at trial typically present their own opinions. These opinions may or may not be representative of the opinions of the scientific community at large. In deciding whether to admit such testimony, courts applying the Frye test must determine whether the science being offered is generally accepted by the relevant scientific community. Under Daubert as well, a relevant factor used to decide admissibility is the extent to which the theory or technique has received widespread acceptance. Properly conducted surveys can provide a useful way to gauge acceptance, and courts recently have been offered assistance from surveys that allegedly gauge relevant scientific opinion. As with any scientific research, the usefulness of the information obtained from a survey depends on the quality of research design. Several critical factors have emerged that have limited the value of some of these surveys: problems in defining the relevant target population and identifying an appropriate sampling frame, response rates that raise questions about the representativeness of the results, and a failure to ask questions that assess opinions on the relevant issue.

Courts deciding on the admissibility of polygraph tests have considered results from several surveys of purported experts. Surveys offered as providing evidence of relevant scientific opinion have tested respondents from several populations: (1) professional polygraph examiners,32 (2) psychophysiologists (members of the Society for Psychophysiological Research),33 and (3) distinguished psychologists (Fellows of the Division of General Psychology of the American Psychological Association).34 Respondents in the first group expressed substantial confidence in the scientific accuracy of polygraph testing, and those in the third group expressed substantial doubts about it. Respondents in the second group were asked the same question across three surveys that differed in other aspects of their methodology (e.g., when testing occurred and what the response rate was). Although over 60% of those questioned in two of the three surveys characterized the polygraph as a useful diagnostic tool, one of the surveys was conducted in 1982 and the more recent survey, published in 1984, achieved only a 30% response rate.

Notes:
32. See plaintiff's survey described in Meyers v. Arcudi, 947 F. Supp. 581, 588 (D. Conn. 1996).
33. Susan L. Amato & Charles R. Honts, What Do Psychophysiologists Think About Polygraph Tests? A Survey of the Membership of SPR, 31 Psychophysiology S22 [abstract]; Gallup Organization, Survey of Members of the Society for Psychophysiological Research Concerning Their Opinions of Polygraph Test Interpretation, 13 Polygraph 153 (1984); William G. Iacono & David T. Lykken, The Validity of the Lie Detector: Two Surveys of Scientific Opinion, 82 J. Applied Psychol. 426 (1997).
34. Iacono & Lykken, supra note 33.

The third survey, also conducted in 1984, achieved a response rate of 90% and found that only 44% of respondents viewed the polygraph as a useful diagnostic tool. On the basis of these inconsistent reactions from the several surveys, courts have determined that the polygraph has failed to achieve general acceptance in the scientific community.35 In addition, however, courts have criticized the relevance of the population surveyed by proponents of the polygraph. For example, in Meyers v. Arcudi the court noted that the survey offered by proponents of the polygraph was a survey of "practitioners who estimated the accuracy of the control question technique [of polygraph testing] to be between 86% and 100%."36 The court rejected the conclusions from this survey on the basis of a determination that the population surveyed was not the relevant scientific community, noting that "many of them . . . do not even possess advanced degrees and are not trained in the scientific method."37

The link between specialized expertise and self-interest poses a dilemma in defining the relevant scientific population. As the court in United States v. Orians recognized, "The acceptance in the scientific community depends in large part on how the relevant scientific community is defined."38 In rejecting the defendants' urging that the court consider as relevant only psychophysiologists whose work is dedicated in large part to polygraph research, the court noted that Daubert "does not require the court to limit its inquiry to those individuals that base their livelihood on the acceptance of the relevant scientific theory. These individuals are often too close to the science and have a stake in its acceptance; i.e., their livelihood depends in part on the acceptance of the method."39

To be relevant to a Frye or Daubert inquiry on general acceptance, the questions asked in a survey of experts should assess opinions on the quality of the scientific theory and methodology, rather than asking whether or not the instrument should be used in a legal setting. Thus, a survey in which 60% of respondents agreed that the polygraph is "a useful diagnostic tool when considered with other available information," 1% viewed it as sufficiently reliable to be the sole determinant, and the remainder thought it entitled to little or no weight, failed to assess the relevant issue. As the court in United States v. Cordoba noted, because "useful" and "other available information" could have many meanings, "there is little wonder why [the response chosen by the majority of respondents] was most frequently selected."40

Notes:
35. United States v. Scheffer, 523 U.S. 303, 309 (1998); United States v. Bishop, 64 F. Supp. 2d 1149 (D. Utah 1999); Meyers v. Arcudi, 947 F. Supp. 581, 588 (D. Conn. 1996); United States v. Varoudakis, 48 Fed. R. Evid. Serv. 1187 (D. Mass. 1998).
36. Meyers v. Arcudi, 947 F. Supp. at 588.
37. Id.
38. 9 F. Supp. 2d 1168, 1173 (D. Ariz. 1998).
39. Id.
40. 991 F. Supp. 1199 (C.D. Cal. 1998), aff'd, 194 F.3d 1053 (9th Cir. 1999).

A similar flaw occurred in a survey conducted by experts opposed to the use of the polygraph in trial proceedings. Survey respondents were asked whether they would advocate that courts admit into evidence the outcome of a polygraph test.41 That question calls for more than an assessment of the accuracy of the polygraph, and thus does not appropriately limit expert opinion to issues within the expert's competence, that is, to the accuracy of the information provided by the test results. The survey also asked whether respondents agreed that the control question technique, the most common form of polygraph test, is accurate at least 85% of the time in real-life applications for guilty and innocent subjects.42 Although polygraph proponents frequently claim an accuracy level of 85%, it is up to the courts to decide what accuracy level would be required to justify admissibility. A better approach would be to ask survey respondents to estimate the level of accuracy they believe the test is likely to produce.43

Surveys of experts are no substitute for an evaluation of whether the testimony an expert witness is offering will assist the trier of fact. Nonetheless, courts can use an assessment of opinion in the relevant scientific community to aid in determining whether a particular expert is proposing to use methods that would be rejected by a representative group of experts to arrive at the opinion the expert will offer. Properly conducted surveys can provide an economical way to collect and present information on scientific consensus and dissensus.

C. Surveys Used to Help Assess Community Standards: Atkins v. Virginia

In Atkins v. Virginia,44 the U.S. Supreme Court determined that the Eighth Amendment's prohibition of "cruel and unusual punishment" forbids the execution of mentally retarded persons.45 Following the interpretation advanced in Trop v. Dulles46 that "The Amendment must draw its meaning from the evolving standards of decency that mark the progress of a maturing society,"47 the Court examined a variety of sources, including legislative judgments and public opinion polls, to find that a national consensus had developed barring such executions.48

Notes:
41. See Iacono & Lykken, supra note 33, at 430, tbl. 2 (1997).
42. Id.
43. At least two assessments should be made: an estimate of the accuracy for guilty subjects and an estimate of the accuracy for innocent subjects.
44. 536 U.S. 304, 322 (2002).
45. Although some groups have recently moved away from the term "mental retardation" in response to concerns that the term may have pejorative connotations, mental retardation was the name used for the condition at issue in Atkins and it continues to be employed in federal laws, in cases determining eligibility for the death penalty, and as a diagnosis by the medical profession.
46. 356 U.S. 86 (1958).
47. Id. at 101.
48. Atkins, 536 U.S. at 313–16.

. . . would be admissible at trial while reserving the question of the weight the evidence would be given.230 The Seventh Circuit called this approach a commendable procedure and suggested that it would have been even more desirable if the parties had "attempt[ed] in good faith to agree upon the questions to be in such a survey."231

The Manual for Complex Litigation, Second, recommended that parties be required, "before conducting any poll, to provide other parties with an outline of the proposed form and methodology, including the particular questions that will be asked, the introductory statements or instructions that will be given, and other controls to be used in the interrogation process."232 The parties then were encouraged to attempt to resolve any methodological disagreements before the survey was conducted.233 Although this passage in the second edition of the Manual has been cited with apparent approval,234 the prior agreement that the Manual recommends has occurred rarely, and the Manual for Complex Litigation, Fourth, recommends, but does not advocate requiring, prior disclosure and discussion of survey plans.235 As the Manual suggests, however, early disclosure can enable the parties to raise prompt objections that may permit corrective measures to be taken before a survey is completed.236

Rule 26 of the Federal Rules of Civil Procedure requires extensive disclosure of the basis of opinions offered by testifying experts. However, Rule 26 does not produce disclosure of all survey materials, because parties are not obligated to disclose information about nontestifying experts. Parties considering whether to commission or use a survey for litigation are not obligated to present a survey that produces unfavorable results. Prior disclosure of a proposed survey instrument places the party that ultimately would prefer not to present the survey in the position of presenting damaging results or leaving the impression that the results are not being presented because they were unfavorable.

Notes:
230. Before trial, the presiding judge was appointed to the court of appeals, and so the case was tried by another district court judge.
231. Union Carbide, 531 F.2d at 386. More recently, the Seventh Circuit recommended filing a motion in limine, asking the district court to determine the admissibility of a survey based on an examination of the survey questions and the results of a preliminary survey before the party undertakes the expense of conducting the actual survey. Piper Aircraft Corp. v. Wag-Aero, Inc., 741 F.2d 925, 929 (7th Cir. 1984). On one recent occasion, the parties jointly developed a survey administered by a neutral third-party survey firm. Scott v. City of New York, 591 F. Supp. 2d 554, 560 (S.D.N.Y. 2008) (survey design, including multiple pretests, negotiated with the help of the magistrate judge).
232. MCL 2d, supra note 16, § 21.484.
233. See id.
234. See, e.g., National Football League Props., Inc. v. New Jersey Giants, Inc., 637 F. Supp. 507, 514 n.3 (D.N.J. 1986).
235. MCL 4th, supra note 16, § 11.493 ("including the specific questions that will be asked, the introductory statements or instructions that will be given, and other controls to be used in the interrogation process.").
236. See id.

Anticipating such a situation, parties do not decide whether an expert will testify until after the results of the survey are available.

Nonetheless, courts are in a position to encourage early disclosure and discussion even if they do not lead to agreement between the parties. In McNeilab, Inc. v. American Home Products Corp.,237 Judge William C. Conner encouraged the parties to submit their survey plans for court approval to ensure their evidentiary value; the plaintiff did so and altered its research plan based on Judge Conner's recommendations. Parties can anticipate that changes consistent with a judicial suggestion are likely to increase the weight given to, or at least the prospects of admissibility of, the survey.238

B. Does the Survey Report Include Complete and Detailed Information on All Relevant Characteristics?

The completeness of the survey report is one indicator of the trustworthiness of the survey and the professionalism of the expert who is presenting the results of the survey. A survey report generally should provide in detail:

1. The purpose of the survey;
2. A definition of the target population and a description of the sampling frame;
3. A description of the sample design, including the method of selecting respondents, the method of interview, the number of callbacks, respondent eligibility or screening criteria and method, and other pertinent information;
4. A description of the results of sample implementation, including the number of
   a. potential respondents contacted,
   b. potential respondents not reached,
   c. noneligibles,
   d. refusals,
   e. incomplete interviews or terminations, and
   f. completed interviews;
5. The exact wording of the questions used, including a copy of each version of the actual questionnaire, interviewer instructions, and visual exhibits;239
6. A description of any special scoring (e.g., grouping of verbatim responses into broader categories);
7. A description of any weighting or estimating procedures used;
8. Estimates of the sampling error, where appropriate (i.e., in probability samples);
9. Statistical tables clearly labeled and identified regarding the source of the data, including the number of raw cases forming the base for each table, row, or column; and
10. Copies of interviewer instructions, validation results, and code books.240

Additional information to include in the survey report may depend on the nature of sampling design. For example, reported response rates along with the time each interview occurred may assist in evaluating the likelihood that nonresponse biased the results (a minimal computation from the item 4 counts is sketched after the notes below). In a survey designed to assess the duration of employee preshift activities, workers were approached as they entered the workplace; records were not kept on refusal rates or the timing of participation in the study. Thus, it was impossible to rule out the plausible hypothesis that individuals who arrived early for their shift with more time to spend on preshift activities were more likely to participate in the study.241

Survey professionals generally do not describe pilot testing in their survey reports. They would be more likely to do so if courts recognized that surveys are improved by pilot work that maximizes the likelihood that respondents understand the questions they are being asked. Moreover, the Federal Rules of Civil Procedure may require that a testifying expert disclose pilot work that serves as a basis for the expert's opinion. The situation is more complicated when a nontestifying expert conducts the pilot work and the testifying expert learns about the pilot testing only indirectly through the attorney's advice about the relevant issues in the case. Some commentators suggest that attorneys are obligated to disclose such pilot work.242

Notes:
237. 848 F.2d 34, 36 (2d Cir. 1988) (discussing with approval the actions of the district court). See also Hubbard v. Midland Credit Mgmt, 2009 U.S. Dist. LEXIS 13938 (S.D. Ind. Feb. 23, 2009) (court responded to plaintiff's motions to approve survey methodology with a critique of the proposed methodology).
238. Larry C. Jones, Developing and Using Survey Evidence in Trademark Litigation, 19 Memphis St. U. L. Rev. 471, 481 (1989).
239. The questionnaire itself can often reveal important sources of bias. See Marria v. Broaddus, 200 F. Supp. 2d 280, 289 (S.D.N.Y. 2002) (court excluded survey sent to prison administrators based on questionnaire that began, "We need your help. We are helping to defend the NYS Department of Correctional Service in a case that involves their policy on intercepting Five-Percenter literature. Your answers to the following questions will be helpful in preparing a defense.").
240. These criteria were adapted from the Council of American Survey Research Organizations, supra note 76, § III.B. Failure to supply this information substantially impairs a court's ability to evaluate a survey. In re Prudential Ins. Co. of Am. Sales Practices Litig., 962 F. Supp. 450, 532 (D.N.J. 1997) (citing the first edition of this manual). But see Florida Bar v. Went for It, Inc., 515 U.S. 618, 626–28 (1995), in which a majority of the Supreme Court relied on a summary of results prepared by the Florida Bar from a consumer survey purporting to show consumer objections to attorney solicitation by mail. In a strong dissent, Justice Kennedy, joined by three other Justices, found the survey inadequate based on the document available to the court, pointing out that the summary included "no actual surveys, few indications of sample size or selection procedures, no explanations of methodology, and no discussion of excluded results . . . no description of the statistical universe or scientific framework that permits any productive use of the information the so-called Summary of Record contains." Id. at 640.
241. See Chavez v. IBP, Inc., 2004 U.S. Dist. LEXIS 28838 (E.D. Wash. Aug. 18, 2004).
242. See Yvonne C. Schroeder, Pretesting Survey Questions, 11 Am. J. Trial Advoc. 195, 197–201 (1987).
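The sample-disposition counts called for by item 4 of the checklist above are what permit a response rate to be computed. The following is a minimal sketch in the spirit of, but simpler than, published standards such as AAPOR's response rate formulas; the counts and the exact formula shown are illustrative assumptions, and a survey report should identify the published formula actually used.

```python
# Invented sample-disposition counts (checklist item 4).
dispositions = {
    "completed": 620,                  # item 4f
    "incomplete_or_terminated": 45,    # item 4e
    "refusals": 180,                   # item 4d
    "noneligible": 95,                 # item 4c, excluded from the denominator
    "not_reached": 310,                # item 4b
}

# A simplified response rate: completed interviews divided by all units
# that were or may have been eligible (known noneligibles excluded).
denominator = (dispositions["completed"]
               + dispositions["incomplete_or_terminated"]
               + dispositions["refusals"]
               + dispositions["not_reached"])
response_rate = dispositions["completed"] / denominator
print(f"response rate: {response_rate:.1%}")  # about 53.7%
```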

C. In Surveys of Individuals, What Measures Were Taken to Protect the Identities of Individual Respondents?

The respondents questioned in a survey generally do not testify in legal proceedings and are unavailable for cross-examination. Indeed, one of the advantages of a survey is that it avoids a repetitious and unrepresentative parade of witnesses. To verify that interviews occurred with qualified respondents, standard survey practice includes validation procedures,243 the results of which should be included in the survey report.

Conflicts may arise when an opposing party asks for survey respondents' names and addresses so that they can re-interview some respondents. The party introducing the survey or the survey organization that conducted the research generally resists supplying such information.244 Professional surveyors as a rule promise confidentiality in an effort to increase participation rates and to encourage candid responses, although to the extent that identifying information is collected, such promises may not effectively prevent a lawful inquiry. Because failure to extend confidentiality may bias both the willingness of potential respondents to participate in a survey and their responses, the professional standards for survey researchers generally prohibit disclosure of respondents' identities. "The use of survey results in a legal proceeding does not relieve the Survey Research Organization of its ethical obligation to maintain in confidence all Respondent-identifiable information or lessen the importance of Respondent anonymity."245 Although no surveyor–respondent privilege currently is recognized, the need for surveys and the availability of other means to examine and ensure their trustworthiness argue for deference to legitimate claims for confidentiality in order to avoid seriously compromising the ability of surveys to produce accurate information.246

Copies of all questionnaires should be made available upon request so that the opposing party has an opportunity to evaluate the raw data. All identifying information, such as the respondent's name, address, and telephone number, should be removed to ensure respondent confidentiality.

Notes:
243. See supra Section V.C.
244. See, e.g., Alpo Petfoods, Inc. v. Ralston Purina Co., 720 F. Supp. 194 (D.D.C. 1989), aff'd in part and vacated in part, 913 F.2d 958 (D.C. Cir. 1990).
245. Council of Am. Survey Res. Orgs., supra note 76, § I.A.3.f. Similar provisions are contained in the By-Laws of the American Association for Public Opinion Research.
246. United States v. Dentsply Int'l, Inc., 2000 U.S. Dist. LEXIS 6994, at *23 (D. Del. May 10, 2000) (Fed. R. Civ. P. 26(a)(1) does not require party to produce the identities of individual survey respondents); Litton Indus., Inc., No. 9123, 1979 FTC LEXIS 311, at *13 & n.12 (June 19, 1979) (Order Concerning the Identification of Individual Survey-Respondents with Their Questionnaires) (citing Frederick H. Boness & John F. Cordes, The Researcher–Subject Relationship: The Need for Protection and a Model Statute, 62 Geo. L.J. 243, 253 (1973)); see also Applera Corp. v. MJ Research, Inc., 389 F. Supp. 2d 344, 350 (D. Conn. 2005) (denying access to names of survey respondents); Lampshire v. Procter & Gamble Co., 94 F.R.D. 58, 60 (N.D. Ga. 1982) (defendant denied access to personal identifying information about women involved in studies by the Centers for Disease Control based on Fed. R. Civ. P. 26(c) giving court the authority to enter "any order which justice requires to protect a party or persons from annoyance, embarrassment, oppression, or undue burden or expense.") (citation omitted).

VIII. Acknowledgment

Thanks are due to Jon Krosnick for his research on surveys and his always sage advice.
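The removal of identifying information described in Section VII.C is mechanical in nature. A minimal sketch of such de-identification follows; the record layout, field names, and values are invented for illustration only.

```python
# Invented raw questionnaire records; all field names are hypothetical.
records = [
    {"name": "R. Doe", "address": "12 Elm St.", "phone": "555-0101",
     "q1_confused": "yes", "q2_brand_recalled": "Acme"},
    {"name": "J. Roe", "address": "9 Oak Ave.", "phone": "555-0102",
     "q1_confused": "no", "q2_brand_recalled": "none"},
]

IDENTIFIERS = {"name", "address", "phone"}

def deidentify(record: dict) -> dict:
    """Drop direct identifiers before copies of questionnaires are produced
    for the opposing party, keeping the substantive answers intact."""
    return {k: v for k, v in record.items() if k not in IDENTIFIERS}

for r in records:
    print(deidentify(r))
```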

Glossary of Terms

The following terms and definitions were adapted from a variety of sources, including Handbook of Survey Research (Peter H. Rossi et al. eds., 1st ed. 1983; Peter V. Marsden & James D. Wright eds., 2d ed. 2010); Measurement Errors in Surveys (Paul P. Biemer et al. eds., 1991); Willem E. Saris, Computer-Assisted Interviewing (1991); Seymour Sudman, Applied Sampling (1976).

branching. A questionnaire structure that uses the answers to earlier questions to determine which set of additional questions should be asked (e.g., citizens who report having served as jurors on a criminal case are asked different questions about their experiences than citizens who report having served as jurors on a civil case).

CAI (computer-assisted interviewing). A method of conducting interviews in which an interviewer asks questions and records the respondent's answers by following a computer-generated protocol.

CAPI (computer-assisted personal interviewing). A method of conducting face-to-face interviews in which an interviewer asks questions and records the respondent's answers by following a computer-generated protocol.

CATI (computer-assisted telephone interviewing). A method of conducting telephone interviews in which an interviewer asks questions and records the respondent's answers by following a computer-generated protocol.

closed-ended question. A question that provides the respondent with a list of choices and asks the respondent to choose from among them.

cluster sampling. A sampling technique allowing for the selection of sample elements in groups or clusters, rather than on an individual basis; it may significantly reduce field costs and may increase sampling error if elements in the same cluster are more similar to one another than are elements in different clusters.

confidence interval. An indication of the probable range of error associated with a sample value obtained from a probability sample.

context effect. A previous question influences the way the respondent perceives and answers a later question.

convenience sample. A sample of elements selected because they were readily available.

coverage error. Any inconsistencies between the sampling frame and the target population.

double-blind research. Research in which the respondent and the interviewer are not given information that will alert them to the anticipated or preferred pattern of response.

error score. The degree of measurement error in an observed score (see true score).

full-filter question. A question asked of respondents to screen out those who do not have an opinion on the issue under investigation before asking them the question proper.

mall intercept survey. A survey conducted in a mall or shopping center in which potential respondents are approached by a recruiter (intercepted) and invited to participate in the survey.

multistage sampling design. A sampling design in which sampling takes place in several stages, beginning with larger units (e.g., cities) and then proceeding with smaller units (e.g., households or individuals within these units).

noncoverage error. The omission of eligible population units from the sampling frame.

nonprobability sample. Any sample that does not qualify as a probability sample.

open-ended question. A question that requires the respondent to formulate his or her own response.

order effect. A tendency of respondents to choose an item based in part on the order of response alternatives on the questionnaire (see primacy effect and recency effect).

parameter. A summary measure of a characteristic of a population (e.g., average age, proportion of households in an area owning a computer). Statistics are estimates of parameters.

pilot test. A small field test replicating the field procedures planned for the full-scale survey; although the terms pilot test and pretest are sometimes used interchangeably, a pretest tests the questionnaire, whereas a pilot test generally tests proposed collection procedures as well.

population. The totality of elements (individuals or other units) that have some common property of interest; the target population is the collection of elements that the researcher would like to study. Also, universe.

population value, population parameter. The actual value of some characteristic in the population (e.g., the average age); the population value is estimated by taking a random sample from the population and computing the corresponding sample value.

pretest. A small preliminary test of a survey questionnaire. See pilot test.

primacy effect. A tendency of respondents to choose early items from a list of choices; the opposite of a recency effect.

probability sample. A type of sample selected so that every element in the population has a known nonzero probability of being included in the sample; a simple random sample is a probability sample.

probe. A followup question that an interviewer asks to obtain a more complete answer from a respondent (e.g., "Anything else?" "What kind of medical problem do you mean?").

quasi-filter question. A question that offers a "don't know" or "no opinion" option to respondents as part of a set of response alternatives; used to screen out respondents who may not have an opinion on the issue under investigation.

random sample. See probability sample.

recency effect. A tendency of respondents to choose later items from a list of choices; the opposite of a primacy effect.

sample. A subset of a population or universe selected so as to yield information about the population as a whole.

sampling error. The estimated size of the difference between the result obtained from a sample study and the result that would be obtained by attempting a complete study of all units in the sampling frame from which the sample was selected in the same manner and with the same care.

sampling frame. The source or sources from which the individuals or other units in a sample are drawn.

secondary meaning. A descriptive term that becomes protectable as a trademark if it signifies to the purchasing public that the product comes from a single producer or source.

simple random sample. The most basic type of probability sample; each unit in the population has an equal probability of being in the sample, and all possible samples of a given size are equally likely to be selected.

skip pattern, skip sequence. A sequence of questions in which some should not be asked (should be skipped) based on the respondent's answer to a previous question (e.g., if the respondent indicates that he does not own a car, he should not be asked what brand of car he owns).

stratified sampling. A sampling technique in which the researcher subdivides the population into mutually exclusive and exhaustive subpopulations, or strata; within these strata, separate samples are selected. Results can be combined to form overall population estimates or used to report separate within-stratum estimates (see the sketch following this glossary).

survey-experiment. A survey with one or more control groups, enabling the researcher to test a causal proposition.

survey population. See population.

systematic sampling. A sampling technique that consists of a random starting point and the selection of every nth member of the population; it is generally analyzed as if it were a simple random sample and generally produces the same results.

target population. See population.

trade dress. A distinctive and nonfunctional design of a package or product protected under state unfair competition law and the federal Lanham Act § 43(a), 15 U.S.C. § 1125(a) (1946) (amended 1992).

true score. The underlying true value, which is unobservable because there is always some error in measurement; the observed score = true score + error score.

universe. See population.
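Several of the sampling terms defined above (simple random sample, stratified sampling, systematic sampling) can be illustrated in a few lines of code. The sketch below uses an invented frame of 1,000 employees at two plants; the design choices shown (proportional allocation, a sampling interval of 10) are assumptions for illustration only.

```python
import random

random.seed(2)  # for reproducibility

# Invented sampling frame: 1,000 employees, 700 at plant A and 300 at plant B.
frame = [{"id": i, "plant": "A" if i < 700 else "B"} for i in range(1000)]

# Simple random sample: every unit has an equal chance of selection.
srs = random.sample(frame, 100)

# Stratified sampling: sample separately within each stratum, here with
# proportional allocation (70 from plant A, 30 from plant B).
strata = {"A": [u for u in frame if u["plant"] == "A"],
          "B": [u for u in frame if u["plant"] == "B"]}
stratified = random.sample(strata["A"], 70) + random.sample(strata["B"], 30)

# Systematic sampling: a random starting point, then every nth unit (n = 10).
start = random.randrange(10)
systematic = frame[start::10]

print(len(srs), len(stratified), len(systematic))  # 100 100 100
```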

References on Survey Research

Paul P. Biemer, Robert M. Groves, Lars E. Lyberg, Nancy A. Mathiowetz, & Seymour Sudman (eds.), Measurement Errors in Surveys (2004).
Jean M. Converse & Stanley Presser, Survey Questions: Handcrafting the Standardized Questionnaire (1986).
Mick P. Couper, Designing Effective Web Surveys (2008).
Don A. Dillman, Jolene Smyth, & Leah M. Christian, Internet, Mail and Mixed-Mode Surveys: The Tailored Design Method (3d ed. 2009).
Robert M. Groves, Floyd J. Fowler, Jr., Mick P. Couper, James M. Lepkowski, Eleanor Singer, & Roger Tourangeau, Survey Methodology (2004).
Sharon Lohr, Sampling: Design and Analysis (2d ed. 2010).
Questions About Questions: Inquiries into the Cognitive Bases of Surveys (Judith M. Tanur ed., 1992).
Howard Schuman & Stanley Presser, Questions and Answers in Attitude Surveys: Experiments on Question Form, Wording and Context (1981).
Monroe G. Sirken, Douglas J. Herrmann, Susan Schechter, Norbert Schwarz, Judith M. Tanur, & Roger Tourangeau, Cognition and Survey Research (1999).
Seymour Sudman, Applied Sampling (1976).
Survey Nonresponse (Robert M. Groves, Don A. Dillman, John L. Eltinge, & Roderick J. A. Little eds., 2002).
Telephone Survey Methodology (Robert M. Groves, Paul P. Biemer, Lars E. Lyberg, James T. Massey, & William L. Nicholls eds., 1988).
Roger Tourangeau, Lance J. Rips, & Kenneth Rasinski, The Psychology of Survey Response (2000).
