
Experimentation and Evaluation Plans for the 2010 Census: Interim Report (2008)

Chapter 2: Initial Views on 2010 Census Experiments


A GENERAL APPROACH TO THE SELECTION OF CENSUS EXPERIMENTS

The Census Bureau provided the panel with a list of 52 topics for experimentation or evaluation, categorized under 11 general headings (see Appendix A). In addition to the topics themselves, the Census Bureau provided indications as to (a) whether modification of the relevant census processes has a high priority, (b) whether modification of the relevant census processes could potentially save substantial funds in the 2020 census, (c) whether results of an experiment could conclusively measure the effects on census data quality, (d) whether the issue addresses operations that are new since the 2000 census, and (e) whether data will be available to answer the particular questions posed.

The panel found these topics and the associated assessments very helpful in focusing our work. The assessments of these topics, in particular, represent a considerable advance over the processes used to select the evaluations and experiments prior to the 2000 census. However, we think that the Census Bureau can go further, when preparing for the analogous 2020 CPEX program, by providing a more developed context for evaluating various topics for potential census experiments.

It is difficult to develop priorities without some sense of the collection of census designs that are under serious consideration. For example, it was not useful, at least from a decennial census perspective, to test skip patterns for the long form in 2000, given that the likely design in 2010 was a short-form-only census (although it may have been useful in support of the American Community Survey). Similarly, it was not useful to test an administrative records census in the Administrative Records Census 2000 Experiment when that was a remote possibility for the 2010 census. We understand that it will not be possible for the Census Bureau to produce a single proposal for the general design of the next census when it is time to select the experiments and evaluations for the current census, but it should be possible to produce a relatively small number of leading alternative designs that are under consideration. To help define possible designs, fundamental questions like the following might be asked:

• Could the telephone or the Internet be used more broadly as an alternative to mailing back census questionnaires for data collection?
• Could administrative records or other data sources be used to better target various operations?
• Could administrative records be used to augment last-resort or proxy enumeration in the latter stages of nonresponse follow-up?

Having a set of designs that are under consideration helps to direct the experimentation toward resolving important issues that discriminate among the designs.

Although we realize that the following are not readily available, in the future it would also be useful to have, for both the current census processes and, to the extent possible, any alternative approaches: (1) estimates of census costs by component operation (and the recent history of costs)[1] and (2) the potential impact on the quality of the collected data by component operation.

[1] It is useful to note here that the cost of the 2010 census is projected to be over $11 billion, which is approximately $100 per housing unit. Therefore, the use of any alternatives that offer substantial cost savings is a crucial benefit in looking toward the 2020 census.

The attribution of both coverage and characteristics error to component operations or current processes, let alone to suggested alternatives, on a national level, not to mention for demographic subgroups, would have been very difficult to achieve in past censuses. The planned census coverage measurement program in 2010 is hoping to make progress in assessing and attributing component coverage error to various sources. This is an important development, because information on the impact on costs and quality of various alternatives would allow the Census Bureau to better justify priorities in undertaking various experiments. Furthermore, even when costs and impacts on accuracy are difficult to estimate, it should generally be possible to determine the major cost drivers and the leading sources of error.

There are two other modifications to the Census Bureau's list of topics that would have facilitated setting priorities. First, it would have been helpful if the list had been separated into candidates for evaluations and candidates for formal experiments. An experiment is, generally speaking, not possible until a reasonable alternative has been identified. Therefore, listing any alternative methodologies, along with any knowledge of their potential advantages and disadvantages, will facilitate the discussion of which issues should be the focus of either experimentation or evaluation. Second, a summary of the current state of research on some of the issues described would have been helpful (in Appendix A, the column on "new to census" is related to this). While some of these issues are extremely new, others, for example questionnaire design, are topics for which the Census Bureau has a history of relevant research. This information would have supported a more refined judgment of the likelihood that various alternative approaches might lead to important improvements.

PRIORITY TOPICS FOR EXPERIMENTATION IN 2010

Without an overall strategy for the design of the 2020 census, it was difficult for the panel to develop strict priorities for the topics that should and should not be examined through experiments in the 2010 census. This lack of a strategy could have been overcome to some degree with information on the potential impact on census costs and accuracy of replacing various census component processes with alternatives, since research on census methods has, at its most basic level, two main objectives: reducing costs and improving accuracy. However, this information is not available at this point, and so the panel developed the following set of priority topics for experiments based on speculations concerning the possible designs of the 2020 census and qualitative information on the potential impact on costs and

accuracy from the use of alternative processes. In the same vein, the primary goal of each experiment that we are recommending for priority consideration is to better understand the impacts on both census costs and census data quality of using alternatives to current census methodology. The three recommendations on experimentation in this chapter should be considered by the Census Bureau as the three highest priority recommendations in this report. Throughout, the panel was mindful of the special context that the decennial census provides for experimentation; therefore, one additional criterion applied was whether experimentation on the topic under consideration would substantially benefit from a decennial census environment.

To start, we put forward two topics for experimentation that were not given sufficient prominence in the list provided by the Census Bureau (see Appendix A).[2] Internet data collection was not mentioned in the list, and the use of administrative records was mentioned only briefly (items A.6 and C.6 in Appendix A) as possibly playing a role in augmenting coverage measurement data collection, in otherwise identifying coverage problems, and in identifying and classifying duplicates. These are both very important mechanisms for improved data collection and improved evaluation.

[2] Recall that the Census Bureau typically refers to a census experiment as a study involving field data collection—typically carried out simultaneously with the decennial census itself—in which alternatives to census processes currently in use are assessed for a subset of the population. Census evaluations are usually post hoc analyses of data collected as part of the decennial census process to determine whether individual steps in the census operated as expected.

Before expanding on those two issues, we also note that research and experimentation on the American Community Survey (ACS) were not mentioned prominently in the 2010 Census Program for Evaluations and Experiments (CPEX) plan. We understand that ACS research and testing are intended to be handled separately, possibly using an experimental methods panel to identify improvements in ACS methodology. However, there are important commonalities between the methods used to collect ACS data and those used to collect decennial census data, and these commonalities need to be exploited. It is very likely that more efficient and better research will be possible by combining perspectives from both operations. An explicit recognition of both the crucial need for an ACS research and experimentation program (recommended in National Research Council, 2007) and the potential for cross-fertilization between such an ACS program and the CPEX program would be extremely desirable. Furthermore, given that the ACS and the decennial census will be collecting data simultaneously, measurement of the possible impact of the ACS on decennial census data collection, especially coverage follow-up (CFU) and possibly the coverage measurement effort, would be worthwhile. Finally, as we discuss below, the possible impact of the different residence concepts used by the census and the ACS is a major concern that can and should be assessed as part of the 2010 CPEX.

Internet Data Collection

The Internet is becoming the preferred mode for many households to conduct their banking, shopping, tax filing, and other official communications and interactions. It is anticipated that the Internet will also soon become a major medium for survey data collection. In the decennial census context, the Internet provides important advantages, including alternate ways of representing residence rules, increased facility for the presentation of questionnaires in foreign languages, real-time editing (illustrated in the sketch at the end of this section), and immediate transmission of data, which has important benefits for minimizing the overlap of census data collection operations.

With respect to the representation of residence concepts, an Internet-based questionnaire could make it easier to display (and link to) additional examples and instructions for determining census residence; it could also guide respondents through a more detailed set of probe questions in order to determine household counts more accurately. An Internet option could provide linguistically challenged respondents with a wider array of questionnaire assistance tools and, perhaps, administration of the actual census questions in more languages than has been feasible under the financial and logistical constraints of paper administration.

The experience in many other countries (see Appendix B for details) is that this alternative mode of response provides important benefits, which are likely to increase as 2020 approaches. In particular, the recent 2006 Canadian experience is that the use of the Internet as a response option does improve the quality and timeliness of responses (Statistics Canada, 2007).

As described in Appendix B, the Census Bureau has decided against the use of the Internet in 2010 for two principal reasons. First, it believes that an Internet option is unlikely to appreciably improve the rate of response, given the results of the 2003 and 2005 National Census Tests. Second, there are issues related to security that need to be considered, including the potential for hackers to disrupt the data collection, in addition to any public perception problems related to security concerns.[3]

[3] We note that there is generally little concern about biases in responses received by the Internet, for two reasons. First, there will always be multiple modes for response in the census, given the heterogeneous population that is being counted, so mode bias is ubiquitous. Second, mode bias for the questions on the census short form will be relatively modest, since there is little room for interpretation, except possibly for residence rules and race/ethnicity.

It is not our charge to evaluate the Census Bureau's decision not to use the Internet for data collection in the 2010 census. However, it is obvious from the discussion in Appendix B that many countries are already moving strongly in this direction. More importantly, given the advantages listed above and the anticipation of greater advantages in the future, the Census Bureau needs to start now to prepare for use of the Internet as a major means of data collection in the 2020 census. An important step in this preparation is the inclusion of an experiment on Internet data collection in the 2010 census.
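To make the real-time editing advantage concrete, the sketch below shows the kind of consistency check an Internet questionnaire could run before submission: comparing the Question 1 household count against the roster of persons actually provided, the same discrepancy that would otherwise have to be resolved later by a coverage follow-up interview. The field names and messages are illustrative assumptions, not a Census Bureau specification.

```python
# Hypothetical sketch of real-time edits an Internet questionnaire could
# run before submission. Field names are illustrative, not a Census
# Bureau specification.

def realtime_edits(household_count: int, person_records: list[dict]) -> list[str]:
    """Return messages to show the respondent before the form is accepted."""
    problems = []

    # Count/roster consistency: on paper, this mismatch is caught only
    # later and triggers a coverage follow-up interview; online, the
    # respondent can resolve it immediately.
    if household_count != len(person_records):
        problems.append(
            f"You reported {household_count} residents in Question 1 but "
            f"provided details for {len(person_records)} people. Please "
            "review who was living or staying here on April 1."
        )

    # Item completeness: prompt for missing short-form items while the
    # respondent is still engaged, rather than imputing them afterward.
    for i, person in enumerate(person_records, start=1):
        for item in ("name", "sex", "age", "race"):
            if not person.get(item):
                problems.append(f"Person {i}: '{item}' is blank.")

    return problems


# Example: a two-person roster reported as a three-person household.
issues = realtime_edits(3, [{"name": "A", "sex": "F", "age": 40, "race": "Asian"},
                            {"name": "B", "sex": "M", "age": 9, "race": ""}])
for msg in issues:
    print(msg)
```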

Regarding possible problems in access to and use of the Internet, the panel thinks that there may be alternative ways of interfacing with respondents that could facilitate Internet response, rather than using the mailed questionnaire as the initiating event. Regarding security concerns, Canada and other countries have been able to mitigate them successfully, and it thus seems likely that the United States should be able to address this issue in time for 2020. While the testing of an Internet response option does not require a census context, such a context would be very useful, since the complex counting rules needed for unduplicating double counts are more easily implemented in a complete count operation. Also, response frequency is substantially higher in the census than in test censuses. We therefore recommend that the Census Bureau include an experiment during the 2010 census that uses alternative mechanisms to facilitate Internet responses and measures the frequency of use for each, along with the expeditiousness and quality of response. It may also be possible to ask respondents whether they would use an online foreign language version if one were available.

RECOMMENDATION 1: The Census Bureau should include, in the 2010 census, a test of Internet data collection as an alternative means of enumeration. Such a test should investigate means of facilitating Internet response and should measure the impact on data quality, the expeditiousness of response, and the impact on the use of foreign language forms.

Use of Administrative Records to Assist in Component Census Operations

Administrative records are data collected as a by-product of the management of federal, state, or local government programs. Key examples for census applications include tax records, welfare records, building permit records, Medicare data, birth and death records, and data on immigration and emigration. Administrative records have a number of potential applications in the decennial census. These applications can be separated into those in which administrative records data are used indirectly and those in which they are used directly as decennial census data.

Applications in which administrative records data are used indirectly include:

• For improvement of the Master Address File (MAF): addresses found in a merged administrative records file that were not on the MAF could be visited for field validation.
• To validate edit protocols:[4] edit protocols that are used to make decisions about inconsistent information in responses could be based on (or evaluated using) administrative records. For example, a 22-year-old listed as living both with his parents and in a prison could have his enumeration moved to the prison address through information found in administrative records (a sketch of such a rule follows these lists).

• For coverage improvement: for households or individuals found on possibly more than one administrative list who were not enumerated in the census, fieldwork could be instigated at the indicated address; furthermore, addresses identified as vacant could be checked to see whether that assessment agrees with information in administrative records.
• For coverage measurement and coverage evaluation: consistent with item A.6 in Appendix A, administrative records could be used to improve the information collected in postenumeration survey interviews;[5] furthermore, administrative records could be used to allocate demographic analysis estimates[6] to subnational regions.
• To help target households for various purposes (see below).
• For duplicate search: administrative records could be used to determine whether two records that have been matched actually represent the same person, or to determine the correct census residence without resorting to fieldwork.[7]

[4] An edit protocol is an automated rule that either generates an imputed response or changes a collected response based on the values of other responses.
[5] A postenumeration survey is a survey taken after the census is concluded that is used to measure coverage errors.
[6] Demographic analysis is an accounting scheme, roughly births plus immigrants minus deaths minus emigrants, for estimating the size of national demographic groups.
[7] An evaluation of A.C.E. Revision II estimates of duplication in Census 2000 using administrative records information demonstrated the potential for use of this information (for details, see Mulry et al., 2007). Administrative records might be used to confirm whether enumerations that are linked by computerized search are the same persons when fieldwork was unable to provide confirmation.

Applications in which administrative records data are either used directly in the decennial census or in assessing coverage include:

• As an alternative to last-resort proxy response: instead of asking a neighbor or landlord for information when a respondent has not been located after six attempts, information available from administrative lists could be used for the enumeration.
• As an alternative to item and unit imputation: in situations in which the Census Bureau uses either item or unit imputation (see National Research Council, 2004a, for a discussion of when unit imputation was used in the 2000 census), information from administrative records could be used as input to the imputation.
• As a means for coverage evaluation: a person who appears on two or three administrative lists but not in the census is evidence of a census omission.

In each of these applications, there could potentially be important benefits for the 2020 census, either in reducing field costs or in improving the quality of census data.
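The notion of an edit protocol can be made concrete with a small sketch. The rule below implements the prison example from the list above; the record layout, source labels, and exact-match step are illustrative assumptions, not an actual Census Bureau edit.

```python
# Minimal sketch of an edit protocol informed by administrative records.
# The record format and matching step are illustrative assumptions, not
# an actual Census Bureau specification.

def resolve_duplicate_enumeration(census_response: dict,
                                  admin_records: list[dict]) -> dict:
    """If a person is enumerated both at a household and at a group
    quarters facility (e.g., a prison), use administrative records to
    decide which enumeration to keep."""
    if census_response.get("also_enumerated_at") != "group_quarters":
        return census_response  # nothing to resolve

    # Look for a corroborating administrative record (e.g., a state
    # corrections roster) that places this person at the facility.
    for rec in admin_records:
        if (rec["name"] == census_response["name"]
                and rec["source"] == "corrections_roster"):
            resolved = dict(census_response)
            resolved["census_address"] = rec["facility_address"]
            resolved["edit_applied"] = "moved to group quarters per admin record"
            return resolved
    return census_response


# Example: a 22-year-old reported at his parents' home and at a prison.
person = {"name": "J. Doe", "age": 22, "census_address": "12 Elm St",
          "also_enumerated_at": "group_quarters"}
roster = [{"name": "J. Doe", "source": "corrections_roster",
           "facility_address": "State Correctional Facility, Unit 4"}]
print(resolve_duplicate_enumeration(person, roster))
```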

We justify our optimism about the potential for applying administrative records to improve the above census component operations, and therefore the need to test those applications in the 2010 census, on the following considerations. First, there is clearly much useful information contained in various administrative records. The nonsurvey nature of the data collection offers a real chance of providing useful information on hard-to-count individuals. This advantage probably motivated the Census Bureau's attempts to use information from administrative records for coverage improvement, as in 1980 with the Non-Household Sources Check and in 1990 with the Parolees and Probationers Check. Also, the Census Bureau will be using administrative records to generate some of the coverage follow-up interviews in 2010. On the other hand, there are also deficiencies in administrative records, including undercoverage of portions of the population (see National Research Council, 1994:Chapter 5, for a discussion of the limitations of administrative records systems for census applications).

Some of the existing research has been on the use of administrative records as an alternative to taking a census, notably AREX 2000, which is not that useful in assessing the value of administrative records for census component operations. As mentioned previously, the population coverage for the more thorough of the schemes tested in AREX 2000 was between 96 and 102 percent relative to the Census 2000 counts for the five test site counties. However, AREX 2000 and the census counted the same number of people at the housing unit level for only 51.1 percent of the households, and AREX counted within one person of the census for only 79.4 percent of the units. The Census Bureau has made substantial progress on administrative records since then. For example, E-StARS,[8] the Census Bureau's name for a merged and unduplicated list of individuals from several administrative lists (the basic merge-and-unduplicate operation is sketched below), was used to explain 85 percent of the discrepancies between the Maryland Food Stamp Registry recipients and estimates from the Census Supplementary Survey in 2001 (the pilot American Community Survey).

[8] E-StARS is a nationwide multipurpose research database, which combines administrative records from a variety of federal and state government sources and commercial databases with micro-data modeling to produce statistics for housing units and individuals that are comparable to decennial census results.

Although there has been much progress in assembling a high-quality merged, unduplicated list of individuals, there has been little research on the nine applications listed here, in which the objective is to use administrative records not as a surrogate census but to assist in carrying out specific component operations. The panel's optimism is based not only on the information contained in administrative records, but also on the recognition that some of the component operations, especially last-resort enumeration, are understandably error-prone or expensive (e.g., the coverage follow-up interview). Given that, administrative records do not have to be flawless to provide a benefit. In addition, looking toward 2020, the quality of administrative records has been steadily improving over time. E-StARS, the Census Bureau's merged list of unique administrative records for individuals and housing units, has about the right number of people. Also, the economic directorate of the Census Bureau has long used information from administrative records directly in establishment surveys. So there is reason for optimism that some of the applications listed could be substantially improved through the use of administrative records.
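The core operation behind a file like E-StARS, merging several administrative lists and unduplicating the result, can be sketched briefly. The sketch below is deliberately simplified (exact-key matching on name and date of birth); production systems use probabilistic record linkage with far richer identifiers.

```python
# Simplified sketch of merging administrative lists and removing
# duplicates. Real systems (e.g., E-StARS) use probabilistic record
# linkage; exact matching on (name, date of birth) here is purely
# illustrative.

def normalize(record: dict) -> tuple:
    """Build a crude match key; real linkage would tolerate typos,
    nicknames, and transposed dates."""
    return (record["name"].strip().lower(), record["dob"])

def merge_and_unduplicate(*lists: list[dict]) -> list[dict]:
    merged = {}
    for source in lists:
        for rec in source:
            key = normalize(rec)
            if key in merged:
                # Keep one record per person, noting every source list.
                merged[key]["sources"] |= {rec["source"]}
            else:
                merged[key] = {**rec, "sources": {rec["source"]}}
    return list(merged.values())


tax = [{"name": "Ana Ruiz", "dob": "1980-02-14", "source": "tax"}]
medicare = [{"name": "ana ruiz ", "dob": "1980-02-14", "source": "medicare"}]
for person in merge_and_unduplicate(tax, medicare):
    print(person["name"], sorted(person["sources"]))  # one record, two sources
```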

It is therefore important to determine, through either experiments or evaluations, which of the above (and other) applications of administrative records are most likely to be beneficial in the 2020 census, what needs to be done to implement such techniques nationally, and what the risks and benefits are. The basic idea would be to select several counties, merge and unduplicate all the relevant lists that can be collected for both individuals and addresses in those areas, and use the information from the merged file for some of the above purposes in comparison with the current census processes. In some cases, field verification would be needed to produce metrics for comparison, which is the main reason this might fall into the experimentation rather than the evaluation category. However, in many cases much could be discovered without additional field data collection. Clearly, a census context is extremely helpful or even essential for some of the above applications, such as duplicate search. An additional complication is that administrative records are improving in quality year by year, and any experiment or evaluation should take this into account. (This suggestion is closely related to items C.2 and C.6 on the Census Bureau's list of issues.)

A particular means by which administrative records could reduce field costs, at the price of possibly only a negligible reduction in data quality, is targeting. Targeting is the application of a census procedure to only a subset of the population, selected through an algorithm that attempts to differentiate between people or households that are and are not likely to benefit from the procedure. This algorithm is often supported by some external data source, and administrative records in particular should be studied as potentially playing this role (an illustrative scoring rule is sketched below). Administrative records offer opportunities to increase the scope and effectiveness of targeting, and in particular they may have important advantages for enumerating hard-to-count populations. (In a sense, the Census Bureau already uses targeting in several respects, including targeting of the advertising campaign, targeting areas for placement of "Be Counted" forms, and targeting areas for so-called blitz enumeration techniques.)

Of course, any time a census enumeration process used elsewhere is not applied in some areas, some of the omitted areas may have slightly poorer quality data as a result. So, for example, if a block canvass is not used in a particular block, there is a chance that new housing units there will be missed and that the area will receive a lower count as a result. (It should be noted that the Census Bureau has previously considered targeting for use with block canvassing but has so far rejected the idea.) However, if properly planned and implemented, targeting should increase overall census data accuracy and at the same time reduce costs: if the targeting is effective, the reduction in data quality due to the selective omission of a census process is likely to be very slight, and the resources saved can then be used in other ways to improve overall census data quality. Furthermore, sometimes resources are already constrained, and in those situations the question may not be whether to use targeting, but how best to use it. Also, through use of an algorithm, there is no intentional bias against any given area.
(It may also be worth mentioning that some suggest that targeting can be perceived as uncomfortably close to sampling for the count. This is clearly an incorrect perception; it is merely the allocation of scarce resources to those cases most likely to benefit from this additional effort at enumeration.)
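As an illustration of the kind of algorithm involved, the sketch below scores blocks for inclusion in an address canvass using administrative-records signals. The signals, weights, and threshold are invented for illustration; any operational rule would have to be validated against field data, which is precisely what the recommended experiments or evaluations would provide.

```python
# Illustrative targeting rule: decide which blocks receive a full address
# canvass, using administrative-records signals. Signals, weights, and
# threshold are invented for illustration, not an operational rule.

def canvass_score(block: dict) -> float:
    """Higher scores indicate blocks more likely to benefit from canvassing."""
    return (2.0 * block["new_building_permits"]          # new construction
            + 1.5 * block["admin_addresses_not_on_maf"]  # admin-list addresses missing from the MAF
            + 0.5 * block["vacancy_flags"])              # possible vacant/occupied confusion

def select_blocks_for_canvass(blocks: list[dict], threshold: float = 3.0) -> list[str]:
    return [b["block_id"] for b in blocks if canvass_score(b) >= threshold]


blocks = [
    {"block_id": "A", "new_building_permits": 4, "admin_addresses_not_on_maf": 2, "vacancy_flags": 0},
    {"block_id": "B", "new_building_permits": 0, "admin_addresses_not_on_maf": 0, "vacancy_flags": 1},
]
print(select_blocks_for_canvass(blocks))  # ['A']: the stable block B is skipped
```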

Clearly, further research (either experimentation or evaluation) is needed before targeting can be used in the decennial census. Given the promise of targeting, the panel thinks that the Census Bureau should give priority to experiments or evaluations that assess the promise of various forms of targeting, and should therefore retain sufficient data to ensure that such evaluations can be carried out. (Targeting is included in items C.3 and E.2 on the Census Bureau's list.) Creation of a Master Trace Sample, discussed in Chapter 3, is likely to satisfy this data need.

RECOMMENDATION 2: The Census Bureau should develop an experiment (or evaluation) that assesses the utility of administrative records for assistance in specific census component processes—for example, for improvement of the Master Address File, for nonresponse follow-up, for assessment of duplicate status, and for coverage improvement. In addition, either as an experiment or through evaluations, the Census Bureau should collect sufficient data to support assessment of the degree to which targeting various census processes, using administrative records, could reduce census costs or improve census quality.

Alternative Questionnaire Experiment

The 1980, 1990, and 2000 censuses all included some type of alternative questionnaire experiment in their associated research programs. The reason is straightforward: anything that can be done to increase response to the mailed questionnaire necessarily decreases the amount of work that must be done by enumerators in the field in following up with nonrespondents. Also, to the extent that the initial questionnaire can be made clearer, the quality of the collected data should improve. It is therefore a high priority that an alternative questionnaire experiment be included in the 2010 CPEX.

The Panel on Residence Rules in the Decennial Census (National Research Council, 2006:Finding 8.2) observed that "the Census Bureau often relies on small numbers (20 or less) of cognitive interviews or very large field tests (tens or hundreds of thousands of households, in omnibus census operational tests) to reach conclusions about the effectiveness of changes in census enumeration procedures." That panel argued for the development of more mid-range, smaller scale tests. We concur; there are numerous questionnaire design issues for which smaller scale tests would be a preferable vehicle compared with a formal census experiment. In thinking about an alternative questionnaire experiment or experiments for the 2010 census, the question is: Which sets of possible changes to the census questionnaire most need, or would most benefit from, being tested in the census environment?

Race/Ethnicity as a Single Question

On page 1 of the short-form-only questionnaire planned for use in the 2008 census dress rehearsal (see Figure 2-1), the two questions on race and Hispanic origin (questions 8 and

9) take up half of the second column and about 40 percent of the respondent-fillable space on the page. Likewise, the race and Hispanic origin questions take up about half of the space allotted to collect information on persons 2 through 6 in a household (the block for Person 2 is shown in Figure 2-2). In the short-form-only census planned for 2010, then, the largest share of the questionnaire is given to the questions on race and Hispanic origin. Since the rate of response is typically associated with the perceived ease of compliance, a major focus of a questionnaire experiment in the 2010 census should be these two questions, if a viable alternative exists.

Information on race is currently requested on the census questionnaire in response to the needs of the Voting Rights Act of 1965. In 1997, the Office of Management and Budget (OMB) developed standards for racial and ethnic classification to be used in the 2000 census; with multiple race identification permitted across six race categories, these standards resulted in 63 possible race responses (the number of nonempty combinations of six categories, 2^6 - 1). These standards will continue to apply to the 2010 census. Ethnicity, defined as either "of Hispanic origin" or "not of Hispanic origin," was requested in a separate question in the 2000 census, resulting in 126 total race/ethnicity response categories.

Evaluations have shown that the race/ethnicity questions used in 2000 (and in previous censuses) were associated with substantial confusion of race and ethnicity, often resulting in nonresponse, in some (seemingly) contradictory responses to the decennial census questions, and in high frequencies of the response "some other race" among Hispanic respondents (see, e.g., Census 2000 Topic Report #9, Race and Ethnicity in Census 2000, Census 2000 Testing, Experimentation, and Evaluation Program). Over the past 20 years, the Census Bureau has devoted considerable research to testing various approaches to the design of questions on race and ethnicity, trying alternative question wordings, formats, and sequencing to elicit quality information (see, e.g., Rodriguez, 1994; McKay and de la Puente, 1995; de la Puente and McKay, 1995).

The Census Bureau has included race/ethnicity as one of its 11 topic groups for possible experimentation or evaluation in 2010. However, the Bureau gives low priority to the issue of developing a combined race and ethnicity question (listed as item B.2 in Appendix A). We disagree with that assessment; race and ethnicity are not really separate notions for many respondents, and the confusion resulting from the use of separate questions might be substantially reduced through the use of a single race/ethnicity question. This notion has been previously tested by the Census Bureau (1997) with generally positive results. Furthermore, the tendency to report "some other race" rather than Hispanic is likely to be reduced through the use of a single question.

The current race and ethnicity questions provide a number of examples of specific groups, including Filipino, Guamanian, or Samoan for race, and Puerto Rican and Cuban for ethnicity. There is no legal obligation stemming from the Voting Rights Act for the census questionnaire to mention these various specific groups on the census short form. The argument in favor of including as many groups as the form will support is that this may increase response, given personal feelings of affiliation with very

specific groups. Also, some argue that use of a streamlined questionnaire—that is, one that does not mention these individual groups—will increase the frequency of the mistaken response of "some other race." However, we suspect that the response of "some other race" is much more a function of the separation of race and ethnicity into two questions. Furthermore, we think that the inclusion of the specific groups makes the entire census questionnaire appear more complex, which may lower the response rate. We acknowledge that there is great interest in the relative size of these numerically smaller race and ethnic groups for states and counties, but that information will now be available from the American Community Survey.

We therefore think that the Census Bureau should include, as an experiment, the use of a single question on race and ethnicity. In addition, a streamlined version of this question should also be tested, in which the only groups listed are (1) white, (2) black, (3) American Indian or Alaskan Native, (4) Asian, (5) Native Hawaiian or Pacific Islander, and (6) Hispanic, allowing for multiple responses in all of these categories.[9] We think that this is a productive avenue for testing because of its potential for improving data quality. However, progress will be difficult, since the best approach to collecting higher quality data without discouraging respondents is not obvious. Continued experimentation is therefore imperative.

[9] It should be noted that this specific question format runs counter to a provision included in the fiscal year 2005 omnibus appropriations bill (and made binding on subsequent years), which requires the Bureau to include a "some other race" option.

Finally, in addition to the test of a single race/ethnicity question, in-depth follow-up of a small sample of individuals who provide inconsistent responses to the 2010 questions should be planned.[10] Without understanding the respondent behavior induced by a given question wording, it is very difficult to form hypotheses about how to improve that wording. Therefore, it would be useful to contact 50 or 100 such individuals and determine, through face-to-face interviews, why they responded the way they did.

[10] Inconsistency is of necessity only apparent, since the responses for children with parents of different races or ethnicities may not be clear and, more importantly, since race and ethnicity responses are a matter of self-identification that does not need to be consistent. Apparently inconsistent responses include respondents who check a category indicating that they consider themselves to belong to a specific Hispanic group while also responding that they are not of Hispanic, Latino, or Spanish origin.

Representation of Residence Concepts

In terms of physical space on the page, the items on race and ethnicity take up the greatest area because of the number of responses permitted. However, the largest single presentation of a question on recent censuses has been Question 1: the count of residents at the household. The 2010 census will follow the basic concept laid out in the law authorizing the first census in 1790 of counting people at their "usual place of abode" (1 Stat. 105). Over time, this concept has evolved into one of counting people at their usual residence; this is distinct from counting them at their current residence or the location where they are when reached by the census. The Census Bureau has developed sets of residence rules to

determine how to handle cases in which residential location may be ambiguous. Since the switch to reliance on the mail for most census data collection, the phrasing of Question 1 and the instructions that accompany it have been continually revised in order to guide census respondents to report their own residential situation in a way that is consistent with the Census Bureau's residence rules.

The National Research Council report Once, Only Once, and in the Right Place: Residence Rules in the Decennial Census (2006) comprehensively reviewed census residence rules past and present, assessing their adequacy in light of societal changes that can complicate a clear definition of residence. These changes include the growth of both "sunbird" and "snowbird" populations that move to different areas based on seasonal weather changes, the changing nature of family structures (including children in shared custody arrangements), and the emergence of assisted living facilities for the elderly. The 2006 report also considered long-standing historical challenges to accurate residence measurement, particularly concerning the large share of the nonhousehold (or group quarters) population living in places like college dormitories and correctional facilities.

Based on its review, the study panel suggested additional areas of research. Primary among these was a call to collect "any residence elsewhere" information: allowing respondents to specify a specific street address for another location at which they consider themselves a resident, along with a follow-up question about whether the respondent considers this other location to be their "usual residence" (National Research Council, 2006:Rec. 6.2). That panel specifically suggested that "any residence elsewhere" be asked of the general household population in a 2010 census experiment and that the resulting data be comprehensively reviewed in an evaluation report (National Research Council, 2006:Recs. 6.5, 8.4). It also suggested that the "any residence elsewhere" question be asked of all group quarters respondents in 2010 (National Research Council, 2006:Sec. 7-D); a similar "usual home elsewhere" question was asked on all group quarters questionnaires in 2000, but the responses were processed and considered valid only for particular group quarters types.

A major reason for the importance of collecting "any residence elsewhere" information on a test basis for the general population is to help resolve a major outstanding concern about the transition from the traditional census long form to the ongoing American Community Survey. While the decennial census uses a "usual residence" concept, the ACS uses something closer to a "current residence" rule; specifically, residence in the ACS is defined using a "two-month rule" relative to the time of interview (see National Research Council, 2006:Box 8-2 and Sec. 8-C for extended discussion). The differences in census and ACS estimates that may be attributed to their differing residence standards are as yet unknown and are a concern on which solid data are critically important. To that end, National Research Council (2006:Rec. 8.3) suggested the twofold approach of testing the "any residence elsewhere" question in the 2010 census and testing a "usual residence"-type question on the ACS questionnaire as a separate ACS research activity.

In addition to the "any residence elsewhere" query, National Research Council (2006:Rec. 6.5) suggested that additional methods for presenting residence rules and

concepts be included in a 2010 alternative questionnaire experiment. In particular, that panel suggested a shift away from the model of lengthy instructions before Question 1, instead breaking the resident question into smaller, easier-to-parse questions. This work could build on alternative questionnaire presentations that the Census Bureau tested on a limited basis in its 2005 National Census Test and in an ad hoc test in 2006. To be clear—and as is noted elsewhere in this report—National Research Council (2006) argued that the Census Bureau often relies too much on both very small and very large tests, and some residence-related questions (e.g., specific cues to include on questionnaires or alternative means of developing rosters of household members) may be better handled by other testing means. However, the importance of Question 1, the potential gain in data accuracy, and the potential reduction in the need to dispatch an enumerator to conduct a coverage follow-up interview that could stem from even small changes on the question form all argue strongly for a residence component of a 2010 alternative questionnaire experiment.

Other Content Issues

Other content issues on the 2010 census form are also worth examining and might benefit from an experiment in 2010. The hope is that these various questionnaire wording issues could be folded in with an experiment on race and ethnicity, residence rules, or both. There may be too many issues for a single experiment, and therefore there may be a need to prioritize these issues further before finalizing an alternative questionnaire experiment.

• Coverage probes. Two coverage probes will be included on the 2010 census questionnaire for the first time: (1) "Were there any additional people staying here April 1, 2010 that you did not include in Question 1?" and (2) "Does Person X sometimes live or stay somewhere else?", the latter followed by a listing of situations that are sometimes reported in error. As implemented in 2010, this set of probes is primarily intended as a trigger for inclusion in the coverage follow-up operation, described below. The probes also serve to jog respondents' memory and prompt them to reevaluate their answer to the household resident count in Question 1 of the census form. It is worth considering whether more specific or differently worded probes would be more effective at accomplishing either of these tasks, and whether they can be structured to provide auxiliary information that could be useful in editing census responses. For instance, a more detailed query about whether the respondent is at (or may be counted at) a seasonal residence, or a focused question on the residence of college-enrolled children, may prove to have advantages over the approach planned for 2010.

• Motivation of respondents. The 2006 Canadian census questionnaire added brief descriptive statements at key places in order to anticipate respondents' concerns about a question's justification in the census. By including these, Statistics Canada thinks that it has achieved some benefits in building respondent motivation to answer questions on the census form. For example, the 2006 census long-form questions on race and ancestry—which, in Canada, are not part of the short-form questions asked of everybody—are prefaced with the explanation:

    The census has collected information on the ancestral origins of the population for over 100 years to capture the composition of Canada's diverse population.

The specific race question includes the reminder that this information is collected to support programs that promote equal opportunity for everyone to share in the social, cultural, and economic life of Canada. The last page of the Canadian short-form questionnaire includes a paragraph-length section labeled "Reasons Why We Ask the Questions," noting, for example, that "Question 7 on languages is asked to implement programs that protect the rights of Canadians under the Canadian Charter of Rights and Freedoms. It also helps determine the need for language training and services in English and French." It could be useful to measure the impact on the quality of response of various attempts to present similar motivational messages on the U.S. census form.

• Group quarters. Given that some types of group quarters residences are subject to a high rate of duplication, in particular college dormitories (see Mule, 2002), it might be useful to evaluate the benefits of a "usual home elsewhere" question on the census questionnaire for all types of group quarters residences. (This is consistent with Recommendation 6.2 and Section 7-C in National Research Council, 2006.) This might facilitate real-time identification of census duplicates between residents of group quarters and residents of nongroup quarters.

Finally, item G.1 on the Census Bureau's list of research topics proposes administering the 2000 census questionnaire to a group of 2010 census respondents so that some insight can be drawn about the effectiveness of the complete bundle of changes between the 2010 and 2000 forms. This proposal to use the prior census questionnaire as a control treatment has not always been carried out in past alternative questionnaire experiments. Implementing it is consistent with guidance from the previous National Research Council report (2006:Rec. 6.8), and we concur that it should be done as part of a 2010 alternative questionnaire experiment.

Deadline Messaging and Other Presentation Issues

Deadline messaging refers to a variety of ways of notifying the respondent on mailing materials that the enclosed questionnaire must be returned by a given date in order to be accepted. A compressed mailing schedule means that, instead of the approach used in the 2000 census, in which the questionnaire was mailed two weeks before Census Day, households receive the census questionnaire just a few days before Census Day. In the 2006 decennial short-form experiment,[11] the use of deadline messaging, in conjunction with a compressed mailing schedule, resulted in a higher mail response rate (Martin, 2007). The deadline message was placed on the advance letter informing the household of the upcoming arrival of the census questionnaire, on the envelope of the initial mailed questionnaire, on the initial questionnaire cover letter, and on the reminder postcard. However, the 2006 test could not determine whether the increased response was due to a specific form of the deadline message or to the compressed mailing schedule. Since increasing the initial response rate decreases the nonresponse follow-up fieldwork, which in turn reduces census costs, further work to isolate the specific cause of the increase in response would be extremely useful. Additional research on the effectiveness of different dates for both the initial mailing of the census questionnaires and the mailing of the replacement questionnaires would also be worthwhile. Item H.1 on the Census Bureau's list argues that this issue should be examined in a census environment, and the panel agrees, since response to mail materials differs in a census from that in either a test census or a survey environment.

[11] The decennial short-form experiment evaluated several potential improvements to the census mail form. These included a revised instruction about whom to list as Person 1, a series of questions to reduce and identify coverage errors, and a deadline for return of the form.

We have described a number of issues that relate to the content and the presentation of the census questionnaire, including race and ethnicity, residence rules, coverage probes, providing a motivation for the cooperation of respondents, collection of alternate address data for residents of group quarters, and deadline messaging. Several of these issues might be addressed jointly in a single experiment by including them as separate factors in that experiment. One straightforward way of accomplishing this, which is much more cost-effective with respect to the burden on respondents, is through the use of a fractional factorial design, assuming that some of the higher-level interactions between these factors are negligible (see Box and Hunter, 1961).
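As an illustration, the sketch below constructs a 2^(4-1) fractional factorial design for four hypothetical questionnaire factors; the factor list is ours for illustration, not a proposed design. With the defining relation D = ABC, eight questionnaire versions (rather than the sixteen of a full factorial) suffice to estimate all main effects, provided the higher-order interactions are negligible.

```python
# Sketch: a 2^(4-1) fractional factorial design for an alternative
# questionnaire experiment. The four factors are illustrative, not a
# proposed design. Defining relation: D = ABC (resolution IV), so main
# effects are not confounded with two-factor interactions.

from itertools import product

factors = ["single_race_ethnicity_q",    # A: combined race/ethnicity question
           "revised_residence_wording",  # B: restructured Question 1
           "motivational_text",          # C: Canadian-style justification text
           "deadline_message"]           # D: generated as D = A*B*C

runs = []
for a, b, c in product([-1, 1], repeat=3):  # full 2^3 design in A, B, C
    d = a * b * c                           # confound D with the ABC interaction
    runs.append(dict(zip(factors, (a, b, c, d))))

# Eight questionnaire versions instead of the sixteen of a full 2^4 design.
for i, run in enumerate(runs, start=1):
    settings = {f: ("new" if v == 1 else "control") for f, v in run.items()}
    print(f"version {i}: {settings}")
```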

RECOMMENDATION 3: The Census Bureau should include one or more alternative questionnaire experiments during the 2010 census to examine:

• the representation of questions on race and ethnicity on the census questionnaire, particularly asking about race and Hispanic origin as a single question;
• the representation of residence rules and concepts on the census questionnaire; and
• the usefulness of including new or improved questions or other information on the questionnaire with regard to (1) coverage probes, (2) the motivation of census questions, (3) the request of information on usual home elsewhere on group quarters questionnaires, and (4) deadline messaging and mailing dates for questionnaires.

In such experiments, both the 2000 and the 2010 census questionnaires should be included in the assessments. The Census Bureau should explore the possibility of joining the recommended experiments listed above into a single experiment, through use of fractional factorial experimental designs.

A Possible Additional Experiment: Comparison of Telephone and Personal Interview for the Coverage Follow-Up Interview

The current plan is to carry out a coverage follow-up interview in 2010 to collect additional information in six situations in which the number of residents is unclear from the responses to the initial questionnaire (see Box 2-1). Since a large fraction (probably more than 20 percent) of U.S. households may fall into one or more of these six situations, the costs of the resulting coverage follow-up interviews could be prohibitive. To reduce these costs, the Census Bureau is planning to follow up these households by telephone only (and therefore only for those households that provide a contact telephone number on the census questionnaire).

This specific implementation of the coverage follow-up interview raises some concerns about the quality of the information received. First, we are concerned that the households that would most benefit from this follow-up are those least likely to provide valid telephone numbers and that they will consequently be missed. For example, some of those who are harder to enumerate may use prepaid cell phones. Therefore, it would be useful to determine whether other wordings of the request for a phone number would increase the response to this item. (This relates to the earlier issue of providing motivation for questions on the short form, and to items C.7, C.8, F.1, and F.2 on the Census Bureau's list of issues.)

Another concern stems from the fact that the coverage follow-up interview uses question wording similar to that on the census questionnaire; there is thus a good chance of generating the same response as was initially received, in the case of interviews triggered by coverage probes or by the identification of potential duplicates. One way of addressing this concern that might be worth examining is communicating to the respondent, through a series of probes, the circumstances that generated the interview. A second possibility is that higher quality answers, possibly using such probes, might be produced through a face-to-face interview rather than a phone interview. While this would clearly be more expensive, knowing the impact on quality would be useful in designing the analogous data collection in 2020. Also, there are ways of reducing field interview costs to permit more face-to-face interviewing. For example, the targeting of households through the use of administrative records might reduce the workload to a manageable level, allowing face-to-face interviews of selected households.

If the decision is made not to include study of the coverage follow-up interview in a census experiment, the above concerns strongly argue for retention of all relevant information so that this process can be evaluated after the census is completed.

CONCLUSION

These are the panel's suggestions for experiments to be carried out during the 2010 census. We look forward to assisting the Census Bureau in fleshing out more specific study plans for the ideas that are ultimately selected for experimentation in the coming months.

We also think that the Census Bureau needs to increase its in-house expertise in experimental design as applied to census experimentation. The panel has seen evidence in the past that some experiments, in both censuses and test censuses, have not been fully consistent with accepted principles of experimental design. These principles include the use of preliminary assessments of which factors might affect a response of interest, the use of controls and blocking for meaningful comparisons (see, e.g., National Research Council, 2006:Rec. 6.8), and the simultaneous varying of test factors (including use of orthogonal designs, factorial designs, and fractional factorial designs) for greater effectiveness of test panels. Also, often not enough attention is paid in advance to the statistical power of tests. Certainly some of this can be attributed to the fact that the primary function of a census or a census test is to assess the full census operation, with the embedded experiments having to make do with various limitations. However, it is important for the Census Bureau to improve its application of experimental design techniques, both to reduce the costs of experimentation and to increase the information contained in the results.
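The point about statistical power can be made concrete with a standard sample-size calculation for comparing mail response rates between a control panel and an experimental questionnaire panel. The baseline rate, effect size, and error levels below are invented for illustration.

```python
# Illustration of a power calculation for a two-panel census experiment:
# how many households per panel are needed to detect a given improvement
# in mail response rate? Baseline rate, effect size, and error levels are
# invented for illustration.

from statistics import NormalDist

def sample_size_two_proportions(p1: float, p2: float,
                                alpha: float = 0.05, power: float = 0.80) -> int:
    """Approximate n per panel for a two-sided test of p1 vs. p2."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)
    z_beta = z.inv_cdf(power)
    p_bar = (p1 + p2) / 2
    n = ((z_alpha * (2 * p_bar * (1 - p_bar)) ** 0.5
          + z_beta * (p1 * (1 - p1) + p2 * (1 - p2)) ** 0.5) ** 2
         / (p1 - p2) ** 2)
    return int(n) + 1

# Detecting a 1-point gain over a 67% baseline response rate takes roughly
# 34,000-35,000 households per panel; a 3-point gain needs only about 3,800.
print(sample_size_two_proportions(0.67, 0.68))
print(sample_size_two_proportions(0.67, 0.70))
```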

BOX 2-1
Situations Generating a Coverage Follow-Up Interview

1. Households with discrepancies between the household count and the number of individuals for whom information is provided
2. Households with more than six residents (whose members will therefore not all fit on the census questionnaire)
3. Households that indicate on the census questionnaire other households in which the residents might also have been enumerated
4. Households that indicate other people, not included in the response, who sometimes live there
5. Households identified, through a national computer search for duplicates, as having individuals who might have been duplicated in the census
6. Households that may not have been correctly enumerated, given information from administrative records

SOURCE: Adapted from information from U.S. Census Bureau; see also National Research Council (2006:Box 6-3).

Figure 2-1 First page (Person 1), draft 2008 dress rehearsal questionnaire. SOURCE: http://www.census.gov/Press-Release/www/2007/questionnaire_4_24_07.pdf. 35

Figure 2-2 Person 2 panel, draft 2008 dress rehearsal questionnaire. SOURCE: http://www.census.gov/Press-Release/www/2007/questionnaire_4_24_07.pdf. 36
