1 Introduction

The decennial census is used to determine political apportionment, redistricting, and fund allocation for a wide variety of federal, state, and local programs. Many of these uses are mandated by laws, which impose various constraints, including deadlines and limitations on the information that can be collected and on the statistical methods that can be used. Participation in the census is mandatory, but as a practical matter it is not enforced. Consequently, there is less than full participation in the census, and various means are used to compensate for the missing data, which are vital given the uses of census data. In April 1995 the Bureau of the Census of the U.S. Department of Commerce asked the Committee on National Statistics of the National Research Council (NRC) to form a study panel to review plans and research for, and make recommendations regarding, the design of the 2000 census. The current panel was formed to further consider many of the issues raised by the earlier Panel to Evaluate Alternative Census Methods (National Research Council, 1994). It was charged with reviewing results of the 1995 and 1996 census tests, particularly with respect to sample design for nonresponse follow-up and the planned integrated coverage measurement sample design and estimation procedures for the 2000 census; recommending additional field tests and research to carry out in the near term and in the 2000 census; and reviewing the use of administrative records in the 2000 census. As required, the panel has issued two interim reports (National Research Council, 1996, 1997b). The first report focused on the use of statistical procedures, especially sampling, in the 2000 census. The second report provided refinements in several areas, including plans and research in the use of sampling for nonresponse follow-up, plans for constructing the master address file, plans and testing of multiple response modes and the use of respondent-friendly questionnaires, and plans for the use of administrative records. The panel also issued a letter report on the problems raised by the use of an untargeted replacement questionnaire (National Research Council, 1997a).

The rest of this chapter provides an overview of census innovations and a description of the 1998 census dress rehearsal and the associated evaluation studies. Chapter 2 reviews key findings of the panel with regard to the six main processes (outlined below) that the Census Bureau plans to implement for the first time in the 2000 census. Chapter 3 reviews in more detail a number of Census Bureau decisions concerning how these activities are to be carried out in 2000 and what might be done differently in the 2010 census. Chapter 4 presents a discussion of three important technical criticisms in the statistical literature against use of integrated coverage measurement. Finally, Chapter 5 comments on current Census Bureau plans for research and experimentation and data collection during the 2000 census, looking forward to the 2010 census. A glossary of census terminology is also provided.

Innovations in Census Methodology

The basic approach to the 2000 census that the Census Bureau proposed in 1996 is either a direct continuation of, or closely related to, the methods that have been used since 1960:

- The Census Bureau develops a comprehensive list of residential dwellings in the United States.
- A census form is mailed to each of those housing units.
- Households are asked to return the completed forms by mail.
- Households that do not return the forms are visited by enumerators.
The major problems in quality and cost that arise in the census result from the fact that these four procedures do not work perfectly by themselves and they do not interact perfectly. First, some households are missing from the address list used for mailing forms. In some cases, the Postal Service fails to deliver the form to the household, often because the address is inadequate or the Postal Service erroneously considers the dwelling to be vacant. Second, it is expected that more than 30 percent of American housing units in 2000 will not return the form delivered to
them. Third, for a portion of those that are returned, there will be persons missed and other errors of fact. Finally, enumerators often fail either to contact household members or to convince them to respond, even after numerous and expensive field follow-up visits.

The Census Bureau has developed a number of revised procedures to update and improve each component of this fundamental structure. (See the glossary of census terms for details regarding language that appears below.) These are procedures that the Census Bureau now plans to implement for the first time in the 2000 census (although some are contingent on decisions by Congress and the courts):

- The Census Bureau has made and is making use of enhanced procedures for developing the address list to which the census forms are mailed. These procedures have involved efforts to build the address list throughout the decade (instead of relying solely on a rush effort as the census approaches, using sources of variable quality) and have included the use of the previous decennial census mailing list.
- Each household will be sent a letter notifying it that a census form will be mailed shortly, followed by the arrival of the census form, followed by a reminder to complete the form. As this report is being completed, it is unclear whether the Census Bureau will make use of the mailing of a second census form to every housing unit (not only the nonresponding ones, as was once proposed but is now considered to be operationally infeasible).
- Census forms will be made available in a variety of public places that have previously not been used for this purpose. Households that believe they did not receive a form or individuals who believe they were not included on any household form may complete a form and return it to the Census Bureau. In addition, people may call the Census Bureau to provide their responses.
- Households that fail to return the mailed census form by a specific date will be followed up on a sample basis.
A random sample of these households (including those that were classified as vacant by the Postal Service) will be contacted to obtain the requested data. This approach represents a major departure from past census practice, in which follow-up was attempted for all nonrespondent households.[1] The Census Bureau believes that this plan will be advantageous for three reasons: (1) to ensure that nonresponse follow-up is finished within a reasonable time; (2) to control the costs of nonresponse follow-up (the primary cause of census cost overruns); and (3) to improve census quality, especially when used in conjunction with plans for integrated coverage measurement, by expediting field operations. The proportion of nonresponding households included in the follow-up sample will vary by geographic area (census tract). The proportion to be sampled will be determined by the proportion of households in a census tract that return the census forms by mail. The higher the proportion that do so, the smaller will be the proportion of nonrespondents who are visited by enumerators—but at least one-third of mail nonrespondents will be included in the sample from each tract, regardless of the mail return rate.

[1] Sampling for long-form information, however, has been used in the decennial census since 1940.

There will be an additional survey of 750,000 housing units conducted after the nonresponse follow-up is concluded. This post-enumeration survey, much larger than that conducted in previous censuses, is designed to obtain information about housing units that were missed by the initial census process and about individuals who were omitted from or erroneously included on their household census form (and households included in the wrong geographic area).[2] The post-enumeration survey is an effort at data collection that is operationally independent of the census—that is, it does not rely on other aspects of census processes. This approach assists in supporting the statistical assumption of independence of the two enumeration processes used in estimation associated with the post-enumeration survey. By reconciling the results of this independent survey with inputs from previous stages of the census, information is obtained about the number and characteristics of people who were missed (or erroneously included) in the initial or standard census process.
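The tract-level sampling rule described above can be sketched as a short function. The one-third floor on the fraction of nonrespondents sampled comes from the text; the 90 percent target coverage used below is an assumed, illustrative parameter (the Bureau's actual formula is not given in this chapter), and the function name is invented.

```python
def nrfu_sampling_fraction(mail_return_rate, target_coverage=0.90):
    """Illustrative tract-level sampling fraction for nonresponse follow-up.

    The chapter specifies only that the fraction falls as the mail return
    rate rises, with a floor of one-third of nonrespondents per tract; the
    0.90 target coverage here is hypothetical, not the Bureau's published
    rule.
    """
    nonresponse_rate = 1.0 - mail_return_rate
    if nonresponse_rate <= 0.0:
        return 0.0  # everyone mailed back; nothing to follow up
    # Fraction of nonrespondents needed so that mail returns plus
    # followed-up households reach the target share of the tract.
    needed = (target_coverage - mail_return_rate) / nonresponse_rate
    return max(1.0 / 3.0, min(1.0, needed))

# A tract with a 55 percent mail return rate would follow up about
# (0.90 - 0.55) / 0.45, or roughly 78 percent, of its nonrespondents;
# a 90 percent return rate tract would still get the one-third floor.
```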
The integrated coverage measurement survey is very similar to the post-enumeration survey conducted in 1990, but with two important differences: it is planned to be nearly five times as large, and the results are planned to be incorporated into the single set of official census figures rather than presented separately as an adjustment. This survey, along with the resulting estimation, is referred to as integrated coverage measurement.

The results of the nonresponse follow-up and the integrated coverage measurement will be incorporated into the official census counts using statistical estimation and imputation procedures. Accordingly, the census figures released on December 31, 2000, for states and on April 1, 2001,[3] for blocks for use in congressional redistricting and other purposes will reflect the results of these data collection procedures, accounting for the sampling used in nonresponse follow-up and integrated coverage measurement.

[2] A very important problem that results in both census omissions and erroneous enumerations in otherwise enumerated households is that, for a certain portion of the population, there is ambiguity of residence and household composition. This ambiguity includes the following situations: people with several residences, people living temporarily at an address, people whose usual residence is not where they sleep, children living with other relatives during the week or children in joint custody arrangements, and people with commuter marriages.

Each of these steps plays a role in improving the quality of the census. Developing a high-quality address list (geographically referenced to the correct location on the census block boundary maps) and obtaining a high mail return rate are crucial to an accurate census. Both reduce reliance on the use of nonresponse follow-up and on integrated coverage measurement, and both will help ensure that census collection and processing activities remain under control (especially cost control) while maintaining high standards. Nonresponse follow-up, integrated coverage measurement, and the placement of census forms in public places are designed to complement these two main steps by providing mechanisms for maintaining a high-quality census even when address list development and the mail return process understandably fall short of perfection. Collectively, the procedures are also designed to control costs, partly by increasing quality and thus reducing resource requirements for other aspects of the census process. Efforts to increase the mail return rate, and the use of sampling for nonresponse and vacant dwelling follow-up, have direct implications for reducing the costs of household nonresponse follow-up, which was a major problem in 1990. Although the panel has expressed various concerns about some details of the Census Bureau's plans (see National Research Council, 1996, 1997b), the panel believes that the basic plans for the 2000 census are sound, based both on the research conducted by the Bureau over the past few years and on its experience from past censuses.
The panel also understands that there are unavoidable operational risks whenever new procedures are introduced in a census. The census is conducted only once every 10 years. There are no other operations sufficiently similar to a full census (particularly with respect to scale) to allow the operationally relevant testing that would ensure that each innovation works well. This is true not only of innovations: it has been demonstrated that one cannot be certain that features used in previous censuses will continue to work effectively in a new census, since the nature of society changes, sometimes markedly, over a 10-year span.[4] A key example involves the inability to predict what proportion of households will mail in their forms as requested, which unexpectedly fell by 10 percentage points from 1980 to 1990, with basically the same methods as in 1980 (National Research Council, 1995). The mail response rate is still a major source of uncertainty in the quality and costs of the 2000 census, despite the use of this procedure for the past several decades.[5]

[3] Some states receive these counts earlier.
[4] Waksberg (1998) discusses the history of innovation in the census. As Waksberg mentions, concerns about changes in the census are not new, but innovations have typically succeeded when guided by statistically based testing and evaluation.

The 1998 Census Dress Rehearsal and Evaluations

The last major opportunity to learn about problems in the census plans is the census dress rehearsal. To acquire information and make final improvements on many aspects of the methodology and operations to be used in carrying out the 2000 census, the Census Bureau will use 37 separate studies based on the 1998 dress rehearsal. (A listing and short description of the 1998 dress rehearsal evaluation studies are given in the appendix to this chapter.) This section lays out the main components and goals of the 1998 census dress rehearsal. No recommendations are offered. The panel concludes that this total evaluation plan will supply a great deal of useful information for making the final decisions regarding the methodology to use in 2000.

The main objective of the dress rehearsal and the associated evaluations is to test the integration of methods in a real-life census environment and to validate plans for the 2000 census. Some evaluations will provide information about the coverage of persons and the quality of the data collected. It is useful to mention that it is impossible for a test census to simulate all aspects of the decennial census, for two reasons: the unequaled scale of the decennial census and the public's heightened awareness of it. In addition, for this dress rehearsal, the decision to test both sampling and nonsampling options limited the opportunity for concentrated effort on one option or the other.
A reduction in both sample size and diversity of locations in testing of either approach means that the Census Bureau will be unable to evaluate either approach as comprehensively as was originally planned for the sampling option. Consequently, whichever option is adopted, there is a risk that substantial cost or data quality problems will go undetected.

[5] Early indications from the 1998 dress rehearsal show a 54.1 percent overall mail response rate in South Carolina, 40.6 percent in Menominee, Wisconsin, and 53.7 percent in Sacramento, California. These rates are about 2 percent lower than the 1988 dress rehearsal rates (Bureau of the Census, 1998a). (It should be kept in mind that the 1998 response rates reflect the use of a blanket replacement questionnaire, which was not used in 1988.) The comparable 1990 census rates for these areas were 60 percent for South Carolina and 63 percent for Sacramento; this is not surprising, since decennial censuses typically receive greater cooperation than any of their tests or rehearsals.

The dress rehearsal for the 2000 census was conducted at three sites:[6] Sacramento, California; Columbia and 11 surrounding counties in South Carolina; and Menominee County, Wisconsin, which includes the Menominee American Indian Reservation. Both the Sacramento and the South Carolina sites used mailout/mailback for the census enumeration. The South Carolina site also used update leave/mailback for some of the census enumeration in rural areas. At the Sacramento site, sampling for nonresponse follow-up and integrated coverage measurement were used. South Carolina used 100 percent nonresponse follow-up and a post-enumeration survey.[7] The main purpose of this post-enumeration survey was to measure the degree of coverage of the 1990-style census used at that site. The Menominee site used update leave/mailback, 100 percent nonresponse follow-up, and a test of an integrated coverage measurement program. Since the three sites were not comparable with respect to census processes, the various methods used across sites, specifically in Sacramento and South Carolina, cannot be compared directly. Instead, each site must be evaluated separately.

It is important to point out that the last meeting of this panel took place in June 1998, when the census dress rehearsal was in its preliminary stages, with very little information available as to the degree of success of various operations.[8] Therefore, the panel cannot offer comments regarding how the results of the dress rehearsal should be used to alter plans for the 2000 census. A new National Research Council panel that has just gotten under way, the Panel to Review the Statistical Procedures of the 2000 Census, is expected to issue an interim report commenting on the dress rehearsal.
The evaluations associated with the census dress rehearsal address eight components of 2000 census methodology: (1) the census questionnaire, (2) construction of the master address file, (3) coverage measurement, (4) coverage improvement, (5) promotion and partnership, (6) unduplication, (7) nonresponse follow-up and field infrastructure, and (8) uses of technology. This section discusses the decisions that face the Census Bureau and how the dress rehearsal is providing information relevant to those decisions. This discussion refers to census dress rehearsal evaluation studies, which are indicated by notation such as E5, where "E" signifies a type of evaluation study and "5" indicates the specific study number; these studies are briefly summarized in the appendix to this chapter.

[6] Much of the following description is from Bureau of the Census (1998a).
[7] By not using sampling for nonresponse follow-up and integrated coverage measurement, the dress rehearsal in South Carolina was meant to approximate the methods used in the 1990 census. However, some of the coverage improvement programs used in 1990 were not incorporated in this test, which might have made a difference in the coverage of the dress rehearsal at that site.
[8] The recent decision by the U.S. Supreme Court against the use of sampling in census counts for purposes of reapportionment was made public while this report was in the last stages of editing and final production. No changes were made to the report as a result of this decision other than the addition of portions of the preface, this footnote, and similar footnotes in the executive summary and in Chapter 2.

Key Questionnaire-Related Evaluations

With respect to the use of a replacement questionnaire, the key evaluation measurements are the percentage increase in mail response owing to its use, the effects on data quality of undiscovered duplicate responses, and the cost of removing the duplicates that are discovered. Also, the percentage of telephone questionnaire assistance calls in which people complained about use of the replacement questionnaire gauges the extent of any negative public reaction. Studies A1 and F1, which provide these measurements, are key in determining whether a nontargeted replacement form should be used in 2000.

Master Address File Evaluations

Three activities assess the completeness and accuracy of the master address file (MAF). First, there is the determination of the number of added addresses received through the Local Update of Census Addresses (LUCA) program. Second is the determination of the number of additional addresses received as a result of the U.S. Postal Service's casing check.[9] Finally, a housing unit coverage study, evaluation B1, assesses the completeness and accuracy of the final address list. This study compares the address list with the independent list created in a sample of block clusters for the post-enumeration survey, providing estimates of undercoverage and the frequency of geocoding errors (using dual-system estimation).
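Dual-system estimation, mentioned above, is the classic capture-recapture estimator: the true total is estimated as the product of the two independent list counts divided by the number of units matched to both lists. A minimal sketch follows, using invented counts (not dress-rehearsal data) and assuming independent lists and error-free matching:

```python
def dual_system_estimate(census_count, survey_count, matched):
    """Dual-system (capture-recapture) estimate of the true total.

    Assumes the two enumerations are independent and matching is exact.
    """
    if matched == 0:
        raise ValueError("no matched units: estimator is undefined")
    return census_count * survey_count / matched

# Hypothetical block cluster: 950 housing units on the MAF, 900 on the
# independent post-enumeration survey list, 880 matched to both lists.
estimate = dual_system_estimate(950, 900, 880)  # about 971.6 units
undercoverage = 1.0 - 950 / estimate            # about 2.2 percent
```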
Evaluation study B2 assesses, also at all three sites, the contribution of each component of the MAF building process to its degree of completeness. Unfortunately, these tests will not evaluate the identical MAF process planned for the 2000 census: specifically, only a targeted canvass was used, as opposed to the full one planned for 2000.

[9] A casing check is a check of the final master address list by Postal Service carriers just prior to Census Day.

Coverage Measurement Evaluations

The primary goals at the two major sites with respect to coverage measurement are (1) to measure the net undercount rate for different groups at the South Carolina dress rehearsal site and (2) to determine the extent to which the scheduled milestone dates are met while achieving specified levels of quality at the Sacramento site. Measurement of the net undercount rate in South Carolina was to determine the potential impact of adjustment for census undercoverage. Examination of the post-enumeration survey schedule in Sacramento helps the Census Bureau understand whether the goal of a "one number" census was operationally feasible (although the fact that a dress rehearsal is of a substantially different scale than a decennial census complicates the comparison to the timetable of a full decennial census). This is covered by study C1, in Sacramento and Menominee, by seeing whether scheduled milestone dates for various intermediate steps are met, whether specified quality levels are met at each milestone, and what the risk will be of not completing the parallel operations in the 2000 census.

Coverage Improvement Evaluations

The key goals at the two major sites are to determine the success of service-based enumeration, the "Be Counted" program, and the follow-up of large households.[10] First, did service-based enumeration add people who would otherwise be missed using standard housing-unit enumeration? The second goal concerns the "Be Counted" program (covered in evaluation D2), and the key measurements are (1) how many people were added through use of the program, (2) whether they were at addresses that were not on the MAF or were individuals at otherwise enumerated households, and (3) how many duplicate enumerations the program generated. Finally, the effectiveness of the use of the large household follow-up forms in enumerating households of more than five persons is being measured by determining the proportion of the mailback universe, by type and household size, that was mailed this form and the resulting response.
Promotion and Partnership Evaluations

These evaluations are to measure any increased awareness of the census through use of paid advertising and the partnership program (a program to enlist the assistance of local leaders to help increase awareness of the census). An additional goal is to measure whether the partnership program is effective in marshaling local knowledge and resources to help enumerate local areas. Study E1 assesses the effectiveness of paid advertising by measuring public awareness of the census, the likelihood of completing and returning the census questionnaire, and attitudes that affect this likelihood. To evaluate these, a random-digit-dial telephone survey was conducted in Sacramento and South Carolina to collect and tabulate responses before and after advertising for comparison. The partnership program was to be evaluated through examination of contacts with partners and commitments made by partners, as well as by a survey of partners and census field staff. In addition, the level of participation of local and tribal governments in the LUCA program is being assessed.

[10] Service-based enumeration is enumeration of the homeless population at places that offer meals or places to sleep.

Unduplication Evaluations

Because there will be several opportunities for households to provide more than one census questionnaire in the 2000 census, especially including return of the replacement questionnaire (if used) and "Be Counted" forms, the process of unduplication must be of high quality. These evaluations measure the percentage of erroneous enumerations that resulted from failure to unduplicate multiple responses. In addition, since improper unduplication of forms representing different households results in a census omission, it is important to measure what percentage of census omissions were the result of unduplication rules. Two key evaluation studies are F1 and F2. Study F1 assesses, at all three dress rehearsal sites, the effectiveness of a computer algorithm, the primary selection algorithm, which selects the persons who are judged to be residents of the housing unit in question, given all of the forms received for that housing unit. A follow-up interview evaluates the quality of the algorithm. In addition, studies F1 and F2 evaluate how often and where duplicates were found and which forms were involved. Study F2 specifically assesses how wide an area should be used to search for duplicate "Be Counted" forms. (Clearly, because dress rehearsals are carried out in a very small number of locations, they cannot fully test the ability to identify duplicate responses from geographically disparate areas.)
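The chapter does not describe the primary selection algorithm's actual rules, so the sketch below is a toy illustration only: it groups all forms received for a housing unit, keeps the one listing the most persons, and counts the rest as duplicates. The field names and the selection criterion are invented, not the Bureau's.

```python
from collections import defaultdict

def select_primary_forms(forms):
    """Toy stand-in for a primary selection step (not the Bureau's rules).

    For each housing-unit ID, keep the form listing the most persons and
    count the remaining forms for that unit as duplicates.
    """
    by_unit = defaultdict(list)
    for form in forms:
        by_unit[form["unit_id"]].append(form)
    selected, duplicates = {}, 0
    for unit_id, unit_forms in by_unit.items():
        best = max(unit_forms, key=lambda f: len(f["persons"]))
        selected[unit_id] = best
        duplicates += len(unit_forms) - 1
    return selected, duplicates

# Two forms arrive for unit "A" (say, a mail return and a "Be Counted"
# form) and one for unit "B": A's fuller form is kept, one duplicate flagged.
forms = [
    {"unit_id": "A", "persons": ["p1", "p2"]},
    {"unit_id": "A", "persons": ["p1"]},
    {"unit_id": "B", "persons": ["p3"]},
]
selected, duplicates = select_primary_forms(forms)
```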
Nonresponse Follow-Up and Field Infrastructure Evaluations

One of the key variables in carrying out a decennial census is the amount of time necessary to complete field follow-up. The goal of this evaluation was to see whether it was possible to hire, train, and retain staff to conduct nonresponse follow-up. Furthermore, since close-out and last-resort enumerations are presumed to be of lower quality than information collected directly from respondents, and since they are symptomatic of a census that is running late, it is important to measure the percentage of these proxy enumerations.

Technology-Related Evaluations

These evaluations check specific data capture and data dissemination systems being developed for the
2000 census. They include use of laptop computers for computer-assisted personal interviewing (CAPI) in integrated coverage measurement, optical character recognition technology for data capture, and a sophisticated software system for maintenance of census operations. The questions addressed concern the integration of these systems with more standard census processes and the reliability of the systems.

Summary

The census dress rehearsal should give the Census Bureau a greater understanding of key issues: census operations, timing, costs, and logistics; the value of a blanket replacement questionnaire; the ability to hire effective field staff; the operational aspects of integrated coverage measurement, especially the schedule constraints; the use of CAPI instruments and the quality of the data collected; any problems with the census questionnaire; and any problems in developing the master address file.
Appendix: Census 2000 Dress Rehearsal Evaluations

A. Questionnaire-Related Evaluations [Evaluations A1–A5]. The first group of evaluations deals with issues relating to the various methods of response.

A1. Evaluation of Implementation for Mail Returns. This evaluation addresses issues related to the implementation strategy for delivery of mailback questionnaires and will provide information about rates and patterns of response. It will also provide information on response rates and completeness of the foreign-language questionnaires. A second component of this evaluation will document whether nonresponse follow-up is completed on time and will develop a response profile of nonresponse follow-up units.

A2. Evaluation of the Mail Return Questionnaire. This evaluation focuses on how three components of the mailback questionnaires affect the quality of the responses: (1) how the paper form is structured, (2) coverage-related questions, and (3) several new or revised content items.

A3. Evaluation of the Short- and Long-Form Simplified Enumerator Questionnaire. The objective of this evaluation is to assess the data quality of the simplified enumerator questionnaire as measured by item nonresponse and patterns of response.

A4. Evaluation of Telephone Questionnaire Assistance. This evaluation has three objectives: (1) to summarize the telephone questionnaire assistance operation, including such information as length of call and reasons for the call; (2) to assess the quality of respondent-provided addresses collected during telephone questionnaire assistance operations; and (3) to determine whether forms mailed out through the telephone questionnaire assistance operation were completed and returned.

A5. Evaluation of the Effect of Alternative Response Options on Long-Form Data.
The objective of this evaluation is to determine the frequency with which the alternative response options (telephone questionnaire assistance and "Be Counted" form responses) result in households that were intended to be included in the long-form sample but respond with only short-form data.

B. Master Address File (MAF) Evaluations [Evaluations B1–B2]. There are two evaluations dealing specifically with the MAF.
B1. Evaluation of Housing-Unit Coverage on the MAF. This evaluation addresses how complete the MAF coverage of housing units was at the time of the dress rehearsal enumeration.

B2. Evaluation of the MAF Building Process. The objective of this evaluation is to determine how the various parts of the MAF building process affect the quality and coverage of the MAF.

C. Coverage Measurement Evaluations [Evaluations C1–C8]. These evaluations deal with various aspects of coverage measurement.

C1. Risk Assessment of the Integrated Coverage Measurement Field Data Collection and Processing Schedule of Operations. This evaluation will measure conformance to the overall schedule and intermediate milestones, as well as the effects on data collection and processing completeness and quality.

C2. Contamination of Initial Phase Data Collected in Integrated Coverage Measurement Block Clusters. The purpose of this study is to determine whether integrated coverage measurement (ICM) affects census results.

C3. Evaluation of Outmover Tracing and Interviewing. Whole-household outmover tracing will be evaluated at the site level as well as for different populations based on the poststrata used.

C4. Error Profile for the Census 2000 Dress Rehearsal. The first aspect of the error profile is to examine individually the sources of error corresponding to the enumeration process that are measurable and feasible to measure given the design of the Census 2000 Dress Rehearsal Integrated Coverage Measurement Survey. The second aspect is to examine the net effect of a subset of these sources of error by estimating a net nonsampling error and combining it with the sampling, or random, error.

C5. Evaluation of Quality Assurance Falsification Model for Integrated Coverage Measurement Personal Interview. The goal of this evaluation is to measure the efficiency and effectiveness of the falsification reports and the operations used to implement the model.

C6. Evaluation of the Integrated Coverage Measurement/Post-Enumeration Survey Personal Follow-Up Interview. The purpose of this evaluation is to identify potential problems with question wording and ordering
OCR for page 20
--> and other questionnaire design issues in the personal follow-up interviews. C7. Assessment of Consistency of Census Estimates with Demographic Benchmarks. This study represents an extension of the demographic analysis program that the Census Bureau has used for many years to evaluate the consistency of census results and the completeness of coverage at the national level. It uses independent demographic benchmarks to evaluate (1) the consistency of the dress rehearsal census estimates and (2) the effectiveness of integrated coverage measurement in achieving a reduction in the differential undercount. C8. Analysis of the Final Numbers and Estimates. This evaluation contains five separate research projects under one heading. It can be thought of as the research umbrella for projects whose focus is improvements to and refinements of the sampling and estimation methodology for the 2000 census. The five research projects are raking evaluation, evaluation of bias from integrated coverage measurement missing data methodology, heterogeneity/small-area estimation evaluation, household-level data file research, and mover estimation evaluation. D. Coverage Improvement Evaluations [Evaluations DI–D5]. These evaluations provide information on various programs intended to improve person coverage in the census. Both the service-based enumeration and the "Be Counted" program are intended to include people in the census who may be missed in the standard housing unit and group quarters enumerations. D1. Service-Based Enumeration Coverage Yield Evaluation. This evaluation documents the coverage of persons included in the dress rehearsal as a result of the service-based enumeration program. D2. Evaluation Study of the "Be Counted" Program. This evaluation documents the coverage of persons included in the dress rehearsal as a result of the "Be Counted" program. D3. Evaluation of the Coverage Edit Operation. 
This evaluation will provide data on the appropriateness of the coverage edit rules and on the effectiveness of the edits and the follow-up. D4. Evaluation of the Large-Household Follow-Up. This evaluation
will provide data on the effectiveness of mailing a follow-up form to complete the enumeration of households with more than five persons.

D5. Coverage Improvement Uses for Administrative Records in a Nonsample Census. The objective of this evaluation is to determine whether coverage in a nonsample census can be improved by using data from administrative records. There are three components to this evaluation: file acquisition, coverage research, and field follow-up.

E. Promotion Evaluation [Evaluation E1]. There is just one evaluation about promotion.

E1. Effectiveness of Paid Advertising. This evaluation will answer the question "Does public awareness of dress rehearsal activities increase as a result of paid advertising, and what does this tell us about the success of the paid advertising campaign?"

F. Multiple-Response Resolution Evaluations [Evaluations F1–F3]. Because the options for responding to the census have increased since 1990, it is critical to develop a system for unduplicating multiple responses. This group of evaluations will provide information that will be used to refine the multiple-response resolution process for the 2000 census.

F1. Evaluation Study of the Primary Selection Algorithm. The objective here is to evaluate the process of unduplicating multiple returns for the same address. For one component of this evaluation, an independent interview will be conducted at addresses for which more than one census form was returned, to determine whether specific rules were appropriate in identifying the correct residents of the households. From the independent interview, erroneous enumerations will be calculated and omission rates for the specific rules evaluated. An operational component will document the process used in the primary selection algorithm. A third component is to determine the operational effectiveness of the dress rehearsal invalid-return detection operation in identifying geographically clustered invalid returns.

F2. Evaluation Study of the Within-Block Search Operation. This evaluation will simulate the within-block search operation by adding forms to the search and extending the search area to surrounding blocks. As with F1, an operational component of F2 will quantify the effect of the within-block search.
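The unduplication process that F1 evaluates can be sketched in code. The Census Bureau's actual primary selection rules are not specified in this report, so the precedence used below (mail returns over "Be Counted" forms, then the more complete roster) is purely illustrative; all names and data are hypothetical.

```python
# Hypothetical sketch of unduplicating multiple census returns for the
# same address, in the spirit of a primary selection algorithm. The
# source precedence and tie-breaking rules are illustrative only.
from dataclasses import dataclass, field

@dataclass
class Return:
    address_id: str   # identifier of the housing unit
    source: str       # e.g. "mail", "enumerator", "tqa", "be_counted"
    persons: list = field(default_factory=list)  # household roster

# Illustrative precedence: lower rank wins.
SOURCE_RANK = {"mail": 0, "enumerator": 1, "tqa": 2, "be_counted": 3}

def select_primary(returns):
    """Group returns by address and pick one 'primary' return for each."""
    by_address = {}
    for r in returns:
        by_address.setdefault(r.address_id, []).append(r)
    primary = {}
    for addr, forms in by_address.items():
        # Prefer higher-priority sources; break ties by larger roster.
        primary[addr] = min(
            forms,
            key=lambda r: (SOURCE_RANK.get(r.source, 99), -len(r.persons)),
        )
    return primary

returns = [
    Return("A100", "be_counted", ["J. Doe"]),
    Return("A100", "mail", ["J. Doe", "M. Doe"]),
    Return("B200", "tqa", ["K. Lee"]),
]
chosen = select_primary(returns)
print(chosen["A100"].source)  # mail return wins for address A100
```

An evaluation like F1 would then compare the selected return against an independent reinterview to estimate how often rules of this kind choose the wrong household.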
F3. Evaluation Study of Intentional Fraud. Contractor-provided returns are used to determine whether the plan for multiple-response resolution is successful in identifying and eliminating invalid returns.

G. Nonresponse Follow-Up and Field Infrastructure Evaluations [Evaluations G1–G10]. Group G evaluations provide information about the implementation of the field operations and various aspects of the field infrastructure. They are designed to answer a wide range of questions.

G1. Ability to Fully Staff Each Operation. Was the Census Bureau able to hire, train, and retain staff to execute nonresponse follow-up, integrated coverage measurement, and the post-enumeration survey? This evaluation is closely tied to G4 (pay rates) and G8 (recruiting activities).

G2. Field Infrastructure: Job Requirements. What are the essential job functions and physical demands of local census office positions? This identification will be necessary in designing accommodations under the Rehabilitation Act of 1973 and the Americans with Disabilities Act. This evaluation will also help document the appropriateness of selection factors used in hiring, such as access to a car.

G3. Field Infrastructure: Criterion Validation. Are there significant correlations between applicants' selection-aid test scores and measures of their job performance, such as enumerator production rates, attendance, and length of stay?

G4. Field Infrastructure: Pay Rates. Could staff be hired and retained to execute nonresponse follow-up using the pay rates from an economic model developed by Westat, Inc. (a census contractor), and the Census Bureau?

G5. Field Infrastructure: Preappointment Management System/Automated Decennial Administrative Management System. Do these systems work?
An enterprise-wide integrated system will perform applicant processing and selection, personnel action processing, payroll processing, and history and reporting. How will it interface with other census systems?

G6. Field Infrastructure: Supply Ordering Process. Was the Census Bureau able to provide enough supplies for all aspects of the dress rehearsal?
G7. Field Infrastructure: Equal Employment Opportunity (EEO) Process. Does the EEO program set up for the dress rehearsal (a new automated system) adequately track complaints? Does it ensure that specific tasks related to a complaint are completed? Does it lead to resolution of complaints?

G8. Field Infrastructure: Recruiting Activities. Were the Census Bureau's recruiting activities successful? Which specific sources of applicants were the best? Were the activities carried out at the right time? Which advertising sources were most effective? What did the recruiting cost?

G9. Field Infrastructure: Welfare to Work. Did the Census Bureau's dress rehearsal Welfare to Work Program work? That is, how well were census recruiters and partnership specialists able to identify state, local, and tribal government resources, as well as community resources, to aid in developing an applicant pool of welfare recipients?

G10. Enumerator Training for Nonresponse Follow-Up and Integrated Coverage Measurement Personal Interview. Did the training provided to enumerators produce skilled employees able to perform at an acceptable level?

H. Technology-Related Evaluations [Evaluations H1 and H3]. Results of Group H evaluations will be used for internal Census Bureau planning to validate specific data capture systems being developed for the 2000 census.

H1. Evaluation of Segmented Write-Ins. This evaluation will analyze respondents' use of the segmented boxes on the questionnaires for race and ethnicity.

H2. This evaluation has been dropped.

H3. Quality of the Data Capture System. This evaluation will examine what percentage of the answers in the dress rehearsal database differ from the actual responses on the census questionnaires.
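The kind of comparison H3 describes can be sketched as a field-by-field discrepancy rate between the data capture system's output and the responses actually written on the questionnaires. The record layout and field names below are hypothetical; the real evaluation would compare against independently keyed "truth" data.

```python
# Minimal, hypothetical sketch of a data-capture quality check: the
# percentage of fields whose captured value differs from an
# independently keyed truth file. Field names are illustrative only.

def discrepancy_rate(captured, truth):
    """Percent of fields whose captured value differs from the truth file.

    Both arguments map a form ID to a dict of field name -> value;
    only forms and fields present in the truth file are scored.
    """
    total = errors = 0
    for form_id, true_fields in truth.items():
        cap_fields = captured.get(form_id, {})
        for name, true_value in true_fields.items():
            total += 1
            if cap_fields.get(name) != true_value:
                errors += 1
    return 100.0 * errors / total if total else 0.0

truth = {
    "F1": {"age": "34", "tenure": "owner"},
    "F2": {"age": "7", "tenure": "renter"},
}
captured = {
    "F1": {"age": "34", "tenure": "owner"},
    "F2": {"age": "1", "tenure": "renter"},  # mis-captured age
}
print(discrepancy_rate(captured, truth))  # one error in four fields -> 25.0
```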