Pages 48-67


From page 48...
... The items in this section could not be examined in this project and require further research. These items can be classified into the following three groups: items beyond the scope of the project, items originally identified and not researched, and other items identified during the research.
From page 49...
... Category / Original reference / Item:

Items beyond scope of project
  D-11  GPS surveys
  D-12  Internet surveys
  I-8   SP data

Items originally identified and not researched
  D-2   Who should be surveyed?
  D-9   Times of day for contacts
  E-6   Retention of data on incomplete households
  E-7   Cross checks in data collection and data review
  E-8   Days and periods to avoid data collection
  I-3   Collection of in-home activities
  I-4   Ordering of questions
  I-6   Instrument design
  I-7   Multitasking of activities
  S-1   Sample size
  S-2   Sizes and procedures for surveying augment samples
  S-3   Collecting augment samples
  S-4   Stratification options for samples
  S-5   Specification of sampling error requirements
  S-6   Development of default variances
  P-1   Focus groups
  P-5   Reporting of pretests and pilot surveys
  Q-4   Sampling error

Other items identified during research (no original reference)
  –     Cell phones
  –     Incentives
  –     Personalized interview techniques
  –     Geocoding methods
  –     Impacts of the national "do not call" registry
  –     Initial contacts
  –     Refusal and non-contact conversions
  –     Effect of interview mode on recruitment and non-response rates
  –     Unknown eligibility rates
  –     Data archiving in transportation
From page 50...
... needed relating to these data. These would relate to the size of the task that can and should be presented to respondents (Stopher and Hensher, 2000)
From page 51...
... 4.2.2 D-9: Times of Day for Contacts Within telephone surveys, the time of day when contact is attempted has a critical influence on response rates. There is a wide range of practices in existing surveys, however, and these have never been formally documented.
From page 52...
... Despite the apparent usefulness of such data, in many surveys they are destroyed once the full sample has been obtained, either because the CATI software deletes them automatically or because the survey firm or client specifically requires it. Many agencies are unaware of the value of partial data and will either not specify in the contract that such data be turned over or may even specify that they be destroyed.
From page 53...
... In most cases, these problems are completely avoidable with appropriate checks. CATI and CAPI surveys offer enormous potential for real-time cross-checks on data quality as a survey progresses, and at least limited cross-checks are programmed into most such surveys.
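A minimal sketch of such a real-time cross-check, in Python, with hypothetical trip-record fields and validation rules (not drawn from any particular CATI or CAPI product):

```python
from datetime import time

# Illustrative trip record; field names and rules are hypothetical.
trip = {
    "depart": time(9, 30),
    "arrive": time(9, 10),        # earlier than departure -- should be flagged
    "mode": "car driver",
    "respondent_age": 14,
}

def cross_check(trip):
    """Return the problems found in one trip record, so the interviewer
    can resolve them while the respondent is still on the line."""
    problems = []
    if trip["arrive"] <= trip["depart"]:
        problems.append("arrival time is not after departure time")
    if trip["mode"] == "car driver" and trip["respondent_age"] < 16:
        problems.append("respondent is too young to be a car driver")
    return problems

for issue in cross_check(trip):
    print("FLAG:", issue)
```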
From page 54...
... It may also be useful to examine recent surveys for additional evidence as to whether requests for this detail appear to have had impacts on response rates. The literature on time use (Robinson, 1977 and 1991; Robinson and Godbey, 1997)
From page 55...
... certain issue or satisfaction with a service, should be asked as early as possible to make respondents feel as though their input and participation are valued. It is also considered good practice to ensure that questions follow a logical and appealing sequence that helps respondents understand what is being sought from them.
From page 56...
... Evidence from focus groups conducted for surveys in Dallas and Southern California suggests that respondents prefer not to have every question included in the travel diary and that trimming the diary might increase response rates. Further research is required to examine the trade-off between completeness of responses and response rates.
From page 57...
... Sample sizes from recent surveys -- particularly surveys that have been used for model estimation, model updating, and policy testing and formulation -- should be examined, and a determination made of the adequacy of those samples for these purposes. Again, we note here that the 15,000 household sample in Southern California turned out to be less than adequate for mode-choice modeling in that region because there were no augment samples and the decision on how to stratify the sample resulted in very few transit trips in the final data set -- too few, in fact, to allow reliable mode-choice models to be built with the intended specifications.
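A back-of-the-envelope calculation illustrates how a stratification decision can starve a mode-choice model of observations. The trip rates, mode share, and target below are assumed for illustration only, not the actual Southern California figures:

```python
# All figures are assumed for illustration.
households    = 15_000
trips_per_hh  = 8.0     # assumed average weekday trips per household
transit_share = 0.015   # assumed regional transit mode share

expected_transit_trips = households * trips_per_hh * transit_share
print(f"Expected transit trips in sample: {expected_transit_trips:.0f}")

# If reliable mode-choice estimation needs several thousand observations
# of the minority mode, a sample of this size falls short without an
# augment (e.g., choice-based) sample of transit users.
target_obs = 2_000
needed_households = target_obs / (trips_per_hh * transit_share)
print(f"Households needed without an augment sample: {needed_households:.0f}")
```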
From page 58...
... reason for this is that unless the finite population correction factor is large, which will rarely be the case in urban area surveys, the error levels of a sample will not be dependent on the regional population. The specifics of the sample size will be dependent, however, on the use to which the data will be put and the sample design -- that is, stratification, clustering, or other sampling method.
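This follows from the finite population correction (fpc), sqrt((N - n)/(N - 1)), which is close to 1 whenever the sample is a small fraction of the population. A quick numeric check, with assumed values:

```python
import math

def stderr_mean(s, n, N):
    """Standard error of a sample mean with the finite population
    correction applied."""
    fpc = math.sqrt((N - n) / (N - 1))
    return (s / math.sqrt(n)) * fpc

s, n = 1.5, 2000  # assumed std. dev. of household trip rate, sample size
for N in (100_000, 1_000_000, 5_000_000):  # regional household totals
    print(f"N = {N:>9,}: SE = {stderr_mean(s, n, N):.5f}")

# Output shows the SE barely moves: once n << N, the fpc is ~1, so the
# required sample size depends on the variance of the measure and the
# precision sought, not on the size of the region.
```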
From page 59...
... 4.2.13 S-4: Stratification Options for Samples Although the usual aim of stratification in household and personal travel surveys is to ensure coverage of household characteristics, it will generally have the effect of reducing the sampling error. This aspect of stratification has been largely ignored in travel survey sample designs.
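The variance-reduction effect can be made concrete with the standard decomposition of population variance into within-strata and between-strata components; the strata shares, means, and standard deviations below are invented for illustration:

```python
import math

# Hypothetical strata (household-size groups) with invented population
# shares W, within-stratum std. devs. s, and mean trip rates.
strata = [
    {"W": 0.30, "s": 1.0, "mean": 6.0},   # 1-person households
    {"W": 0.45, "s": 1.4, "mean": 9.0},   # 2-3 person households
    {"W": 0.25, "s": 1.8, "mean": 12.0},  # 4+ person households
]
n = 2_000  # total sample size

# Population variance = within-strata part + between-strata part.
grand_mean = sum(st["W"] * st["mean"] for st in strata)
within = sum(st["W"] * st["s"] ** 2 for st in strata)
between = sum(st["W"] * (st["mean"] - grand_mean) ** 2 for st in strata)

se_srs = math.sqrt((within + between) / n)
# Proportionate stratification eliminates the between-strata component
# from the sampling error of the overall mean.
se_strat = math.sqrt(within / n)

print(f"SE of mean trip rate, simple random sample: {se_srs:.4f}")
print(f"SE of mean trip rate, proportionate strata: {se_strat:.4f}")
```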
From page 60...
... If there is insufficient variability in the overall error of trip rates, it may be necessary to sub-sample from some existing surveys, since the sub-samples will have much larger sampling errors for all characteristics. The third issue is to investigate the potential to use other attributes, such as mode shares, as the basis for the design sampling error.
From page 61...
... The implications of using default variances for setting sample sizes would need to be checked by comparing them with the results of using actual variances for several recent surveys. In the absence of any local information, these variances could be used to estimate stratification, sampling rates, and sampling errors.
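As a sketch of how such default variances would be used, the conventional formula n = z^2 * s^2 / e^2 converts a borrowed variance into a required sample size; the default variance and precision targets below are assumed values:

```python
import math

def required_sample_size(variance, margin, z=1.96):
    """Households needed to estimate a mean (e.g., trips per household)
    within +/- margin at the confidence level implied by z."""
    return math.ceil(z ** 2 * variance / margin ** 2)

# Hypothetical default variance of the household trip rate, borrowed
# from prior surveys in the absence of local data.
default_var = 2.25
for margin in (0.10, 0.05, 0.025):
    n = required_sample_size(default_var, margin)
    print(f"margin +/- {margin}: n = {n:,}")

# Comparing n computed from the default variance against n computed from
# actual variances in recent surveys would show how far off the implied
# sample sizes can be.
```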
From page 62...
... It is suggested that reports on recent surveys be reviewed to determine what has been documented in the past. Some of the items to be considered here should be
• Sample sizes and methods of drawing the samples for any pretests and pilot surveys;
• Nature of the design that was tested;
• Results of the tests, including response rate(s)
From page 63...
... In addition to this, many households are now moving away from landline phones and using cell phones exclusively.
From page 64...
... It is not known how much of an increase in response rate can be obtained with incentives of different sizes, nor what biases may result from their use. Research on these questions would be warranted.
From page 65...
... even if that means that it is not necessarily interviewer-friendly. Personalized interviewing techniques are also becoming increasingly popular through travel behavior modification programs such as TravelSmart® and Travel Blending®.
From page 66...
... This will give an understanding of non-response bias, which it is important to account for in household travel survey results. One possibility is to determine whether a list of households subscribed to the registry can be obtained and, if so, to compare response rates, characteristics, etc., between recruited households on the registry and those not on it.
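One way the suggested comparison could be made concrete is a simple two-proportion z-test on recruitment rates; the counts below are invented for illustration:

```python
import math

def two_proportion_z(hits_a, n_a, hits_b, n_b):
    """z statistic for the difference between two recruitment rates."""
    p_a, p_b = hits_a / n_a, hits_b / n_b
    pooled = (hits_a + hits_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# Hypothetical counts: households recruited out of households contacted.
z = two_proportion_z(420, 1000,   # on the registry
                     480, 1000)   # not on the registry
print(f"z = {z:.2f}")  # |z| > 1.96 -> difference significant at the 5% level
```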
From page 67...
... 4.3.9 Unknown Eligibility Rates In defining standardized procedures for computing response rates, it was recommended that the estimated rate of eligibility for contacts whose eligibility remains unknown be left to the survey firm. However, better guidance on this issue would be preferred because it has a critical impact on the calculation of response rates.
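The sensitivity to this choice can be illustrated with a simplified response-rate formula of the AAPOR RR3 type, in which contacts of unknown eligibility are discounted by an estimated eligibility rate e; all case counts below are illustrative:

```python
def response_rate(interviews, refusals, noncontacts, unknown, e):
    """Simplified AAPOR RR3-style rate: cases of unknown eligibility are
    discounted by e, the assumed eligibility rate among them."""
    return interviews / (interviews + refusals + noncontacts + e * unknown)

# Illustrative case counts.
counts = dict(interviews=3000, refusals=1500, noncontacts=500, unknown=2000)
for e in (0.2, 0.5, 0.8, 1.0):
    print(f"e = {e:.1f}: response rate = {response_rate(**counts, e=e):.1%}")

# The reported rate swings from about 56% to 43% depending solely on e,
# which is why leaving the choice of e to each survey firm undermines
# comparability across surveys.
```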

