
we know we need a clear specification and requirement stage, a clear testing stage, and all this kind of thing.

So I think that we have made a lot of progress, and as we go from survey to survey we do borrow a lot of expertise. But I think that it’s with the testing and the responding to field problems that we have major problems.

POORE: OK, one question and then I’ve got to finish up here with a couple of other slides.

PIERZCHALA: I just wanted to make the point that there are classes of surveys.

POORE: Right.

PIERZCHALA: There are variations on a theme; there [are] household surveys, economic surveys, social group surveys, health occupational surveys. So there are variations on a theme.

POORE: And it would seem to me that you could inherit some large percentage from some previous survey and work within that framework. But, like I said, I’m not in that business.

So, in conclusion, I think that the advice is manage, manage, manage. You’ve got to manage expectations. You can’t have people expecting a job to be completed when the workers don’t have any reasonable chance of making that date. And so you have to get realism into the thing early.

Manage the technology, manage the users, manage the requirements, manage the environment, manage the staff—and even manage your productivity rate. You should have a steady stream of product come out at a predictable quality level and a predictable rate.

Some of these ideas you can find in this book;17 it’s just one way of doing things, it’s not the only way of doing things. And I will end on that commercial message. [laughter]

CORK: I don’t want to be too much of a traffic cop, but we also want to be out of here before seven o’clock. So, hopefully, we’ll have time to discuss some of the general themes Jesse raised later in the afternoon, and certainly to have some lively discussion over lunch.

AUTOMATION AND FEDERAL STATISTICAL SURVEYS

Bob Groves

CORK: But, right now, bridging these two perspectives, we asked Bob Groves—who is the director of the Survey Research Center at the University of Michigan—to comment on the two presentations. Bob?

17 The book referenced here is Prowell et al. (1999).


GROVES: This is not the first such meeting that I’ve attended. In fact, listening to these talks reminds me of the first one, which was in the mid-1970s at a beautiful resort near Berkeley. And the conclusions of that meeting, as I recall, were that we were on the cusp of a revolution. That we could totally automate all of the activities of survey research. And that, in fact, the lessons from then-extant computer science were easily applied to the task because it was a rather simple one.

I think what was missed in that meeting was that the attention was on the design of a questionnaire as a software kind of problem, and what turned out to be more difficult for the field was all the stuff around—the so-called systems stuff that Pat [Doyle] was talking about.

Then the other conclusion was that this would be a radical reduction in cost of collecting data for human and business populations, because it was so easy to change an instrument. It could actually be done at the very last moment; in fact, it could be done in the middle of the field data collection. We [would] really [be] able to completely revolutionize the timeline of development, which at that time people were fretting about.

That didn’t happen either.

And then the final thing was that this should allow us to have stored archives of software that would be applicable to the kinds of questions we ask in all the surveys—because aren’t these surveys similar to one another? And you could just take, say, the demographic questions and store them, and when Pat did it a few years later she could use exactly the same code. Well, what happened to that prediction?

The prediction was naïve in the sense that the demographic measures haven’t stabilized. They change all the time; the code changes—the functions change.
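
[A minimal sketch of this point, in Python; the names and structures here are invented for illustration and are not drawn from any actual survey system. Even a seemingly stable demographic item like race changed from a single choice to mark-all-that-apply between survey rounds, so code archived from one survey cannot simply be replayed in the next; Doyle makes the same point about race later in this discussion.]

```python
# Hypothetical sketch: why archived question code resists reuse.
# The "same" demographic item changes shape between survey rounds,
# so a module stored from one round cannot simply be replayed later.

# Earlier definition: race as a single, mutually exclusive choice.
RACE_OLD = {
    "name": "race",
    "type": "single_choice",
    "options": ["White", "Black", "American Indian",
                "Asian or Pacific Islander", "Other"],
}

# Later definition: mark all that apply, with revised categories.
RACE_NEW = {
    "name": "race",
    "type": "multi_choice",
    "options": ["White", "Black or African American",
                "American Indian or Alaska Native", "Asian",
                "Native Hawaiian or Other Pacific Islander", "Other"],
}

def validate(question, answer):
    """Check a response against a question definition."""
    if question["type"] == "single_choice":
        return answer in question["options"]
    if question["type"] == "multi_choice":
        return (isinstance(answer, list) and bool(answer)
                and set(answer) <= set(question["options"]))
    raise ValueError("unknown question type: %s" % question["type"])

# A response (and any downstream code) valid under the old module
# no longer fits the new one: both the type and the categories changed.
print(validate(RACE_OLD, "Asian or Pacific Islander"))  # True
print(validate(RACE_NEW, ["Asian", "White"]))           # True
print(validate(RACE_NEW, "Asian or Pacific Islander"))  # False
```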

And so we are where we are.

I have a few things to say, along the same lines. My job, I think, is to attempt—although I can’t—to bring these two perspectives together. [You’ll] find me speaking more to our computer science colleagues, I think, because I want to set the context. I think that’s really important.

The federal government spends about $3 billion a year on statistical activities—that’s not a lot of money, if you think about comparable commercial sectors. And, in fact, the commercial sector in terms of surveys and statistical activities is probably triple that size. This is a relatively small enterprise, even though we think of it as our life and everything in the world.

The surveys that are done here are of extremely long duration and are relatively stable … despite what Pat says. [laughter] So, I have colleagues who work at CBS News’ survey unit who have two hours to put together a survey that is done over a four-hour period, to be reported by Dan Rather the next day. The surveys we’re talking about here are very, very


different, and they might have developmental timelines where the questionnaire would be constructed over a 12-month period, or an 18-month period, and the questionnaire would remain stable for years. These are very different worlds and pose very different problems for software.

The federal government consists—on the survey side—of very large organizations that have a long history. They have work structures that are bureaucratic in nature by design—“bureaucratic” with a positive spin on that word. But these divisions of labor are very slow in changing, and much slower in changing than a commercial organization because the impact of external environmental changes is buffered by organizational requirements.

Finally, a distinct aspect of this that Pat touched on is that there is a devotion to providing data back to the people. So the product is not just Dan Rather saying the next night that 72 percent of the American public favor what Bush is doing—it’s actually much more complex. It’s large sets of data that need to be accessible by diverse users, and checked, and analyzed, and re-analyzed to check the credibility of the information. So these are burdens that aren’t faced by others.

There is a long history of survey automation that you can just kind of go through here. Let me go through these terms on the back end here. “ACASI” is Audio Computer-Assisted Self-Interviewing. “TDE” is Touchtone Data Entry, and “VRE” is Voice Recognition Entry. We have, as a field, automated … if you look at this, these automation efforts are five to ten years behind software and hardware development in other sectors. They lag. And one question central to a group like this is: is that lag good, or is it bad?
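
[As a toy illustration of touchtone data entry, the following Python sketch is entirely invented and far simpler than any real TDE system: the respondent keys digits on a telephone keypad, and the system interprets them directly as a coded answer.]

```python
# Toy sketch of touchtone data entry (TDE): keypad digits are
# interpreted directly as a numeric answer, terminated by '#'.

def parse_tde(keypresses: str):
    """Interpret a string of keypad digits ending in '#'.

    Returns the numeric answer, or None if the entry is malformed
    (in which case a real system would reprompt the respondent).
    """
    digits, terminator = keypresses[:-1], keypresses[-1:]
    if terminator == "#" and digits.isdigit():
        return int(digits)
    return None

print(parse_tde("42#"))  # 42
print(parse_tde("4a#"))  # None: reprompt
print(parse_tde("#"))    # None: no digits entered
```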

The problem of innovation in surveys deserves its own attention, because a group like this would, if successful, innovate. And there’s a problem in innovation in surveys. One is that a scientific orientation exists in only certain components of the field. It’s certainly present in some design aspects and some analysis aspects. But the survey field developed so that the middle—the questionnaire, the instrumentation, the collection of data—is designed as a relatively routine, stable-technology production shop. Our problem is changing that, and it’s hard to move those structures.

Historically, there’s a fairly low investment in research and development for data collection and dissemination, relative to other sectors. This is not a group, this is not an industry that is planning for the future ten years out and has a set of R&D teams that are really planning what’s going to happen. It doesn’t look like the pharmaceutical industry, OK? And hence there are weak ties between R&D and production.

And a concern, more recently, is that the industry has relied on a relatively unskilled labor market for its data production activities. These are


close-to-minimum-wage persons who are increasingly being given software and hardware products of rather large sophistication. Could that labor market change? Should we be designing for a labor market that becomes much more technically astute? It isn’t clear. But certainly that’s the current state.

The challenges of this workshop, I think, are the following. We do have to be real careful about the definition of tools. So Michael [Cohen] was interrupting Pat from time to time saying, “define what you mean by that.” I think we ought to keep doing that, and not let anyone get away without defining their terms if they’re unknown to you.

There’s a real problem with choosing the level of abstraction that we speak at. My belief is that the error of the past, for gatherings like this that I have attended, is that we have allowed ourselves to talk at too abstract a level, and hence we actually miscommunicated. Both sides were saying the right things, but it was irrelevant to the problem we were facing. And that means assessing the solution in the context of the users.

The solution set, I think, has to maintain attention to the fact that there isn’t a well-defined answer to the question, “What is a survey?” So my friend at CBS News is a survey researcher. And Pat is a survey researcher. But they live completely different lives with regard to [their] issues in terms of design and future problems and innovation. And we really need to fix our attention on what we’re trying to solve.

And, by and large, I think that the largest failure has been this last bullet. [The final bullet on the slide in question reads, “Inventing long-lasting collaborations.”] These workshops are great fun, and we all get excited, and we share ideas; we pick up new jargon. I love the—what was it?—“inch pebble”, I think I’ll keep that one … probably not very well, though! [laughter] We’ll have that kind of fun. But the workshop—frankly—is the easiest thing to do. The harder thing to do is to figure out collaborations, or the importation of best practices, across fields, to really implement change. And I hope we don’t have too much fun without attending to that long-run problem, because that’s really the payoff.

So, thanks a lot—I guess it’s lunchtime, Dan?

CORK: It is lunchtime. Actually, if there are one or two questions, we could take those without thoroughly breaking the schedule, but just a couple, and then lunch is in the next room. Yes?

DOYLE: I think that one of the points you made sort of pointed out why I shook my head this way and others shook their head that way with regard to how stable the instruments are. It really depends on the level at which you look at them. Sure, we’ve always asked for age, race, sex—we’ve always asked those questions. But now we’re asking race with multiple questions, different sets of questions. We make changes


at times during the development process that are not opportune. And that’s where we get into trouble.

I think we have a sense of an appropriate design-develop-test model that you provided, and we would love to implement it. But, in our lives, the requirements are really not set at the beginning. And not at the level of all the questions, and not at the level of all the flows.

So, just within the instrument piece, we live in a world where we are constrained by reality—the laws change when we’re halfway through developing an instrument and suddenly our questions are irrelevant. From your [Poore’s] description, it sounds like that kind of thing is controlled better when you’re developing a product and know what the product is. But in our case it doesn’t seem to be under control. We need to find a way to live within that unpredictability.
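
[One way to read Doyle’s constraint in software terms is sketched below in Python. The names are invented, and this is not how any Census Bureau system actually works: if routing and question text are kept as data rather than code, a mid-development change, such as a law revising a question, becomes an edit to one entry rather than a rewrite of the instrument logic.]

```python
# Hypothetical sketch: questionnaire flow kept as data, so a late
# requirement change swaps one question without touching the engine.

INSTRUMENT = {
    "q1": {"text": "Did you work for pay last week?",
           "route": lambda a: "q2" if a == "yes" else "q3"},
    "q2": {"text": "How many hours did you work?",
           "route": lambda a: "end"},
    "q3": {"text": "Did you look for work in the last 4 weeks?",
           "route": lambda a: "end"},
}

def run(instrument, answers, start="q1"):
    """Drive the interview from precooked answers; return the path taken."""
    path, qid = [], start
    while qid != "end":
        path.append(qid)
        qid = instrument[qid]["route"](answers[qid])
    return path

# The law changes mid-development: replace one entry in place.
# The engine and the rest of the instrument are untouched.
INSTRUMENT["q3"] = {"text": "Did you look for work in the last 6 months?",
                    "route": lambda a: "end"}

print(run(INSTRUMENT, {"q1": "no", "q3": "yes"}))  # ['q1', 'q3']
```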

GROVES: My third meeting like this one was a wonderful day spent in Palo Alto, overlooking the ocean as I recall, with a set of software engineers for expert systems. And the question on the table was, “Why can’t we develop an expert system for questionnaire construction?” That was what went on. It was a great day, we had a lot of fun. The conclusion of the expert systems engineers, at the end of the day, was, “Perhaps you people would like to define how to do a questionnaire before you talk to us next time.” [laughter] And partly that’s it. And the immediate reaction is, “Why don’t you standardize this stuff? That’s stupid; you’ve been doing this for thirty years, or fifty years. Well, that’s your problem, and when you fix that, come back to me.” Well, it doesn’t quite work like that.

I think the other thing to note is that there’s a real distinction in our discussion so far between developing software of generalizable use versus developing an application within a software framework. So some of what Pat was talking about is of the ilk: “What does a user need to know to develop a document in Microsoft Word?” And some of what she was saying was, “What do you need to do to develop Microsoft Word as a software product?” And those are very different things. So my friend at CBS News doesn’t design Microsoft Word; they do documents in Microsoft Word—to use this metaphor—nightly to get a survey out. So we need to be careful in our discussions on that point.

CORK: This is a good point to stop here, if we can break things off and pick up after lunch.

[The workshop stopped for a lunch break.]
