the surveyors were interested in. The instructions were revised to ask people to list risks to health, safety, and the environment. Respondents were then asked to identify the five risks they were most concerned about and to answer a series of detailed questions about those risks.

The second approach took a very different strategy, framing the question differently from the Roper survey. The results have been published elsewhere (Fischer et al., 1991) and are summarized here. Environmental risks were a significant concern (mentioned by 44 percent of respondents, compared with 23 percent who cited health risks, 22 percent who were concerned about safety, and 11 percent who noted socially based risks, such as crime). Concerns about traditional pollutants were more frequently cited than concerns about "exotic" pollutants (21 percent vs. 13 percent). This was a three-generation study, and there were interesting generational effects. Younger people and women were slightly more concerned about the environment; everybody was worried about AIDS; middle-aged people were worried about on-the-job risks; older people were worried about things like cancer and heart attacks. In short, a picture emerged that is very different from the one produced by the EPA Roper survey. The general insight is that care must be taken in how questions are asked and how inferences are drawn from the answers.

The second example is from a study undertaken several years ago by the Chemical Manufacturers Association (CMA). CMA commissioned three of the country's leading experts in risk communication to review the literature and extract from it advice for chemical plant managers on how to communicate risk information, focusing particularly on risk comparison. On the basis of this very careful reading of the literature, the experts offered advice on good and bad ways of comparing risk (Covello et al., 1988). They concluded by providing 14 examples of text illustrating good and bad comparisons for a specific ethylene oxide plant.

These 14 pieces of text were presented in a CMU study to several different samples of Americans with the following scenario: "You have a friend. He's the manager of an ethylene oxide plant in the Midwest. He's about to get up and give a talk to a community group, and here's a bunch of text that he is proposing to use. If it overlaps, he'll edit that out in the final version. Here are several factors that he's concerned about. Rank each of these pieces of text on the basis of each of those factors."

Again, the results are in the literature (Roth et al., 1990). In summary, across various analytical strategies, no correlation was found between the acceptability judgments predicted by the manual (i.e., by the literature) and those produced by the subjects of the survey. The insight here is not that the experts did a bad job summarizing the literature, but that even experienced professionals have limited ability to predict how a risk communication will be received. There is no substitute for an iterative empirical approach: studies must be done with actual people in order to see what effect the message is having, because the state of the art does not permit reliable prediction in advance.
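The "no correlation" finding can be made concrete with a toy calculation. A standard way to compare a predicted ranking against an observed one is Spearman's rank correlation; the sketch below implements it from scratch. The rankings of the 14 passages shown here are invented for illustration only, not the study's actual data.

```python
# Toy illustration of comparing a predicted ranking with an observed one
# using Spearman's rank correlation. All rankings below are invented.

def spearman_rho(xs, ys):
    """Spearman rank correlation for two equal-length sequences.
    Assumes no ties, as with strict rankings of distinct items."""
    n = len(xs)

    def ranks(vals):
        order = sorted(range(n), key=lambda i: vals[i])
        r = [0] * n
        for rank, i in enumerate(order, start=1):
            r[i] = rank
        return r

    rx, ry = ranks(xs), ranks(ys)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

# Hypothetical rankings of the 14 text passages (illustration only):
predicted = list(range(1, 15))                      # order predicted by the manual
observed = [7, 12, 3, 14, 1, 9, 5, 11, 2, 13, 6, 10, 4, 8]  # invented subject ranking

rho = spearman_rho(predicted, observed)
print(round(rho, 3))  # prints -0.051, i.e., essentially no correlation
```

A rho near +1 would mean the manual's predictions matched the subjects' rankings; a value near zero, as in this invented example, is the pattern the study reported.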



The National Academies | 500 Fifth St. N.W. | Washington, D.C. 20001
Copyright © National Academy of Sciences. All rights reserved.