As discussed in Chapter 1, the committee is charged with three specific tasks: determining whether a statistical association exists between exposure to the herbicides used in Vietnam and health outcomes, determining the increased risk of effects among Vietnam veterans, and determining whether a plausible biologic mechanism or other causal evidence of a given health outcome exists. This section discusses the committee's approach to each of those tasks.
In trying to determine whether a statistical association exists between any of the herbicides used in Vietnam or the contaminant 2,3,7,8-tetrachlorodibenzo-p-dioxin (TCDD) and a health outcome, the committee found that the most helpful evidence came from epidemiologic studies—investigations in which large groups of people are studied to determine the association between the occurrence of particular diseases and exposure to the substances at issue.

Epidemiologists estimate associations between an exposure and a disease in a defined population or group using measures such as the relative risk, the standardized mortality ratio, or the odds ratio. Those terms describe the magnitude by which the risk or rate of disease is changed in a given population. For example, if the risk in an exposed population increases twofold relative to an unexposed population, it can be said that the relative risk, or risk ratio, is 2.0. Similarly, if the odds of disease in one population are 1:20 and in another are 1:100, then the odds ratio is 5.0. The use of terms such as relative risk, odds ratio, and estimate of relative risk is sometimes inconsistent, for instance when authors refer to an odds ratio as a relative risk. In this report, relative risk refers to the results of cohort studies, and odds ratio (an estimate of relative risk) refers to the results of case–control studies.

An estimated relative risk greater than 1 could indicate a positive or direct association (that is, a harmful association), whereas values between 0 and 1 could indicate a negative or inverse association (that is, a protective association). A “statistically significant” difference is one that, under the assumptions made in the study and the laws of probability, would be unlikely to occur if there were no true difference and no biases.
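The two worked examples above can be verified with a short calculation. The cohort counts below are hypothetical and chosen only to reproduce the ratios given in the text; this is an illustrative sketch, not part of the committee's analysis:

```python
def relative_risk(exposed_cases, exposed_total, unexposed_cases, unexposed_total):
    """Risk ratio from cohort-style counts: risk = cases / total in each group."""
    return (exposed_cases / exposed_total) / (unexposed_cases / unexposed_total)


def odds_ratio(cases_a, noncases_a, cases_b, noncases_b):
    """Cross-product ratio of the odds of disease in population A versus B."""
    return (cases_a * noncases_b) / (noncases_a * cases_b)


# Risk doubles in the exposed group (hypothetical counts: 10/100 vs 5/100).
rr = relative_risk(10, 100, 5, 100)
print(rr)  # 2.0

# Odds of disease of 1:20 in one population versus 1:100 in another.
oratio = odds_ratio(1, 20, 1, 100)
print(oratio)  # 5.0
```

When a disease is rare, the odds ratio closely approximates the relative risk, which is one reason case–control results are read as estimates of relative risk.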
Determining whether an observed association between an exposure and a health outcome is “real” requires additional scrutiny because there may be alternative explanations for the observed association. Those explanations include error in the design, conduct, or analysis of the investigation; bias, a systematic tendency to distort the measure of association so that it may not represent the true relation between exposure and outcome; confounding, distortion of the measure of association because of failure to recognize or account for another factor related to both exposure and outcome; and chance, the effect of random variation, which produces spurious associations that can, with a known probability, depart widely from the true relation. In deciding whether an association between