Suggested Citation:"6 The Referent Cohort." Institute of Medicine. 2000. The Five Series Study: Mortality of Military Participants in U.S. Nuclear Weapons Tests. Washington, DC: The National Academies Press. doi: 10.17226/9697.


6 The Referent Cohort

To determine whether participants in the five nuclear test series had different mortality experience than a comparable group of nonparticipants, we built a comparison referent cohort of nonparticipants. Using records kept by the Department of Defense and the National Archives and Records Administration, we used frequency matching to assemble a referent cohort that would be similar to the Defense Threat Reduction Agency (DTRA)-provided five-series participant cohort according to branch of service, time period, location (Pacific, western United States, other), age, type of unit, and paygrade. We did this by creating a pool of units, such as ships and battalions, from among those selected by DTRA as likely to be comparable. DTRA selected reference units by considering their similarity to the participating units, with similarity defined by function, size, paygrade distribution, and time period. Units assigned to states downwind of the Nevada Test Site, to operations in Korea, or to participation in any atmospheric nuclear test were not eligible for selection.

From the eligible units, we selected individuals to fit the paygrade distribution of the participants in the related unit. Participants without reasonably close referents in terms of paygrade were pooled by type of unit, within branch of service and series, as were the excess individuals in referent cohort units who were not selected. Individuals in each service were selected from the larger referent cohort pools to be similar in paygrade and unit type to the individuals remaining in the participant pool.

MFUA, with DTRA assistance, assembled a 64,781-member military reference cohort.
Reference individuals were selected using frequency matching on the following criteria: (1) service during the 12 months immediately preceding or following the date of the participant's selection series, (2) service in a similar unit in the same branch of service as the participant during the test period, (3) the same or similar paygrade as the participant during the test period, and (4) no participation in any atmospheric nuclear weapons testing program.

Since no single source document can provide all of the information necessary for assembling the referent cohort, the assembly procedure was divided into three phases. The first phase involved selecting reference units; the second, building a referent pool by identifying the names, service numbers, and paygrades of all individuals in those units; and the third, selecting individual referents and obtaining further identifying information about them.

The NTPR team identified referent units through a review of Station Lists, which specify all units according to their numerical designation in each calendar year; these unit specifications can be cross-referenced with the unit's physical location. The similarity between participant and referent units was determined by considering their function, size, and paygrade distribution. Because unit names are usually consistent and convey a basic understanding of these characteristics, reference unit selection was based on unit names. The geographic area of the station was also considered in selecting reference units.

Units stationed in Utah, Arizona, New Mexico, Colorado, or Nevada within 2 years of any atmospheric continental test were excluded from referent unit selection because they may have been exposed to test fallout. Units stationed in Korea during the Korean War (1950-1953) were also excluded. Units participating in any atmospheric nuclear weapon test within a defined time period were excluded as well: for the Army and Marine Corps, units within a 2-year window of any test period; for the Air Force, units within a 3-year window; and for the Navy, units within a 4-year window. These time frames were chosen to reflect typical unit rotation periods within the services.
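The unit-exclusion rules above can be summarized as a simple screening sketch. The code below is purely illustrative: the study's actual procedure was a manual archival review, not software, and all names here are hypothetical. The reading of "within an N-year window of any test period" as a difference of at most N years between the station year and the test year is an assumption.

```python
# Illustrative sketch of the referent-unit exclusion rules described above.
# Hypothetical names; the window comparison is an assumed interpretation.

FALLOUT_STATES = {"Utah", "Arizona", "New Mexico", "Colorado", "Nevada"}
ROTATION_WINDOW_YEARS = {"Army": 2, "Marine Corps": 2, "Air Force": 3, "Navy": 4}

def unit_excluded(branch, station_state, station_year,
                  continental_test_years, test_participation_years,
                  in_korea_1950_53):
    """Return True if a candidate unit is ineligible for referent selection."""
    # Downwind states within 2 years of any continental atmospheric test
    if station_state in FALLOUT_STATES and any(
            abs(station_year - t) <= 2 for t in continental_test_years):
        return True
    # Stationed in Korea during the Korean War (1950-1953)
    if in_korea_1950_53:
        return True
    # Participated in any atmospheric test within the branch's rotation window
    window = ROTATION_WINDOW_YEARS[branch]
    if any(abs(station_year - t) <= window for t in test_participation_years):
        return True
    return False
```

Note how the per-branch windows encode the longer rotation periods of Air Force and Navy units: a Navy unit that participated in a test up to 4 years before or after its station year is screened out, whereas an Army unit is screened only within 2 years.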
More than one reference unit was selected for each participating unit: two units for each participating unit in the Navy and Marine Corps, and six units for each participating unit in the Army and Air Force. The principal purpose of obtaining multiple reference units was to provide a referent population pool large enough for frequency sampling on branch, series, and paygrade. These multiple units were ordered and sampled first by geography and then by time. For continental tests, units stationed within the continental United States had a higher priority than those stationed outside it. For Pacific test series, similar units stationed within the Pacific theater had a higher priority than those outside it. Units stationed within the 6-month window of the test series period had a higher priority than those within the 12-month window. For the continental tests, for example, units stationed in the continental United States within the 6-month window had the highest priority, followed by those stationed in the continental United States within the 12-month window.

The degree of difficulty in identifying reference units ranged from minimal to extreme. Finding reference units such as ships, battalions, and standard squadrons was relatively simple. However, finding counterparts of temporary units such as provisional, special project, and observer units was difficult. The structure of these temporary units did not follow the established standards (e.g., tables of distribution and allowances, tables of organization and equipment); therefore, the unit names do not provide a basic description of their size, function, and paygrade distribution.

Once the reference units had been chosen, the organizational records of each branch of service were reviewed to identify the names, service numbers, and paygrades of all personnel in these units. For the Navy, all enlisted men aboard a particular ship can be found in the ship's muster rolls (on microfilm); officers are listed in the ship's deck logs (in log books) and on post-1955 muster rolls; both sources are available through the National Archives. For Navy shore units, which do not have muster rolls or deck logs, unit diaries were reviewed to ascertain the identifiers of the unit members. For the Marine Corps, muster rolls (on microfilm) and Station Lists were searched to accurately identify military service numbers and names.

For the Army, monthly personnel rosters and morning reports available at the National Personnel Records Center (NPRC) in St. Louis were used. Morning reports have been completed each day since World War II, usually at the company level. They list persons who experienced a change in duty status, showing their names, service numbers, and ranks; a change in status could be a discharge, temporary duty, absence, return from absence, reassignment, or promotion. Because the morning reports may list the same person multiple times and omit certain persons depending on their changes in duty status, the monthly personnel roster was used as the primary information source. For the Air Force, morning reports are the only available source.
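The geography-then-time priority ordering described earlier behaves like a two-level sort key. The sketch below is a hedged reconstruction with hypothetical names, not the study's actual procedure; in particular, representing time as signed months from the series period is an assumption.

```python
# Illustrative sort key for the referent-unit priority order described above:
# geography first (same theater as the test series), then time (6-month
# window before 12-month window). All names are hypothetical.

def referent_unit_priority(unit_region, months_from_series, continental_series):
    """Lower tuples sort first; units are sampled in ascending priority order."""
    preferred_region = "CONUS" if continental_series else "Pacific"
    geography_rank = 0 if unit_region == preferred_region else 1
    time_rank = 0 if abs(months_from_series) <= 6 else 1
    return (geography_rank, time_rank)
```

Because geography is the first element of the tuple, a continental-series CONUS unit inside only the 12-month window still outranks a non-CONUS unit inside the 6-month window, matching the ordering stated in the text.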
Once the referent pool was constructed for each service, the roster was matched against the NTPR database on names and service numbers to exclude anyone known to have participated in any atmospheric nuclear weapons test.

The members of the referent pool were grouped by unit and paygrade, as were the members of the participating units. For each paygrade, the same number of reference subjects as participants was selected, following the unit priority order described earlier. When there were insufficient numbers of referent pool members in a specific paygrade, an adjacent paygrade was used.

Table 6-1 shows the closeness of matching. Selection categories 1 and 2, in which service, series, paygrade, and type of unit are all exact matches, account for 79.5 percent of the study population. Another 15.5 percent was selected using one of six close, although not exact, combinations of characteristics outlined in the table. The pool of potential referent personnel did not, however, include sufficient numbers of certain participant characteristic combinations for there to be equal numbers in each cohort, although each combination is represented in both cohorts. The referent cohort has 3,388 fewer members than the participant cohort.

The referent cohort acquisition process yielded a group of individuals with distributions similar to those of the participant cohort for the desired and available characteristics. This balance within the overall and series-specific cohorts is illustrated in Chapter 10.
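The per-paygrade frequency selection with adjacent-paygrade fallback can be sketched as follows. This is a minimal illustrative reconstruction under stated assumptions (hypothetical data structures; the pool is assumed to be pre-ordered by unit priority), not the study's actual implementation.

```python
from collections import defaultdict

def select_referents(referent_pool, participant_counts):
    """Frequency selection by paygrade, with adjacent-paygrade fallback.

    referent_pool: list of (person_id, paygrade) tuples, assumed already
        ordered by the unit priority described in the text (highest priority
        last, so that list.pop() draws the highest-priority member first).
    participant_counts: {paygrade: number of participants at that grade}.
    """
    by_grade = defaultdict(list)
    for person, grade in referent_pool:
        by_grade[grade].append(person)

    selected = []
    for grade, needed in participant_counts.items():
        pool = by_grade[grade]
        take = min(needed, len(pool))
        selected.extend(pool.pop() for _ in range(take))
        shortfall = needed - take
        # When the exact paygrade is exhausted, draw from adjacent grades.
        for adjacent in (grade - 1, grade + 1):
            while shortfall and by_grade[adjacent]:
                selected.append(by_grade[adjacent].pop())
                shortfall -= 1
        # Any remaining shortfall is left unfilled, consistent with the
        # referent cohort ending up smaller than the participant cohort.
    return selected
```

The unfilled-shortfall behavior mirrors the text: when certain characteristic combinations were scarce in the referent pool, the deficit was tolerated rather than filled with dissimilar individuals, which is why the referent cohort has 3,388 fewer members than the participant cohort.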

[Table 6-1. Closeness of matching between the participant and referent cohorts, by selection category (exact and near matches on service, series, type of unit, and paygrade). The table itself is not recoverable from the scanned text.]

More than 200,000 U.S. military personnel participated in atmospheric nuclear weapons tests between 1945 and the 1963 Limited Nuclear Test Ban Treaty. Questions persist, such as whether that test participation is associated with the timing and causes of death among those individuals. This is the report of a mortality study of the approximately 70,000 soldiers, sailors, and airmen who participated in at least one of five selected U.S. nuclear weapons test series in the 1950s and nearly 65,000 comparable nonparticipants, the referents. The investigation described in this report, based on more than 5 million person-years of mortality follow-up, represents one of the largest cohort studies of military veterans ever conducted.
