
E THE RATING OF DOT WORKER FUNCTIONS AND WORKER TRAITS
Pages 315-335

From page 315...
... There are several reasons to suspect that the ratings of DOT occupations for worker functions and worker traits are subject to error. First, the factors that the DOT scales purport to measure are vaguely and ambiguously defined.
From page 316...
... The specific effects, along with their labels and a brief description of each, are given in Table E-1. STUDY DESIGN With the assistance of national office personnel, we asked six experienced job analysts at each field center, each with at least 6 months' training and experience, to rate one of two sets of job descriptions.
From page 317...
... one of three DOT occupations within eight categories of job type by GED; C, one of seven field centers; D, one of two job descriptions for a given DOT occupation; R, one of 42 individual occupational analysts.
From page 318...
... In this way, two job descriptions for each of three base title occupations were selected for eight combinations of job type by GED. (It might be noted in passing that we had to go through 92 DOT codes in order to obtain the necessary two descriptions for each of 24 occupations, yet another indication of the poor quality of the DOT source data.)
From page 319...
... The ratings task and the rating form used closely approximated the ratings made in the normal course of job analysis for the DOT, although analysts were unable to observe the jobs directly, as they would usually do. The rating task was administered to the 42 raters at their respective centers on June 11, 1979, under controlled conditions.
From pages 320-323...
... [Tables of numeric estimates; the machine-read text of these pages is not recoverable.]
From page 324...
... In these estimates, only the residual effect is considered to be error; the differences between raters and field centers are taken as valid sources of variation. The difference between reliabilities in the first and second set of estimates indicates the contribution of the job description effect per se to the total variation in the ratings.
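The partitioning described here can be sketched numerically. A minimal illustration, assuming a simple variance-components view of the ratings; the component values below are hypothetical, not taken from the appendix's tables:

```python
def reliability(valid_var, error_var):
    """Intraclass-style reliability: valid variance over total variance."""
    return valid_var / (valid_var + error_var)

# Hypothetical variance components (illustrative only):
occupation  = 4.0   # between-occupation differences (always valid)
description = 0.6   # job-description effect
residual    = 1.2   # residual (always error)

# First set of estimates: only the residual counts as error.
r1 = reliability(occupation + description, residual)

# Second set: the description effect is also treated as error,
# so the gap r1 - r2 reflects its contribution to total variation.
r2 = reliability(occupation, description + residual)

print(round(r1, 3), round(r2, 3), round(r1 - r2, 3))
```

Under this sketch, the difference r1 - r2 plays the role the text assigns to the job-description effect per se.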
From page 325...
... The especially low reliabilities of the THINGS and STRENGTH scales may well result from insufficient information in the description being rated. Of the 18 analysts who made comments at the end of the study, most noted that the descriptions contained insufficient information to rate jobs for physical capacities and environmental conditions.
From page 326...
... The reliabilities by job type (service versus manufacturing) are presented in Table E-6. These reliability estimates were calculated using the same set of assumptions about error that were used in the previous analysis.
From page 327...
... See text for explanation. (Table note b) Reliabilities for the LOCATION factor could not be calculated separately for service and manufacturing occupations because there was no variation on this factor for the manufacturing occupations.
From page 328...
... To assess the consistency of individual raters in rating each factor, we calculated the correlation across all jobs between the rating of each rater and the average rating of all other raters. Since half of the raters rated the first set of job descriptions for the 24 occupations and half rated the second set, the two groups of raters were analyzed separately.
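The rater-consistency check described above (each rater correlated, across jobs, with the mean of all remaining raters) can be sketched as follows. The data here are simulated for illustration, not the study's ratings, and numpy is assumed:

```python
import numpy as np

def rater_consistency(ratings):
    """For each rater (row), correlate their ratings across jobs (columns)
    with the mean rating of all other raters on those same jobs."""
    n_raters = ratings.shape[0]
    corrs = []
    for i in range(n_raters):
        others_mean = np.delete(ratings, i, axis=0).mean(axis=0)
        corrs.append(np.corrcoef(ratings[i], others_mean)[0, 1])
    return np.array(corrs)

# Simulated example: 21 raters x 24 occupations (the study split its
# 42 raters into two groups of 21, one per set of job descriptions).
rng = np.random.default_rng(0)
signal = rng.normal(size=24)                        # shared job-level signal
ratings = signal + rng.normal(scale=0.5, size=(21, 24))

print(rater_consistency(ratings).round(2))
```

With a strong shared signal, as simulated here, every rater correlates highly with the pooled judgment of the others; a rater with a markedly lower value would stand out as inconsistent.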
From page 329...
... As mentioned previously, however, many analysts felt that the descriptions contained insufficient information with which to assign these particular ratings. Perhaps if additional information were incorporated into the descriptions, higher levels of consistency could be achieved with the same, or only a slightly larger, number of raters.
From pages 330-333...
... [Tables of numeric estimates; the machine-read text of these pages is not recoverable.]
From page 334...
... . That is, the logic of alpha is exactly the same as the logic of the Spearman-Brown formula, with r, the average interrater reliability, being stepped up, via Spearman-Brown, to alpha, the reliability of the average of k raters.
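The Spearman-Brown step-up referred to here is a one-line formula; a small check with illustrative values (the r and k below are assumptions for the example, not figures from the study):

```python
def spearman_brown(r, k):
    """Reliability of the average of k raters,
    given the average interrater reliability r."""
    return k * r / (1 + (k - 1) * r)

# Illustrative values: average interrater correlation 0.55, 21 raters.
r, k = 0.55, 21
print(spearman_brown(r, k))   # alpha: reliability of the 21-rater average
```

With k = 1 the formula reduces to r itself, and as k grows the reliability of the averaged rating approaches 1, which is why averaging over many raters yields a far more reliable composite than any single rater provides.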
From page 335...
... Rating DOT Worker Functions and Worker Traits [running head; the adjacent tabular values are garbled in the machine-read image].

