Appendixes



The National Academies | 500 Fifth St. N.W. | Washington, D.C. 20001
Copyright © National Academy of Sciences. All rights reserved.




Appendix A

Prototype Evaluation

An initial set of studies was conducted with the O*NET™ instruments. Data were collected from incumbents in about 30 occupations out of an initial targeted sample of 70 occupations. In addition, occupational analysts rated 1,122 occupations on a subset of O*NET™ descriptors, using information from the Dictionary of Occupational Titles to guide their ratings. Return rates for the questionnaires mailed to incumbents were disappointing: although 60 percent of the incumbents who received questionnaires completed them, many establishments that initially agreed to participate failed to do so (only 27 percent actually participated). Several factors may have contributed to the low participation rates, suggesting that future data collections would benefit from shorter questionnaires, less burden on each establishment, monetary or other incentives for points of contact within each establishment and for incumbents, and greater use of organizations willing to volunteer their services or interested in applying O*NET™ results. The reliability of the questionnaires, as measured by interrater reliability coefficients, was nevertheless acceptable. In addition, the correlations between the mean ratings of incumbents and the mean ratings of analysts were sufficiently high to warrant the use

of the analysts' ratings as an interim O*NET™ database. Tables A.1 and A.2 summarize these results. A large number of analyses were conducted on the data generated by these two studies of the prototype O*NET™; they are reported in Peterson et al. (1996, 1999) and Peterson (1997). Within each O*NET™ domain, these analyses were aimed at evaluating the adequacy of the structure of the variables and scales used. A number of broader analyses were also conducted, aimed at identifying relationships between variables within and across O*NET™ domains, identifying larger groups or clusters of O*NET™ occupations, and linking O*NET™ variables to variables outside the O*NET™ model, in particular to individual assessment variables. These analyses began to integrate the large amount of available O*NET™ information about both job descriptors and occupations and to examine the relationships of O*NET™ variables and occupations to some of the available external information.

Cross-Domain Analyses

Several different analytical approaches were used in the cross-domain analyses, and each provided a somewhat different perspective on the relationships between descriptors from the various O*NET™ content domains. In general, the results strongly support the construct validity of the O*NET™ descriptors across all content domains and provide some interesting insights concerning cross-domain relationships. All of the tests of a priori cross-domain hypotheses showed that when strong correlations between O*NET™ descriptor scores were expected, strong correlations were in fact obtained. In general, work activities involving information and people had strong correlations with many cognitive ability and skill requirements. The achievement-oriented and other more cognitively oriented work styles were also strongly related to activities involving information and people, as well as to cognitive ability and skill requirements. Work styles involving interpersonal interactions were positively correlated with activities and environments that involve working with others. These relationships were summarized in the cross-domain factor analysis results, in which the first

TABLE A.1 Incumbent Interrater Agreement Coefficients for Each Scale Type

Questionnaire                                    Scale                     rk    r30
Skills                                          Level                     .79   .93
                                                Importance                .79   .93
                                                Job entry requirement     .60   .83
Knowledge                                       Level                     .86   .95
                                                Importance                .85   .94
Training, education, licensure, and experience  Instructional program     .78   .92
                                                Educational subject area  .74   .90
                                                Licensure                 .85   .95
                                                Experience                .79   .93
Generalized work activities                     Level                     .80   .92
                                                Importance                .78   .92
                                                Frequency                 .74   .90
Work context                                                              .87   .95
Organizational context                          Across occupations        .64   .84
                                                Across organizations      .45   .79
Abilities                                       Level                     .82   .93
                                                Importance                .82   .93
Occupational values                                                       .60   .82
Work styles                                     Level                     .70   .88
                                                Importance                .67   .86

Note: rk is the observed interrater agreement coefficient; r30 is the estimated interrater agreement coefficient for 30 raters.

TABLE A.2 Mean Correlations Between Incumbents' and Analysts' Ratings

Questionnaire                 Scale        ria
Skills                        Level        .74
                              Importance   .67
Knowledges                    Level        .65
                              Importance   .65
Generalized work activities   Level        .71
                              Importance   .61
                              Frequency    .53
Work context                               .64
Abilities                     Level        .70
                              Importance   .65

Note: ria is the mean correlation between incumbent and analyst mean occupation ratings.
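Estimates such as the r30 column are conventionally obtained with the Spearman-Brown prophecy formula, which projects the reliability of an average over k raters from single-rater reliability. The sketch below is illustrative only; the function names and the .30 example value are assumptions, not figures taken from the study.

```python
def spearman_brown(r1: float, k: int) -> float:
    """Reliability of the mean of k raters, given single-rater reliability r1."""
    return k * r1 / (1 + (k - 1) * r1)

def single_rater(rk: float, k: int) -> float:
    """Invert the formula: single-rater reliability implied by a k-rater aggregate."""
    return rk / (k - (k - 1) * rk)

# Even a modest single-rater agreement of .30 yields a high 30-rater estimate:
print(round(spearman_brown(0.30, 30), 2))  # 0.93
```

This illustrates why aggregate reliabilities in the .80s and .90s are attainable even when individual raters agree only moderately.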

factor was defined by descriptors related to interpersonal and managerial activities, cognitive skill requirements, and achievement-related worker characteristics. Although activities involving working with information and working with people had generally similar patterns of correlations with descriptors from other domains, the differences in these patterns support the construct validity of these composites. For example, a composite of several generalized work activities labeled "working with information" was more strongly related to technical skills and math ability, whereas a composite labeled "working with people" was more strongly related to the people-oriented work styles. Manual and physical work activities were correlated with technical skills and with psychomotor and physical ability requirements. Environmental factors from the work context domain also tended to be positively correlated with manual and physical activities and the related worker requirements. In fact, manual and physical activities, physical and psychomotor abilities, and environmental factors defined the second factor in the cross-domain factor analysis. In addition to confirming the expected relationships across domains, the analyses also generally showed that conceptually unrelated constructs do not correlate. For example, physical and psychomotor ability requirements were not significantly correlated with work activities involving information or people. The analyses also uncovered some conceptually interesting relationships. For example, office activities and related requirements tended to be negatively correlated with physical activities and related worker requirements. Technical skill requirements were negatively related to work contexts that involve interacting with the public. Finally, law enforcement knowledge requirements correlated significantly with ability requirements such as psychomotor, vision and hearing, and spatial abilities.
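The kind of cross-domain factor structure described above can be illustrated with a small principal-axis sketch. The data below are synthetic and the variable names are invented for illustration; the actual study factor-analyzed the full set of O*NET™ descriptors across domains.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500  # hypothetical number of occupations

# Synthetic occupation scores: three "cognitive/interpersonal" descriptors share
# a common factor g; a "manual/physical" descriptor loads on a separate factor f.
g = rng.normal(size=n)
f = rng.normal(size=n)
info_activities   = g + 0.3 * rng.normal(size=n)
people_activities = g + 0.3 * rng.normal(size=n)
cognitive_skills  = g + 0.3 * rng.normal(size=n)
physical_work     = f + 0.3 * rng.normal(size=n)

X = np.column_stack([info_activities, people_activities,
                     cognitive_skills, physical_work])
R = np.corrcoef(X, rowvar=False)

# First principal factor: the eigenvector of R with the largest eigenvalue,
# scaled to loadings. np.linalg.eigh returns eigenvalues in ascending order.
eigvals, eigvecs = np.linalg.eigh(R)
loadings = eigvecs[:, -1] * np.sqrt(eigvals[-1])

# The three cognitive/interpersonal descriptors load together on the first
# factor; the physical descriptor does not.
print(np.round(np.abs(loadings), 2))
```

With data generated this way, the first three loadings come out large and of the same sign while the physical descriptor's loading stays near zero, mirroring the cross-domain pattern reported in the text.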
Grouping Occupations

One advantage of the O*NET™ system is that occupations can be grouped on the basis of a variety of job descriptors, depending on the needs of the particular project. For example, occupations could be grouped on the basis of their scores on the

ability requirements if the aim is to find groups of occupations with similar ability profiles, say, for an entry-level employee selection program. Or occupations could be grouped on the basis of their generalized work activity scores if the goal is to identify occupations that require similar kinds of tasks. Although the preferred approach is to form clusters from whichever O*NET™ descriptors best serve the applied purpose at hand, there is also interest in identifying a relatively small set of general-purpose occupational groups formed on the basis of a diverse set of O*NET™ variables. These groups could be used in a general descriptive way, or they could serve as entry points for persons interested in exploring the world of work. Toward that goal, several cluster analyses of O*NET™ occupations were completed. The first set of investigations focused on methodological variations and the statistical properties of different solutions, and it served to guide a second set of more substantively oriented investigations. The central objective of this work was to evaluate the interpretability of occupational clusters generated from the occupational analysts' ratings of knowledge, skill, ability, generalized work activity, and selected work context requirements for the 1,122 occupational units. The Ward-Hook and Q-factor analysis hybrid methods were initially used to generate solutions with 50 occupational clusters. The Q-factor method provided the more interpretable solution, and it was accordingly employed to generate a more differentiated 60-cluster solution and a more parsimonious 40-cluster solution.
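A minimal sketch of Ward-style agglomerative clustering on occupation profiles is shown below. The occupation names and two-dimensional profiles are invented for illustration, and this simple merge loop stands in for the Ward-Hook and Q-factor procedures actually used in the study.

```python
# Ward's criterion: merge the pair of clusters whose fusion least increases
# the total within-cluster sum of squares.

def centroid(points):
    dim = len(points[0])
    return [sum(p[i] for p in points) / len(points) for i in range(dim)]

def ward_increase(a, b):
    ca, cb = centroid(a), centroid(b)
    d2 = sum((x - y) ** 2 for x, y in zip(ca, cb))
    return len(a) * len(b) / (len(a) + len(b)) * d2

def ward_cluster(profiles, n_clusters):
    clusters = [[p] for p in profiles.values()]
    names = [[n] for n in profiles]
    while len(clusters) > n_clusters:
        # Find the cheapest merge under Ward's criterion.
        i, j = min(
            ((i, j) for i in range(len(clusters)) for j in range(i + 1, len(clusters))),
            key=lambda ij: ward_increase(clusters[ij[0]], clusters[ij[1]]),
        )
        clusters[i] += clusters.pop(j)
        names[i] += names.pop(j)
    return names

# Hypothetical profiles: (cognitive demand, physical demand) on a 0-10 scale.
profiles = {
    "financial analyst": (9, 1),
    "statistician":      (8, 2),
    "mason":             (2, 9),
    "roofer":            (1, 8),
}
print(ward_cluster(profiles, 2))
# [['financial analyst', 'statistician'], ['mason', 'roofer']]
```

Stopping the merge loop at 60, 50, or 40 clusters instead of 2 corresponds to the differentiated, initial, and parsimonious solutions discussed above.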
Although more research is needed to evaluate the interpretability of a wider range of cluster solutions than those identified in these studies, these three solutions provide an initial set of occupational groups for descriptive and exploratory purposes.

Aggregation of Descriptor Variables

The O*NET™ descriptor variables are hierarchically organized within each of the major variable types; for example, the 46

skill variables were originally organized into a smaller number of higher-order aggregates, based on the available theoretical and empirical evidence. These higher-level organizations were evaluated using the incumbent and analyst data by computing correlations among the descriptors within each type (e.g., skills), factor analyzing the correlation matrices, and comparing the results to the original organizations of descriptors. The resulting reorganizations were called "rational/empirical models" because they combined rational and empirical analyses to arrive at the structures. Not surprisingly, the alternative rational/empirical models based on analyst data appeared to describe the underlying structure of the analyst (transitional) database better than the original content models for the ability, skill, and generalized work activity descriptors. Thus, these hierarchical structures are probably most appropriate for many uses of the transitional analyst dataset, especially for developing occupational scores for higher-level aggregates of descriptors. Among these aggregates, the second-order categories (for example, there are 16 second-order skill categories) were considered more appropriate for forming aggregates than the highest-order categories (for example, there are 6 highest-order skill categories). The latter categories are extremely broad, and aggregates formed at this level would lose too much information for most purposes. Several methods of combining descriptor scores into the higher-level scores were investigated.

Linking O*NET™ Job Analysis Information to the Assessment of Job Requirements

Professional and legal guidelines stipulate that the use of selection tests should be based on job analysis information. Consequently, information obtained from job analyses is a critical element in identifying and establishing job requirements, and the identification of important work behaviors and the employee characteristics that underlie those behaviors leads to the choice of appropriate selection tests. Research was conducted to demonstrate the applicability of the

O*NET™ database for identifying assessment instruments or selection techniques to use when measuring the aptitude requirements associated with selecting and placing employees. The design used to investigate the relationship between O*NET™ job analysis results and potential assessment variables was predicated on the job component validation (JCV) model (McCormick, 1979). Job component validation is one method of identifying potential selection tests in situations in which it is not feasible to conduct other types of validation studies, primarily because there are too few employees in the occupations for which selection procedures are to be developed. JCV involves two main hypotheses: (1) if various jobs have a given component in common, the attributes needed to fulfill the requirements of that component are the same across those jobs; and (2) the validity of a predictor of a job requirement defined by a job component is consistent across jobs. The first step in the JCV process is the development and use of an objective job analysis procedure to document critical information about work behaviors and required worker characteristics for the job or occupation in question. Next, the JCV process examines the relationships between these specific job and worker characteristics and well-defined aptitude and ability characteristics. It was hypothesized that the O*NET™ data (in particular, the occupational analyst data) could be used as a source of job analysis information in the JCV process. The Position Analysis Questionnaire (PAQ) database, together with the O*NET™ analyst database, was used to test whether O*NET™ information could accurately predict the General Aptitude Test Battery (GATB) scores and the estimates of Wonderlic test scores contained in the PAQ database. Using 249 occupations, a generally high level of accuracy was obtained in predicting these scores (e.g., cross-validated multiple correlation coefficients of .88 for verbal aptitude, .82 for clerical perception, .64 for manual dexterity, and .81 for the Wonderlic). In addition, generalized work activities rationally linked to PAQ dimensions produced multiple correlations of magnitude similar to those of generalized work activities empirically selected through cross-validated regression

analysis. Building on this research, it seems plausible that the systematic, standardized job analysis information in the O*NET™ could be incorporated into a job component validation process that helps organizations identify and select assessment systems for hiring and placement.
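The cross-validated multiple correlations reported above can be illustrated with a small regression sketch. The data below are synthetic and the number of predictors is arbitrary; this is not the analysis actually run in the study, only the general procedure (fit a regression on one subset of occupations, then correlate predicted with observed aptitude scores on a holdout).

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 249, 5  # 249 occupations, a handful of O*NET-style predictors (illustrative)

X = rng.normal(size=(n, p))
true_w = np.array([0.9, 0.6, 0.3, 0.0, 0.0])
y = X @ true_w + 0.4 * rng.normal(size=n)   # stand-in for a GATB aptitude score

# Split occupations into a calibration sample and a holdout sample.
train, test = slice(0, 170), slice(170, n)
Xtr = np.column_stack([np.ones(170), X[train]])
w, *_ = np.linalg.lstsq(Xtr, y[train], rcond=None)

# Predict the holdout occupations with the calibration weights.
Xte = np.column_stack([np.ones(n - 170), X[test]])
y_hat = Xte @ w

# Cross-validated multiple correlation: r between predicted and observed scores.
r_cv = np.corrcoef(y_hat, y[test])[0, 1]
print(round(r_cv, 2))
```

Because the weights are estimated on one sample and evaluated on another, r_cv is an honest analogue of the .88 and .82 coefficients cited above, free of the capitalization on chance that inflates same-sample multiple correlations.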