Organizational Linkages: Understanding the Productivity Paradox

8
Models of Measurement and Their Implications for Research on the Linkages between Individual and Organizational Productivity

John P. Campbell

As argued in the previous three chapters, measuring productivity is a very intrusive process. It makes goals very explicit, serves to identify the work to be done, influences individual and organizational choice behavior, and helps to define what will be rewarded and punished. That is, measurement is a powerful influence on individual and organizational productivity and performance. My purpose in this chapter is to outline a substantive measurement model that has direct implications for future research on the linkage between individual and organizational productivity. The model is intended to be entirely consistent with Chapters 5 and 6 and to make the necessity of a substantive specification of information technology (IT) productivity even more explicit.

Measurement has two principal parts. The first concerns the substantive specification of the variables to be measured. The second is the specification of the rules and scaling operations by which different values of a particular measure will be assigned to different amounts of the variables under consideration. The substantive theory cannot be separated from the scaling model. Both are always present, if only by default. For example, a university may use student credit hours per faculty member as an indicator of the productivity of academic departments simply because "it is there." Use of such a measurement operation, however, implies something very concrete about what is meant by departmental productivity in that institution—teaching large classes cheaply is the way to be productive.
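To make the credit-hour example concrete, here is a minimal sketch of how the indicator rewards large, cheap classes. The department sizes and course loads are invented for illustration and are not taken from the chapter:

```python
# Illustrative only: invented numbers, not data from the chapter.
# "Student credit hours per faculty member" rewards teaching large classes cheaply.

def credit_hours_per_faculty(enrollment, credits_per_course, courses, faculty):
    """Total student credit hours generated, divided by faculty headcount."""
    return enrollment * credits_per_course * courses / faculty

# A department running a few huge lecture courses with ten faculty...
large_lectures = credit_hours_per_faculty(enrollment=300, credits_per_course=3,
                                          courses=4, faculty=10)

# ...versus one running many small seminars with the same ten faculty.
small_seminars = credit_hours_per_faculty(enrollment=15, credits_per_course=3,
                                          courses=40, faculty=10)

print(large_lectures)   # 360.0 credit hours per faculty member
print(small_seminars)   # 180.0
```

The metric declares the lecture department twice as "productive," which is exactly the implicit definition of departmental productivity the chapter warns about.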
In any measurement model, there also should be a clear distinction between the latent variable, or construct, and the observed measure. By definition, a particular variable can be measured in any of several different ways. For example, quantity of code production for a programmer/analyst could be measured through archival production records (raw or adjusted), the amount of code produced in a specified time in a simulation exercise, or a supervisor's judgment. The distinction between the latent variable and the operational measure is of fundamental importance and is the crux of this chapter. A basic axiom of measurement is that the validity of a measure can only be judged against the specific objectives or purposes of measurement (American Educational Research Association et al., 1985). In the current context, one major goal is to support the modeling of the linkage between individual and organizational productivity. That is, if both individual and organizational productivity can be measured, can the causal relationships between individual and organizational productivity be described? A second measurement objective is to assess the components of productivity that most directly reflect the effects of IT implementations. The two general goals of modeling the individual-organizational linkage and evaluating the effects of new information technologies may indeed require different measures.

Having said this, it should be noted again that measurement itself is a very powerful treatment (see Chapter 5). It defines goals and guides action. If it is based on a sound analysis of the variables to be measured and the measurement operations validly reflect the appropriate sources of variation, the effects on performance, effectiveness, or productivity can be dramatic (e.g., Pritchard et al., 1989).
Perhaps for the first time, people would know where to direct their energies and the outcomes on which rewards and punishments would be contingent. In the next section I outline a measurement model that could be used at the individual and organizational levels to guide research on measuring productivity and its antecedents. I discuss the model in the context of the individual first and then consider the organizational analog.

A SUGGESTED MEASUREMENT MODEL

As discussed above, any measurement theory must address the nature of the variable(s) to be measured and the appropriateness of the scaling procedures used to estimate "scores." Without a specification of the former, the latter cannot be evaluated. Consequently, the description of the model below focuses on the former rather than the latter.
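The latent-variable versus observed-measure distinction introduced above can be illustrated with a small simulation. The error magnitudes below are invented, and the three "methods" merely stand in for the archival, job-sample, and supervisor-rating measures mentioned earlier:

```python
# A minimal sketch (all numbers invented) of the latent-variable / observed-measure
# distinction: three imperfect measures of one latent "quantity of code production".
import random

random.seed(1)
n = 2000
latent = [random.gauss(0, 1) for _ in range(n)]  # the unobservable construct

def measure(latent_scores, error_sd):
    # observed score = latent variable + method-specific measurement error
    return [x + random.gauss(0, error_sd) for x in latent_scores]

archival  = measure(latent, 0.5)   # archival production records
simulated = measure(latent, 0.8)   # job-sample / simulation exercise
rating    = measure(latent, 1.0)   # supervisor judgment

def corr(x, y):
    mx, my = sum(x) / len(x), sum(y) / len(y)
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return sum((a - mx) * (b - my) for a, b in zip(x, y)) / (sx * sy)

# Each observed measure correlates with the latent variable, but none perfectly,
# and two observed measures agree with each other even less than either agrees
# with the construct itself.
print(round(corr(latent, archival), 2))
print(round(corr(latent, simulated), 2))
print(round(corr(latent, rating), 2))
print(round(corr(archival, rating), 2))
```

The point of the sketch is only that different operationalizations of the same construct need not agree closely, which is why the validity of any one of them must be judged against the purpose of measurement.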
Modeling the Latent Structure of IT Productivity: An Analogy from Individual Performance

What are the basic variables that constitute IT productivity and what is their basic form? That is, what is the latent structure of IT productivity? No one seems to know at the moment. Even a tentative specification has not been offered. Speculations seem to center around a variety of specific measures that happen to be available (e.g., lines of code per programmer per year). Describing in the abstract what is meant by the latent structure of the basic variables that constitute productivity in a specific industry or organization is not easy. A quick rendition of a prototype model might be helpful (see Figure 8-1). Although the prototype to be described concerns the latent structure of individual performance (not productivity), it provides a useful stepping stone for talking about how to develop a measurement model of individual and organizational productivity in the IT industry (for a fuller discussion of the prototype, see Campbell, 1991).

The specifications for modeling the latent structure of individual performance begin with a definition of performance. Performance is individual action that has relevance for the organization's goals. A measure of performance reflects how well people execute the relevant actions. Also, performance is not one thing. It is composed of a finite, but not too large, number of major components, or factors. The covariances among the components probably are not zero, but neither are they so large as to allow highly accurate prediction of performance on one component from performance on another (e.g., teaching performance versus research performance). In this model performance has eight components at the most general level (classes of things people do on the job). The eight components are intended to be sufficient to describe the top of the latent hierarchy in any specific job.
However, the eight components have different patterns of subgeneral components, and their content varies differentially across jobs. Further, any particular job might not incorporate all eight components. A brief description and discussion of each of the eight components follows.

Job-specific task proficiency reflects the degree to which the individual can perform the core substantive or technical tasks that are central to his or her job. Core tasks are the job-specific performance behaviors that distinguish the substantive content of one job from another. Constructing custom kitchens, doing word processing, designing computer architecture, driving a bus through city traffic, and directing commercial air traffic—all are categories of job-specific task content. Individual differences in how well such tasks are executed are the focus of this performance component.

FIGURE 8-1 A proposed model of job performance and its determinants.

Determinants of job performance components: PCi = f(DK, PKS, M) (a)

Declarative knowledge (DK): labels, facts, rules, principles, goals, self-knowledge
Procedural knowledge and skill (PKS): cognitive skill, psychomotor skill, physical skill, self-management skill, interpersonal skill
Motivation (M): choice to perform, choice of level of effort, choice of duration of effort

i = 1, 2, . . ., k performance components. Performance components are the latent variables that represent the substantive content of what people should be doing in a particular job (e.g., conducting undergraduate classes). They are intended to fit a hierarchical factor model and to define a set of continuums along which individual proficiency can be measured. At the top of the hierarchy k = 8.

Predictors of performance determinants (b):
DK = f[(ability, personality, interests), (education, training, experience), (aptitude/treatment) interactions]
PKS = f[(ability, personality, interests), (education, training, practice, experience), (aptitude/treatment) interactions]
M = f[whatever variables are stipulated by the chosen motivation theory]

(a) Performance differences can also be produced by situational effects, such as the quality of equipment, degree of staff support, or nature of the working conditions. For purposes of this model of performance, these conditions are assumed to be held constant (experimentally, statistically, or judgmentally).

(b) Individual differences, learning, and motivational manipulations can only influence performance by influencing declarative knowledge, procedural knowledge and skill, or the three choices.
Non-job-specific task proficiency reflects the fact that the vast majority of individuals are required to perform tasks or take actions that are not specific to their particular job. For example, in research universities with doctoral programs, the faculty must teach classes, advise students, make admissions decisions, and serve on committees. All faculty must do these things, in addition to "doing" chemistry, psychology, economics, or electrical engineering.
Written and oral communication, that is, formal oral or written presentations, is a required part of many jobs. For those jobs, the proficiency with which one can write or speak, independent of the correctness of the subject matter, is a critical component of performance.

Demonstrating effort reflects the degree to which individuals commit themselves to all job tasks, work at a high level of intensity, and keep working under adverse conditions.

Maintaining personnel discipline reflects the degree to which negative behavior, such as substance abuse at work, law or rule infractions, or excessive absenteeism, is avoided.

Facilitating peer and team performance reflects the degree to which the individual supports his or her peers, helps them with job problems, and acts as a de facto trainer. It also encompasses how well an individual facilitates group functioning by being a good model, keeping the group goal directed, and reinforcing participation by other group members. If the individual works alone, this component will have little importance. However, in many jobs, high performance on this component would be a major contribution toward the goals of the organization.

Supervision/leadership includes all the behaviors directed at influencing the performance of subordinates through face-to-face interaction and influence. Supervisors set goals for subordinates, teach them more effective methods, model the appropriate behaviors, and reward or punish in appropriate ways. The distinction between this component and the previous one is a distinction between peer leadership and supervisory leadership.

Management/administration includes the major elements in management that are distinct from direct supervision.
It includes the performance behaviors directed at articulating goals for the unit or enterprise, organizing people and resources to work on them, monitoring progress, helping to solve problems or overcome crises that stand in the way of goal accomplishment, controlling expenditures, obtaining additional resources, and representing the unit in dealings with other units. Next, performance components must be distinguished from performance determinants (the causes of individual differences on each performance component). As noted by the equation at the top of Figure 8-1, in this model individual differences on a specific performance component (PC) are a function of three major determinants: declarative knowledge (DK), procedural knowledge and skill (PKS), and motivation (M). Thus, PC = f(DK,PKS,M). Declarative knowledge is simply knowledge about facts and things. Specifically, it represents an understanding of a given task's requirements (e.g., general principles for equipment operation). Procedural knowledge and skill is attained when declarative
knowledge (knowing what to do) has been successfully combined with being able to actually do it (modified from Anderson, 1985, and Kanfer and Ackerman, 1989). The major categories of procedural knowledge and skill are (1) cognitive skills, (2) psychomotor skills, (3) physical skills, (4) perceptual skills, (5) interpersonal skills, and (6) self-management skills. As a direct determinant of performance, motivation is defined as the combined effect of three choice behaviors: (1) choice to perform, (2) choice of level of effort to expend, and (3) choice of the length of time to persist in the expenditure of that level of effort. These are the traditional representations for the direction, amplitude, and duration of volitional behavior. The important point is that the most meaningful way to talk about motivation as a direct determinant of behavior is as one or more of these three choices. Having summarized the general ingredients of the model, a few general points should be noted. First, the precise functional form of the PC = f(DK, PKS, M) equation is obviously not known and perhaps not even knowable. Further, spending years of research looking for it would probably not be of much use. Instead, the following is suggested. First, performance will not occur unless there is a choice to perform at some level of effort for some specified time. Consequently, motivation is always a determinant of performance, and a relevant question for virtually any personnel problem is how much of the variance in choice behavior can be accounted for by stable predispositions, how much is a function of the motivating properties of the situation, and how much is a function of the interaction. Performance also cannot occur unless there is some threshold level of procedural skill, and there may be a very complex interaction between procedural skill and motivation.
For example, the higher the skill level, the greater the tendency to choose to perform. Another reasonable assumption is that declarative knowledge is a prerequisite for procedural skill (Anderson, 1985). That is, before being able to perform a task, one must know what should be done. However, this point is not without controversy (Nissen and Bullemer, 1987), and it may indeed be possible to master a skill without first acquiring the requisite declarative knowledge. Nevertheless, given the current findings in cognitive research, the distinction is a meaningful one (Anderson, 1985). Performance could suffer because procedural skill was never developed, because declarative knowledge was never acquired, or because one or the other has decayed. Also, some data suggest that the abilities that account for individual differences in declarative knowledge are not the same as those that account for individual differences in procedural skills (Ackerman, 1988).
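Because the functional form of PC = f(DK, PKS, M) is unknown, any concrete version is speculative. The sketch below encodes only the two constraints just argued, namely that performance requires the choice to perform and a threshold level of procedural skill, inside an otherwise invented multiplicative combination:

```python
# Hypothetical sketch of PC = f(DK, PKS, M). The multiplicative form and the
# threshold value are assumptions for illustration, not the chapter's claim.

def performance_component(dk, pks, choose_to_perform, effort, duration,
                          skill_threshold=0.2):
    """Score on one performance component, all inputs scaled 0..1."""
    if not choose_to_perform:      # no choice to perform -> no performance
        return 0.0
    if pks < skill_threshold:      # below threshold procedural skill -> none
        return 0.0
    motivation = effort * duration  # amplitude x persistence of the choice
    return dk * pks * motivation

print(round(performance_component(0.9, 0.8, True, 0.7, 1.0), 3))   # 0.504
print(performance_component(0.9, 0.8, False, 0.7, 1.0))            # 0.0
print(performance_component(0.9, 0.1, True, 0.7, 1.0))             # 0.0
```

The two zero cases are the substantive content; everything else about the combination rule is a placeholder that research would have to replace.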
Beyond the Basic Taxonomy of Performance Components

The eight performance components above are meant to be the highest-order components that can be useful. To reduce them further would mask too much. However, as noted, not all the components are relevant for all jobs. The nature of the lower-order factors within the major components has been the subject of considerable research for some of the eight components (e.g., supervision/leadership, management/administration) and is a matter of speculation for others (e.g., communication, demonstrating effort). It is possible that the number of subfactors for the first component (core task performance) is equal to the number of different jobs in the occupational hierarchy. That is, specific determinants of performance on this component would be different for each job. This would be a very poor description of the latent structure. However, one would hardly expect the determinants of core task performance to be the same for jazz musicians, graphic artists, professional golfers, theoretical economists, the clergy, farm managers, and so on. Where between these two extremes is a more appropriate description of the subfactors of core task performance? The model assumes that the number of discriminable subfactors for this component is a manageable number, and that it would be quite possible to build a systematic body of knowledge around the major differences in the correlates for the subfactors.

Peak versus Typical Performance

In a very illustrative study of supermarket checkout personnel, Sackett et al. (1988) obtained the correlation between a standardized job sample measure (see below) administered by the researchers, and an on-line computerized record of actual performance on the very same job tasks. Both measures were highly reliable, but the correlation between the two was surprisingly low.
The authors called this a distinction between maximum and typical performance, and they reasoned that the cause of the low correlation was the uniformly high motivation generated by the research situation versus the differential motivation across individuals in the actual work setting. If such an explanation is accurate, attempts to model performance must address the issue of what to do with the distinction. The prototype model includes the two components of core task performance and level and consistency of effort in an attempt to keep these two aspects of performance ("can do" versus "will do") separate. If both components can actually be measured, so much the better. If they cannot, the typical performance measure is a fuller account of an individual's contribution.

Potential Measurement Methods

When it comes to the actual measurement of performance, the model allows only three, or possibly four, primary measurement methods.

Ratings, or the use of expert judgments to assess performance, are considered by many to be a very poor method of measuring (Cascio, 1991), but they are probably not as bad as claimed. One advantage of ratings is that their content can be directly linked to the basic performance components by straightforward content validation methods (e.g., critical incident sampling or task analysis). Also, their reliabilities are usually respectable and can be improved considerably by using more than one rater, and they are as predictable as objective effectiveness measures (Nathan and Alexander, 1988; Schmitt et al., 1984). In the context of the performance model being described, ratings have the added advantage of being able to reflect all three kinds of performance determinants (DK, PKS, and M) and can be used for any performance component (i.e., the basic eight or their subfactors). The principal concern about ratings has always been their possible contamination by systematic variance unrelated to the performance of the person being assessed. Breaking performance rating into its sequential elements—sampling observations, perceptual filtering, encoding information, storage in memory, retrieval of information for a specific purpose, and differential weighting and composite scoring of the information retrieved—shows it to be a very complex cognitive process that allows many opportunities for unsystematic variance and contamination (Landy and Farr, 1980). Considerable faith is restored by the fact that the more thorough attempts to use the method have produced credible results (Campbell, 1991).
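The claim that rating reliabilities can be improved considerably by using more than one rater follows from the Spearman-Brown formula for the reliability of an average of k parallel raters. The single-rater reliability of .45 below is an invented but plausible-looking value, used only to show the shape of the gain:

```python
# Spearman-Brown prophecy formula: reliability of the mean of k parallel raters,
# given the reliability of a single rater. The .45 starting value is assumed.

def spearman_brown(single_rater_r, k):
    """Reliability of the average of k parallel raters."""
    return k * single_rater_r / (1 + (k - 1) * single_rater_r)

# A modest single-rater reliability improves quickly as ratings are averaged.
for k in (1, 2, 4, 8):
    print(k, round(spearman_brown(0.45, k), 2))  # 1 0.45, 2 0.62, 4 0.77, 8 0.87
```

Averaging even two raters already lifts the illustrative reliability from .45 to about .62, which is why multi-rater designs are the standard recommendation for ratings.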
The second measurement method is the standardized job sample, in which the task content of the job (e.g., the supermarket checkout above) is simulated, or actually sampled intact, and presented to the assessee in a standardized format under standardized conditions. The content validity of this method can also be determined directly, but for the reasons discussed above, it may not reflect the influence of all three determinants (DK, PKS, and M), and it may be difficult to use it to measure some of the eight components. Also, there is always a question about whether the knowledge and skill required by the standardized sample are different in any major respect from those required in the actual job setting. A third measurement method would consist of directly observing
and measuring the task as it occurs in the job setting. This is what Sackett et al. (1988) were able to do for the supermarket checkout personnel. Using this method would require rather expensive observational or recording techniques for most jobs, and for complex jobs the difficulties in observation might be insurmountable.

In general, the fourth measurement method is not allowed because it equates performance and effectiveness, which includes factors outside the control of the individual. However, it may sometimes be possible to specify outcomes of performance that are almost completely under the control of the individual and to assess individuals on such indicators.

The Characteristics of Good Measurement

In general, a "good" measure has high content validity, allows the appropriate performance determinants to operate, has a high proportion of reliable variance, is not contaminated by systematic variance due to things such as race or gender bias, minimizes the influence of general method variance (e.g., the "ratings" method), minimizes the influence of instrument-specific method variance, and maximizes the estimated true correlation between the operational measure and the latent variable.

If validity is defined as the degree to which an indicator measures what it is supposed to measure, there are two major ways a measure could be invalid: (1) the definition/specifications for the variable(s) to be measured are wrong and (2) the measurement operations themselves do not capture the appropriate determinants. For example, when defined as the number of units produced divided by labor costs, individual productivity can be increased simply by cutting salaries, and the measurement operations can validly reflect the change. On the other hand, if the operational measure is insensitive to changes in output (i.e., the numerator), the measure would not be a valid one.
This might happen if cutting salaries leads to more defects in the work produced but the quality control system cannot detect them. Similarly, if reliability of measurement is defined as consistency over repeated measures, then unreliability can result from a lack of consistency in the way the variable is defined (either across time or across decision makers) or from a measurement method that contains too much random error. Put another way, the validity and reliability of measurement depend on both a good theory and a good measurement method.
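The salary-cutting example can be made concrete with a small sketch; the unit counts and labor costs are invented for illustration:

```python
# Illustration (invented numbers) of the chapter's validity point: with
# productivity defined as units produced / labor cost, cutting salaries raises
# the index even if defect-laden output slips past quality control unchanged.

def measured_productivity(units_passed, labor_cost):
    """Units counted as good output per dollar of labor cost."""
    return units_passed / labor_cost

before = measured_productivity(units_passed=1000, labor_cost=50_000)

# Salaries cut 20 percent; suppose some output is now defective, but the
# quality-control system fails to detect it, so the counted units are unchanged.
after = measured_productivity(units_passed=1000, labor_cost=40_000)

print(before)  # 0.02 units per dollar
print(after)   # 0.025 -- the index rises, "validly" reflecting only the cost cut
```

The numerator's insensitivity to the quality change is precisely the second kind of invalidity described above: the operations fail to capture a determinant that matters.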
MEASUREMENT ISSUES RAISED BY THE PROTOTYPE MODEL

The prototype model provided above has useful implications for the measurement of individual and organizational productivity. The following issues seem critical.

Goal Explication

Understanding the organizational goals to which a job should contribute is a fundamental issue for performance measurement. The same is true for measuring individual and organizational IT productivity against the goals of the IT enterprise. For example, some sources speak disparagingly about the effects (or lack of an effect) of the automatic teller machine (ATM) on bank productivity. This seems to imply that introducing a new technology to maintain market share or to keep customer satisfaction from decreasing is not a goal against which bank IT should be judged. Is this a correct inference? Certainly customers can do many things with an ATM system that they could not do before. Should this not be designated as an increase in productivity? In general, if one wants to make clear statements about the state of IT productivity and the rate at which it is changing, the goals for IT must be articulated.

The Definition of Productivity

As is often true in discussions of performance, the term productivity is frequently used without any attempt to say what it means. In the context of the individual-organizational linkage, what should it mean? Is it the value of the IT-using organization's output that is purchased by users divided by the costs of achieving that output? Is it the judged utility of the output to some larger organization or industry? Or is it the quantity or quality of the output itself? There is nothing sacred about any particular characterization. The requirement imposed by the model is to come to some agreement about the definition that would be most useful and to use it consistently. How best to do that is a research issue.
Unit of Analysis and Locus of Measurement

In the performance model, a major concern is whether individual differences on the performance measure(s) are in fact under the control of the individual. The same concern is relevant for productivity measurement. Are the quantity (quality? value?) of output and the costs of
achieving that output actually under the direct control of the individual (if the concern is individual productivity) or the organization (if the concern is organizational productivity)? If not, evaluating individuals or organizations by using such contaminated measures is counterproductive.

Components of Productivity

If performance is truly multidimensional such that the word should never be used in the singular without a modifier, the implication is that the same is true for productivity. What then are the major components of IT productivity? There are at least two complicating factors. First, IT is not a monolith. At various places in this report, the word productivity has been modified by each of the following:

- software engineering/design productivity (e.g., the completion of software designs by software engineers);
- software development productivity (e.g., the writing of code to operationalize the design);
- software production productivity (e.g., the writing of code to meet specific user demands);
- software productivity (e.g., PC World magazine's standardized comparison of WordPerfect 4.2 versus 5.1);
- hardware productivity (e.g., the output versus costs for a 486/33 processor compared with a 286/25 processor under standardized conditions);
- hardware user productivity (e.g., replacing all 286/25 personal computers with 486/33 machines in a data analysis facility); and
- software user productivity (e.g., replacing WordPerfect 4.2 with 5.1 in an administrative/clerical operation).

One can think of the individual-organizational linkage within each of these contexts. However, the nature of the dependent variables to be measured would most likely be different, and certainly the sources of variation in a productivity indicator will not be the same across contexts. Which contexts are the most important? Are all of them?
The second complicating factor is that, by analogy to the prototype model, productivity within each of these contexts is multidimensional. That is, productivity is not singular for software engineering organizations or for any of the other contexts. The ability to specify the basic variables of interest is a function of the current accumulation of information within each context and the knowledge to be gained by future
research. In some contexts there may indeed be enough information to propose a useful array of basic productivity components if the information has been summarized with such a purpose in mind. In other contexts, more research may be needed. The investigation of the productivity of computer-aided design (CAD) users in Chapter 10 is an example of such research. The eight higher-order components in the individual performance model described earlier are meant to reflect the structure of performance that seems to be represented in the current theory, practice, and research evidence. The question here is whether the available literature on IT productivity would permit at least a tentative specification of its latent structure.

Productivity Determinants

The prototype measurement model says that variation in individual performance is a function of three determinants (DK, PKS, M). They are important ingredients of the model because an operational performance measure (e.g., supervisor ratings or scores from a job task simulator) could choose to control or not control for one or more of them. For example, the measurement objective could stipulate that individual differences in motivation (the three volitional choices noted above) should not contribute to individual differences in performance scores produced by the measure, as when evaluating the effects of a skills training program. In such an instance, the measurement goal is to determine whether the specified technical skills have in fact been mastered, not whether the individual chooses to use them in the actual job setting. In general, the critical issue is whether the measure allows the relevant determinants to influence scores. Perhaps another example would help illustrate the point. It is generally agreed that many commercial airline accidents are the result of faulty "cockpit management."
That is, at a critical time there is a breakdown in task delegation, communication, and teamwork. These specific variables seem to represent supervision/leadership and management/administration in the prototype model's taxonomy of performance components. If a simulator is used to measure performance, there are two major considerations at the outset. First, does the simulator allow performance on the two components to be observed? Second, what performance determinants should be allowed to operate and which should be controlled? For example, one frequently critical determinant in cockpit management is the hesitation of a junior crew member to question the actions of the senior pilot if he or she appears to be in error. To bring this determinant into the simulator, the simulator "crew" should reflect the established air crew hierarchy. To serve a different objective, the measurement procedure could choose to control for the motivational determinants so as to evaluate the effects of knowledge or skill differentials without being confounded with motivational differences. Again, the choice of measurement operations is very dependent on the measurement objectives.

Moving from individual performance to a consideration of individual and group productivity makes the explication of determinants a bit more complicated. Individual and group productivity are surely influenced by a number of other things besides individual knowledge, skill, and motivation, and legitimately so. For example, the translation of individual output to group output is a function of such things as the nature of the task, the structure of the organization, the nature of the technology, and a number of management considerations (e.g., coordination, planning, goal definition, feedback, and control) regardless of whether they are exercised by a manager or an empowered work group. Chapter 3 provided an excellent summary of what is known, and not known, about how to model the linkage between individual and organizational productivity. Taken in concert with Chapters 5 and 6, Chapter 3 makes it possible to at least outline the basic determinants of organizational productivity. At perhaps the highest level of abstraction, the list of basic determinants might be as follows:

- Individual performance, as explained in this chapter.
- Technology, in this case IT, as discussed in Chapter 9.
- The interaction of technology and individual capabilities, in the sense that certain kinds of technologies, in combination with certain kinds of individuals, may have a much greater or lesser effect on productivity than would be expected from the sum of the main effects.
For example, new technology A may be totally beyond the capabilities of the current users, but technology B may take full advantage of those capabilities. Organizational structure, as it applies to the individual-organizational linkage. The parameters of organizational structure were discussed in Chapters 3 and 5. The interaction between technology and organizational structure, that is, some technologies may be very inappropriate and even counterproductive when implemented within certain kinds of organizational structures, and vice versa. For example, installing and maintaining a computerized project management system may detract from the performance of a nonhierarchical research and development team that interacts closely on a daily basis. The interaction between organizational structure and individual
OCR for page 206
Organizational Linkages: Understanding the Productivity Paradox capabilities, that is, individuals may need certain specific skills to be productive in certain organizational structures, or certain structures may differentially influence the motivation of certain types of people. Management functions, as in the expertise with which planning, coordination, goal setting, monitoring, control, and external representation are carried out. The overall effects of ''management" on organizational productivity are very much a function of who does it and how well. Also, there are undoubtedly a number of critical interactions among individuals, technologies, and the procedures by which the management functions are executed. For example, the empowered work group may be an effective "manager" only for individuals at a certain level of performance. Various chapters in this report provide a more detailed view of one or more of these productivity determinants. The critical issue is that the specific determinants of a specific component of organizational productivity constitute the linkage with which this report is concerned. For a productivity measure to be useful in studying linkage phenomena, it must be capable of being influenced by the appropriate determinants. IMPLICATIONS FOR EVALUATING STRATEGIES FOR IMPROVING PRODUCTIVITY If the above list captures the basic determinants of organizational productivity, a number of chapters in this report point to an array of basic strategies that could be used in an attempt to improve IT productivity by operating as one or more of the determinants. The basic change strategies might be outlined as follows: Change individuals by selecting people who would exhibit higher performance, increasing individual knowledge and skill through training, more clearly identifying the tasks to be performed and the goals to be achieved, and increasing the time individuals spend on the task and the level of effort they expend to accomplish goals. 
- Change to a more appropriate organizational structure.
- Improve the technology itself.
- Improve the management functions.

For example, one of the most important topics for research and practice
in organizational science during the past 20 to 25 years has concerned the implications of less hierarchical and more participative work groups and organizations as instruments of the management functions. On at least one occasion, even one of the largest corporations in the world (the Ford Motor Company) has been the unit of analysis (Banas, 1988). The pieces of this strategy go by various names, such as autonomous work groups, self-managed work teams, employee empowerment, and the high-involvement organization (Goodman, 1986; Goodman et al., 1988; Lawler, 1991). The central concern of this very large domain of research and practice is how the contributions of individuals to the effectiveness of the larger unit can be optimized by the strategy of decentralizing the management functions. Such a strategy should lead to better communication, coordination, and problem solving, and to higher motivation and commitment.

If a particular change strategy aimed at a particular determinant, or set of determinants, of organizational productivity fails to exhibit any effects, it is useful to keep in mind that such a result could occur for any of several reasons. Among them are the following:

- The strategy truly does not work.
- There is a certain lag between the time of implementation and the time the effect will be realized. The productivity indicator was measured too soon (see Chapter 3).
- Changes in the productivity indicator are a function of so many other things that even if the change strategy is a good one its effects will be masked (see Chapter 4).
- The productivity indicator is so unreliable that differences in scores across units, between treatment strategies, across time, and so on, reflect nothing more than unsystematic error.
- The productivity indicator is not a measure of productivity. That is, it is a reliable measure of something else.
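The unreliability problem, in particular, can be made concrete. Under classical test theory, unsystematic error attenuates any observed relationship by roughly the square root of the indicator's reliability, so a genuine effect of a change strategy can shrink toward zero on a noisy indicator. A minimal simulation of this point (all numbers invented for illustration, not drawn from the chapter):

```python
import numpy as np

# Classical test theory: an observed score X = T + E, where E is unsystematic
# error. Any correlation between a change strategy and X is the correlation
# with the true score T shrunk by sqrt(reliability), where
# reliability = var(T) / var(X). All numbers here are invented.
rng = np.random.default_rng(0)

n = 20_000                       # units measured; large, to isolate attenuation
treatment = rng.normal(size=n)   # intensity of the change strategy per unit
true_prod = 0.5 * treatment + rng.normal(size=n)   # a genuine treatment effect

def observed_correlation(reliability):
    """Correlation of the treatment with an indicator of given reliability."""
    error_var = true_prod.var() * (1.0 / reliability - 1.0)
    indicator = true_prod + rng.normal(scale=np.sqrt(error_var), size=n)
    return float(np.corrcoef(treatment, indicator)[0, 1])

r_true = float(np.corrcoef(treatment, true_prod)[0, 1])
for rxx in (1.0, 0.5, 0.1):
    print(f"reliability={rxx:.1f}  observed r={observed_correlation(rxx):.2f}  "
          f"attenuation predicts {r_true * rxx ** 0.5:.2f}")
```

With perfect reliability the simulation recovers the true correlation; at a reliability of .10 the same underlying effect is barely visible, which is one concrete route to a null result even when the strategy works.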
To cite one strategy of major interest, a question considered by a number of analysts (see Chapter 2; also Loveman, 1988) is whether the large investments in IT by firms or industries have improved the productivity of the firm or the productivity of the industry. In the opinion of many people, the investment has not yielded much of a return. However, from the perspective of the measurement model described here, most of the available data are not appropriate to the question. Besides the fact that most of the indicators used do not fit the definition of productivity (e.g., profit, return on investment), the indicators are so distant from the locus of technological change that it seems virtually impossible to interpret whatever relationship is found. So many other
determinants can intrude that interpreting a strong relationship would be almost as difficult as interpreting a weak relationship. However, the difficulty is not symmetrical because the influence of a type II error (saying that a technological change has no effect on productivity when in fact it does) operates only one way. One could fail to find a significant relationship because of a low N (e.g., too few firms in the sample), because the observed indicator is not a valid measure of the appropriate dependent latent variable, or because the productivity measure is not really under the control of the information technology, no matter how good or bad the technology is. It is perhaps little wonder that so few relationships are detected.

These issues are not unique to the implementation of information technologies. The problems associated with the implementation of change have been major topics for research and practice for many decades (e.g., Bennis et al., 1962). Chapter 4 summarized a number of the issues and demonstrated that it is unreasonable to expect a specific intervention that is directed at part of the organization to affect substantially the overall productivity of the entire organization as reflected in a summary index several steps removed from the direct effects of the new technology.

Finally, there is sometimes an implication that the goal of modeling the individual-organizational productivity linkage is to be able to determine how much of the variance in group or organizational productivity is due to individual productivity and how much is due to other sources. That is, the goal is to account for all significant sources of variance in organizational productivity and to determine the proportion of variance accounted for by each source.
Using such a comprehensive analysis-of-variance framework would pose measurement problems (e.g., specifying all the populations of interest and sampling from them) that are impossible to surmount. In reality, estimates of the variance in organizational productivity accounted for by individuals will always be a function of specific sample (i.e., organizational) characteristics that cannot be overcome, in any practical sense, by a randomized design. That is, by definition, there is no general answer to the question of how much of the variance in organizational productivity is due to variation in individual productivity. Instead, as pointed out in Chapter 3, the goal should be to learn as much as possible about how each determinant operates under various conditions. Knowledge of the interactions will always be incomplete, but knowing a fair amount about the most important effects will go a long way toward maximizing an organization's utilization of the contributions of individual workers.
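This point lends itself to a toy demonstration. In the hypothetical generative model below, the linkage between individual productivity and unit productivity is held fixed, yet the "percentage of variance due to individuals" swings from large to small purely as a function of how homogeneous the sampled organizations are:

```python
import numpy as np

# The same individual-to-organization linkage, sampled from two different
# populations of organizations, attributes very different shares of
# productivity variance to individuals. The share is a property of the
# sample, not of the linkage. Numbers are invented.
rng = np.random.default_rng(1)

def individual_share(sd_individual, sd_technology, n=50_000):
    """Fraction of unit-productivity variance carried by individual differences."""
    indiv = rng.normal(scale=sd_individual, size=n)
    tech = rng.normal(scale=sd_technology, size=n)
    productivity = indiv + tech        # identical linkage in both samples
    return float(indiv.var() / productivity.var())

# Homogeneous technology, varied workforce: individuals "explain" most variance.
print(individual_share(sd_individual=1.0, sd_technology=0.3))   # ~ 0.9
# Varied technology, homogeneous workforce: individuals "explain" little.
print(individual_share(sd_individual=0.3, sd_technology=1.0))   # ~ 0.08
```

Nothing about the individual-level process differs between the two runs; only the sampled ranges do, which is why the variance-accounted-for question has no general answer.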
SUMMARY OF THE CRITICAL IMPLICATIONS

In summary, the proposed model for the measurement of IT productivity incorporates the following notions:

- A guiding definition of IT productivity must be agreed on. For example, is the central concern performance, effectiveness, productivity in the conventional sense (Mahoney, 1988), utility, or something else? Are there different domains of IT that require a different definition?
- By whatever definition, productivity is composed of major components that are distinct enough to preclude talking about it in the singular as one thing. The better the specification of the latent structure of productivity in substantive terms, the more valid and useful measurement will become.
- Specification of the major determinants of each productivity component is of critical importance. In particular, for IT productivity, are the effects of differences in individual knowledge and skill, individual motivation, IT, job and organizational structure, and the management functions all of interest, or just some of them?
- A measure will be valid to the extent that (1) the variables to be measured are defined appropriately, (2) the content of the measure matches the content of the variable, and (3) the determinants of score differences on the measure accurately reflect the measurement objectives.
- The score variation on measures of individual productivity should be under the control of the individual. The score variation on measures of unit productivity should be under the control of the unit.
- Measurement should minimize the opportunity for productivity "scores" to be influenced by sources of contamination having nothing to do with the objectives of measurement. For example, one most likely does not want the scores on a productivity measure merely to reflect changes in the business cycle.
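The business-cycle caveat suggests a simple diagnostic that could accompany any productivity time series: regress the indicator on a cycle index and examine how much score variance the cycle absorbs. A sketch with fabricated quarterly data (the series and coefficients below are invented for illustration):

```python
import numpy as np

# A rough contamination check: regress a quarterly productivity indicator on a
# business-cycle index and ask how much of the score variance the cycle
# carries. Scores that mostly track the cycle are, in the chapter's terms,
# reliable measures of something else. All series here are fabricated.
rng = np.random.default_rng(2)

quarters = 40
cycle = np.sin(np.linspace(0.0, 4.0 * np.pi, quarters))   # stand-in cycle index
drift = 0.03 * rng.normal(size=quarters).cumsum()         # the unit's own trend
indicator = 2.0 * cycle + drift + 0.2 * rng.normal(size=quarters)

# Ordinary least squares of indicator on the cycle (with intercept).
X = np.column_stack([np.ones(quarters), cycle])
beta, *_ = np.linalg.lstsq(X, indicator, rcond=None)
residual = indicator - X @ beta

cycle_share = 1.0 - residual.var() / indicator.var()
print(f"share of indicator variance carried by the cycle: {cycle_share:.2f}")
```

When most of the variance is absorbed by the cycle, the residual series, not the raw scores, is the better candidate measure of what the unit itself controls.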
Given these implications, the next section outlines steps that should be taken to enhance understanding of the nature of IT productivity and its measurement.

RESEARCH NEEDS

The analyses of IT productivity measurement issues in this chapter point to a number of questions that can be addressed only through
additional research and information gathering. To achieve truly effective measurement of IT productivity for purposes of modeling the linkage between individual and organizational productivity and evaluating the effects of productivity improvement strategies, the following steps should be taken.

First, a representative panel of relevant experts (i.e., individuals who are very knowledgeable about the IT industry) should consider the question of which domains of IT productivity are the most critical to address. The possibilities range from the productivity of software design organizations to the productivity of operational information processing systems themselves. A taxonomy of the critical types of IT organizations or systems and their relevant goals would add considerable clarity to all these issues.

Second, for each type of IT organization, an additional expert panel should be assembled to consider all available information and formulate an initial statement of what the basic components of productivity are within that context. These would be substantive specifications, not abstractions. To proceed with measurement, the enterprise simply must know what it wants to measure. As used here, expert does not refer to academics or other experts in organizational research or measurement. The experts of interest are the people who have responsibility for using IT itself. To the fullest extent possible, the panel(s) should also attempt to specify the major determinants for each relevant component of productivity. If, for certain specific productivity components, it would be impossible for changes in a specific determinant (e.g., higher-performing individuals or new hardware) to affect productivity, such constraints should be identified. In effect, these two steps would generate a working "theory of productivity" in each context for which IT productivity is a critical issue.
The theory may change as more evidence accumulates, but there must be a starting place. Certainly, the people most involved with IT productivity issues can offer a theory of its latent structure that goes beyond identifying ad hoc measures that happen to be available and can provide some reasonable specification for the determinants of the basic IT productivity components.

Third, the above steps would feed directly into a program of research and development on productivity measurement itself. Chapters 6 and 7 outlined specific procedures for such measure development and offered relevant examples. The critical ingredient is the use of a designated group (or groups) of experts/decision makers to articulate the goals of their specific enterprise, develop a theory for what productivity is in that context, and construct valid measures that directly reflect the productivity of the unit being studied. It is the group's responsibility to make sure that the appropriate determinants are reflected by the measures and that there is no serious contamination by extraneous influences. That is, by design, output measures must be identified or developed that directly reflect the performance of the unit in question and are directly relevant for the organization's goals. There is really no way to sidestep these judgments. There is no standardized set of commercially available operational measures that can be purchased and used. This is as true for organizational productivity as it is for unit or individual productivity (Campbell, 1977).

The ProMES procedure described by Pritchard in Chapter 7 is the most direct application of these notions. Pritchard et al. (1989) go one step further and ask that the marginal utility of different levels of output on each measure be scaled, given the goals of the enterprise. This might be a very eye-opening exercise for organizations. Achieving large gains on some components of productivity may be of little value, while even small gains on some other measure might be judged to have tremendous value; and the differences in marginal utility need not be highly correspondent with a dollar metric.

In an ideal world, the specific goals and specific measures that are specified for a particular organization would be congruent with the theory of productivity articulated in the second step above. If they are not, revisions to the model should be considered. Over time, this interplay between a conceptual framework and specific measurement applications should steadily increase understanding of IT productivity and how it should be measured. One way to aid such investigation would be to develop an IT productivity measurement manual that would incorporate the working model and a set of procedures such as those suggested in Chapters 6 and 7.
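The contingency-scaling idea behind ProMES can be sketched in a few lines. Everything below (the indicator names and anchor points) is invented for illustration; Pritchard's actual procedure derives such functions from the judgments of the unit's own personnel rather than from any fixed template:

```python
import numpy as np

# A sketch of ProMES-style contingency scaling. Each raw indicator level
# passes through a piecewise-linear "contingency" onto a common
# effectiveness scale, and the effectiveness values -- not the raw outputs --
# are summed. The deliberate nonlinearity means equal raw gains can carry
# very unequal value. Indicator names and anchors are invented.
contingencies = {
    # indicator: (anchor levels, effectiveness at each anchor)
    "uptime_pct":      ([90.0, 95.0, 99.0, 99.9], [-80.0, 0.0, 60.0, 100.0]),
    "backlog_cleared": ([0.0, 50.0, 100.0],       [-30.0, 0.0, 20.0]),
}

def effectiveness(indicator, level):
    """Map one raw indicator level onto the common effectiveness scale."""
    levels, values = contingencies[indicator]
    return float(np.interp(level, levels, values))

def overall(scores):
    """Sum of effectiveness values across all indicators."""
    return sum(effectiveness(name, level) for name, level in scores.items())

# A large gain on a flat contingency vs. a small gain on a steep one:
print(overall({"uptime_pct": 99.0, "backlog_cleared": 100.0}))   # 60 + 20 = 80.0
print(overall({"uptime_pct": 99.9, "backlog_cleared": 50.0}))    # 100 + 0 = 100.0
```

In this made-up scaling, the last 0.9 point of uptime is worth 40 effectiveness points while the last 50 backlog items are worth only 20, exactly the kind of marginal-utility difference the scaling exercise is meant to surface.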
Fourth, to enhance understanding of the linkage of certain determinants to specific components of IT productivity, it would be useful to conduct two kinds of exploratory investigations, using the working theory developed in the second step above. Both kinds of studies would seek simply to describe the critical events in specific organizations that seemed to have a positive or negative effect on productivity. One type would be the straightforward case study. If an appreciable number of case studies are done and they are conducted within the same framework, the accumulated results relative to how the various determinants affect productivity should be both interpretable and informative.

The second type of study would collect accounts of critical incidents from several panels of people within each organization. The general instructions to the writers of the accounts would ask them to describe specific examples of incidents that illustrate positive or negative effects of "something" on productivity (as defined by the working model). This is a proven strategy for identifying specific individual training needs (Campbell, 1988). Taken together, the two kinds of studies should provide considerable information about why particular strategies that are used to improve IT productivity succeed or fail.

The two types of studies are exploratory in nature. By no means are they meant to supplant more controlled multivariate or experimental research, such as outlined in Chapter 9 and elsewhere. It is also true that some of the reasons why new technology does not have the intended effects are already well known. The case studies and critical incident data gathering are not meant to reinvent the wheel. The intent is simply to provide additional specific information as to how changes in IT can succeed or fail. Aggregating such information over a large number of instances may indeed lead to an expansion of the understanding of how to improve IT productivity.

REFERENCES

Ackerman, P.L. 1988. Individual differences in skill learning: An integration of psychometric and information processing perspectives. Psychological Bulletin 102:3–27.
American Educational Research Association, American Psychological Association, and National Council on Measurement in Education. 1985. Standards for Educational and Psychological Testing. Washington, D.C.: American Psychological Association.
Anderson, J.R. 1985. Cognitive Psychology and Its Implications, 2nd ed. New York: W.H. Freeman.
Banas, P.A. 1988. Employee involvement: A sustained labor/management initiative at the Ford Motor Company. Pp. 388–416 in J.P. Campbell and R.J. Campbell, eds., Productivity in Organizations. San Francisco: Jossey-Bass.
Bennis, W.G., K.D. Benne, and R. Chin, eds. 1962. The Planning of Change. New York: Holt, Rinehart, & Winston.
Campbell, J.P. 1977. On the nature of organizational effectiveness. Pp. 13–55 in P.S. Goodman and J.M. Pennings, eds., New Perspectives in Organizational Effectiveness. San Francisco: Jossey-Bass.
Campbell, J.P. 1988. Productivity enhancement via training and development. Pp. 177–216 in J.P. Campbell and R.J. Campbell, eds., Productivity in Organizations. San Francisco: Jossey-Bass.
Campbell, J.P. 1991. Modeling the performance prediction problem in industrial and organizational psychology. In M.D. Dunnette and L. Hough, eds., Handbook of Industrial and Organizational Psychology. Palo Alto, Calif.: Consulting Psychologist's Press.
Cascio, W.F. 1991. Applied Psychology in Personnel Management, 4th ed. Englewood Cliffs, N.J.: Prentice-Hall.
Goodman, P.S., ed. 1982. Change in Organizations. San Francisco: Jossey-Bass.
Goodman, P.S., ed. 1986. Designing Effective Work Groups. San Francisco: Jossey-Bass.
Goodman, P.S., R. Devadas, and T.L. Griffith. 1988. Groups and productivity: Analyzing the effectiveness of self-managing teams. Pp. 295–327 in J.P. Campbell and R.J. Campbell, eds., Productivity in Organizations. San Francisco: Jossey-Bass.
Kanfer, R., and P.L. Ackerman. 1989. Motivation and cognitive abilities: An integrative/aptitude-treatment interaction approach to skill acquisition. Journal of Applied Psychology 74:657–690.
Landy, F.J., and J.L. Farr. 1980. A process model of performance rating. Psychological Bulletin 87:72–107.
Lawler, E.E. 1991. High Involvement Management. San Francisco: Jossey-Bass.
Loveman, G.W. 1988. An Assessment of the Productivity Impact of Information Technologies. Sloan School of Management. Cambridge, Mass.: MIT Press.
Mahoney, T.A. 1988. Productivity defined: The relativity of efficiency, effectiveness, and change. Pp. 13–39 in J.P. Campbell and R.J. Campbell, eds., Productivity in Organizations. San Francisco: Jossey-Bass.
Nathan, B.R., and R.A. Alexander. 1988. A comparison of criteria for test validation: A meta-analytic investigation. Personnel Psychology 41:517–536.
Nissen, M.J., and P. Bullemer. 1987. Attentional requirements of learning: Evidence from performance measures. Cognitive Psychology 19:1–32.
Pritchard, R.D., S.D. Jones, P.L. Roth, K.K. Stuebing, and S.E. Ekeberg. 1989. The evaluation of an integrated approach to measuring organizational productivity. Personnel Psychology 42:69–115.
Sackett, P.R., S. Zedeck, and L. Fogli. 1988. Relations between measures of typical and maximum job performance. Journal of Applied Psychology 73:482–486.
Schmitt, N., R.Z. Gooding, R.A. Noe, and M. Kirsch. 1984. Meta-analyses of validity studies published between 1964 and 1982 and the investigation of study characteristics. Personnel Psychology 37:407–422.