
Distributed Decision Making: Report of a Workshop (1990)

Chapter: Empirical Research for Distributed Decision Making



B) offers similar possibilities requiring detailed interdisciplinary research. In these extensions it seems essential to stretch existing theories to cover as much of the problem as possible, rather than leaving them with those pieces of the puzzle with which they are most comfortable.

EMPIRICAL RESEARCH FOR DISTRIBUTED DECISION MAKING

Applying existing theories to distributed decision-making systems requires some perception of which features of their design are most important to their operation. To some extent, that perception may come from existing research or direct experience with particular systems. However, to a large extent, it represents disciplined speculation, extrapolating from existing theory or data. Clearly, there are many questions deserving direct empirical study in order to guide theory development and provide direct input to designers. As a first approximation, these topics can be divided into those concerning the behavior of individuals acting alone, those concerning the interaction of individuals with machines, those concerning the interactions of multiple individuals, and those concerning organizational behavior, all within distributed decision-making systems (although advances here would often be of interest elsewhere). A sampling of such topics follows, drawn from the discussions at the workshop. All have implications for human factors specialists, helping them either to propose designs or to anticipate the performance of designs proposed by others.

Research Topics in Individual Behavior

Mental Models

A distinctive feature of distributed decision-making systems is the interdependence of many parts, each of whose status may be constantly changing. Operators within the system must be mindful of the possibility of changes as they decide what actions to take, what information to solicit, and what information to share. Routine reporting requirements are designed to help people keep up to date; however, they may not be appropriate for all circumstances and may create a greater stream of information than can be incorporated in operators' mental picture of the system's status. Standard operating procedures are designed to direct actions in uncertain situations; however, that very uncertainty often leaves some latitude for determining exactly what situation exists at any moment. It would be very helpful to know how (and how well) people create, maintain, and manipulate their mental pictures of such complex, dynamic systems. A point of departure for such research might be studies of people's mental models of formally defined physical systems (e.g., Chi, Glaser, and Farr, 1988; Gentner and Stevens, 1983; Johnson-Laird, 1985). The uncertainty, dynamism, and lack of closure of distributed decision-making systems would add some interesting wrinkles to that research.

Semi-analytic Cognition

The very complexity of such systems forces some serious thinking regarding the nature of decision making within them. Normative theories of decision making specify a laborious set of procedures, including listing all relevant alternative courses of action, identifying the consequences potentially associated with each action, assessing the likelihood of each consequence, determining the relative importance (or attractiveness) of the different consequences, and combining all these considerations according to a defensible decision rule (von Winterfeldt and Edwards, 1986; Watson and Buede, 1988). Basic research into cognitive processes suggests that these are difficult mental operations under any circumstances; specific research into intuitive decision-making processes shows a variety of potential deficiencies; the speeded conditions of decision making in distributed systems should severely stress whatever capabilities people bring to it. When questioned, the operators of complex systems often report a process more like pattern matching than decision making (Klein, 1986; Klein, Calderwood, and Clinton-Cirocco, 1986). That is, they figure out what is happening in some holistic sense, then take the appropriate action. Their reports include some elements of analytic decision making (e.g., they think about alternatives and weigh consequences); however, it seems to be much less than the full-blown treatment required by decision theory. Additional descriptive studies are clearly needed, in order to elaborate on these accounts and clarify how well actual behavior corresponds to this recounting. Such studies could be accompanied by theoretical treatment of the optimality (or suboptimality) of decision making by pattern matching. If there is a case to be made for pattern matching, then one could examine how it could be facilitated through the provision and display of information (and perhaps what changes in analytic decision aids might make them seem more useful). Conversely, system designers, in general, would want to know when pattern matching leads one astray. In defense systems, designers would want to know where it creates vulnerability to being led astray. Relevant literatures include those regarding the validity of introspections (e.g., Ericsson and Simon, 1980; Goldberg, 1968; Nisbett and Wilson, 1977) and the diagnostic ability of experts (e.g., Chase and Simon, 1973; Chi et al., 1988; Elstein, Shulman, and Sprafka, 1978; Goldberg, 1968; Henrion and Fischhoff, 1986).
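To make the contrast concrete, the normative procedure described above can be rendered as a small computation. The following Python sketch is purely illustrative (the actions, consequences, probabilities, and importance weights are invented for the example); it shows the kind of full enumeration that decision theory requires and that pattern matching short-circuits.

```python
# Illustrative only: a toy rendering of the normative decision procedure
# (enumerate actions, consequences, likelihoods, and importance weights,
# then combine them with a defensible rule -- here, expected utility).
# All numbers and labels are invented for the example.

actions = {
    "hold fire": {"false alarm passes": (0.7, 10), "real threat arrives": (0.3, -100)},
    "intercept": {"threat neutralized": (0.3, 50), "friendly aircraft hit": (0.7, -80)},
}

def expected_utility(consequences):
    """Combine the likelihood and (un)attractiveness of each consequence."""
    return sum(p * utility for p, utility in consequences.values())

for action, consequences in actions.items():
    print(f"{action}: expected utility = {expected_utility(consequences):.1f}")

# The decision rule: choose the action with the highest expected utility.
best = max(actions, key=lambda a: expected_utility(actions[a]))
print("recommended action:", best)
```

Even this toy version makes visible how much explicit assessment the normative route demands; the pattern-matching accounts described above replace most of it with recognition of a familiar situation.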

Decision Making in Real Life

As mentioned briefly in the preceding section, an extensive literature has developed documenting deficiencies in intuitive judgment and decision-making processes (e.g., Dawes, 1979, 1988; Fischhoff, 1988; Hogarth, 1988; Kahneman, Slovic, and Tversky, 1982). Although enough studies have been done with experts performing tasks in their fields of expertise to establish that these results are not just laboratory curiosities, the vast majority of studies have still come from artificial tasks using lay subjects. As a result, there is the natural concern about their generalizability and even about whether apparently suboptimal behavior actually serves some higher goal (e.g., reflecting the occasional errors produced by generally valid judgmental strategies, or reflecting strategies that pay off over a longer run than that inferred by the investigator) (Berkeley and Humphreys, 1982; Hogarth, 1988). When human performance is suboptimal, there is the need for training, decision aids, or planning for problems. In addition to providing additional impetus for addressing these topics of general interest, distributed decision-making systems create a variety of new circumstances that may exacerbate or ameliorate the problems. Appendix A to this report speculates on some of these possibilities, which were discussed at some length during the workshop.

Interpreting Instructions

The pattern-matching process described above seems to involve interpreting concrete real-life situations in terms of some fundamental categories that people (experts) have created through experience. A complementary task, which may have quite a different cognitive dynamic, is interpreting real-life experiences in terms of general instructions provided by those higher up in an organization. These might be contingency plans, of the form "If X happens, do Y," or rules specifying goals at a fairly high level of abstraction (e.g., "Act as though Quality is Job One"). A cognitive challenge in the former case is achieving enough fluency with the abstract categories to be able to identify them with actual contingencies in the way intended by the contingency planners. If operators are unsuccessful, then decision-making authority has not been distributed in the way intended. A cognitive challenge in the latter case is to adapt hard abstract rules to murky situations (e.g., "Should I really shut down the assembly line because the paint looks a little spotty?"). If operators are unsuccessful, then system designers have failed to create the incentive schemes that they thought they had. Points of departure for these topics include studies of categorization processes for natural and artificial categories (e.g., Murphy and Medin, 1985), interpretations of reward structures (e.g., Roth, 1987), and lay theories of system behavior (e.g., Furnham, 1988).

Research Topics in Individual-Machine Behavior

Distributed decision-making systems often execute their actions through machines (e.g., missiles, reactor control rods, automatic pilots). They always coordinate those actions through machines (e.g., telecommunications networks, automated monitoring systems, data exchanges). The human operators of the system always must ask themselves whether the machines can be trusted. Will they do what I tell them to? Are they telling me the truth about how things are? Have they transmitted the messages as they were sent? Obvious (and different) kinds of errors can arise from trusting too much and trusting too little. The designers of a system want it not only to be reliable, but also to seem as reliable as it is. In some cases, they might even want to sacrifice a little actual reliability for more realistic operator expectations (Fischhoff and MacGregor, 1986).

Expectations for the components of distributed decision-making systems presumably draw on cognitive processes akin to those used in predicting the behavior of humans and machines in other situations (e.g., Fischhoff, MacGregor, and Blackshaw, 1987; Furnham, 1988; Moray, 1987a, 1987b; Moray and Rotenberg, 1989; Murphy and Winkler, 1984; Reason, in press). An obvious research strategy is to examine the generalizability of these results. A second strategy is to study the impact of features unique to distributed decision-making systems. One such feature, shared by some undistributed systems, is what has been called the supervisory control problem (National Research Council, 1985), the need for operators to decide when an automated system has gone sufficiently astray for them to override it (Muir, 1988). In doing so, they may be expressing mistrust, not only of the system's choice of actions, but also of its reading of the real world (e.g., based on the reports of sensors and their interpretation) and its theory of how to respond to that reality. A third strategy is to record the operation of actual systems, eliciting operators' confidence in them in ways that can be subsequently calibrated against actual performance. A fourth strategy is to look at operators' interpretations of the claims made for new equipment before it is introduced and how those expectations change (for better or worse) with experience.
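The third strategy, calibrating elicited confidence against observed performance, can be made concrete with a small computation. The sketch below is illustrative only (the confidence judgments and outcomes are invented); it groups an operator's stated probabilities that a component will work into bins and compares each bin's stated confidence with the observed success rate, along the lines of calibration research on probability judgments.

```python
# Illustrative only: calibrating operators' confidence judgments against
# actual system performance. All data are invented for the example.

from collections import defaultdict

# (stated probability that the component will perform correctly, outcome)
judgments = [(0.9, True), (0.9, True), (0.8, False), (0.7, True),
             (0.9, False), (0.6, True), (0.8, True), (0.7, False)]

bins = defaultdict(list)
for confidence, worked in judgments:
    bins[round(confidence, 1)].append(worked)

for confidence in sorted(bins):
    outcomes = bins[confidence]
    observed = sum(outcomes) / len(outcomes)
    print(f"stated {confidence:.1f} -> observed {observed:.2f} (n={len(outcomes)})")

# A perfectly calibrated operator's stated values would match the observed
# frequencies; systematic gaps indicate over- or under-trust in the machine.
```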

Expert Systems

One variant on this general theme of trust and trustworthiness, discussed at some length at the workshop, concerned expert systems, that is, computerized systems intended to incorporate the wisdom of the most accomplished experts regarding a particular category of problem. These systems could be allowed to operate on their own unless overridden by operators (e.g., systems for deciding whether incoming missiles are hostile) or could be queried as to how the expert in the machine would respond to the current situation, in terms of the way it has been described to it. There is a strong technological imperative pushing the development of expert systems for an ever-increasing range of situations. This imperative should be particularly strong in distributed decision-making systems because the promise of having a proxy expert online in the machines available at remote sites seems like an obvious way of maintaining a consistent policy and centralized control throughout.

Like any other decision aid, the contribution of expert systems to system performance depends both on their capabilities and on the appropriateness of the faith placed in those capabilities. In this light, any improvements in expert systems should improve their usefulness for distributed decision-making systems, provided that their limitations are equally well understood. Specifically, operators must understand what expert systems do and how well they do it. They must know, for example, what values a system incorporates and how well those correspond to the values appropriate to their situation (e.g., "Was the expert of the system more or less cautious in reaching decisions than I want to be?"). They must also know how their world differs from that in which the expert operated (e.g., "Did the expert have more trustworthy reporting systems? Did the expert have to consider deliberate deception when interpreting reports?"). They must know if they have advantages denied to the expert (e.g., the ability to draw on additional kinds of expertise beyond that lodged in even the most knowledgeable single individual in the world). In addition to the cognitive challenge of improving the interpretability of expert systems for individual operators, there is also the institutional challenge of managing the allocation of responsibility for decisions made by expert systems (or by the operators who override them). This need creates a special case of the general problem of understanding institutional incentive structures.
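For readers unfamiliar with the mechanics, the following is a minimal sketch of the rule-based architecture common to expert systems of this kind, not a model of any particular fielded system: situation features are matched against expert-authored rules, and the system can report which rule fired, which is the sense in which it can be "queried" about how the expert would respond. Every rule and feature here is invented.

```python
# A minimal, invented sketch of a rule-based expert system: expert-authored
# rules map a reported situation to a recommendation, and the system can
# explain which rule it applied. Not a model of any fielded system.

rules = [
    # (name, condition over the reported situation, recommendation)
    ("hostile-profile", lambda s: s["iff_response"] == "none" and s["speed_kts"] > 600,
     "treat track as hostile"),
    ("ambiguous-track", lambda s: s["iff_response"] == "none",
     "request identification; continue tracking"),
    ("friendly-track", lambda s: s["iff_response"] == "valid",
     "treat track as friendly"),
]

def consult(situation):
    """Return the first matching rule's advice, with an explanation."""
    for name, condition, advice in rules:
        if condition(situation):
            return advice, f"rule '{name}' matched the situation as described"
    return "no advice", "no rule matched; human judgment required"

advice, why = consult({"iff_response": "none", "speed_kts": 650})
print(advice, "--", why)
```

Note that the advice is only as good as the situation description handed to the system, which is precisely the dependence emphasized above.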

System Stability and Operator Competence

No system in a complex, dynamic environment works exactly as planned. That is why there are still human operators, even in cases in which actions are executed by machine. The role of these operators must therefore come from knowing things that are unknown to the machine, perhaps because its formalized language cannot accommodate them, perhaps because there is inadequate theoretical understanding of the domain in which the system operates, perhaps because the theory has known flaws. In any case, the operators must have some local knowledge, or indigenous technical knowledge, or tacit knowledge allowing them to pick up where the machine leaves off (Brokensha, Warren, and Werner, 1980; Chi et al., 1988; Foucault, 1980; Moray, 1987b; Polanyi, 1962; Rasmussen, 1983, 1986). Knowing more about the quality of this unique human knowledge would obviously help in guiding the allocation of responsibility between person and machine. Knowing more about the nature of this knowledge would help in understanding the impact of changes in a distributed decision-making system on its operability and in identifying procedures for maintaining (or restoring) this kind of expertise. For example, is it better to examine potentially significant changes constantly to determine their effect on one's understanding? Or is it better to conduct periodic reviews, looking for aggregate impacts that might be more readily discernible, recognizing that one may be functioning with an outdated model between reviews? Finally, such knowledge should help manage those changes that are controllable. There may be little that one can do to retard an opponent's adoption of a new weapons system (with its somewhat unpredictable impact on the operation of one's own systems) or the spread of an illicit drug or unfamiliar virus in the population (with their effect on the interpretation of lab results). However, one may have some control over the introduction of new technologies that can reduce operators' understanding of their own system, either by disrupting the operational patterns that they know well or by reducing their direct contact with the system (a sort of intellectual deskilling). Given the imperatives of innovation, it would take quite solid documentation of operators' world views to resist changes in technology on the grounds that they would reduce that understanding.

Displaying Uncertainty

If systems are known to be imperfect, it is incumbent on their designers to convey that information. A fairly bounded design problem that came up several times during the workshop was how to display information about the uncertainty in a system. This general category includes several different kinds of uncertainty: that surrounding direct measurements (e.g., the temperature in a reactor core, the altitude of an aircraft), that surrounding interpreted data (e.g., the identity of an aircraft, its likely flight path), and that surrounding its recommendations (e.g., whether to shoot). Such displays would be attempts to create realistic expectations. Whether such good intentions achieve the intended effects is a potentially difficult design question, especially when the uncertainty arises from imperfect theories (even when those are applied to integrating perfect observations).
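As one concrete possibility, suppose an altitude sensor's error is roughly normal with a known standard deviation. The sketch below (illustrative only; the reading and error figures are invented) turns the raw reading into a display string carrying an approximate 95 percent interval rather than a bare point value, addressing the first of the three kinds of uncertainty listed above.

```python
# Illustrative only: displaying a direct measurement with its uncertainty
# instead of as a bare number. The reading and error model are invented,
# and a roughly normal error distribution is assumed.

def display_with_uncertainty(label, reading, std_dev, unit):
    """Format a reading with an approximate 95% interval (1.96 standard deviations)."""
    half_width = 1.96 * std_dev
    return (f"{label}: {reading:,.0f} {unit} "
            f"(95% interval {reading - half_width:,.0f} to {reading + half_width:,.0f})")

print(display_with_uncertainty("altitude", 9800, 150, "ft"))
# -> altitude: 9,800 ft (95% interval 9,506 to 10,094)
```

Whether operators read such intervals as intended is, as noted above, itself an empirical question.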

Research Topics in Multiple Individual Behavior

Shared Knowledge

The individuals (or units) in a distributed decision-making system are meant to have a shared concept of their mission (i.e., objectives and situation) at a fairly high level of generality that nonetheless allows them to function effectively in the restricted environment for which they have more detailed knowledge. Achieving this goal is in part a matter of training, so that distributed operators share certain common conceptions, and in part a matter of distributing current information, so that they stay in touch conceptually. Insofar as it is impossible to tell everybody everything, the designers of a system need to know the minimal level of explicit sharing needed to ensure adequate convergence of views. They also need to know what kind of information is most effectively shared (e.g., raw observations or interpretations). Conversely, they need to know the drift in perceptions that arises from lack of sharing, whether due to individuals having too much to say, having too much to listen to, or being denied the opportunity to communicate. Such knowledge would guide them in determining the capacity needed for communication channels, the fidelity needed for those channels, and the protocols for using them (e.g., when to speak, how to interpret silence). Approaches to these questions range from observational studies of the conversational norms of intact communities to mathematical models of the impact of sharing on the creation of communities (Carley, 1986a, 1986b, 1988; Grice, 1975; Hilton, 1988).
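A toy simulation suggests how such mathematical models can be put to work. In the sketch below (invented in every particular, and far simpler than the cited models), each operator holds a set of observed facts; on each round a limited number of facts are shared over the channel, and the overlap among operators' pictures of the situation is tracked, so one can vary the channel capacity and watch how quickly views converge.

```python
# Illustrative toy model: how channel capacity affects convergence of views.
# Everything here is invented and far simpler than published models.

import random

random.seed(1)
FACTS = set(range(30))                     # everything knowable about the situation
operators = [set(random.sample(sorted(FACTS), 10)) for _ in range(4)]

def overlap(ops):
    """Fraction of collectively known facts held in common by all operators."""
    return len(set.intersection(*ops)) / len(set.union(*ops))

def simulate(capacity, rounds=10):
    ops = [set(o) for o in operators]
    for _ in range(rounds):
        for sender in ops:                 # each operator broadcasts a few facts
            shared = random.sample(sorted(sender), min(capacity, len(sender)))
            for receiver in ops:
                receiver.update(shared)
    return overlap(ops)

for capacity in (1, 3, 5):
    print(f"capacity {capacity} facts/round -> shared picture: {simulate(capacity):.2f}")
```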

Barriers to Sharing

Communication involves more than just the transmission of propositional statements. People read between the lines of other people's statements. People read the vocal intonations and facial expressions accompanying statements for additional cues as to what is intended and what to believe. These are well-worked topics in social psychology, whose implications for distributed decision-making systems need to be understood (e.g., Spencer and Ekman, 1986; Fiske and Taylor, 1984).

In addition, there are special features of such systems that threaten to disrupt normal patterns of communication, interpretation, and understanding. For example, modern telecommunications may deprive users of necessary interpretative cues, a fact that may or may not be apparent to both transmitters and receivers of messages. They may disrupt the timing (or sequencing) of messages and responses, delaying feedback and reducing coordination. Restricted communications can also prevent the unintended communication of peripheral information. Consider, for example, people who come off poorly on television not because they are uncertain of their messages, but because of discomfort with the medium. Or consider whether there would be better communication between U.S. and Soviet leaders were the current hot line replaced by a higher-fidelity channel, thereby letting through more cultural cues that might be subject to misinterpretation. These questions are beginning to receive systematic attention through both controlled experiments and detailed observational studies (e.g., Hiltz, 1984; Kiesler, Siegel, and McGuire, 1984; Meshkati, in press; Sproull and Kiesler, 1986). More research is needed, directed toward the particular conditions created by distributed decision-making systems.

Distribution of Responsibility

Organizations of any sort must allocate responsibility for their various functions. For distributed decision-making systems, this allocation must, by definition, include the collection, sharing, and interpretation of information, as well as the decision to undertake various classes of actions. These are obvious parts of its design, which would exist even were there no technology involved at all. Considering technology raises a few issues calling for particular input from human factors specialists. One is how the distribution of technical knowledge about the equipment affects control over the system. Particularly under time pressure, technically qualified operators may have to take actions without adequate opportunity to consult with their superiors (e.g., the flight deck chiefs on carriers who are career noncommissioned officers, yet subordinate to officers who are there because they have more generalized expertise) (La Porte, 1984; Rochlin, in press). Even without time pressure, differences in social status may restrict communication, so that technically skilled operators are required to follow orders that do not make sense to someone on the shop floor. In either circumstance, the welfare of the system as a whole may require out-of-role behavior by its operators. Designers should want a better understanding of when such situations arise, how they can be minimized, and how to deal with their aftermath without undermining an organization's authority structure.

A rather different impact of technology on the distribution of responsibility is its effect on the opportunities for monitoring the performance of operators. Successful organizations require an appropriate balance between central control and local autonomy. Operators need some independence for both motivational and functional reasons. Motivationally, they need to feel that someone is not on their case all the time.

Functionally, they must be able to exploit whatever unique advantages their local knowledge gives them, so that they can improvise solutions to problems that are not fully understood by those at the top. Much organizational theory deals with how to achieve this balance. In practice, though, these designs probably specify no more than necessary conditions for balance, within which real people might be able to negotiate workable arrangements. Any change in technology (or in the external world) could destabilize this balance. The increased capacity for surveillance may become a recurrent destabilizing factor in distributed decision-making systems. If those at the top of a system must know everything they can know, they may then receive a good deal of information that is inadequate to assert effective control, but enough to restrict the ability of local operators to innovate. Where this happens, changes in the technology or its management are needed (Lanir, Fischhoff, and Johnson, 1988).

Research Topics in Organizational Behavior

Most of the research topics described in the preceding sections concern the reality facing individuals in distributed decision-making systems and how their performance may be improved by better design of equipment and procedures. A common assumption of these potential interventions is that an organization will be better off if the performance of its constituents is improved. While this is doubtless true in general, certain phenomena emerge most clearly at the organizational level. Although these topics may seem somewhat distant from traditional human factors work, the workshop participants felt that they were essential for deploying human factors resources effectively and for understanding the impacts (intended and unintended) of interventions.

Reliability

Organizations can fail in many ways. Knowing the ways that are most likely can focus efforts on improving design or help one to choose among competing designs. Detailed quantitative modeling of organizational reliability might highlight such vulnerabilities (Pate-Cornell, 1984, 1986): for example, which methods of distributing information are most robust with regard to noise and interruptions? Human factors specialists could not only take direction from such analyses, but also give them shape by characterizing the probability of failures arising from various operator problems (Swain and Guttmann, 1983). While the methods used for modeling mechanical systems (e.g., McCormick, 1981; U.S. Nuclear Regulatory Commission, 1983) are an obvious place to start such analyses, there is important theoretical ground to be broken by incorporating in them those forms of vulnerability that are unique to single or interacting individuals (e.g., shared misconceptions, refusals to cooperate, the ability to deceive and be deceived).
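The kind of question such modeling can address ("which methods of distributing information are most robust with regard to noise and interruptions?") can be illustrated with a toy Monte Carlo comparison. Everything in the sketch below is invented: two stylized distribution schemes, a chain of relays versus a broadcast from a central node, are compared on the fraction of units a message reaches when each link fails independently with some probability.

```python
# Illustrative toy comparison: expected fraction of units reached under two
# invented ways of distributing a message over unreliable links.

import random

random.seed(7)

def chain_reach(n_units, p_link):
    """Relay chain: the first failed link cuts off every downstream unit."""
    reached = 1                              # the originating unit
    for _ in range(n_units - 1):
        if random.random() >= p_link:        # this link failed
            break
        reached += 1
    return reached / n_units

def broadcast_reach(n_units, p_link):
    """Hub broadcast: each unit's link fails independently of the others."""
    reached = 1 + sum(random.random() < p_link for _ in range(n_units - 1))
    return reached / n_units

def average(reach, trials=100_000, n_units=6, p_link=0.9):
    return sum(reach(n_units, p_link) for _ in range(trials)) / trials

print(f"relay chain  : {average(chain_reach):.3f} of units reached on average")
print(f"hub broadcast: {average(broadcast_reach):.3f} of units reached on average")
# The same link reliability hurts the chain more, because one break
# disconnects all units downstream of it.
```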

16 DISTRIBUTED DECISION A~1~NG theoretical ground to be broken by incorporating in them those forms of vulnerabilities that are unique to single or interacting individuals (e.g., shared misconceptions, refusals to cooperate, the ability to deceive and be deceived). Conflicting Demands Many organizations face conflicting demands. For example, they may have to act in both crisis and routine situations; they may have to main- tain a public face quite at odds with their internal reality (e.g., exuding competence when all is chaos undemeath); or they may need to ad- here to procedures (or doctrine) while experimenting in order to develop better procedures. Each of these conflicting roles may call for different equipment, different procedures, different personnel, different patterns of authonty, and different incentive schemes. Mediating these conflicts is essential to organizational survival. An understanding of these conflicts is essential if human factors specialists are to see inhere they fit in and to create designs that serve both purposes. Learning Like individuals, successful organizations are continually learning, both about their environment and about themselves. Their design can facilitate or hinder both the acquisition of such -understanding and its distribution throughout the organization (both at any given tune and over time). Human factors work may have leverage on learning processes through the methods by which experience is accumulated and disseminated. Because such change and maintenance are not always part of an organization's explicit mission, they may need to be studied and identified lest they be ignored. Research Methods for Distributed Decision Maldng Each of the research topics cited in the previous section of the report has particular methodological demands. Although those could be left to the individuals undertaking each task, there are also some recurrent needs that might be addressed profitably by research that is primarily methodological in character. The workshop identified a number of such topics, which are described below. Analytical Measures As mentioned, the label distributed decision making covers a very wide variety of organizations. Although each deserves attention in its own right,

Although each deserves attention in its own right, respecting its peculiarities, the accumulation of knowledge requires the ability to make more general statements about different kinds of distributed decision-making systems. That goal would be furthered by the development of broadly applicable analytical measures. For example:

· degree of distribution of decision-making authority
· degree of distribution of information
· heterogeneity of tasks across the system/degree of human and physical asset specialization
· stability of external environment
· heterogeneity of external environments
· variation in organizational demands (e.g., across steady-state and peak-load situations)
· stability of internal environment (e.g., personnel turnover, technological change)
· irreversibility of actions
· time stress

Case Studies

Detailed case studies of actual distributed decision-making systems are needed, both for individual disciplines to make contact between their existing theories and this complex reality, and for them to make contact with one another. Establishing a database of case studies created for these purposes would help achieve these goals. Such studies would have to provide the information needed by the different relevant disciplines and avoid preemptive interpretation of what has happened. Assembling such a canonical set might begin with existing case studies, reviewing them for the features that are missing and might be supplemented. Even if individually adequate studies are currently available, the set of existing studies would have to be reviewed for sampling biases. For example, it might unduly emphasize crisis situations (in which organizations are tested) and calamities (in which they fail the test).

Instrumentation

Case studies are usually fictions to some extent because they rely on retrospective reports. Even central participants might not remember what they themselves have done in a particular situation (e.g., because it no longer makes sense, given what they now know about what was really happening) (Dawes, 1988; Ericsson and Simon, 1980; Pew, Miller, and Feehrer, 1981). Especially central participants may be reluctant to reveal what they know, in order to present themselves in a more favorable light. In addition, critical events may simply go unobserved. As a result, it would be helpful to automatically log or record ongoing system operation for the sake of later analysis. This might involve developing black boxes to record events, online interrogation procedures to question operators about what they think is happening, observational techniques cueing investigators to potentially significant occurrences, or even the creation of experimental systems, operating long enough for participants to achieve stable behavior patterns under the watchful eyes of investigators (Moray, 1986).
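The black box idea can be made concrete with a small sketch. The following is an invented, minimal illustration of automatic event logging combined with occasional operator interrogation, not a design for any particular system: each record is timestamped so that analysts can later reconstruct both what the system reported and what the operator believed at the time.

```python
# Invented, minimal sketch of a "black box" for a decision-making system:
# timestamped logging of system events plus periodic operator interrogation.

import json
import time

LOG_PATH = "operations.log"          # hypothetical log destination

def log_event(source, event, **details):
    """Append one timestamped record, so analysts can replay the sequence later."""
    record = {"t": time.time(), "source": source, "event": event, **details}
    with open(LOG_PATH, "a") as f:
        f.write(json.dumps(record) + "\n")

def interrogate_operator(prompt="What do you think is happening right now? "):
    """Capture the operator's current picture of the situation, in their own words."""
    log_event("operator", "situation_report", text=input(prompt))

log_event("sensor", "track_detected", track_id=17, bearing=243)
log_event("display", "alert_shown", track_id=17)
interrogate_operator()   # the operator's answer is logged alongside system events
```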

Capturing Mental Models

Inevitably, the study of distributed decision-making systems must rely on operators' reports of what they think is or was happening. For the foreseeable future, that is likely to be an irreplaceable source of insight regarding the subjective reality that the system creates for them. Distributed decision-making systems pose particularly difficult challenges for such elicitation. The events are complex and dynamic; participants often have an incomplete understanding of what was happening; reports require inferences, rather than mere observation; and critical events may have to be translated from visual (or even visceral) experiences to verbal statements. Improved methods are needed for eliciting what only participants can know (Gentner and Stevens, 1983; Lehner and Adelman, 1987; Moray, 1987a).

Institutional Structure for Distributed Decision Making

Although there was considerable agreement among workshop participants on the importance of studying the topics raised in this report, the question of how distributed decision making should be studied was not fully resolved. Participants agreed that significant research progress depends on the creation of a research community that allows and reinforces sustained interaction among leading scholars in the various relevant disciplines, and between these scholars and substantive experts familiar with the operation of actual systems. Consideration of distributed decision-making systems raises cutting-edge issues in many disciplines. These include both topics that are unique to such systems and topics that occur elsewhere but have yet to be addressed fully. Certainly the operators and designers of such systems would benefit from universal access to existing scientific knowledge. However, these systems really deserve the attention of creative investigators breaking new ground.

There are several obstacles to creating such conditions for university-based researchers. One is the centripetal force of academic disciplines, which typically reward scholars most for intradisciplinary work. A second obstacle is the need to master the nuances of concrete systems and additional methodological procedures, which simply demand more time than can be allotted by many individuals facing the pressures of promotion and tenure.

A third is the frequent lack of respect within the academic community for applied research and its often particularist conclusions. One response to these obstacles is to look for researchers elsewhere, in settings less subject to the constraints of a university-based culture, say, in a private research and consulting organization. This is an appropriate solution when such an organization can provide the kind of enrichment that comes from interdisciplinary exchanges comparable to those of an academic environment and from the rigor that comes from peer review. Some contract research organizations meet these standards; others do not.

These obstacles are not, of course, unique to research on distributed decision making; that makes them no less real for sponsors of research and the managers of related systems. The recognition of workshop participants that obstacles do exist did not dampen their enthusiasm for working on the problems that were raised, nor deter them from considering ways in which they might work on them inside or outside academia. Just how this might be done and which research topics should be given the highest priorities were questions beyond the scope of the workshop; they constitute the basis for an agenda for some future effort.


