Distributed Decision Making: Report of a Workshop (1990)

Suggested Citation: "Appendix A: The Possibility of Distributed Decision Making." National Research Council. 1990. Distributed Decision Making: Report of a Workshop. Washington, DC: The National Academies Press. doi: 10.17226/1558.

Appendix A
The Possibility of Distributed Decision Making

BARUCH FISCHHOFF AND STEPHEN JOHNSON

Modern command-and-control systems and foreign affairs operations represent special cases of a more general phenomenon: having the information and authority for decision making distributed over several individuals or groups. Distributed decision-making systems can be found in such diverse settings as voluntary organizations, multinational corporations, diplomatic corps, government agencies, and married couples managing a household. Viewing any distributed decision-making system in this broader context helps to clarify its special, and not-so-special, properties. It also shows the relevance of research and experience that have accumulated elsewhere. As an organizing device, we develop a general task analysis of distributed decision-making systems, detailing the performance issues that accrue with each level of complication, as one goes from the simplest situation (involving a single individual intuitively pondering a static situation with complete information) to the most complex (with heterogeneous, multiperson systems facing dynamic, uncertain, and hostile environments that threaten the communication links and actors in their system). Drawing from the experience of different systems and from research in areas such as behavioral decision theory, psychology, cognitive science, sociology, and organizational development, the analysis suggests both problems and possible solutions. It also derives some general conclusions regarding the design and management of such systems, as well as the asymptotic limits to their performance and the implications of those limits for an organization's overall design strategy.

Partial support for this research was provided by the Office of Naval Research, under Contract No. N00014-85-C-0041 to Perceptronics, Inc., "Behavioral Aspects of Distributed Decision Making."

A SHORT HISTORY OF DECISION AIDING

It is common knowledge that decision making is often hard. One of the clearest indications of this difficulty is the proliferation of decision aids, be they consultants, analyses, or computerized support systems (Humphreys, Svenson, and Vari, 1983; Stokey and Zeckhauser, 1978; Wheeler and Janis, 1980; von Winterfeldt and Edwards, 1986; Yates, 1989). Equally clear, but perhaps more subtle, evidence is the variety of devices used by people to avoid analytic decision making; these include procrastination, endless pursuit of better information, reliance on habit or tradition, and even the deferral to aids when there is no particular reason to think that they can do better (Corbin, 1980). A common symptom of this reluctance to make decisions is the attempt to convert decision making, which reduces to a gamble surrounded by uncertainty regarding what one will get and how one will like it, to problem solving, which holds out the hope of finding the one right solution (Montgomery, 1983).

Somewhat less clear is just why decision making is so hard. The diversity of coping mechanisms suggests a diversity of diagnoses. The disappointing quality of the help offered by decision aids suggests that these diagnoses are at least somewhat off target. The battlefield of decision aiding is strewn with good ideas that did not quite pan out, after raising hopes and attracting attention. Among the aids that remain, some persist on the strength of the confidence inspired by their proponents and some persist on the strength of the need for help, even if the efficacy of that help cannot be established.

In retrospect, it seems as though most of the techniques that have fallen by the wayside never really had a chance. There was seldom anything sustaining them beyond their proponents' enthusiasm and sporadic ability to give good advice in specific cases. The techniques drew on no systematic theoretical base and subjected themselves to no rigorous testing.

For the past 20 to 30 years, behavioral decision theory has attempted to develop decision aids with a somewhat better chance of survival (Edwards, 1954, 1961; Einhorn and Hogarth, 1981; Pitz and Sachs, 1984; Slovic, Fischhoff, and Lichtenstein, 1977; Rappoport and Wallsten, 1972). Its hopes are pinned on a mixture of prescriptive and descriptive research. The former asks how people should make decisions, while the latter asks how they actually do make decisions. In combination, these two research programs attempt to build from people's strengths while compensating for their weaknesses. The premise of the field is that significant decisions should seldom be entrusted entirely either to unaided intuition or to automated procedures. Finding the optimal division of labor requires an understanding of where people are and where they should be. The quest for that understanding has produced enough surprises to establish that it requires an integrated program of theoretical and empirical research. Common sense is not a good guide to knowing what makes a good decision or why it is hard to identify one.

Initially, behavioral decision theory took its marching orders from standard American economics, which assumes that people always know what they want and choose the optimal course of action for getting it. Taken literally, these strong assumptions leave a narrow role for descriptive research: finding out what it is that people want by observing their decisions and working backward to identify the objectives that were optimized. These assumptions leave no role at all for prescriptive research, because people can already fend quite well for themselves. As a result, the economic perspective is not very helpful for the erstwhile decision aider if its assumptions are true. However, the perceived need for decision aiding indicates that the assumptions are not true. People seem to have a lot of trouble with decision making.

The first, somewhat timorous, response of researchers to this discrepancy between the ideal and the reality was to document it. It proved not hard to show that people's actual performance is suboptimal (Edwards, 1954, 1961; Einhorn and Hogarth, 1981; Pitz and Sachs, 1984; Slovic, Fischhoff, and Lichtenstein, 1977; Rappoport and Wallsten, 1972). Knowing the size of the problem, at least under certain circumstances, is helpful in a number of ways: it can show how much to worry, where to be ready for surprises, where help is most needed, and how much to invest in that help. However, size estimates are not very informative about how to make matters better.

Realizing this limitation, researchers then turned their attention from what people are not doing (making optimal decisions) to what they are doing and why it is not working. Aside from their theoretical interest, such psychological perspectives offer several points of leverage for erstwhile decision aiders. One is that they allow one to predict where the problems will be greatest by describing how people respond to different situations. A second is that they help decision aiders talk to decision makers by showing how the latter think about their tasks. A third is that they show the processes that must be changed if people are to perform more effectively. Although it would be nice to make people over as model decision makers, the reality is that they have to be moved in gradual steps from where they are now.

As behavioral decision theory grew, two of the first organizations to see its potential as the foundation for new decision-aiding methods were the Advanced Research Projects Agency and the Office of Naval Research. Their joint program in decision analysis promoted the development of methods that, first, created models of the specific problems faced by individual decision makers and, then, relied on the formal procedures of decision theory to identify the best course of action in each. These methods were descriptive in the sense of trying to capture the subjective reality faced by the decision maker and prescriptive in the sense of providing advice on what to do.

Although it might have been tempting to take the (potentially flashy) technique and run with it, the program managers required regular interactions among their contractors, including psychologists, economists, decision theorists, operations researchers, computer scientists, consulting decision analysts, and even some practicing decision makers. The hope was to keep the technique from outrunning its scientific foundations. At any point in time, decision analysts should use the best techniques available. However, their decision aid will join its predecessors if they cannot eventually answer questions such as, How do you know that people can describe their decision problems to you? What evidence is there that this improves decision making, beyond your clients' reports that it makes them feel good? (Fischhoff, 1980).

Like other good-looking products, decision analysis has taken on a life of its own, with college courses, computer programs, and consulting firms. Its relative success and longevity may owe something to the initial attention paid to its behavioral foundations. That research probably helped both by sharpening the technique and by giving it an academic patina that enhanced its marketability. Moreover, there is still a flow of basic research looking at questions such as, Can people assess the extent of their own knowledge? Can people tell when something important is missing from the description of a decision problem? Can people describe quantitatively the relative importance of different objectives (e.g., speed versus accuracy)?1

The better work in the field, both basic and applied, carries strong caveats regarding the quality of the help that it is capable of providing and the degree of residual uncertainty surrounding even the most heavily aided decisions. Such warnings are essential, because it is hard for the buyer to beware. People have enough experience to evaluate quality in toothpaste and politicians. However, it is hard to evaluate advice, especially when the source is unfamiliar and the nature of the difficulty is unclear. Without a sharp conception of why decision making is hard, one is hard put to evaluate attempts to make it better.

1 All three of these questions refer to essential skills for effective use of decision analysis. The empirical evidence suggests that the answer to each is, "No, not really." However, there is some chance of improving their performance by properly structuring their tasks (Fischhoff, Svenson, and Slovic, 1987; Goldberg, 1968; Kahneman, Slovic, and Tversky, 1982; Slovic, Lichtenstein, and Fischhoff, 1988).
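The third of these questions, on stating the relative importance of different objectives, is usually formalized as a weighted additive value model. The sketch below is purely illustrative: the options, attributes, weights, and scores are invented, and eliciting defensible weights is exactly the skill that the evidence cited above calls into question.

def weighted_value(scores, weights):
    # Combine 0-1 attribute scores into a single value using importance weights.
    total = sum(weights.values())
    return sum(weights[attr] * scores[attr] for attr in weights) / total

# Hypothetical attributes and weights (e.g., the speed-versus-accuracy trade-off above).
weights = {"speed": 0.4, "accuracy": 0.6}
options = {
    "fast but rough":   {"speed": 0.90, "accuracy": 0.40},
    "slow but careful": {"speed": 0.30, "accuracy": 0.95},
}

for name, scores in options.items():
    print(f"{name}: {weighted_value(scores, weights):.2f}")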

WHY IS INDIVIDUAL DECISION MAKING SO HARD?

According to most prescriptive schemes, good decision making involves the following steps:

a. Identify all possible courses of action (including, perhaps, inaction).
b. Evaluate the attractiveness (or aversiveness) of the consequences that may arise if each course of action is adopted.
c. Assess the likelihood of each consequence actually happening (should each action be taken).
d. Integrate all these considerations, using a defensible (i.e., rational) decision rule to select the best (i.e., optimal) action.

The empirical research has shown difficulties at each of these steps, as described below.

Option Generation

When they think of action options, people often neglect seemingly obvious candidates. Moreover, they seem relatively insensitive to the number or importance of the omitted alternatives (Fischhoff, Slovic, and Lichtenstein, 1978; Gettys, Pliske, Manning, and Casey, 1987; Pitz, Sachs, and Heerboth, 1980). Options that would otherwise command attention are out of mind when they are out of sight, leaving people with the impression that they have analyzed problems more thoroughly than is actually the case. Those options that are noted are often defined quite vaguely, making it difficult to evaluate them properly, communicate them to others, follow them if they are adopted, or tell when circumstances have changed enough to justify rethinking the decision.2 Imprecision also makes it difficult to evaluate decisions in the light of subsequent experience, insofar as it is hard to reconstruct exactly what one was trying to do and why.

That reconstruction is further complicated by hindsight bias, the tendency to exaggerate in hindsight what one knew in foresight (Fischhoff, 1975, 1982). The feeling that one knew all along what was going to happen leads one to be unduly harsh on past decisions (if it was obvious what was going to happen, then failure to select the best option must mean incompetence) and to be unduly optimistic about future decisions (by encouraging the feeling that things are generally well understood, even if they are not working out so well).

2 For discussion of such imprecision in carefully prepared formal analyses of government actions, see Fischhoff (1984) and Fischhoff and Colic (1985).
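To make steps (a) through (d) concrete, the sketch below enumerates two hypothetical options, attaches invented utilities (step b) and probabilities (step c) to their consequences, and integrates them with the expectation rule discussed later under Option Choice. It is a minimal illustration of the prescriptive scheme, not a description of how people actually choose.

# Each option maps to its possible consequences as (probability, utility) pairs.
# All options and numbers are invented for illustration.
options = {
    "act now": [(0.7, 100), (0.3, -50)],
    "wait":    [(1.0, 10)],
}

def expected_utility(consequences):
    # Step (d): weight each consequence's utility by its likelihood and sum.
    assert abs(sum(p for p, _ in consequences) - 1.0) < 1e-9
    return sum(p * u for p, u in consequences)

for name in sorted(options, key=lambda o: expected_utility(options[o]), reverse=True):
    print(f"{name}: expected utility = {expected_utility(options[name]):.1f}")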

Value Assessment

Evaluating the potential consequences might seem to be the easy part of decision making, insofar as people should know what they want and like. Although this is doubtless true for familiar and simple consequences, many interesting decisions present novel outcomes in unusual juxtapositions. For example, two potential consequences that may arise when deciding whether to dye one's graying hair are reconciling oneself to aging and increasing the risk of cancer 10 to 20 years hence. Who knows what either event is really like, particularly with the precision needed to make trade-offs between the two? In such cases, one must go back to some set of basic values (e.g., those concerned with pain, prestige, vanity), decide which are pertinent, and determine what role to assign them. As a result, evaluation becomes an inferential problem (Rokeach, 1973).

The evidence suggests that people have trouble making such inferences (Fischhoff, Slovic, and Lichtenstein, 1980; Hogarth, 1982; National Research Council, 1981; Tversky and Kahneman, 1981). They may fail to identify all relevant values, to recognize the conflicts among them, or to reconcile those conflicts that they do recognize. As a result, the values that they express are often highly (and unwittingly) sensitive to the exact way in which evaluation questions are posed, whether by survey researchers, decision aids, politicians, merchants, or themselves. Formally equivalent versions of the same question can evoke quite different considerations and hence lead to quite different decisions. To take just three examples, (a) the relative attractiveness of two gambles may depend on whether people are asked how attractive each is or how much they would pay to play (Grether and Plott, 1979; Slovic and Lichtenstein, 1983); (b) an insurance policy may become much less attractive when its premium is described as a sure loss (Fischhoff et al., 1980; Hershey, Kunreuther, and Schoemaker, 1982); (c) a risky venture may seem much more attractive when described in terms of the lives that will be saved by it, rather than in terms of the lives that will be lost (Kahneman and Tversky, 1979; Tversky and Kahneman, 1981).

People can view most consequences in a number of different lights. How richly they do view them depends on how sensitive the evaluation process is. Questions have to be asked in some way, and how they are asked may induce random error (by confusing people), systematic errors (by emphasizing some perspectives and neglecting others), or unduly extreme judgments (by failing to evoke underlying conflicts). People appear to be ill equipped to recognize the ways in which they are manipulated by evaluation questions, in part because the idea of uncertain values is counterintuitive, in part because the manipulations prey (perhaps unwittingly) on their own lack of insight. Even consideration of their own past decisions does not provide a stable point of reference, because people have difficulty introspecting about the factors that motivated their actions (i.e., why they did things) (Ericsson and Simon, 1980; Nisbett and Wilson, 1977). Thus, uncertainty about values can be as serious a problem as uncertainty about facts (March, 1978).

Uncertainty Assessment

Although people are typically ready to recognize uncertainty about what will happen, they are not always well prepared to deal with that uncertainty (by assessing the likelihood of future events). How people do (or do not) make judgments under conditions of uncertainty has been a major topic of research for the past 15 years (Kahneman, Slovic, and Tversky, 1982). A rough summary of its conclusions would be that people are quite good at tracking repetitive aspects of their environment, but not very good at combining those observations into inferences about what they have not seen (Edwards, 1954, 1961; Einhorn and Hogarth, 1981; Pitz and Sachs, 1984; Slovic, Fischhoff, and Lichtenstein, 1977; Rappoport and Wallsten, 1972; Kahneman, Slovic, and Tversky, 1982; Brehmer, 1980; Peterson and Beach, 1967). Thus, they might be able to tell how frequently they have seen or heard about a particular cause of death, but not how unrepresentative their experience has been, leading them to overestimate risks to which they have been overexposed (Tversky and Kahneman, 1973). They can tell what usually happens in a particular situation and recognize how a specific instance is special, yet not be able to integrate those two (uncertain) facts, most often focusing on the specific information and ignoring experience (Bar Hillel, 1980). They can tell how similar a specific instance is to a prototypical case, yet not how important similarity is for making predictions, usually relying on it too much (Bar Hillel, 1984; Kahneman and Tversky, 1972). They can tell how many times they have seen an effect follow a potential cause, yet not infer what that says about causality, often perceiving correlations when none really exists (Beyth-Marom, 1982a, 1982b; Einhorn and Hogarth, 1978; Shaklee and Mimms, 1982).

In addition to these difficulties in integrating information, people's intuitive predictions are also afflicted by a number of systematic biases in how they gather and interpret information. These include overconfidence in the extent of their own knowledge (Fischhoff, 1982; Lichtenstein, Fischhoff, and Phillips, 1982; Wallsten and Budescu, 1983), underestimation of the time needed to complete projects (Armstrong, 1985; Kidd, 1970; Tihansky, 1976), unfair dismissal of information that threatens favored beliefs (Nisbett and Ross, 1980), exaggeration of personal immunity to various threats (Svenson, 1981; Weinstein, 1980), insensitivity to the speed with which exponential processes accelerate (Wagenaar and Sagaria, 1976), and oversimplification of others' behavior (Mischel, 1968; Ross, 1977).
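The normative benchmark for the kind of integration discussed above, combining what usually happens with what the specific instance suggests, is Bayes' rule. The sketch below uses invented numbers purely for illustration; its point is only that the base rate can pull the answer far below what the case-specific cue alone suggests.

# Invented numbers: a low base rate and a fairly diagnostic case-specific cue.
p_event = 0.02                  # what "usually happens": only 2% of cases are true alarms
p_cue_given_event = 0.90        # the specific instance strongly resembles a true alarm
p_cue_given_no_event = 0.10

p_cue = p_cue_given_event * p_event + p_cue_given_no_event * (1 - p_event)
posterior = p_cue_given_event * p_event / p_cue
print(f"P(event | cue) = {posterior:.2f}")   # about 0.16, far lower than the cue alone suggests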

Option Choice

Decision theory is quite uncompromising regarding the sort of rule that people should use to integrate all of these values and probabilities in the quest for a best alternative. Unless some consequences are essential, it should be an expectation rule, whereby an option is evaluated according to the attractiveness of its consequences, weighted by their likelihood of being obtained (Schoemaker, 1983). Since it has become acceptable to question the descriptive validity of this rule, voluminous research has looked at how well it predicts behavior (Feather, 1982). A rough summary of this work would be that: (a) it often predicts behavior quite well, if one knows how people evaluate the likelihood and attractiveness of consequences; (b) with enough ingenuity, one can usually find some set of beliefs (regarding the consequences) for which the rule would dictate choosing the option that was selected, meaning that it is hard to prove that the rule was not used; (c) expectation rules can often predict the outcome of decision-making processes even when they do not at all reflect the thought processes involved, so that predicting behavior is not sufficient for understanding or aiding it (Fischhoff, 1982).

More process-oriented methods revealed a more complicated situation. People seldom acknowledge using anything as computationally demanding as an expectation rule or feel comfortable using it when it is proposed to them (Lichtenstein, Slovic, and Zink, 1969). To the extent that they do compute, they often seem to use quite different rules (Kahneman and Tversky, 1979; Tversky and Kahneman, 1981; Beach and Mitchell, 1978; Payne, 1982). Indeed, they even seem unimpressed by the assumptions used to justify the expectation rule (Slovic and Tversky, 1974). To the extent that they do not compute, they use a variety of simple rules whose dictates may be roughly similar to those of the expectation rule or may be very different (Beach and Mitchell, 1978; Payne, 1982; Janis and Mann, 1977; Tversky, 1969). Many of these can be summarized as an attempt to avoid making hard choices by finding some way to view the decision as an easy choice (e.g., by eliminating consequences on which the seemingly best option rates poorly) (Montgomery, 1983).

Cognitive Assets and Biases

This (partial) litany of the problems described by empirical researchers paints quite a dismal picture of people's ability to make novel (or analytical) decisions, so much so that the investigators doing this work have been accused of being problem mongers (Berkeley and Humphreys, 1982; Jungermann, 1984; von Winterfeldt and Edwards, 1986). Of course, if one hopes to help people (in any arena), then the problems are what matter, for they provide a point of entry. In addition to meaning well, investigators in this area have also had a basically respectful attitude toward the objects of their studies. It is not people, but their performance, that is shown in a negative light. Indeed, in the history of the social sciences, the interest in judgmental biases came as part of a cognitive backlash to psychoanalysis, with its dark interpretation of human foibles. The cognitive perspective showed how biases could emerge from honest, unemotional thought processes.

Typically, these mini-theories show people processing information in reasonable ways that often work well but can lead to predictable trouble. A simple example would be relying on habit or tradition as a guide to decision making. That might be an efficient way of making relatively good decisions, but it would lead one astray if conditions had changed or if those past decisions reflected values that were no longer applicable. A slightly more sophisticated example is reliance on the "availability heuristic" for estimating the likelihood of events for which adequate statistical information is missing. This is a rule of thumb by which events are judged likely if it is easy to imagine them happening or to remember them having occurred in the past. Although it is generally true that more likely events are more available, use of the rule might lead to exaggerating the likelihood of events that have been overreported in the media or are the topic of personal worry (Tversky and Kahneman, 1973).

Reliance on these simple rules seems to come from two sources. One is people's limited mental computation capacity; they have to simplify things in order to get on with life (Miller, 1956; Simon, 1957). The second is their lack of training in decision making, leading them to come up with rules that make sense but have not benefited from rigorous scrutiny (Beyth-Marom, Dekel, Gombo, and Shaked, 1985). Moreover, people's day-to-day experience does not provide them with the conditions (e.g., prompt, unambiguous feedback) needed to acquire judgment and decision making as learned skills. Experience does often allow people to learn the solutions to specific repeated problems through trial and error. However, things get difficult when one has to get it right the first time.

WHAT CAN BE DONE ABOUT IT?

The down side of this information-processing approach is the belief that many problems are inherent in the way that people think about making decisions. The up side is that it shows specific things that might be done to get people to think more effectively.

Just looking at the list of problems suggests some procedures that might be readily incorporated in automated (online) decision aids (as well as their low-tech human counterparts). To counter the tendency to neglect significant options or consequences, an aid could provide checklists with generic possibilities (Beach, Townes, Campbell, and Keating, 1976; Hammer, 1980; Janis, 1982). To reduce the tendency for overconfidence, an aid could force users to list reasons why they might be wrong before assessing the likelihood that they are right (Koriat, Lichtenstein, and Fischhoff, 1980). To discourage hindsight bias, an aid can preserve the decision makers' history and rationale (showing how things once looked) (Slovic and Fischhoff, 1977). To avoid incomplete value elicitation, an aid could force users to consider alternative perspectives and reconcile the differences among them. At least these seem like plausible procedures; whether they work is an empirical question. For each intervention, one can think of reasons why it might not work, at least if done crudely (e.g., long checklists might reduce the attention paid to individual options, leading to broad but superficial analysis).

Modeling Languages

One, or the, obvious advantage of computerized aids is their ability to handle large amounts of information rapidly. The price paid for rapid information handling is the need to specify a model for the computer's work. This model could be as simple as a list of key words for categorizing and retrieving information or as complex as a full-blown decision analysis (Behn and Vaupel, 1983; Brown, Kahr, and Peterson, 1974; Keeney and Raiffa, 1976; Raiffa, 1968) or risk analysis (McCormick, 1981; U.S. Nuclear Regulatory Commission, 1983; Wilson and Crouch, 1982) within which all information is incorporated. However user friendly an aid might be, using a model means achieving a degree of abstraction that is uncommon for many people. For example, even at the simplest level, it may be hard to reduce a substantive domain to a set of key words. Moreover, any model is written in something like a foreign language, with a somewhat strange syntax and vocabulary. Successful usage means being able to translate what one knows into terms that the modeling language (and the aid) can understand. Any lack of fluency on the part of the user or any restrictions on the language's ability to capture certain realities reflects a communication disorder limiting the aid's usefulness.

For example, probabilistic risk analyses provide a valuable tool for figuring out how complex technical systems, such as nuclear power or chemical plants, operate and how they will respond to modifications. They do this by representing the system by the formal connections among its parts (e.g., showing how failures in one sector will affect performance in others).
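For instance, the failure probability of a simple system can be propagated from component-level estimates and examined for sensitivity. The sketch below is a minimal illustration, not a real analysis: the component names and probabilities are invented, and independent failures are assumed.

def system_failure(p_pump, p_valve_1, p_valve_2):
    # The hypothetical system fails if the pump fails OR both redundant valves fail.
    p_valves = p_valve_1 * p_valve_2          # both members of the redundant pair
    return 1 - (1 - p_pump) * (1 - p_valves)  # either failure mode suffices

baseline = system_failure(0.01, 0.05, 0.05)
doubled = system_failure(0.02, 0.05, 0.05)   # sensitivity to the pump estimate
print(f"baseline failure probability: {baseline:.5f}")
print(f"with the pump estimate doubled: {doubled:.5f}")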

Both judgment and statistics are used to estimate the model's parameters. In this way, it is possible to pool the knowledge of many experts, expose that knowledge to external review, compute the overall performance of the system, and see how sensitive that performance is to variations (or uncertainties) in those parameters. (These are just the sort of features that one might desire in an aid designed to track and project the operation of a military command.) Yet current modeling languages require the experts to summarize their knowledge in quantitative and sometimes unfamiliar terms, and they are ill suited to represent human behavior (such as that of the system's operators) (Fischhoff, 1988). As a result, the model is not reality. Moreover, it may differ in ways that the user understands poorly, just as the speaker of a foreign language may be insensitive to its nuances. At some point, the user may lose touch with the model without realizing it. The seriousness of this threat with particular aids is an empirical question that is just beginning to receive attention (National Research Council, 19~).

Skilled Judgment

Whether or not one relies on an aid, a strong element of judgment is essential to all decision making. With unaided decision making, judgment is all. With an aid, it is the basis for creating the model, estimating its parameters, and interpreting its results. Improving the judgments needed for analysis has been the topic of intensive research, with moderately consistent (although incomplete) results, some of them perhaps surprising (Fischhoff, 1982). A number of simple solutions have proven rather ineffective. It does not seem to help very much to exhort people to work harder, to raise the stakes hinging on their performance, to tell them about the problems that other people (like them) have with such tasks, or to provide theoretical knowledge of statistics or decision theory. Similarly, it does not seem reasonable to hope that the problems will go away with time or when the decisions are really important. Judgment is a skill that must be learned. Those who do not get training or who do not enjoy a naturally instructive environment (e.g., one that provides prompt, unambiguous feedback and rewards people for wisdom rather than, say, for exuding confidence) will have difficulty going beyond the hard data at their disposal.

Although training courses in judgment per se are rare, many organized professions hope to inculcate good judgment as part of their apprenticeship programs. This learning is expected to come about as a by-product of having one's behavior shaped by masters of the craft (be they architects, coaches, officers, or graduate advisers). What is learned is often hard to express in words and hence must be attributed to judgment (Polanyi, 1962). What is unclear is whether that learning extends to new decisions, for which the profession has not acquired trial-and-error experience to shape its practices.
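The prompt, unambiguous feedback mentioned above can take the form of a calibration table, comparing the confidence people state with how often they turn out to be right at each confidence level. The records below are invented; a real table would be built from logged judgments and their outcomes.

from collections import defaultdict

# Invented records: (stated probability of being correct, whether the judgment was correct).
judgments = [
    (0.6, True), (0.6, False), (0.8, True), (0.8, True),
    (0.8, False), (1.0, True), (1.0, False),
]

by_confidence = defaultdict(list)
for stated, correct in judgments:
    by_confidence[stated].append(correct)

for stated in sorted(by_confidence):
    outcomes = by_confidence[stated]
    observed = sum(outcomes) / len(outcomes)   # proportion correct at this confidence level
    print(f"stated {stated:.1f}   observed {observed:.2f}   (n={len(outcomes)})")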

When attempts have been made to improve judgment, a number of approaches have proven promising (Fischhoff, 1982). One is to provide the conditions that learning theory holds to be essential for skill acquisition; for example, weather forecasters show great skill in assessing the confidence to be placed in their precipitation forecasts, for which they receive prompt, pertinent, and unambiguous feedback that they are required to consider (Murphy and Winkler, 1984). If these conditions do not exist in life, then they might be simulated in the laboratory; for example, confidence assessment has been improved by giving concentrated training trials (Lichtenstein and Fischhoff, 1980). A second seemingly effective approach is to restructure how people perform judgment tasks, so as to enable them to use their own minds more effectively. For example, hindsight bias may be reduced by forcing people to imagine how events that did happen might not have happened (Slovic and Fischhoff, 1977); availability bias may be reduced by encouraging people to search their minds in a variety of ways so as to get a more diverse set of examples (Behn and Vaupel, 1983; Brown, Kahr, and Peterson, 1974; Keeney and Raiffa, 1976; Raiffa, 1968); new evidence may be interpreted more appropriately by having people consider how it might be consistent with hypotheses that they doubt (Fischhoff and Beyth-Marom, 1983; Kahneman and Tversky, 1979). Developing such procedures requires an understanding of how people do think as well as of how they should think. Finally, there is the obvious suggestion to train people in the principles of decision making, along with exercises in applying them to real problems. Researchers working in the area typically feel that they themselves have learned something from observing everyone else's problems. Whether this is an accurate perception and whether similar understanding can be conferred on others is an empirical question.

HOW IS DISTRIBUTED DECISION MAKING DIFFERENT?

If life is hard for single individuals wrestling with their fate, then what happens in command-and-control systems, with interdependent decision makers responsible for incompletely overlapping portions of complex problems? Addressing these situations is a logical next step for behavioral decision theory, although not one that it can take alone. Although the essential problem in command-and-control is still individuals pondering the unknown, there are now rigid machines, rules, and doctrines in the picture, along with more fluid social relations. These require the skills of computer scientists, human factors specialists, substantive experts, and organizational theorists.

What follows is our attempt to pull these perspectives together into a framework for analyzing command-and-control systems. In doing so, we have characterized the problem more generally as distributed decision making, defined as any situation in which decision-making information is not completely shared by those with a role in shaping the decision.

The set of systems having this property includes high-tech examples, such as air traffic control and satellite management of a multinational corporation; mid-tech examples, such as forest fire fighting and police dispatch; low-tech examples, such as a volunteer organization's coordination of its branches' activities or a couple's integration of their childrearing practices or their use of a common checking account; as well as the command of a military operation or a far-flung foreign service. We choose to look far afield for examples, in the belief that it is possible to understand one's own situation better by looking at the circumstances of others. They may do things so well or so poorly as to cast the viability of different strategies in sharp relief, as was the goal of In Search of Excellence (or might be the goal of its complement, In Search of Dereliction). Synthesizing the experience of diverse systems may highlight the significant dimensions in characterizing and designing other systems.

Although geographical separation is often considered a distinguishing characteristic of distributed decision making, there can be substantial difficulties in coordinating the (current and past) information of individuals in the same room or tent. (In the 1960s, some of these problems were called "failures to communicate.") As a result, we leave the importance of different kinds of separation as a matter for investigation. Although the distribution of decision-making authority might seem to be another distinguishing characteristic, we believe that it is trivially achieved in almost all human organizations. Few are able, even if they try, to centralize all authority to make decisions. It seems more productive to look at how decision-making authority is distributed. For example, even when there are focal decision makers who choose courses of action at clearly marked points in time, their choice is often refined through interactions with their subordinates, shaped by the information (and interpretations) reported by others, and constrained by the predecisions of their predecessors and superiors (e.g., avoid civilian casualties, avoid obviously reversing directions).

From this perspective, a useful uniting concept seems to be that of a shared model. Those living in distributed decision-making systems have to keep in mind some picture of many parts of that system, for example, how external forces are attempting to affect the system, what communications links exist within the system, what the different actors in the system believe about its internal and external situation, and what decisions they face (in terms of the options, values, constraints, and uncertainties). These beliefs are sometimes dignified by terms like mental representation. It seems unlikely that anyone (in our lifetimes, at least) will ever actually observe what goes on in people's minds with sufficient clarity to be able to outline the contents. What investigators can see is a refined and disciplined version of what other people can see, those aspects of people's beliefs that they are able to communicate.

That communication might be in terms of unrestricted natural language, in terms of the restricted vocabulary of a formal organization, or in terms of a structured modeling language. In all cases, though, people need to translate their thoughts into some language before those thoughts can be shared with others. Their ability to use the language sets an upper limit on the system's coordination of decision making, as do the system's procedures for information sharing. Looking at the language and procedures provides a way of characterizing a system's potential (and anticipating its problems). Looking at the knowledge that has been shared provides a way to characterize its current state of affairs (and anticipate its problems).

A reasonable question at this stage is why anyone should expect anything useful out of this effort or, indeed, why we should get into this topic rather than concentrate on more tractable problems. Command-and-control theory is a graveyard of good intentions. The stakes are so high that funders and fundees are willing to go with very long shots in hopes of producing some useful results. Yet the complexity of the problem is such that many of its theories appear almost autistic, as though the attempt to make sense of it leads researchers down a path to convoluted and idiosyncratic theorizing. Our hopes for beating these long odds lie in taking a fresh look at the problem, in bringing to it disciplinary perspectives that have not typically been combined (e.g., psychology, human factors, sociology, political science, as well as military science), in having experience with a variety of other distributed decision-making systems (e.g., public administration, technology management, finance, voluntary organizations), and in enjoying messy problems (which should keep us from reaching premature closure).

HOW DO DISTRIBUTED DECISION-MAKING SYSTEMS DIFFER?

At the core of distributed decision-making systems are the people who have to get the work done. As a result, a natural way to begin an analysis of such systems is with the reality faced by those individuals, wherever they find themselves within it. Sensible complications are to look, then, at the issues that arise when decision making is distributed over two individuals and, finally, when multiple individuals are involved.

The form of what follows is a task analysis, which is the standard point of entry for human factors engineers, the specialists concerned with the performance of people in technical systems. Such analyses characterize systems in terms of their behaviorally significant dimensions, which must be considered when designing the system and adapting people to it (Perrow, 1984). The substance of the present analysis follows most work on human factors in general, and its decision-making branch in particular, by emphasizing cognitive aspects of performance.

It asks how people understand and manipulate their environment under reasonably unemotional conditions. Insofar as pressure and emotion degrade performance, problems that are unresolved at this level constitute a performance ceiling. For example, people need to stretch themselves to communicate at all. The risk that they may not stretch enough or in the right direction to be understood is part of the human condition. These risks can, however, be exacerbated (in predictable ways) by designers who do things such as compose multiservice task forces without making basic human communication and understanding a fundamental command concern.

Single-Person Systems

The simplest situation faced by an individual decision maker involves a static world about which everything can be known and no formal representation of knowledge is required. The threats to performance in this basic situation are those identified in the research on individual decision making (described above). They include the difficulties that arise in identifying relevant options, assembling and reviewing the knowledge that should be available, determining the values that are pertinent and the trade-offs among them, and integrating these pieces in an effective way. The aids to performance should also be those identified in the existing literature, such as checklists of options, multimethod value elicitation procedures, and integration help.3

A first complication for individual decision making is the addition of uncertainty. With it come all the difficulties of intuitive judgment under uncertainty, such as the misperception of causality, overconfidence in one's own knowledge, and heuristic-induced prediction biases. The potential solutions include training in judgmental skills, restructuring tasks so as to overcome bad habits, and keeping a statistical record of experience so as to reduce reliance on memory. A second complication is going from a static to a dynamic external world. With it come new difficulties, such as undue adherence to currently favored hypotheses, as well as the accompanying potential solutions, such as reporting forms that require consideration of how new evidence might be consistent with currently unfavored hypotheses. A third complication is use of a formal modeling language for organizing knowledge and decision making.

3 In the absence of a formal model, computational help is impossible. However, there are integration rules following other logics, such as flow charts, hierarchical lists of rules, or policy-capturing methods for determining what consistency with past decisions would dictate (Dawes, 1979; Goldberg, 1968; Meehl, 1954; Slovic, 1972).

One associated problem is the users' inability to speak the modeling language; it might be addressed by using linguists or anthropologists to develop the language and train people in it. Another associated problem is the language's inability to describe certain situations (such as those including human factors or unclear intentions); it might be addressed by providing guidelines for overriding the conclusions produced from models using the language.

Two-Person Systems

Adding a second person to the system raises additional issues. However, before addressing them, it is important to ask what happens to the old issues. That is, are they eliminated, exacerbated, or left unchanged by the complications engendered by each kind of two-person system?

In behavioral terms, the simplest two-person system involves individuals with common goals, common experience, and a hardened communications link. Thus, they would have highly shared models and the opportunity to keep them consistent. Having a colleague can reduce some difficulties experienced by individuals. For example, information overload can be reduced by dividing information-processing responsibilities, and some mistakes can be avoided by having someone to check one's work. But having someone who thinks similarly in the system may just mean having two people prone to the same judgmental difficulties. It might even make matters worse if they drew confidence from the convergence of their (similarly flawed) judgmental processes.

More generally, agreement on any erroneous belief is likely to increase confidence without a corresponding increase in accuracy, perhaps encouraging more drastic (and more disastrous) actions. Risky shift is a term for groups' tendency to adopt more extreme positions than do their individual members (Davis, 1982; Myers and Lamm, 1976); groupthink is a term for the social processes that promote continued adherence to shared beliefs (Janis, 1972). Restricting communication would be one way to blunt these tendencies, however, at the price of allowing the models to drift apart, perhaps without the parties realizing it. Even with unrestricted communication, discrepant views can go a long while without being recognized. False consensus refers to the erroneous belief that others share one's views (Nisbett and Ross, 1980); pluralistic ignorance refers to the erroneous belief that one is the odd person out (Fiske and Taylor, 1984). Both have been repeatedly documented; both can be treated if the threat is recognized and facing the discrepancy is not too painful.

Such problems arise because frequency of interaction can create a perception of completely shared models, when sharing is inevitably incomplete. An obvious complication in two-person distributed decision-making systems


advantage that accrues from them is the ability to pool the knowledge of individuals with different experiences (e.g., observers on different fronts) in a single place. An additional disadvantage is that the language may suppress the nuances of normal communication that people depend on to understand and make themselves understood. It is unclear what substitutes people will find (or even if they will recognize the need to find them) when deprived of facial expression, body language, intonation, and similar cues. These problems may be further exacerbated when knowledge comes to reside in a model without indication of its source, so that model users do not know who said it, much less how it was said. Finally, models that cannot express the appropriate level of confidence for a subordinate's report probably cannot generate the confidence needed to follow a superior's choice of action, making it hard to lead through electronic mail.

The great memory capacity of automated aids makes it possible, in principle, to store such information. However, there are human problems both in getting and in presenting those additional cues. On the input side, one worries about people's inability to characterize the extent of their own knowledge, to translate it into the precise terms demanded by a language, or to see how they themselves relied on nonverbal cues to be understood. On the output side, one worries about creating meaningful displays of such qualifications. If shown routinely, they may clutter the picture with "soft" information which, in any case, gets lost when users attempt to integrate such uncertainties with best guesses at what is happening (Peterson, 1973). If available on request, qualifications may slip the minds of decision makers who want clear-cut answers to their questions. Because it is so difficult to tell when qualifications are not in order, such systems require careful design and their operators require careful training.4 Unless users have demonstrated mastery of the system, it may be appropriate to sacrifice sophistication for fluency.5

4 Samet (1975) showed that a commonly used military system required information to be characterized in terms of reliability and validity even though these concepts were not distinguished in the minds of users.

5 A case to consider in this regard is the hot line between the United States and the Soviet Union. Although it might seem like a technical advance to upgrade the quality of the line so that the leaders could talk to one another directly (e.g., through videophone), perhaps the quality of the communication is better with the current telex systems with human operators who spend eight hours a day "talking" to one another. By contrast, given the differences between the cultures, who knows what unintentional cues would be sent by the leaders through their voices, postures, intonation, etc.?

A final, behaviorally significant complication that can arise with two-person distributed decision-making systems is inconsistencies in the goals of the parties. They may have similar values, but differ over the goals relevant to a particular case; they may have a common opponent, yet stand to share differently from the spoils of victory; they may strive for power within the system, while still concerned about its ability to meet external challenges.6 Like other complications, these can be useful. For example, disagreement over the application of general values can uncover labile values that might be undetected by an individual; competition might sharpen the wits of the competitors; by some accounts, conflict itself is part of what binds social units together (Coser, 1954). Moreover, the chances to address the liabilities of conflict are probably increased by how well they are known: working at cross purposes, distorting and withholding information, mistrust. Whether these chances are realized depends on how well those who design and operate such systems can identify and address the ways in which they are most vulnerable to competition. At times, this may mean introducing sharply defined reward systems to create the correct mix of incentives.7

6 A common variant within larger organizations is that they reward individuals within them for growth (i.e., making their own subunits larger), while striving as a whole for profit (Baumol, 1959).

7 One example of the difficulty of diagnosing and designing such systems may be seen in the current debate over whether competition among the armed services improves national defense (by ensuring that there are technically qualified critics of each service's new weapons proposals) or degrades it (by fostering waste, duplication, and interoperability problems).

Multiple-Person Systems

Most of the issues arising in the design and diagnosis of two-person systems also arise with multiple decision-maker systems, although with somewhat new wrinkles. The simplest level involves individuals with common goals, shared experience, and hardened communication links. As before, having more people around means having the opportunity for more views to evolve and be heard. Yet this advantage may backfire if the shared (past and present) experience leads them to think similarly while taking confidence in numbers (Lanir, 1982). As the number of parties multiplies, so does the volume of messages (and perhaps information). If hardened communications links mean that everyone hears everything, then there may be too much going on to ensure that everyone hears anything. More generally, it may be hard to keep track of who knows what. With an automated aid, it may be possible to reconstruct who heard what. With some modest modeling of the decision-making situations faced by different individuals, it may be possible to discern who needs to know what.

As organizational size increases, the possibility of completely shared experiences decreases. The maximum might be found in a hierarchical organization whose leaders had progressed through the ranks from the very bottom, so that they have a deep understanding of the reality of their subordinates' worlds, such that they can imagine what they might be thinking and how they might respond in particular circumstances. In such situations, less needs to be said and more can be predicted, making the organization more intimate than it seems.

As organizational size increases, the possibility of completely shared experiences decreases. The maximum might be found in a hierarchical organization whose leaders had progressed through the ranks from the very bottom, so that they have a deep understanding of the reality of their subordinates' worlds, such that they can imagine what they might be thinking and how they might respond in particular circumstances. In such situations, less needs to be said and more can be predicted, making the organization more intimate than it seems.

However, size also makes the liabilities of commonality more extreme. Not only is shared misunderstanding more likely, but it is also more difficult to treat because it is so broadly entrenched and the organizational climate is likely to be very rough for those who think differently. Indeed, the heterogeneity of an organization's selection and retention policies may be a good indicator of its resilience within a complex and changing reality. If there are any common biases in communications between individuals (e.g., underestimation of costs, exaggeration of expectations from subordinates), then the cumulative bias may be well out of hand by the time communications have cascaded up or down the organizational chart. When the world is changing rapidly, then the experience of having once been at every level in the organization may give an illusory feeling of understanding its reality. For example, the education, equipment, and challenges of foot soldiers (or sales representatives) may be quite different now than when their senior officers were in the trenches. An indicator of these threats might be the degree of technological change (or instability) in the organization and its environment. A treatment might be periodic rotation through the ranks and opportunities to cut through the normal lines of communication in order to find out what is really happening at diverse places, so as to reveal the discrepancies in the models held by different parties. Problems might be reduced somewhat by resisting opportunities to change the organization, unless the promised improvements will be so great as to compensate for the likely decrements in internal understanding.

Both the problems and promises of unshared experience increase as one goes from two- to multiperson systems. More people do bring more perspectives to a problem and with them the chances of challenging misconceptions. However, the intricacies of sharing and coordinating that information may become unmanageable. Even more seriously, with so many communication links, it may become nearly impossible even to discover the existence of misunderstandings, such as differences in unspoken assumptions or the usage of seemingly straightforward terms. If communications are decentralized, then various subunits may learn to speak to one another, solving their local problems but leaving the system as a whole unstable. If they are centralized, then the occupants of that controlling node have an opportunity to create a common picture, but doing so requires extraordinary attention to detail, regarding who believes what when and how they express those beliefs.
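How quickly a common reporting bias can get out of hand is easy to show with a toy calculation. In the sketch below (Python; the 10 percent understatement per level and the five-level hierarchy are assumptions chosen only for illustration), a modest bias at each link becomes a large cumulative distortion by the time the figure reaches the top of the chart.

```python
def value_reported_at_top(true_value, bias_per_level, levels):
    """Figure reaching the top when each level understates its input by a fixed fraction."""
    value = true_value
    for _ in range(levels):
        value *= (1.0 - bias_per_level)   # each level passes along a slightly shaded number
    return value

if __name__ == "__main__":
    true_cost = 100.0
    top_sees = value_reported_at_top(true_cost, bias_per_level=0.10, levels=5)
    print(f"True cost:                 {true_cost:.1f}")
    print(f"Cost reported at the top:  {top_sees:.1f}")                  # about 59.0
    print(f"Cumulative understatement: {1 - top_sees / true_cost:.0%}")  # about 41%
```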

One aid to tracking these complex realities is to maintain formal models of the decision-making problems faced at different places. Even if these models could capture only a portion of those situations, comparing the models held at headquarters and in the field might provide a structured way of focusing on discrepancies. When theory or data suggest that those discrepancies are large and persistent, then it may be command, rather than communications, strategies that require alteration. When leaders cannot understand their subordinates' world, one suggestion is to concentrate on telling them what to do, rather than how to do it, so as to avoid micromanagement that is unlikely to be realistic. A second is to recognize (and even solicit) signals suggesting that they, the leaders, are badly out of touch with their subordinates' perceptions (so that one or both sets of beliefs need adjustment).

Reliability problems in multiperson systems begin with those of two-person systems. As before, their cause may be external (e.g., disruptions, equipment failure) or internal (e.g., the desire for flexibility or autonomy). As before, the task of those in them is to discern when communications have failed, how they have failed (i.e., what messages have been interrupted or garbled), and how the system can be kept together. The multiplicity of communications means a greater need for a structured response, if the threat of unreliability is real and recognized. Depending on the organization's capabilities, one potential coping mechanism might be a communications protocol that emphasized staying in touch, even when there was nothing to say, in order to monitor reliability continually; another might be analyses of the backlash effect of actions or messages, considering how they discourage or restrict future communications (e.g., by suggesting the need for secrecy or revealing others' positions); another might be reporting intentions along with current status, to facilitate projecting what incommunicado others might be doing; another might be creating a "black box" from which one could reconstruct what had happened before communications went down.

A complicating factor in reliability problems, which emerges here but could be treated with two-person systems, is that lost communications may reflect loss of the link or loss of the communicator at the other end of the link. That loss could reflect defection, disinterest, or destruction. Such losses simplify communications (by reducing the number of links involving that individual) and can provide diagnostic information (about possible threats to the rest of the system). However, they require reformulation of all models within the system involving the lost individual. Where that reformulation cannot be confidently done or disseminated, then contingency plans are needed, expressing a best guess at how to act when the system may be shrinking. Whether drawn up for vanishing links or individuals, those plans should create realistic degrees of autonomy for making new decisions and for deviating from old ones (e.g., provide answers to: Are my orders still valid? Will I be punished for deviating from them?).
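Two of the coping mechanisms just mentioned, staying in touch even when there is nothing to say and reporting intentions along with status, can be sketched as a simple protocol. The fragment below (Python; the check-in interval, the two-missed-check-ins rule, and the message fields are assumptions made for illustration) shows a monitor that flags silent links and keeps an append-only log from which events can later be reconstructed, in the spirit of the "black box" described above.

```python
import time

CHECK_IN_INTERVAL = 60.0      # seconds between expected check-ins (assumed)
MISSED_BEFORE_SUSPECT = 2     # missed check-ins tolerated before a link is flagged (assumed)

log = []         # append-only record of all traffic; the "black box"
last_seen = {}   # node -> time of its most recent check-in

def check_in(node, status, intention, now):
    """A node reports its current status plus what it intends to do next."""
    last_seen[node] = now
    log.append((now, node, status, intention))

def silent_links(now):
    """Nodes whose silence has exceeded the tolerated number of missed check-ins."""
    cutoff = MISSED_BEFORE_SUSPECT * CHECK_IN_INTERVAL
    return [node for node, seen in last_seen.items() if now - seen > cutoff]

if __name__ == "__main__":
    t0 = time.time()
    check_in("unit_a", "holding position", "advance at dawn", now=t0)
    check_in("unit_b", "nothing to report", "maintain patrol", now=t0)
    check_in("unit_a", "advancing", "secure river crossing", now=t0 + 240)
    # Five minutes in, unit_b has missed more than two check-ins; its last
    # stated intention can still be read from the log to project what it is doing.
    print(silent_links(now=t0 + 300))  # ['unit_b']
```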

A final complicating factor with multiple-person systems, for which the two-person version exists but is relatively uninteresting, concerns the heterogeneity of its parts. At one extreme lies a homogeneous organization whose parts interact in an additive fashion, with each performing roughly the same functions and the system's strength depending on the sum of such parts. At the other extreme lies a heterogeneous organization having specialized parts dependent on one another for vital services, with its strength coming from the sophistication of its design and the effectiveness of its dedicated components. Crudely speaking, a large undifferentiated infantry group might anchor one end of this continuum and an integrated carrier strike force the other.

The operational benefits of a homogeneous system are its ability to use individuals and materials interchangeably, as well as its relative insensitivity to the loss of any particular units (insofar as their effect is additive). Common benefits as a distributed decision-making system are the existence of a shared organizational culture, the relative simplicity of organizational models, the ease with which components can interpret one another's actions, and the opportunity to create widely applicable organizational policies. Inherent limitations may include homogeneity of perspectives and skills, leaving the system relatively vulnerable to deeply shared misconceptions (what might be called "intellectual common-mode failures") and relatively devoid of the personnel resources needed to initiate significant changes (or even detect the need for them without very strong, and perhaps painful, messages from the environment).

The operational benefits of a heterogeneous system lie in its ability to provide a precise response to any of the anticipated challenges posed by a complex environment. Its advantages as a distributed decision-making system lie in its ability to develop task-specific procedures, policies, and communications. One inherent disadvantage in this respect may be the difficulty of bearing in mind or modeling the operations of a complex interactive system, so it is hard to know who is doing what when and how their actions affect one another. For example, backlash and friendly fire may be more likely across diverse units than across similar ones. Even if they do have a clear picture of the whole, the managers of such a system may find it difficult to formulate an organizational philosophy with equivalent meanings in all the diverse contexts it faces. The diversity of parts may also create interoperability problems, hampering the parts' ability to communicate and cooperate amongst themselves.
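The contrast between additive and interdependent strength can be caricatured with two toy aggregation rules. In the sketch below (Python; reducing each unit's capability to a single number, and capping a heterogeneous system at its weakest vital component, are gross simplifications introduced only to make the point), the homogeneous system shrugs off the loss of any one unit, while the heterogeneous system is hostage to its most degraded specialist.

```python
def homogeneous_strength(units):
    """Additive system: interchangeable parts, strength is simply the sum."""
    return sum(units)

def heterogeneous_strength(units):
    """Interdependent specialists: the weakest vital component caps the whole
    (a deliberately crude rule, chosen only to contrast with the additive case)."""
    return min(units) * len(units) if units else 0.0

if __name__ == "__main__":
    infantry = [1.0] * 10                 # ten interchangeable units
    strike_force = [3.0, 2.5, 0.5, 3.0]   # four specialists, one badly degraded

    print(homogeneous_strength(infantry))        # 10.0; losing any one unit costs only 1.0
    print(heterogeneous_strength(strike_force))  # 2.0; the degraded specialist caps everything
```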

Both kinds of systems may be most vulnerable to the kinds of threats against which the other is most strongly defended. The additive character of homogeneous systems means that it is numbers that count. A command system adapted to this reality may be relatively inattentive to those few ways in which individual units are indispensable, such as their ability to reveal vital organizational intelligence or to embarrass the organization as a whole. Conversely, the command structure that has evolved to orchestrate the pieces of a heterogeneous system may be severely challenged by situations in which mainly numbers matter. An inevitable by-product of specialization is having fewer of every specialty and less ability to transcend specialty boundaries. There may therefore be less staying power in protracted engagements.

Perhaps the best response to these limitations is incorporating some properties of each kind of system into the other. Thus, for example, homogeneous organizations could actively recruit individuals with diverse prior experience in order to ensure some heterogeneity of views; they might also develop specialist positions for dealing with nonadditive issues wherever those appear in the organization (e.g., intelligence officers, publishers' libel watchdogs). Heterogeneous organizations might promote generalists with the aim of mediating and attenuating the differences among their parts; they might also transfer specialists across branches so as to encourage the sharing of perspectives (at the price of their being less well equipped to do the particular job). Whether such steps are possible, given how antithetical they are to the ambient organizational philosophy, would be a critical design question.

PRINCIPLES IN DESIGNING DISTRIBUTED DECISION-MAKING SYSTEMS

Goals of the Analysis

The preceding task analysis began with the problems faced in designing the simplest of decision-making systems, those involving single individuals grappling with their fate under conditions of certainty, with no attempt at formalization. It proceeded to complicate the lives of those single individuals and then to consider several levels of complication within two-person and multiperson organizations. A full-blown version of this analysis would consider, at each stage, first, how the problems that arose in simpler systems were complicated or ameliorated and, second, what new problems arose. For each set of problems, it would try to develop a set of solutions based, as far as possible, on the available research literature in behavioral decision theory, cognitive psychology, human factors, communications research, or organizational theory. The recommendations offered here are therefore but speculations, suggestive of what would emerge from a fuller generic analysis or the consideration of specific systems.

That fuller analysis would proceed on two levels. One is to investigate solutions to highly specific problems, such as the communications protocol or optimal visual display for a particular heterogeneous system. The second is to develop general design principles, suggesting what to do

in lieu of detailed specific studies. In reality, these two efforts are highly intertwined, with the general principles suggesting what behavioral dimensions merit detailed investigation and the empirical studies substantiating (or altering) those beliefs. Were a more comprehensive analysis in place, a logical extension would be to consider the interaction between two distributed decision-making systems, each characterized in the same general terms. Such an analysis might show how the imperfections of each might be exploited by the other as well as how they might lead to mutually undesirable circumstances. For example, an analysis of the National Command Authorities of the United States and the Soviet Union might show the kinds of challenges that each is least likely to handle effectively. That kind of diagnosis might serve as the basis for unilateral recommendations (or bilateral agreements) to the effect, "Don't test us in this way unless you really mean it. We're not equipped to respond flexibly."

Design Guidelines

Although still in its formative stages, the analysis to date suggests a number of general conclusions that might emerge from a more comprehensive analysis of distributed decision-making systems. One is that the design of the system needs to bear in mind the reality of the individuals at each node in it. If there is a tendency to let the design process be dominated by issues associated with the most recent complication, then it must be resisted. If the designers are unfamiliar with the world of the operators, then they must learn about it. For example, one should not become obsessed with the intricacies of displaying vast quantities of information when the real problem is not knowing what polisher to apply. Given the difficulty of individual decision making, one must resist the temptation to move on to other, seemingly more tractable problems.

A second general conclusion is that many group problems may be seen as variants of individual problems or even as reflections of those problems not having been resolved. For example, a common crisis in the simplest individual decision-making situations is determining what the individual wants from them. The group analog is determining what specific policies to apply or how to interpret general policies in those circumstances. As another example, individuals' inability to deal coherently with uncertainty may underlie their (unrealistic) demands for certainty in communications from others.

A third conclusion is that many problems that are attributed to the imposition of novel technologies can be found in quite low-tech situations. Two people living in the same household can have difficulty communicating; allowing them to use only phone or telex may make matters better or worse. The speed of modern systems can induce enormous time pressures,

yet many decisions cannot be made comfortably even with unlimited time. Telecommunications systems can generate information overload, yet the fundamental management problem remains the simple one of determining what is relevant. In such cases, the technology is best seen as giving the final form to problems that would have existed in any case and as providing a possible vehicle for either creating solutions or putting solutions out of reach.

A fourth conclusion is that it pays to accentuate the negative when evaluating the designs of distributed decision-making systems, and to accentuate the positive when adapting people to those systems. That is, the design of systems is typically a top-down process beginning with a set of objectives and normative constraints. The idealization that emerges is something for people to strive for but not necessarily something that they can achieve. Looking at how the system keeps people from doing their jobs provides more realistic expectations of overall system performance as well as focuses attention on where people need help. The point of departure for that help must be their current thought processes and capabilities, so that they can be brought along from where they are toward where one would like them to be. People can change, but only under carefully structured conditions and not that fast. When they are pushed too hard, then they risk losing touch with their own reality.

Design Ideologies

A fifth conclusion is that the design of distributed decision-making systems requires detailed empirical work. A condition for doing that work is resisting simplistic design philosophies. There is a variety of such principles, each having the kind of superficial appeal that is capable of generating strong organizational momentum, while frustrating efforts at more sensitive design. One such family of simple principles concentrates on dealing with a system's mistakes, by claiming to avoid them entirely in prospect (as expressed in "zero defects" or "quality is free" slogans), to adapt to them promptly in process (as expressed in "muddling through"), or to respond to them in hindsight ("learning from experience"). A second family concentrates on being ready for all contingencies, by instituting either rigid flexibility or rigid inflexibility, leaving all options open or planning for all contingencies. A third family emphasizes controlling the human element in systems, either by selecting the right people or by creating the right people (through proper training and incentives). A fourth family of principles proposes avoiding the human element either when it is convenient (because viable alternatives exist), when it is desirable (because humans have known flaws), or in all possible circumstances whether or not human fallibility has been demonstrated (in hopes of increasing system predictability).

Rigid subscription to any of these principles gives the designers (and operators) of a system an impossible task. For example, the instruction "to avoid all errors" implies that time and price are unimportant. When this is not the case, the designers are left adrift, forced to make trade-offs without explicit guidance. When fault-free design is impossible, then the principle discourages treatment of those faults that do remain. Many fail-safe systems work only because the people in them have learned, by trial and error, to diagnose and respond to problems that are not supposed to happen. Because the existence of such unofficial intelligence has no place in the official design of the system, it may have to be hidden, may be unable to get needed resources (e.g., for record keeping or realistic exercises), and may be destroyed by any uncontrollable change in the system (which invalidates operators' understanding of those intricacies of its operation that do not appear in any plans or training manuals). From this perspective, when perfection is impossible, it may be advisable to abandon near-perfection as a goal as well, so as to ensure that there are enough problems for people to learn to cope with them. In addition, when perfection is still (but) an aspiration, steps toward it should be very large before they justify disrupting accustomed (unwritten) relationships. That is, technological instability is a threat to system operation. Additional threats of this philosophy include unwillingness to face those intractable problems that do remain and setting the operators up to take the rap when their use of the system proves impossible.

Similar analyses exist for the limitations of each of the other simple rules. In response, proponents might say that the rules are not meant to be taken literally and that compromises are a necessary part of all design. Yet the categorical nature of such principles is an important part of their appeal and, as stated, they provide no guidance or legitimation for compromises. Moreover, they often tend to embody a deep misunderstanding of the role of people in person-machine systems, reflecting, in one way or another, a belief in the possibility of engineering the human side of the operation in the way that one might hope to engineer the mechanical or electronics side.

Human Factors

As the long list of human factors failures in technical systems suggests, the attempts to implement this belief are often needlessly clumsy (National Research Council, 1983; Perrow, 1984; Rasmussen and Rouse, 1981). The extensive body of human factors research is either unknown or is invoked at such a late stage in the design process that it can amount to little more than the development of warning labels and training programs for coping with inhuman systems. It is so easy to speculate about human behavior (and provide supporting anecdotal evidence) that systematic empirical research

hardly seems needed. Common concomitants of insensitive design are situations in which the designers (or those who manage them) have radically different personal experiences from the operators, themselves work in organizations that do not function very well interpersonally, or are frustrated in trying to understand why some group of others (e.g., the public) does not like them.

However, even when the engineering of people is sensitive, its ambitions are often misconceived. The complexity of systems places some limits on their perfectability, making it hard to understand the intricacies of a design. As a result, one can neither anticipate all problems nor confidently treat those one can anticipate, without the fear that corrections made in one domain will create new problems in another.8 Part of the genius of people is their ability to see (and hence respond to) situations in unique (and hence unpredictable) ways. Although this creativity can be seen in even the most structured psychomotor tasks, it is central and inescapable in any interesting distributed decision-making system (Fischhoff, Lanir, and Johnson, in press). Once people have to do any real thinking, the system becomes complex (and hence unperfectable). In such cases, the task of engineering is to help the operators understand the system, rather than to manage them as part of it. A common sign of insensitivity in this regard is use of the term operator error to describe problems arising from the interaction of operator and system. A sign of sensitivity is incorporating operators in the design process. A rule of thumb is that human problems seldom have purely technical solutions, while technical solutions typically create human problems (Reason, in press).

THE POSSIBILITY OF DISTRIBUTED DECISION MAKING

Pursuing this line of inquiry can point to specific problems arising in distributed decision-making systems and focus technical efforts on solving them. Those solutions might include displays for uncertain information, protocols for communication in complex systems, training programs for making do with unfriendly systems, contingency plans for coping with predictable system failures, and terminology for coordinating diverse units. Deriving such solutions is technically difficult, but part of a known craft.

8 The nuclear industry's attempts to deal with the human factors problems identified at Three Mile Island provide a number of clear examples. To take but two: (a) increasing the number of potentially dangerous situations in which it is necessary to shut down a reactor has increased the frequency with which reactors are in transitory states in which they are less well controlled and in which their components are subject to greater stress (thereby reducing their life expectancy by some poorly understood amount); (b) increasing the number of human factors-related regulations has complicated operators' jobs at the plant and created lucrative opportunities for operators to work as consultants to industry (thereby reducing the qualified labor force at the plants).

Investigators know how to describe such problems, devise possible remedies, and subject those remedies to empirical test. When the opportunities to develop solutions are limited, these kinds of perspectives can help characterize existing systems and improvise balanced responses to them.

However, although these solutions might make systems better, they cannot make them whole. The pursuit of them may even pose a threat to systems design if it distracts attention from the broader question of how systems are created and conceptualized. In both design and operation, healthy systems enjoy a creative tension between various conflicting pressures. One is between a top-down perspective (working down toward reality from an idealization of how the system should operate) and a bottom-up perspective (working up from reality toward some modest improvement of the current presenting symptoms). Another is between bureaucratization and innovation (or inflexibility and flexibility). Yet others are between planning and reacting, between a stress on routine and crisis operations, between risk acceptance and risk aversion, between human and technology orientation. A common thread in these contrasts is the system's attitude toward uncertainty: Does it accept that as a fact of life or does it live in the future, oriented toward the day when everything is predictable or controllable?

Achieving a balance between these perspectives requires both the insight needed to be candid about the limitations of one's system and the leadership needed to withstand whichever pressures dominate at the moment. When a (dynamic) balance is reached, the system can use its personnel most effectively and develop realistic strategies. When it is not reached, the organization is in a state of crisis, vulnerable to events or to hostile actions that exploit its imbalances. The crisis is particularly great when the need for balance is not recognized or cannot be admitted (within the current organizational culture), and when an experiential gulf separates management and operators. In this light, one can tell a great deal about how a system functions by looking at its managers' philosophy. If that is oversimplified or overconfident, then the system will be too, despite any superficial complexity. The goal of a task analysis then becomes to expose the precise ways in which this vulnerability expresses itself.

REFERENCES

Armstrong, J.S.
1985 Long-Range Forecasting. Second edition. New York: Wiley.
Bailey, R.W.
1982 Human Performance in Engineering. Englewood Cliffs, NJ: Prentice-Hall.
Bar-Hillel, M.
1980 The base-rate fallacy in probability judgments. Acta Psychologica 44:211-233.

1984 Representativeness and fallacies of probability judgment. Acta Psychologica 55:91-107.
Baumol, W.J.
1959 Business Behavior, Value and Growth. New York: Macmillan.
Beach, L.R., and Mitchell, T.R.
1978 A contingency model for the selection of decision strategies. Academy of Management Review 3:439-449.
Beach, L.R., Townes, B.D., Campbell, F.L., and Keating, G.W.
1976 Developing and testing a decision aid for birth planning decisions. Organizational Behavior and Human Performance 15:99-116.
Behn, R.D., and Vaupel, J.W.
1983 Quick Analysis for Busy Decision Makers. New York: Basic Books.
Berkeley, D., and Humphreys, P.C.
1982 Structuring decision problems and the "bias heuristic." Acta Psychologica 50:201-252.
Beyth-Marom, R.
1982a How probable is probable? Numerical translation of verbal probability expressions. Journal of Forecasting 1:257-269.
1982b Perception of correlation reexamined. Memory and Cognition 10:511-519.
Beyth-Marom, R., Dekel, S., Gombo, R., and Shaked, M.
1985 An Elementary Approach to Thinking Under Uncertainty. Hillsdale, NJ: Erlbaum.
Brehmer, B.
1980 Effect of cue validity on learning of complex rules in probabilistic inference tasks. Acta Psychologica 44:201-210.
Brown, R.V., Kahr, A.S., and Peterson, C.
1974 Decision Analysis for the Manager. New York: Holt, Rinehart and Winston.
Bunn, M., and Tsipis, K.
1983 The uncertainties of preemptive nuclear attack. Scientific American 249(5):38-47.
Corbin, R.
1980 On decisions that might not get made. In T. Wallsten, ed., Cognitive Processes in Choice and Decision Behavior. Hillsdale, NJ: Erlbaum.
Coser, L.A.
1954 The Social Functions of Conflict. Glencoe, IL: The Free Press.
Davis, J.H.
1982 Group Performance. Reading, MA: Addison-Wesley.
Dawes, R.M.
1979 The robust beauty of improper linear models in decision making. American Psychologist 34:571-582.
Edwards, W.
1954 The theory of decision making. Psychological Bulletin 51:380-417.
1961 Behavioral decision theory. Annual Review of Psychology 12:473-498.
Einhorn, H.J., and Hogarth, R.M.
1978 Confidence in judgment: Persistence of the illusion of validity. Psychological Review 85:395-416.
1981 Behavioral decision theory: Processes of judgment and choice. Annual Review of Psychology 32:53-88.
Ericsson, A., and Simon, H.
1980 Verbal reports as data. Psychological Review 87:215-251.

Feather, N., ed.
1982 Expectancy, Incentive and Action. Hillsdale, NJ: Erlbaum.
Fischhoff, B.
1975 Hindsight ≠ foresight: The effect of outcome knowledge on judgment under uncertainty. Journal of Experimental Psychology: Human Perception and Performance 1:288-299.
1980 Clinical decision analysis. Operations Research 28:28-43.
1982 Debiasing. In D. Kahneman, P. Slovic, and A. Tversky, eds., Judgment Under Uncertainty: Heuristics and Biases. New York: Cambridge University Press.
1984 Setting standards: A systematic approach to managing public health and safety risks. Management Science 30:834-843.
1987 Judgment and decision making. In R. Sternberg and E.E. Smith, eds., The Psychology of Thinking. New York: Cambridge University Press.
1988 Eliciting expert judgment. IEEE Transactions on Systems, Man, and Cybernetics 13:448-461.
Fischhoff, B., and Beyth-Marom, R.
1983 Hypothesis evaluation from a Bayesian perspective. Psychological Review 90:239-260.
Fischhoff, B., and Cox, L.A., Jr.
1985 Conceptual framework for benefit assessment. In J.D. Bentkover, V.T. Covello, and J. Mumpower, eds., Benefits Assessment: The State of the Art. Dordrecht, The Netherlands: D. Reidel.
Fischhoff, B., Lanir, Z., and Johnson, S.
in press Risky lessons: A framework for analyzing attempts to learn in organizations. Organization Science.
Fischhoff, B., Slovic, P., and Lichtenstein, S.
1978 Fault trees: Sensitivity of estimated failure probabilities to problem representation. Journal of Experimental Psychology: Human Perception and Performance 4:330-344.
1980 Knowing what you want: Measuring labile values. In T. Wallsten, ed., Cognitive Processes in Choice and Decision Behavior. Hillsdale, NJ: Erlbaum.
Fischhoff, B., Svenson, O., and Slovic, P.
1987 Active responses to environmental hazards. In D. Stokols and I. Altman, eds., Handbook of Environmental Psychology. New York: Wiley.
Fischhoff, B., Watson, S., and Hope, C.
1984 Defining risk. Policy Sciences 17:123-139.
Fiske, S., and Taylor, S.E.
1984 Social Cognition. Reading, MA: Addison-Wesley.
Gettys, C.F., Pliske, R.M., Manning, C., and Casey, J.T.
1987 An evaluation of human act generation performance. Organizational Behavior and Human Decision Processes 39:23-51.
Goldberg, L.R.
1968 Simple models or simple processes? Some research on clinical judgment. American Psychologist 23:483-496.
Grether, D.M., and Plott, C.R.
1979 Economic theory of choice and the preference reversal phenomenon. American Economic Review 69:623-638.
Hammer, W.
1980 Product Safety Management and Engineering. Englewood Cliffs, NJ: Prentice-Hall.

Hechter, M., Cooper, L., and Nadel, L., eds.
in press Values. Stanford, CA: Stanford University Press.
Hershey, J.C., Kunreuther, H.C., and Schoemaker, P.J.H.
1982 Sources of bias in assessment procedures for utility functions. Management Science 28:936-954.
Hogarth, R.M.
1982 Beyond discrete biases: Functional and dysfunctional aspects of judgmental heuristics. Psychological Bulletin 90:197-217.
Humphreys, P., Svenson, O., and Vari, A., eds.
1983 Analysing and Aiding Decision Processes. Amsterdam: North Holland.
Janis, I.L.
1972 Victims of Groupthink. Boston: Houghton Mifflin.
1982 Counseling on Personal Decisions. New Haven: Yale University Press.
Janis, I.L., and Mann, L.
1977 Decision Making. New York: Free Press.
Jungermann, H.
1984 The two camps on rationality. In R.W. Scholz, ed., Decision Making Under Uncertainty. Amsterdam: Elsevier.
Kahneman, D., and Tversky, A.
1972 Subjective probability: A judgment of representativeness. Cognitive Psychology 3:430-454.
1979 Prospect theory. Econometrica 47:263-292.
Kahneman, D., Slovic, P., and Tversky, A.
1982 Judgment Under Uncertainty: Heuristics and Biases. New York: Cambridge University Press.
Keeney, R.L., and Raiffa, H.
1976 Decisions With Multiple Objectives: Preferences and Value Tradeoffs. New York: Wiley.
Kidd, J.B.
1970 The utilization of subjective probabilities in production planning. Acta Psychologica 34:338-347.
Koriat, A., Lichtenstein, S., and Fischhoff, B.
1980 Reasons for confidence. Journal of Experimental Psychology: Human Learning and Memory 6:107-118.
Lanir, Z.
1982 Strategic Surprises. Ramat Aviv: Tel Aviv University Press.
Lichtenstein, S., and Fischhoff, B.
1980 Training for calibration. Organizational Behavior and Human Performance 26:149-171.
Lichtenstein, S., Fischhoff, B., and Phillips, L.D.
1982 Calibration of probabilities: State of the art to 1980. In D. Kahneman, P. Slovic, and A. Tversky, eds., Judgment Under Uncertainty: Heuristics and Biases. New York: Cambridge University Press.
Lichtenstein, S., Slovic, P., and Zink, D.
1969 Effect of instruction in expected value on optimality of gambling decisions. Journal of Experimental Psychology 79:236-240.
March, J.G.
1978 Bounded rationality, ambiguity, and the engineering of choice. The Bell Journal of Economics 9:587-608.
McCormick, N.J.
1981 Reliability and Risk Analysis. New York: Academic Press.

Meehl, P.E.
1954 Clinical vs. Statistical Prediction: A Theoretical Analysis and a Review of the Evidence. Minneapolis: University of Minnesota Press.
Miller, G.A.
1956 The magical number seven, plus or minus two: Some limits on our capacity for processing information. Psychological Review 63:81-97.
Mischel, W.
1968 Personality and Assessment. New York: Wiley.
Montgomery, H.
1983 Decision rules and the search for a dominance structure: Towards a process model of decision making. In P. Humphreys, O. Svenson, and A. Vari, eds., Analysing and Aiding Decision Processes. Amsterdam: North Holland.
Murphy, A.H., and Winkler, R.L.
1984 Probability of precipitation forecasts. Journal of the American Statistical Association 79:391-400.
Myers, D.G., and Lamm, H.
1976 The group polarization phenomenon. Psychological Bulletin 83(4):602-627.
National Interagency Incident Management System
1982 The What, Why, and How of NIIMS. Washington, DC: U.S. Department of Agriculture.
National Research Council
1981 Surveys of Subjective Phenomena. Committee on National Statistics. Washington, DC: National Academy Press.
1983 Research Needs in Human Factors. Committee on Human Factors. Washington, DC: National Academy Press.
Nisbett, R.E., and Ross, L.
1980 Human Inference: Strategies and Shortcomings of Social Judgment. Englewood Cliffs, NJ: Prentice-Hall.
Nisbett, R.E., and Wilson, T.D.
1977 Telling more than we can know: Verbal reports on mental processes. Psychological Review 84:231-259.
Payne, J.W.
1982 Contingent decision behavior. Psychological Bulletin 92:382-402.
Perrow, C.
1984 Normal Accidents. New York: Basic Books.
Peterson, C.R., and Beach, L.R.
1967 Man as an intuitive statistician. Psychological Bulletin 68:29-46.
Peterson, C.R., ed.
1973 Special issue: Cascaded inference. Organizational Behavior and Human Performance 10:310-432.
Pitz, G.F., and Sachs, N.J.
1984 Behavioral decision theory. Annual Review of Psychology 35.
Pitz, G.F., Sachs, N.J., and Heerboth, J.
1980 Procedures for eliciting choices in the analysis of individual decisions. Organizational Behavior and Human Performance 26:396-408.
Polanyi, M.
1962 Personal Knowledge. London: Routledge and Kegan Paul.
Raiffa, H.
1968 Decision Analysis. Reading, MA: Addison-Wesley.
Rapoport, A., and Wallsten, T.S.
1972 Individual decision behavior. Annual Review of Psychology 23:131-175.

Rasmussen, J., and Rouse, W.B., eds.
1981 Human Detection and Diagnosis of System Failures. New York: Plenum.
Reason, J.
in press Human Error. New York: Cambridge University Press.
Rokeach, M.
1973 The Nature of Human Values. New York: The Free Press.
Ross, L.
1977 The intuitive psychologist and his shortcomings: Distortions in the attribution process. Pp. 173-220 in L. Berkowitz, ed., Advances in Experimental Social Psychology (Vol. 10). New York: Academic Press.
Samet, M.G.
1975 Quantitative interpretation of two qualitative scales used to rate military intelligence. Human Factors 17:192-202.
Schoemaker, P.J.H.
1983 The expected utility model: Its variants, purposes, evidence and limitations. Journal of Economic Literature 20:529-563.
Shaklee, H., and Mims, M.
1982 Sources of error in judging event covariations: Effects of memory demands. Journal of Experimental Psychology: Learning, Memory, and Cognition 8:208-224.
Shaklee, H., and Tucker, D.
1980 A rule analysis of judgments of covariation between events. Memory and Cognition 8:459-467.
Simon, H.
1957 Models of Man: Social and Rational. New York: Wiley.
Slovic, P.
1972 Psychological study of human judgment: Implications for investment decision making. Journal of Finance 27:779-799.
Slovic, P., and Fischhoff, B.
1977 On the psychology of experimental surprises. Journal of Experimental Psychology: Human Perception and Performance 3:544-551.
Slovic, P., Fischhoff, B., and Lichtenstein, S.
1977 Behavioral decision theory. Annual Review of Psychology 28:1-39.
Slovic, P., and Lichtenstein, S.
1983 Preference reversals: A broader perspective. American Economic Review 73:596-605.
Slovic, P., Lichtenstein, S., and Fischhoff, B.
1988 Decision making. In R.C. Atkinson, R.J. Herrnstein, G. Lindzey, and R.D. Luce, eds., Stevens' Handbook of Experimental Psychology (second edition). New York: Wiley.
Slovic, P., and Tversky, A.
1974 Who accepts Savage's axiom? Behavioral Science 19:368-373.
Stokey, E., and Zeckhauser, R.
1978 A Primer for Policy Analysis. New York: Norton.
Svenson, O.
1981 Are we all less risky and more skillful than our fellow drivers? Acta Psychologica 47:143-148.
Tihansky, D.
1976 Confidence assessment of military airframe cost predictions. Operations Research 24:26-43.

Tversky, A.
1969 Intransitivity of preferences. Psychological Review 76:31-48.
Tversky, A., and Kahneman, D.
1973 Availability: A heuristic for judging frequency and probability. Cognitive Psychology 4:207-232.
1981 The framing of decisions and the psychology of choice. Science 211:453-458.
U.S. Nuclear Regulatory Commission
1983 PRA Procedures Guide (NUREG/CR-2300). Washington, DC: The Commission.
von Winterfeldt, D., and Edwards, W.
1982 Costs and payoffs in perceptual research. Psychological Bulletin 93:609-622.
1986 Decision Analysis and Behavioral Research. New York: Cambridge University Press.
Wagenaar, W., and Sagaria, S.
1976 Misperception of exponential growth. Perception and Psychophysics.
Wallsten, T., and Budescu, D.
1983 Encoding subjective probabilities: A psychological and psychometric review. Management Science 29:151-173.
Weinstein, N.D.
1980 Unrealistic optimism about future life events. Journal of Personality and Social Psychology 39:806-820.
Wheeler, D.D., and Janis, I.L.
1980 A Practical Guide for Making Decisions. New York: The Free Press.
Wilson, R., and Crouch, E.
1982 Risk/Benefit Analysis. Cambridge, MA: Ballinger.
Yates, J.F.
1989 Judgment and Decision Making. Chichester, England: Wiley.
