Appendix H
Definitions and Examples of
Operational Systems Engineering
Tools and Concepts
Agent-based models: See definition of simulation models below.
Bayesian networks are probabilistic graphical models that describe
variables of interest and possible relationships (e.g., a patient’s true
medical status, field experience, test results, pre-existing status) and their
probabilistic interdependencies. Bayesian networks encode probabilistic
relationships among variables and account for circumstances in which
data are missing and can be used to discover causal relationships (e.g.,
the relationship between symptoms and diseases). They are a good
method for combining prior knowledge and newly collected data.
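The prior-plus-data combination a Bayesian network performs can be sketched with a minimal two-node network: a hidden TBI status and a dependent screening result. All probabilities below are illustrative, not clinical estimates.

```python
# Minimal two-node Bayesian network: hidden TBI status -> screening result.
# All probabilities are illustrative, not clinical estimates.
p_tbi = 0.10                    # prior P(TBI)
p_pos_given_tbi = 0.85          # P(positive screen | TBI)
p_pos_given_healthy = 0.20      # P(positive screen | no TBI)

# Marginal probability of a positive screen, by enumeration.
p_pos = p_pos_given_tbi * p_tbi + p_pos_given_healthy * (1 - p_tbi)

# Posterior P(TBI | positive screen): prior knowledge combined with the
# newly observed test result via Bayes' rule.
p_tbi_given_pos = p_pos_given_tbi * p_tbi / p_pos
print(round(p_tbi_given_pos, 3))  # -> 0.321
```

Even with a fairly accurate test, the posterior stays well below certainty because the prior is low; a full network captures this interplay across many variables at once.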
Bayesian decision trees: See definition of decision trees below.
Cellular automata (CA) models: See definition of simulation models
below.
Cognitive task analysis is a method of identifying the demands that
various aspects of a system’s design place on a user’s cognitive resources
(e.g., memory, attention, and decision making). This type of analysis
is a way of looking at a system from the point of view of users and
determining the thought processes that users follow to perform specific
tasks. The information gained from such an analysis can help designers
Systems Engineering to Improve Traumatic Brain Injury Care
and users to focus on system features that users find hard to learn and to
identify the points at which cognitive challenges might arise.
Decision-tree analysis is a tool that enumerates all possible outcomes
of different choices in a given situation and computes the expected
result(s) of each. The purpose is to help a decision maker choose among
decision options and to identify the strategy most likely to reach a
particular goal. A decision tree takes the form of a graph with tree-like
branches that shows all of the possible consequences of each decision
option—including the probability, resource cost, and utility. In short,
decision trees are visual and analytical decision-support tools for calcu-
lating the expected values (or utility) of competing alternatives.
Bayesian decision trees are a more advanced method that incor-
porates Bayesian networks into decision trees in order to account for
uncertainties in the values and outcomes of decisions. Decision trees are
also closely related to influence diagrams (see below).
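The expected-value calculation at the heart of decision-tree analysis can be sketched in a few lines; the two options, their outcome probabilities, and the utilities below are invented for illustration.

```python
# Decision-tree sketch: each option branches into chance outcomes given
# as (probability, utility) pairs. All numbers are illustrative.
options = {
    "treat immediately": [(0.7, 90), (0.3, 40)],   # good vs. poor outcome
    "monitor and wait":  [(0.5, 100), (0.5, 30)],
}

def expected_value(branches):
    # Probability-weighted sum of utilities over one option's outcomes.
    return sum(p * u for p, u in branches)

evs = {name: expected_value(b) for name, b in options.items()}
best = max(evs, key=evs.get)
print(best, evs[best])  # -> treat immediately 75.0
```

A Bayesian decision tree would replace the fixed branch probabilities with values inferred from a network of related evidence.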
Discrete-event models: See definition of simulation models below.
Fuzzy logic models are predictive or control models developed from
fuzzy set theory that deal with reasoning and relationships that are ap-
proximate, approximately known, or estimated (rather than precise).
Similar conceptually to probability theory (but different mathemati-
cally), fuzzy set theory is based on a graded valuation of the degree of
“membership” of elements in a set (e.g., as a patient’s screening
test score increases, his or her degree of membership for a particular
level of traumatic brain injury [TBI] severity rises or falls). The degree
to which each element belongs to the set is described by a membership
function valued on the [0, 1] interval. In fuzzy logic, the degree of truth
of a statement ranges from 0 to 1 and is not constrained to the two truth
values of classical logic (e.g., does or does not have mild TBI [mTBI]).
For example, fuzzy logic
predictive models can assimilate “degrees of truth,” or membership
values, based on the results of screening tests to determine the most
likely “state” (or status) of a patient.
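A membership function of the kind described above can be sketched as a simple triangular function; the score breakpoints below are invented for illustration.

```python
# Triangular fuzzy membership function for a hypothetical "mild TBI"
# category over a screening-test score; breakpoints are illustrative.
def mild_tbi_membership(score):
    # Zero outside scores 20-60, rising to full membership (1.0) at 40.
    if score <= 20 or score >= 60:
        return 0.0
    if score <= 40:
        return (score - 20) / 20
    return (60 - score) / 20

for s in (10, 30, 40, 50):
    print(s, mild_tbi_membership(s))  # memberships 0.0, 0.5, 1.0, 0.5
```

A fuzzy model combines such membership values across several inputs rather than forcing each input into a yes/no category first.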
Influence diagrams (also called decision networks) are compact graphi-
cal and mathematical representations of a decision situation (in a sense,
they are generalizations of Bayesian networks) in probabilistic inference
problems and decision-making problems. Influence diagrams are a
tool for identifying and displaying the essential elements of a decision
problem (e.g., decisions, uncertainties, and objectives) and how they
influence each other.
Judgment models are a qualitative approach to making estimates based
on consultation with one or more experts who have experience in the
problem domain. For example, an expert-consensus mechanism, such
as the Delphi technique, might be used to estimate the likelihood that
a patient with a certain combination of presenting conditions does in
fact have mTBI.
Markov chain models are stochastic processes in which a system (e.g.,
a patient or facility) transitions among a series of states (e.g., a patient
being healthy, mildly sick, extremely sick, or dead; or a facility being
empty, at half capacity, or full) and the Markovian (or “memoryless”)
property exists. The memoryless property means that the conditional
probability of the system being in any given state in the future depends
only upon its present state and is independent of any past states. (More
advanced types of Markov chains can include several past states in the
transition probabilities, but are memoryless beyond that amount of
history.) Future states are reached by transitioning from one state to
another with certain probabilities, rather than deterministically or with
certainty. For example, given today’s weather (state i at time t), tomor-
row (time t + 1) it will be raining, cloudy, or clear (state j = 1, 2, 3) with
defined transition probabilities pi,j. At each step, the system may change
from its current state to another state or remain in the same state, ac-
cording to these transition probabilities. Changes between states are
called transitions, and the probabilities associated with these changes
are called transition probabilities.
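The weather example can be sketched directly: a transition matrix and a repeated memoryless update of the state distribution. The transition probabilities below are invented for illustration.

```python
# Markov-chain sketch of the weather example. P[i][j] is the probability
# of state j tomorrow given state i today; the values are illustrative.
states = ["raining", "cloudy", "clear"]
P = [
    [0.5, 0.3, 0.2],   # from raining
    [0.3, 0.4, 0.3],   # from cloudy
    [0.1, 0.3, 0.6],   # from clear
]

def step(dist):
    # Memoryless update: tomorrow's distribution depends only on today's.
    return [sum(dist[i] * P[i][j] for i in range(3)) for j in range(3)]

dist = [1.0, 0.0, 0.0]         # it is raining today
for _ in range(2):             # look two days ahead
    dist = step(dist)
print([round(p, 2) for p in dist])  # -> [0.36, 0.33, 0.31]
```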
Markov decision processes (MDPs) and Markov decision theory are
extensions of Markov chains that provide a mathematical framework for
modeling sequential decision making in situations in which outcomes
are partly random (depending on actions or decisions by the decision
maker). These models are often used to determine the optimal schedule
of decisions, taking into account probabilistic events, demands, out-
comes, and resource constraints. An MDP is a discrete-time stochastic
control process characterized by a set of states (e.g., a patient’s condition
or the number of patients in a facility) and random (stochastic) future
events. In each state, at discrete points in time, a decision maker can
choose among several “control” actions (e.g., level of treatment, capacity
expansion). For a current state, s, and an action, a, a state transition
function, Pa(s, s′), gives the probability of moving to each possible
next state, s′. The decision maker often earns a reward or penalty for
each state that actually occurs. The state transitions of an MDP have the
memoryless property described above (given that the state of the MDP
at time t is known, transition probabilities to a new state at time t + 1
are independent of all previous states or actions). Note that the differ-
ence between Markov chains and MDPs is that MDPs include actions
(allowing choice) and rewards (motivation).
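Value iteration, a standard method for solving an MDP, can be sketched on a two-state example (state 0 "sick," state 1 "healthy"). All transition probabilities and rewards below are invented for illustration.

```python
# Value-iteration sketch for a tiny MDP. P[s][a] lists (next_state,
# probability) pairs; R[s][a] is the immediate reward. States: 0 = sick,
# 1 = healthy. All probabilities and rewards are illustrative.
P = {
    0: {"treat": [(1, 0.9), (0, 0.1)], "wait": [(1, 0.3), (0, 0.7)]},
    1: {"treat": [(1, 1.0)],           "wait": [(1, 0.9), (0, 0.1)]},
}
R = {0: {"treat": -2, "wait": 0}, 1: {"treat": -1, "wait": 2}}
gamma = 0.9                    # discount factor for future rewards

V = {0: 0.0, 1: 0.0}
for _ in range(200):           # iterate the Bellman update to convergence
    V = {s: max(R[s][a] + gamma * sum(p * V[s2] for s2, p in P[s][a])
                for a in P[s])
         for s in P}
print(round(V[0], 2), round(V[1], 2))  # -> 12.4 16.4
```

At the optimum, treating is worth it in the sick state (value 12.4 versus 12.24 for waiting) despite its immediate cost; this is exactly the kind of trade-off MDP models are built to expose.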
Mixed-integer programming (MIP) models are mathematical opti-
mization models that minimize or maximize a specified objective func-
tion, subject to a set of constraints (either linear or nonlinear); in MIP
models, some of the decision variables are integers (e.g., the optimal
number of facilities or medical personnel to locate in a given region).
MIPs are heavily used in practice for solving problems in transportation
and manufacturing, but they are also useful for some aspects of TBI
care. For example, in a resource-location-allocation study, an MIP model
was used to locate TBI treatment units in the Department of Veterans
Affairs. The objective was to simultaneously determine optimal facility
locations and the optimal assignment of patients to those facilities.
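For a problem this small, the MIP logic can be sketched by brute force over the binary open/closed decisions; a real model would hand the same objective and constraints to an integer-programming solver. The sites, regions, and costs below are invented for illustration.

```python
# Brute-force sketch of a tiny facility-location problem: binary
# decisions pick which candidate sites to open; each region is then
# assigned to its cheapest open site. All costs are illustrative.
from itertools import combinations

open_cost = [50, 60, 40]       # cost of opening each candidate site
travel = [                     # travel[r][s]: cost of region r using site s
    [10, 30, 25],
    [25, 10, 30],
    [30, 25, 10],
    [20, 20, 20],
]

best = None
for k in range(1, len(open_cost) + 1):
    for sites in combinations(range(len(open_cost)), k):
        # Objective: total opening cost plus each region's best assignment.
        cost = sum(open_cost[s] for s in sites)
        cost += sum(min(travel[r][s] for s in sites)
                    for r in range(len(travel)))
        if best is None or cost < best[0]:
            best = (cost, sites)
print(best)  # -> (125, (2,))
```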
Monte Carlo models: See definition of simulation models below.
Partially observable Markov decision processes (POMDPs) are a
variation of MDPs in which the current true state may not be known
with certainty (e.g., a patient’s true TBI status); instead, decisions (e.g.,
treatment, removal from field) are made based on the decision maker’s
current belief about the true state.
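What replaces the known state in a POMDP is a belief, a probability distribution over the possible true states, updated after each observation. A minimal sketch with invented transition and observation probabilities:

```python
# POMDP belief-update sketch. The true state (0 = healthy, 1 = TBI) is
# hidden; the decision maker tracks a belief over both states.
# All probabilities are illustrative.
T = [[0.95, 0.05],     # T[s][s2]: P(next state s2 | current state s)
     [0.00, 1.00]]
O = [[0.8, 0.2],       # O[s][z]: P(observation z | state s);
     [0.3, 0.7]]       # z = 0 negative screen, z = 1 positive screen

def update(belief, z):
    # Predict through the transition model, then correct with the
    # observation likelihood and renormalize (Bayes' rule).
    predicted = [sum(belief[s] * T[s][s2] for s in range(2))
                 for s2 in range(2)]
    unnorm = [predicted[s2] * O[s2][z] for s2 in range(2)]
    total = sum(unnorm)
    return [u / total for u in unnorm]

belief = [0.9, 0.1]            # prior: probably healthy
belief = update(belief, 1)     # a positive screen arrives
print([round(b, 3) for b in belief])  # -> [0.628, 0.372]
```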
Sensitivity analysis is a general term for studying the impact on results
of uncertainties in a model’s logic or data, such as how different values
of an independent variable or different processing steps will impact
results. Typically, sensitivity analyses are conducted on uncertain values
to explore how much the impact of a decision or policy will (or will
not) change under different assumptions. Sensitivity analyses can be
conducted on an ad hoc basis or more scientifically, such as by using the
theory of experimental design. Sensitivity analyses can also be helpful for
identifying the model assumptions that have the least (or most) impact
on the results, which can be helpful when there are uncertainties in data.
In particular, sensitivity analyses can help identify those data elements
for which better estimates would be the most helpful and those that do
not have to be specified very accurately in order to make a good decision.
In this way, one can make better decisions about how to invest research
time and money in developing, collecting, and refining data for a model.
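A one-way sensitivity analysis can be sketched by sweeping a single uncertain input and watching the output of interest; here the input is disease prevalence and the output is the positive predictive value of a screen. The test characteristics below are invented for illustration.

```python
# One-way sensitivity analysis: sweep one uncertain input (prevalence)
# and observe the response of the output of interest (the screen's
# positive predictive value). Test characteristics are illustrative.
sens, spec = 0.90, 0.85

def ppv(prevalence):
    true_pos = sens * prevalence
    false_pos = (1 - spec) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

for prev in (0.01, 0.05, 0.10, 0.20):
    print(f"prevalence={prev:.2f}  PPV={ppv(prev):.3f}")
```

The sweep shows the PPV is highly sensitive to prevalence, which marks prevalence as an input worth estimating carefully before trusting the model’s recommendation.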
Signal-detection theory (SDT) is used to analyze and optimize situ-
ations in which a decision is made that classifies ambiguous informa-
tion (e.g., test results) into one of two categories (e.g., patient is sick or
not) by trying to distinguish whether the observed result was created
by the category of interest (called the signal in the SDT framework) or
by random chance (called the noise). A common medical example is a
blood test for a disease for which positive patients present with a range
of numeric values and negative patients with a different range of values,
but the ranges overlap—thus complicating the task of deciding whether
a high result is a true “signal” or noise. SDT provides a mathematical
framework for assessing such decisions—for quantifying the test’s ability
to discriminate and for determining the optimal threshold for calling a
patient positive or negative.
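The overlapping-distributions setup can be sketched with two normal distributions; with equal variances and equal priors, the optimal threshold falls at the midpoint of the two means. All parameters below are invented for illustration.

```python
# Signal-detection sketch: healthy ("noise") and sick ("signal") test
# values as overlapping normal distributions. Parameters are illustrative.
from statistics import NormalDist

noise = NormalDist(mu=100, sigma=15)    # healthy patients
signal = NormalDist(mu=130, sigma=15)   # sick patients

# d-prime: separation of the means in standard-deviation units.
d_prime = (signal.mean - noise.mean) / noise.stdev

# With equal variances and equal priors, the optimal decision threshold
# is the midpoint between the two means.
threshold = (signal.mean + noise.mean) / 2

hit_rate = 1 - signal.cdf(threshold)    # P(call sick | truly sick)
false_alarm = 1 - noise.cdf(threshold)  # P(call sick | truly healthy)
print(round(d_prime, 2), round(hit_rate, 3), round(false_alarm, 3))
```

Moving the threshold trades hits against false alarms; SDT makes that trade-off explicit so it can be set to match the relative costs of the two errors.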
Simulation models are computer models that emulate the logic of a
process and use randomly generated data whenever a chance or ran-
dom event (e.g., develops TBI, passes test, time durations) occurs in
the model. Simulation models are very useful for studying the range of
outcomes and most likely results of possible alternative process designs
and courses of action. Such models are often used to analyze “what if”
situations (e.g., what if we did something this way instead of that), or
they can be used as part of an optimization computer program to find
an overall optimal solution. Because of the flexibility and utility of com-
puter simulations, they are widely used in operations research. Several
types of simulation models might be helpful in modeling TBI:
• Discrete-event models are used to model the sequential/ran-
dom flow of “things” (e.g., patients, personnel) through pro-
cesses (e.g., the military field or the health care process), typically
with the patient requiring various services that require various
resources for various amounts of time. A discrete-event model is
often used to assess system design and optimize flow, capacity,
resource requirements, policies, and so on.
• Monte Carlo models are often used to analyze statistical prob-
lems that are otherwise “difficult” to solve. An example might
be a series of integrated screening tests in which decisions are
made after each screening (e.g., to conduct the next test, remove
the individual from the field, or redeploy the individual); the
analyst might be interested in determining the overall cost and
accuracy (sensitivity, specificity) of a given process or protocol.
For example, Monte Carlo models have been used to analyze and
optimize cancer screening decision processes.
• Agent-based models are based on the idea of “agents” that
represent each autonomous or semi-autonomous decision maker
who chooses his or her next action based on the current status of
the surrounding environment. This type of model is often used
to model wartime theaters and other engagement activities, but
for TBI it might be used to model medical decision making.
• Cellular automata (CA) models are used to model geographic
movements in situations where the probability that a number of
“things” (e.g., soldiers, patients) will move from their current grid
locations to adjoining cells is dictated by the activities and state
of affairs around them. CA models are widely used to model the
spread of disease, species migration, forest fires, and other such
events. For TBI, a CA model might be used to model the general
geographic dispersal and flow of patients through different medi-
cal states or physical locations.
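A Monte Carlo model of the integrated-screening example might look like the following sketch, which estimates the overall sensitivity and cost per person of a two-stage protocol. All prevalences, accuracies, and costs are invented for illustration.

```python
# Monte Carlo sketch of a two-stage screening protocol: everyone gets a
# cheap first test; only first-stage positives get a costly confirmatory
# test. All prevalences, accuracies, and costs are illustrative.
import random

random.seed(1)                 # reproducible run
PREVALENCE = 0.10
N = 100_000

detected = sick = total_cost = 0
for _ in range(N):
    is_sick = random.random() < PREVALENCE
    sick += is_sick
    total_cost += 10           # stage-1 test cost
    stage1_pos = random.random() < (0.90 if is_sick else 0.25)
    if stage1_pos:
        total_cost += 100      # stage-2 confirmatory test cost
        stage2_pos = random.random() < (0.95 if is_sick else 0.05)
        detected += is_sick and stage2_pos

print(f"overall sensitivity ~ {detected / sick:.3f}, "
      f"cost per person ~ {total_cost / N:.2f}")
```

Analytically the overall sensitivity here is 0.90 × 0.95 = 0.855; the value of simulation grows as the protocol gains branches that are tedious to work out in closed form.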
Value-stream analysis (VSA) is a tool used to evaluate all of the specific
actions involved in a process, determine the relative value added of each
action, and identify waste. VSA is often used to eliminate wasteful steps
and create efficient processes comprising only value-added activities
that maximize performance. With this type of analysis, one can separate
activities that contribute to value creation from activities that create
waste and then identify opportunities for improvement.