
4
Evaluation of DHS Risk Analysis

In evaluating the quality of the Department of Homeland Security’s (DHS’s) approach to risk analysis—element (a) of this study’s Statement of Task—we must differentiate between DHS’s overall conceptualization of the challenge and its many actual implementations. Within the former category, the department has set up processes that encourage disciplined discussions of threats, vulnerabilities, and consequences, and it has established the beginnings of a risk-aware culture. For example, the interim Integrated Risk Management Framework (IRMF), including the risk lexicon and analytical guidelines (primers) being developed to flesh it out, represents a reasonable first step. The National Infrastructure Protection Plan (NIPP) has appropriately stipulated the following four “core criteria” for risk assessments: that they be documented, reproducible, defensible, and complete (DHS-IP, 2009, p. 34). Similarly, the Office of Risk Management and Analysis (RMA) has stated that DHS’s integrated risk management should be flexible, interoperable, and transparent, and should be based on sound analysis.

Some of the tools within DHS’s risk analysis arsenal are adequate in principle, if applied well; thus, in response to element (b) of the Statement of Task, the committee concludes that DHS has some of the basic capabilities in risk analysis for some portions of its mission. The committee also concludes that Risk = A Function of Threat, Vulnerability, and Consequences (Risk = f(T, V, C)) is a philosophically suitable framework for breaking risk into its component elements. Such a conceptual approach to analyzing risks from natural and man-made hazards is not new, and the special case of Risk = T × V × C has been in various stages of development and refinement for many years. However, the committee concludes that Risk = T × V × C is not an adequate calculation tool for estimating risk in the terrorism domain, for which independence of threats, vulnerabilities, and consequences does not typically hold and feedbacks exist. In principle, it is possible to estimate conditional probability distributions for T, V, and C that capture the interdependencies and can still be multiplied to estimate risk, but the feedbacks—the way choices that affect one factor influence the others—cannot be represented so simply.
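One way to make the multiplicative special case precise with explicit conditioning is sketched below (the scenario notation A_s, D_s, L_s is ours, not the report’s or DHS’s):

\[
R(s) \;=\; \underbrace{P(A_s)}_{T:\ \text{attack occurs}} \;\cdot\; \underbrace{P(D_s \mid A_s)}_{V:\ \text{defenses fail}} \;\cdot\; \underbrace{\mathbb{E}\!\left[L_s \mid A_s, D_s\right]}_{C:\ \text{expected loss}}
\]

Because V and C are conditioned on the preceding factors, the product remains valid even when T, V, and C are interdependent; what it cannot capture is feedback, such as an adversary shifting P(A_s) toward targets where visible defenses have lowered V.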

Based on the committee’s review of the six methods and additional presentations made by DHS to the committee, there are numerous shortcomings in the implementation of the Risk = f(T, V, C) framework. In its interactions the committee found that many of DHS’s risk analysis models and processes are weak—for example, because of undue complexity that undercuts their transparency and, hence, their usefulness to risk managers and their amenability to validation—and are not on a trajectory to improve. The core principles for risk assessment cited above have not been achieved in most cases, especially with regard to the goals that they be documented, reproducible, transparent, and defensible.

This chapter begins with the committee’s evaluation of the quality of risk analysis in the six illustrative models and methods that it investigated in depth. Then it discusses some general approaches for improving those capabilities.

DETAILED EVALUATION OF THE SIX ILLUSTRATIVE RISK MODELS EXAMINED IN THIS STUDY

Natural Hazards Analysis

There is a solid foundation of data, models, and scholarship to underpin the Federal Emergency Management Agency’s (FEMA’s) risk analyses for earthquakes, flooding, and hurricanes, which use the Risk = T × V × C model. This paradigm has been applied to natural hazards, especially flooding, for more than a century. Perhaps the earliest use of the Risk = T × V × C model—often referred to as “probabilistic risk assessment” in other fields—dates to its use in forecasting flood risks on the Thames in the nineteenth century. In present practice, FEMA’s freely available software application HAZUS™ provides a widely used analytical model for combining threat information on natural hazards (earthquakes, flooding, and hurricanes) with consequences to existing inventories of building stocks and infrastructures as collected in the federal census and other databases (Schneider and Schauer, 2006).

For natural hazards, the term “threat” is represented by the annual exceedance probability distribution of extreme events associated with specific physical processes, such as earthquakes, volcanoes, or floods. The assessment of such threats is often conducted by applying statistical modeling techniques to the record of events that have occurred at the site of interest or at similar sites. Typically a frequentist approach is employed, in which the direct statistical experience of occurrences at the site is used to estimate event frequency. In many cases, evidence of extreme natural events that precede the period of systematic monitoring can be used to greatly extend the period of historical observation. Sometimes regional information from adjacent or remote sites can be used to help define the annual exceedance probability (AEP) of events throughout a region. For example, in estimating flood frequencies for a particular river, the historical period of recorded flows may be only 50 to 100 years. Clearly, that record cannot provide the foundation for statistical estimates of 1,000-year events except with very large uncertainty, nor can it represent with certainty probabilities that might be affected by overarching systemic change, such as climate change. To supplement the instrumental record, the frequency of paleoflows inferred from regional geomorphologic evidence is increasingly being used. Such evidence is often incorporated into the statistical record using Bayesian methods, in which prior, nonstatistical information can be used to enhance the statistical estimates. Other prior information may arise from physical modeling, expert opinion, or similar nonhistorical data.
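As a concrete illustration of AEP estimation, the following minimal sketch fits a Gumbel (extreme value type I) distribution to an annual peak-flow record. It is not a FEMA or USGS tool; the synthetic record and the choice of the Gumbel model are assumptions made for illustration.

```python
# A minimal sketch: estimating annual exceedance probabilities (AEPs)
# from an annual-peak-flow record, assuming a Gumbel model.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
# Hypothetical 60-year record of annual peak discharges (m^3/s).
record = rng.gumbel(loc=800.0, scale=250.0, size=60)

# Fit the Gumbel (extreme value type I) distribution to the record.
loc, scale = stats.gumbel_r.fit(record)

# Discharge with a 1% annual exceedance probability (the "100-year flood").
q100 = stats.gumbel_r.ppf(1 - 0.01, loc=loc, scale=scale)

# AEP of a given discharge, e.g., a hypothetical levee design flow.
aep_2000 = stats.gumbel_r.sf(2000.0, loc=loc, scale=scale)
print(f"100-year flood estimate: {q100:,.0f} m^3/s")
print(f"AEP of 2,000 m^3/s: {aep_2000:.4f}")
```

With only 60 years of record, the fitted parameters themselves carry large statistical error for rare events, which is exactly the epistemic uncertainty discussed later in this section.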

The vulnerability term in the Risk = T × V × C model is the conditional probability of protective systems or infrastructure failing to contain a particular hazardous event. For example, the hurricane protection system (HPS) of New Orleans, consisting of levees, flood walls, gates, and pumping stations, was constructed to protect the city from storm surges caused by hurricanes of a chosen severity (the design storm). During Hurricane Katrina, the HPS failed to protect the city. In some areas the storm surge overtopped the levee system (i.e., the surge was higher than that for which the system was designed), and in some areas the system collapsed at lower water levels than those for which the HPS was designed because the foundation soils were weaker than anticipated. The chance that the protective system fails under the loads imposed by the threat is the vulnerability.

For most natural hazard risk assessments such as that performed for New Orleans (Interagency Performance Task Force, 2009), the vulnerability assessment is based on engineering or other physics-based modeling. For some problems, such as storm surge protection, these vulnerability studies can be complicated and expensive, involving multiple experts, high-performance computer modeling, and detailed statistical analysis. For other problems, such as riverine flooding in the absence of structural protection, the vulnerability assessment requires little more than ascertaining whether flood waters rise to the level of a building.

Assessing the consequences of extreme natural hazard events has typically focused on loss of life, injuries, and resulting economic losses. Such assessment provides valuable knowledge about a number of the principal effects of natural disasters, and the assessment of natural disaster risks is quite advanced on these dimensions of consequences. These consequences can be estimated based on an understanding of the physical phenomena, such as ground acceleration in an earthquake or the extent and depth of inundation associated with a flood. Statistical models based on the historical record of consequences have become commonly available for many hazards. Such models, however, especially for loss of life, usually suffer from limited historical data. For example, according to information from the U.S. Geological Survey (USGS; http://ks.water.usgs.gov/pubs/fact-sheets/fs.024-00.html), only about 20 riverine floods in the United States since 1900 have involved 10 or more deaths. This complicates the validation of models predicting the number of fatalities in a future flood. As a result, increasing effort is being invested in developing predictive models based on geospatial databases (e.g., census or real property data) and simulation or agent-based methods. These techniques are maturing and appear capable of representing at least the economic and loss-of-life consequences of natural disasters. The National Infrastructure Simulation and Analysis Center (NISAC) is advancing the state of the art of natural hazard consequence analysis by studying the resiliency and interrelated failure modes of critical infrastructure.
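A notional sketch of the kind of calculation involved is shown below: inundation depth at each structure is converted to a fractional loss through a depth-damage curve. The curve, elevations, and property values are invented for illustration and are not drawn from HAZUS or any DHS model.

```python
# Notional flood consequence estimation: depth above the first floor
# is mapped to a fraction of structure value lost via a depth-damage curve.
import numpy as np

# Depth (m above first floor) -> fraction of structure value lost (invented).
depths = np.array([0.0, 0.3, 1.0, 2.0, 3.0])
damage_frac = np.array([0.0, 0.15, 0.40, 0.65, 0.85])

def direct_loss(flood_stage_m, floor_elev_m, value_usd):
    """Interpolated direct economic loss for one structure."""
    depth = max(flood_stage_m - floor_elev_m, 0.0)
    return float(np.interp(depth, depths, damage_frac)) * value_usd

# Tiny hypothetical building inventory: (first-floor elevation, value).
inventory = [(10.2, 250_000), (10.8, 300_000), (11.5, 180_000)]
stage = 11.4  # modeled flood stage in meters
total = sum(direct_loss(stage, elev, val) for elev, val in inventory)
print(f"Estimated direct loss at stage {stage} m: ${total:,.0f}")
```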

Still, the full range of consequences of natural hazard events includes effects that lie outside current evaluations, such as loss of potable water, housing, and other basic services; diverse effects on communities; impacts on social trust; psychological effects of disasters; distributional inequities; and differential social vulnerability.1 In addition, social and behavioral issues enter into response systems (e.g., community preparedness and human behavior from warnings during emergencies). Indeed, the second national assessment of natural hazards (Mileti, 1999) listed as one of its major recommendations the need to “take a broader, more generous view of social forces and their role in hazards and disasters.” As risk assessment of natural hazards moves forward over the longer term, incorporating social dimensions into risk assessment and risk management will have to be a major priority in building a more complete and robust base of knowledge to inform decisions.

While models are constantly being developed and improved, risk analysis associated with natural hazards is a mature activity in which analytical techniques are subject to adequate quality assurance and quality control, and verification and validation procedures are commonly used. Quality control practices are actions taken by modelers or contractors to eliminate mistakes and errors. Quality assurance is the process used by an agency or user to ensure that good quality control procedures have in fact been employed. Verification means that the mathematical representations or software applications used to model risk actually do the calculations and return the results that are intended. Validation means that the risk models produce results consistent with what is actually observed in the world. To achieve this last goal, studies are frequently conducted retrospectively to compare predictions with actual observed outcomes.

A second indicator that risk analyses for natural hazards are fairly reliable is that the limitations of the constituent models are well known and adequately documented. For example, in seismic risk assessment the standard current model is attributed to Cornell (1968). This model identifies discrete seismic source zones, assigns seismicity rates (events per time) and intensities (probability distributions of the size of the events) to each zone, simulates the occurrence of seismic events, and for each simulated event, mathematically attenuates peak ground accelerations to the site in question according to one of a number of attenuation models. Each component of Cornell’s probabilistic seismic hazard model is based on either statistics or physics. The assumptions of each component are identified clearly, the parameters are based on historical data or well-documented expert elicitations using standard protocols, and the major limitations (e.g., assuming log-linearity of a relationship when data suggest some nonlinearity) have been identified and studied.
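In the spirit of the Cornell approach, a compact Monte Carlo sketch is given below. All source-zone parameters and the attenuation relation are invented for illustration; a real probabilistic seismic hazard analysis would use published seismicity catalogs and attenuation models.

```python
# Simplified Monte Carlo sketch of Cornell-style probabilistic seismic
# hazard analysis; every parameter below is invented for illustration.
import numpy as np

rng = np.random.default_rng(0)
years, rate = 100_000, 0.05          # simulated years; events/yr in the zone
m_min, beta = 5.0, 2.0               # truncated G-R: M ~ m_min + Exp(beta)
site_dist_km = (20.0, 80.0)          # events occur 20-80 km from the site

n_events = rng.poisson(rate * years)
mags = m_min + rng.exponential(1.0 / beta, n_events)
dists = rng.uniform(*site_dist_km, n_events)

# Toy attenuation: ln(PGA, g) = -1.0 + 0.9*M - 1.3*ln(R) + eps, eps ~ N(0, 0.6)
ln_pga = -1.0 + 0.9 * mags - 1.3 * np.log(dists) + rng.normal(0, 0.6, n_events)
pga = np.exp(ln_pga)

# Annual frequency of exceeding a ground-motion level of 0.2 g at the site.
aep_02g = np.sum(pga > 0.2) / years
print(f"Estimated annual exceedance frequency of 0.2 g: {aep_02g:.2e}")
```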

It is important to note, however, that there are aspects of natural hazard disasters that are less easily quantifiable. With regard to the “vulnerability” component, a well-established base of empirical research reveals that specific population segments are more likely to experience loss of life, threatened livelihoods, and mental distress in disasters. Major factors that influence this social vulnerability include lack of access to resources; limited access to political influence and representation; social capital, including social networks and connections; certain beliefs and customs; frail health and physical limitations; the quality and age of building stock; and the type and density of infrastructure and lifelines (NRC, 2006). Natural disasters can produce a range of social and economic consequences. For example, major floods disrupt transportation networks and public health services, and they interfere with business activities, creating indirect economic impacts. The eastern Canadian ice storm of 1998 and the resulting power blackout, while having moderate direct economic impact, led to catastrophic indirect economic and social impacts: transportation, electricity, and water utilities were adversely affected or shut down for weeks (Mileti, 1999). An indirect impact seldom recognized in planning is that among small businesses shut down by floods, fires, earthquakes, tornadoes, or major storms, a large fraction never reopen.2

1 See Heinz Center (2000) for examples of recent research.

An important consideration in judging the reliability of risk analysis procedures and models is that the attendant uncertainties in their results be identifiable and quantifiable. For example, in flood risk assessments the analytical process may be divided into four steps, following the Risk = T × V × C model (Table 4-1). The discharge (water volume per time) of the river is the threat. The frequency of various river discharges is estimated from historical instrumental records. The variability so estimated reflects the inherent randomness of natural river flows (i.e., the randomness of nature) and is called aleatory uncertainty. Because the historical record is limited to perhaps several decades, and is usually shorter than a century, there is statistical error in the estimates of aleatory frequencies; this error, together with uncertainty about the model itself, is uncertainty due to limited knowledge and data and is called epistemic uncertainty. Each of the terms in the TVC model has both aleatory and epistemic uncertainties. In making risk assessments, good practice is either to state the final aleatory frequency with confidence bounds representing the epistemic uncertainty (typical of the relative frequency approach) or to integrate aleatory and epistemic uncertainty together into a single probability distribution (typical of the Bayesian approach).
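In the Bayesian variant mentioned above, the two kinds of uncertainty are combined by integrating the aleatory exceedance probability over the epistemic posterior on the model parameters. This is a standard formulation, not one specific to DHS:

\[
P(X > x \mid \text{data}) \;=\; \int_{\Theta} \underbrace{P(X > x \mid \theta)}_{\text{aleatory, given parameters}} \; \underbrace{\pi(\theta \mid \text{data})}_{\text{epistemic posterior}} \; d\theta
\]

The relative frequency approach instead reports the inner term for a point estimate of \(\theta\), with confidence bounds that display the spread of \(\pi(\theta \mid \text{data})\) explicitly.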

An important characteristic of mature risk assessment methods is that the (epistemic) uncertainty is identifiable and quantifiable. This is the case for most natural hazards risk assessments. To say, however, that the uncertainties are identifiable and quantifiable is not to say that they are necessarily small. A range of uncertainty spanning a factor of 3 to 10, or even more, is not uncommon in mature risk assessments, not only of natural hazards but of industrial hazards as well (e.g., Bedford and Cooke, 2001). These uncertainty bounds are not a result of the risk assessment itself: the uncertainties reside in our historical and physical understandings of the natural and social processes involved. They are present in deterministic design studies as well as in risk assessments, although in the former they are masked by factors of safety and other traditional means of risk reduction.

2 A report from the Insurance Council of Australia, Non-insurance and Under-insurance Survey 2002, estimates that 70 percent of small businesses that are underinsured or uninsured do not survive a major disaster such as a storm or fire.

TABLE 4-1 TVC Components of Natural Hazard Risk Methodologies for Flood Risk

T — Flood frequency
  Prediction: River discharge (flux)
  Frequency (aleatory): Annual exceedance probability
  Uncertainty (epistemic), routinely addressed: Statistical parameters of regression equation for stream flow
  Deeper uncertainties: Nonstationarity introduced by watershed development and climate change

V — River hydraulics
  Prediction: Stage (height) of river for given discharge
  Frequency (aleatory): Probability distribution for given discharge
  Uncertainty (epistemic), routinely addressed: Model parameters of stage-discharge relationship
  Deeper uncertainties: Channel realignment during extreme floods

V — Levee performance
  Prediction: Does levee withstand river stage?
  Frequency (aleatory): Probability that levee withstands water load of given stage
  Uncertainty (epistemic), routinely addressed: Geotechnical uncertainties about foundation soils
  Deeper uncertainties: Future levee maintenance uncertain

C — Consequences
  Prediction: Direct economic loss to structures in the floodplain
  Frequency (aleatory): Probability distribution of property losses for flood of given extent
  Uncertainty (epistemic), routinely addressed: Statistical imprecision in depth-damage relations from historical data
  Deeper uncertainties: Enhanced protection induces floodplain development and increased damages


Conclusion: DHS’s risk analysis models for natural hazards are near the state of the art. These models—which are applied mostly to earthquake, flood, and hurricane hazards—are based on extensive data, have been validated empirically, and appear well suited to near-term decision needs.


Recommendation: DHS’s current natural hazard risk analysis models, while adequate for near-term decisions, should evolve to support longer-term risk management and policy decisions. Improvements should be made to take into account the consequences of social disruption caused by natural hazards; address long-term systemic uncertainties, such as those arising from effects of climate change; incorporate diverse perceptions of risk impacts; support decision making at local and regional levels; and address the effects of cascading impacts across infrastructure sectors.

Analyses of Critical Infrastructure and Key Resources (CIKR)

DHS has processes in place for eliciting threat and vulnerability information, for developing consequence assessments, and for integrating threat, vulnerability, and consequence information into risk briefings that support decision making. Since CIKR analyses are divided into three component analyses (threat, vulnerability, and consequence), the committee reviewed and evaluated each of these components.

Threat Analyses

Based on the committee’s discussions and briefings, DHS does strive to get the best and most relevant terrorism experts to assess threats. However, regular, consistent access to terrorism experts is very difficult. Due to competing priorities at other agencies, participation is in reality a function of who is available. This turnover of expertise can help prevent bias, but it does put a premium on ensuring that the training is adequate. Rotation of subject matter experts (SMEs) also puts a premium on documenting, testing, and validating assumptions, as discussed further below. Importantly, a disciplined and structured process for conducting the threat analysis is needed, but the current process as described to the committee was ad hoc and based on experts’ availability.

The Homeland Infrastructure Threat and Risk Analysis Center (HITRAC) has made efforts to inject imagination into risk assessments through processes such as red-team exercises. As a next step, there needs to be a systematic and defensible process by which ideas generated by red teams and through alternative analysis sessions are incorporated into the appropriate models and the development of new models.

Another concern is how the assumptions used by the SMEs are made visible and can be calibrated over time. The assumptions about intent and capabilities that the SMEs make when assigning probabilities need to be documented. Attempts must be made to test the validity of these assumptions and to track and report on their reliability or correctness over time. Equally important is bringing to light and documenting dissenting views of experts, explaining how they differ (in terms of gaps in information or in assumptions), and applying those results to inform future expert elicitation training and elicitation processes as well as to modify and update the attack scenarios.

Many DHS investments, such as those made through the Homeland Security Grant Programs (HSGPs) in FEMA and CIKR protection programs, are meant to reduce vulnerabilities or increase resilience for the long term. DHS recognizes that the use of generic attack scenarios (threats) based on today’s knowledge can leave risk analyses vulnerable to the unanticipated “never-before-seen” attack scenario (the black swan) or to being behind the curve in emerging terrorist tactics and techniques.3 (For that reason, FEMA reduced the weighting of threat from 0.20 to 0.10 for some of the HSGPs so that the resulting risk prioritizations are less dependent on the assumptions about threat.) Attacks that differ from those DHS is currently defending against might have greater consequences or a higher chance of success. However, it is difficult to design a model to account for new tactics, techniques, or weapons. Asking experts to “judge” what they have not yet observed (or perhaps even conceived, until the question is posed) is fraught with subjectivity. Also, introducing too many speculative threats adds to the assumptions and increases the uncertainties in the models; lengthy lists of speculative attack scenarios could be generated, but the inherent uncertainty can be disruptive, rather than helpful, to planning and decision making. There is a large, unsolved question of how to make a model that can capture emerging or new threats, and how to develop the best investment decisions for threats that might not appear for years.

3 The Black Swan Theory refers to high-impact, hard-to-predict, and rare events beyond the realm of normal expectations. Unlike the philosophical “black swan problem,” the “Black Swan Theory” (capitalized) refers only to events of large magnitude and consequence and their dominant role in history. Black Swan events are considered extreme outliers (Taleb, 2007).

To provide the best possible analyses of terrorism threats, DHS has a goal of incorporating more state and local threat information into its risk assessments and has started numerous outreach programs. The Office of Intelligence and Analysis (I&A), for example, conducts a weekly conference call and an assortment of conferences with state and local partners such as the regional fusion centers, at which threat information is shared or “fused.” I&A plans to use the fusion centers as one of its primary means of disseminating information to the local level and collecting information from it. DHS has made increasing the number of regional fusion centers and their resources a priority. There are currently 72 fusion centers around the country, and I&A plans to deploy additional intelligence analysts to them. Between 2004 and the present, DHS has provided more than $320 million to state and local governments to support the establishment and maturation of fusion centers. In 2007 testimony to the House Committee on Homeland Security Subcommittee on Intelligence, the HITRAC Director said that as part of this outreach plan, “we are regularly meeting with Homeland Security Advisors and their staffs to integrate State information and their analysis into the creation of state critical infrastructure threat assessments. By doing this we hope to gain a more comprehensive appreciation for the threats in the states” (Smislova, 2007).

Despite these efforts, information sharing between the national and local levels and among state and local governments still faces many hurdles. The most significant challenges are security policies and clearances, common standards for reporting and data tagging, numbers and skill levels of analysts at the state and local levels, and resources to mature the information technology (IT) architecture. The committee cannot assess the impact of all these efforts to increase the information and dialogue between national and local levels with respect to DHS risk analysis, and specifically with regard to threat assessments and probabilities. The majority of information gathered by the fusion centers is on criminal activities, not foreign terrorism. In a 2007 report by the Congressional Research Service, a DHS official was quoted as saying that the local threat data still were not being incorporated into threat assessments at the federal level in any systematic or meaningful manner (Masse et al., 2007, p. 13). The committee assumes that the fusion centers and processes need to mature before any significant impact can be observed or measured.

It is important that insights gained during expert elicitation processes about threats, attack scenarios, and data gaps be translated into requests that influence intelligence collection. DHS’s I&A Directorate and HITRAC program have processes that generate collection requirements. Due to security constraints the committee did not receive a full understanding of this process or its adequacy, but such a process should be reviewed by a group of fully cleared outside experts who could offer recommendations.


Recommendation: The intelligence data gathering and vetting process used by I&A and HITRAC should be fully documented and reviewed by an external group of cleared experts from the broader intelligence community. Such a step would strengthen DHS’s intelligence gathering and usage, improve the process over time, and contribute to linkages among the relevant intelligence organizations.


Threat analyses can also be improved through more exploration of attack scenarios that are not considered in the generic attack scenarios presented to SMEs. DHS has tackled this problem by creating processes designed for imagining the future and trying to emulate terrorist thinking about new tactics and techniques. An example is HITRAC’s analytic red teaming, which contributes new ideas on terrorist tactics and techniques. DHS has even engaged in brainstorming sessions where terrorism experts, infrastructure specialists, technology experts, and others work to generate possibilities. These efforts to inject imagination into risk assessments are necessary.


Recommendation: DHS should develop a systematic and defensible process by which ideas generated through red teaming and alternative analysis sessions get incorporated into the appropriate models and the development of new models. DHS needs to regularly assess what leaps could be taken by terrorist groups and be poised to move new scenarios into the models when their likelihood increases, whether because of a change inferred about a terrorist group’s intent or capabilities, the discovery or creation of new vulnerabilities as new technologies are introduced, or an increase in the consequences of such an attack. These thresholds need to be established and documented, and a repeatable process must be set up.


The committee’s interactions with I&A staff during a committee meeting and a site visit to the Office of Infrastructure Protection (IP) did not reveal any formal processes or decision criteria for updating scenarios or threat analyses. The real payoff in linking DHS risk assessment processes and intelligence community collection operations and analysis lies in developing a shared understanding for assessing and discussing risk.4 I&A and HITRAC have created many opportunities for DHS analysts to interact with members of agencies in the intelligence community through conferences, daily dialogue among analysts, analytic review processes, and personnel rotations, and all of these efforts are to be applauded. However, there also need to be specific, consistent, repeatable exchanges and other actions focused just on risk modeling. These interactions with experts from the broader intelligence community—including those responsible for collection operations—should be focused on building a common terminology and understanding of the goals, limitations, and data needs specific to DHS risk assessments.

4 While DHS is a part of the intelligence community, this subsection is focused on building ties and common understanding between DHS and the complete intelligence community.

To forge this link in cultures around the threat basis of risk, even risk-focused exchanges might not be enough. There needs to be some common training and perhaps even joint development of the next generation of national security-related risk models.

Vulnerability Analyses

DHS’s work in support of critical infrastructure protection has surely instigated and enabled more widespread examination of vulnerabilities, and this is a positive move for homeland security. IP’s process for conducting vulnerability analyses appears quite thorough within the constraints of how it has defined “vulnerability,” as evidenced, for example, in its establishment of coordinating groups (Government Coordinating Councils and Sector Coordinating Councils) for each CIKR sector, to effect collaboration across levels of government and between government and the private sector. These councils encourage owners and operators (85 percent of whom are in the private sector) to conduct risk assessments and help establish expectations for the scope of those assessments (e.g., what range of threats to consider and how to assess vulnerabilities beyond simply physical security).5 Vulnerability assessment tools have been created for the various CIKR sectors. IP has worked to create some consistency among these but appears to be flexible in considering sector-recommended changes. To date, it seems that vulnerability is heavily weighted toward site-based physical security considerations.

IP has also established the Protective Security Advisors (PSA) program, which places vulnerability specialists in the field to help local communities carry out site-based vulnerability assessments. The PSAs collect data for HITRAC use while working with CIKR site owners or operators on a structured assessment of facility protections, during which PSAs also provide advice and recommendations on how to improve site security. The committee was told by IP that the average time spent on such an assessment is 40 hours, so a significant amount of data is collected for each site, and a detailed risk assessment is developed jointly by a PSA and the site owner-operator. Sites identified by HITRAC as high risk are visited at least yearly by a PSA. HITRAC is currently working on a project called the Infrastructure Vulnerability Assessment (IVA) that will integrate this site-specific vulnerability information with vulnerability assessments from Terrorism Risk Assessment and Measurement (TRAM), the Transportation Security Administration (TSA), the Maritime Security Risk Analysis Model (MSRAM), and elsewhere to create a more integrated picture of vulnerabilities to guide HITRAC risk assessment and management efforts.

However, vulnerability is much more than physical security; it is a complete systems process consisting at least of exposure, coping capability, and longer-term accommodation or adaptation. Exposure used to be the only thing people looked at in a vulnerability analysis; now there is consensus that at least these three dimensions have to be considered. The committee did not hear these sorts of issues being raised within DHS, and DHS staff members do not seem to be drawing from recent books on this subject.6

IP is also working toward combining vulnerability information to create what are called “Regional Resiliency Assessment Projects (RRAPs),” in an attempt to measure improvements in security infrastructure by site, by industry sector, and by cluster. RRAPs are analyses of groups of CIKR assets and perhaps their surrounding areas (what is known as the buffer zone, which is eligible for certain FEMA grants). The RRAP process uses a weighted scoring method to estimate a “regional index vulnerability score” for a cluster of CIKR assets, so that their risks can be managed jointly.

Examples of such clusters are New York City bridges, facilities surrounding Exit 14 on the New Jersey Turnpike, the Chicago Financial District, the Raleigh-Durham Research Triangle, and the Tennessee Valley Authority.

5 Critical Infrastructure Protection: Sector Plans and Sector Councils Continue to Evolve, GAO-07-706R (July 10, 2007), evaluated the capabilities of a sample of these councils and found mixed results.

6 See, for example, Ayyub et al., 2003; Bier and Azaiez, 2009; Bier et al., 2008; Haimes, 2008, 2009; McGill et al., 2007; and Zhuang and Bier, 2007.

However, the complexity of the RRAP methodology seems incommensurate with the innate uncertainty of the raw data. To determine the degree to which risk is reduced by investments in “hardening” of facilities (reduction of vulnerabilities), the RRAP process begins by asking SMEs (including PSAs and sector experts) to identify the key factors for security, rank them, estimate their relative importance, and aggregate those measures into a Protective Measure Index (PMI). For example, for physical security, the SMEs estimate those factors for fences, gates, parking, access control, and so forth, to develop weighted PMIs, which are the product of PMIs and a weight for each security component. The weighted PMIs were shown to four significant figures in a presentation to the committee, and different values are estimated for physical security, security forces, and security management. Those three weighted PMIs are averaged to obtain an overall index for a piece of critical infrastructure. Within a given region, the overall indexes for each relevant facility are likewise averaged to obtain a Regional Vulnerability Index (again to four significant figures). Then, if certain of those facilities improve their security or otherwise reduce their vulnerabilities, the regional vulnerability is deemed to have dropped, and the change in the Regional Vulnerability Index is taken as a measure of the degree to which risk has been bought down. It was not clear to the committee how, or even whether, threat and consequence were folded into this buying down of risk. Reporting four significant figures is not justified; it is quite misleading and an example of false precision.

An example shared with the committee, using notional data for a downtown area of Chicago, computed a Regional Vulnerability Index of 52.48 before hardening steps and an index of 68.78 after. The claim was made that these hypothetical steps have then led to “a 31.06 percent decrease in vulnerability.” It is not clear what this 31.06 percent reduction actually means. Has the expected yearly loss been reduced by 31 percent (say from $1 billion to $690 million)? Have the expected casualties been reduced from 1,000 to 690? Is this a reduction on some normalized scale in which case the actual expected loss reduction could be significantly more? It would be essential to answer these questions if risks across DHS elements were being compared. Furthermore, it is difficult to believe that any of these metrics are as accurate as the numbers imply. If their accuracy is ±20 percent, the first Regional Vulnerability Index could just as readily be 63 (120 percent of 52.48), and the second could just as easily be 55 (80 percent of 68.78), in which case the relative vulnerabilities would be reversed. In addition, as noted earlier, the emphasis on physical security might in some cases divert attention from other aspects of vulnerability that would be more valuable if addressed.
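The committee’s arithmetic is straightforward to reproduce. In the sketch below, the index values are the notional ones from the presentation, and the ±20 percent accuracy band is the committee’s hypothetical:

```python
# Reproducing the committee's arithmetic on the notional Chicago example.
before, after = 52.48, 68.78
print(f"Index change: {(after - before) / before:.2%}")   # ~31.06% (of the index)

# With even +/-20% accuracy in the underlying scores, the ordering can flip:
print(f"'Before' could be as high as {before * 1.2:.0f}")  # ~63
print(f"'After' could be as low as  {after * 0.8:.0f}")    # ~55
```

Note that the 31.06 percent figure is a change in the index itself, which is why it cannot be read directly as a reduction in expected losses or casualties.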

Consequence Analyses

The consequence analyses done in support of infrastructure protection, mostly by NISAC, are carried out with skill. Recent requests for modeling and analysis at NISAC covered topics such as an influenza outbreak, projecting the economic impacts of Hurricane Ike around Houston, an electric power disruption around Washington, D.C., and evaluation of infrastructure disruption if there were an earthquake along the New Madrid Fault zone. The committee was told by NISAC that it has developed some 70-80 models or analysis tools for modeling consequences of disruptive events. However, in a number of cases examined by the committee, it is not clear what problem is being addressed (i.e., what decision is to be supported by the analysis). A clear example of this is a new project to explore a wide range of ramifications of an earthquake along the New Madrid Fault. Neither NISAC nor IP leaders could explain why the work had been commissioned, other than to build new modeling capabilities. One would hope that a risk analysis performed somewhere in DHS had identified such an earthquake as one of the highest-priority concerns of DHS, but if that decision was made, it does not seem to have been the result of a documented risk analysis. Other NISAC analyses in support of well-recognized DHS concerns—one that examined the economic consequences of a hurricane hitting the Houston area and another evaluating the evolution of a flu outbreak, with and without different intervention options—still seemed disconnected from any real decision-making process. In both cases, the committee also was concerned about missed opportunities for validation.

IP and HITRAC rely primarily on NISAC for consequence analyses. However, the committee observed that documentation, model verification and validation, model calibration and back-testing, sensitivity analyses, and technical peer reviews—which are necessary to ensure that reliable science underlies those models and analyses—vary widely across the different models and analyses. For instance, after projection of the economic consequences from Hurricane Ike, no comparison was made between the simulated projections and actual data. NISAC modelers noted that the consequences would surely be different because the actual storm track was not the same as the projected storm track used in the model. Such a retrospective analysis would seem to provide useful feedback to the modelers. As an alternative, the models could have been run retrospectively with the actual storm track to compare to actual damage. In another example, NISAC conducted an analysis of likely ramifications of a flu outbreak, but no one compared those multidimensional results with what is known from many seasonal outbreaks of flu.

More generally, the committee makes the following recommendation about consequence modeling for CIKR analysis,7 to bring these models and analyses up to the standards of the core criteria8 previously cited from the 2009 NIPP (DHS-IP, 2009).


Recommendation: DHS should ensure that vulnerability and consequence analyses for infrastructure protection are documented, transparent, and repeatable. DHS needs to agree on the data inputs, understand the technical approaches used in models, and understand how the models are calibrated, tested, validated, and supported over the lifecycle of use.

7 The only consequence modeling that the committee examined in detail is that performed by NISAC, primarily in support of IP. Clearly, all consequence analyses need to be documented, transparent, repeatable, and based on solid science.

8 The four “core criteria” for risk assessments: that they be documented, reproducible, defensible, and complete (DHS, 2009b).


The committee was also concerned that none of DHS’s consequence analyses—including, but not limited to, those done in support of infrastructure protection—addresses all of the major impacts that would follow a terrorist attack. Consequences of terrorism can include economic losses, fatalities, injuries, illnesses, infrastructure damage, psychological and emotional strain, disruption to our way of life, and symbolic damage (e.g., from an attack on the Statue of Liberty, Washington Monument, or Golden Gate Bridge). DHS had initially included National Icons in its CIKR lists but seems to have eliminated them from CIKR analyses over time, since it was not clear what metrics should be used to assess that dimension of risk.


Recommendation: The committee recommends focusing specific research at one of the DHS University Centers of Excellence to develop risk models and metrics for terrorism threats against national icons.


The range of consequences considered is also affected by the mandate of the entity performing the risk analysis. It should be DHS’s role to counter limitations that occur because private owners-operators, cities, tribes, and states have boundaries (geographic and functional) that delimit their sphere of responsibility. A fundamental step to clarifying this role is for DHS to be very clear about the decision(s) to be informed by each risk analysis it develops. This is discussed below in the section titled “Guidelines for Risk Assessment and Transparency in Their Use for Decisions.” There are people at DHS who are aware of these current limitations, but the committee did not hear of efforts to remedy them.

For example, when the Port Authority of New York and New Jersey evaluates the potential consequences of a terrorist attack on one of its facilities, it does not try to model larger-scale consequences such as social disruption; its mandate is to minimize the physical consequences of an attack on facilities under its control. From the standpoint of its management, that might be a good risk analysis, but it is not adequate for evaluating the real risk of such an attack.

Integration of Threat, Vulnerability, and Consequence Analyses

DHS is currently using Risk = f(T, V, C) to decompose the overall analysis problem into more manageable components (i.e., a threat analysis to identify scenarios, a vulnerability analysis given threat scenarios, and a consequence analysis given successful threats against identified vulnerabilities). However, the component analyses themselves can be quite complex, and there are theoretical as well as practical problems in combining these individual components into an aggregate measure of risk. For example, the performance of modern infrastructures relies on the complicated interaction of interdependent system components, so C depends in complex, nonlinear ways on the particular combination of individual components that are affected by an attack or disaster.

When it comes to the interaction of intelligent adversaries, the assumption that it is possible to characterize the values of T and V as independent probabilities, even when assessed by SMEs, is problematic. Rasmussen noted this issue when he cautioned, “One of the basic assumptions [in the original WASH-1400 study] is that failures are basically random in nature… in the case of deliberate human action, as in imagined diversion scenarios, such an assumption is surely not valid” (Rasmussen, 1976). As noted by Cox (2009), the values for V and C depend on the allocation of effort by both the attacker and the defender.

Additional problems arise for different functional forms of the Risk = f(T,V,C) equation. For example, some models rely on the succinct product, R = T × V × C, where SMEs assess the threat and vulnerability terms as probabilities and the consequence terms in units of economic replacement costs or fatalities (ASME, 2008; Willis, 2007). The appeal here is simplicity, and the probabilities are conventionally drawn as numeric values attached to colors (e.g., red, yellow, green) assigned to cells in risk matrices. Cox (2008) illustrates with a number of simple examples how such formulas can render nonsensical advice.
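A toy example in the spirit of Cox’s critique is sketched below; the numbers and the coarse two-level scoring rule are invented for illustration and are not taken from Cox (2008):

```python
# Toy illustration: a coarse T x V color matrix can rank a smaller
# expected loss above a larger one. All numbers are invented.
def expected_loss(t, v, c):
    return t * v * c  # R = T x V x C, independence assumed

def matrix_rating(t, v):
    # Coarse qualitative scoring: a probability >= 0.5 counts as "high".
    score = (t >= 0.5) + (v >= 0.5)
    return ["green", "yellow", "red"][score]

# Asset A: likely and vulnerable, but modest consequence.
# Asset B: unlikely-looking, but catastrophic consequence.
a = dict(t=0.9, v=0.9, c=1e6)
b = dict(t=0.3, v=0.3, c=2e7)

print("A:", matrix_rating(a["t"], a["v"]), f"expected loss ${expected_loss(**a):,.0f}")
print("B:", matrix_rating(b["t"], b["v"]), f"expected loss ${expected_loss(**b):,.0f}")
# A is rated "red" and B "green", yet B's expected loss ($1.8M) exceeds A's ($0.81M).
```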

Although the committee reviewed a number of specific component analyses, often it was unable to determine how the component analyses would actually be combined in the Risk = f(T,V,C) paradigm. In many cases, these analyses were conducted using different assumptions and for different reasons, making it very difficult to recombine components back into a Risk = f(T,V,C) calculation.

As a first example, the committee reviewed the 2007 pandemic influenza report prepared by NISAC, the Infrastructure Analysis and Strategy Division, and IP. Seven scenarios of disease, response, and mitigation were determined and approved by the DHS Office of Health Affairs. Using these seven threat scenarios as inputs, NISAC exercised various simulation models to characterize the vulnerability of the U.S. population under each scenario. Vulnerability, conditional on exposure to influenza as described by the specific threat scenario, is measured in terms of estimated attack rate (the proportion of the population that becomes infected during a set period of time) and estimated mortality rate (the proportion of the population that dies from influenza during the set period of time). The vulnerability analysis also considers a typical seasonal influenza outbreak scenario, giving the decision maker estimates of attack rate and mortality rate for seasonal flu. Given each threat scenario and the associated attack rate and mortality estimates (vulnerability estimates), the NISAC modelers calculated the reduction in labor in critical infrastructure sectors (i.e., absenteeism rate and duration—the population consequences). Given these estimated population consequences, the modeling team then considered how workforce absenteeism would influence various CIKR sectors such as energy, water, telecommunications, public health and health care, transportation, agriculture and food, and banking and finance. In reading the NISAC report, it is clear that the separate threat, vulnerability, and consequence analyses are not actually intended to be combined into a single risk measure; the report sidesteps the challenge of computing a risk measure and instead passes on component analyses under the assumption that decision makers can use their individual judgment to develop an understanding of the risks from an influenza outbreak.
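A notional sketch of the scenario-to-absenteeism chain described above is given below. The SIR model, its parameters, and the absenteeism proxy are invented for illustration; they are not NISAC’s models.

```python
# Notional scenario -> attack rate -> absenteeism chain (invented parameters).
def sir_scenario(r0, days=300, i0=1e-4, gamma=0.25):
    """Discrete-time SIR; returns (final attack rate, peak fraction infected)."""
    s, i, r = 1.0 - i0, i0, 0.0
    beta, peak = r0 * gamma, i0
    for _ in range(days):
        new_inf = beta * s * i
        s, i, r = s - new_inf, i + new_inf - gamma * i, r + gamma * i
        peak = max(peak, i)
    return r, peak

for r0 in (1.3, 1.8, 2.4):   # hypothetical mild, moderate, severe scenarios
    attack_rate, peak_prev = sir_scenario(r0)
    # Crude proxy: workers absent at the peak ~ infected plus caregivers (x1.5).
    print(f"R0={r0}: attack rate {attack_rate:.0%}, "
          f"peak infected {peak_prev:.1%}, rough peak absenteeism {1.5 * peak_prev:.1%}")
```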

As a second example, the committee reviewed the Chemical Facility Anti-Terrorism Standards (CFATS), a regulatory program run by DHS to address security at high-risk chemical facilities. Facilities that are deemed “high risk” are a subset of those that have certain dangerous chemicals onsite in quantities over designated thresholds. As a first step in identifying high-risk facilities that will be subject to DHS regulation, more than 29,000 facilities were identified that met these thresholds for dangerous chemicals onsite, and each filled out a first-level screening questionnaire. Data were collected from site owner-operators through a web-based IT decision support system. A consequence screening of these data identified some 6,400 of the 29,000 facilities as being risky enough to merit DHS regulation. Those 6,400 facilities were sorted into four preliminary tiers based on the riskiness of the site. Roughly speaking, the threats under consideration are those associated with terrorists releasing, stealing, sabotaging, or contaminating chemicals that are toxic, flammable, or explosive. Risk appears to be based almost exclusively on consequences, which reflect casualties only. The committee was unable to determine exactly what went into this definition of risk, and no documentation was provided beyond briefing slides. It was unclear how vulnerability and threat are used in determining the risk rating of various facilities. Economic losses do not yet appear to be included. For each of these sites, a Security Vulnerability Assessment (SVA) and a Site Security Plan (SSP) are being developed. The SSP must demonstrate how the site will meet DHS regulatory standards. The CFATS Program currently has 20 inspectors who support site assessment visits and sign off on and validate SSPs, and the committee was told that plans exist to increase that number soon to about 150.

Incorporation of Network Interdependencies

DHS, through a variety of activities, has made considerable progress in identifying and collecting relevant site-specific data that can be used for analyzing vulnerabilities and consequences within and across the 18 critical infrastructure sectors and in interdependent networks of operations. Such activities include the HITRAC Level 1/Level 2 program for differentiating high-risk targets, 18 Individual Sector Risk Profiles, development of a National Critical Foreign Dependencies list, deployment of PSAs to support Site Assistance Visits (SAVs), and the Enhanced Critical Infrastructure Protection Initiative (ECIP) that is under development.

One challenge that makes CIKR vulnerability and consequence analyses difficult is that multiple levels of analysis are required: federal, state, local, tribal, regional, and site-specific, for example. Further, while risk analyses to support protection and prevention investments are often implemented at the site-specific or local level,9 the cost-benefit trade-offs needed to understand risk reduction effects must be assessed at multiple levels and in the broader context of maintaining and enhancing the functionality of critical interdependent infrastructure networks or clusters of operations. DHS has begun exploring how to aggregate site-based vulnerability analyses into analyses of infrastructure clusters or networks with the Regional Resiliency Assessment Projects, as noted above. However, the current site-based and regional cluster vulnerability index scoring methods being used are very narrow in scope, focusing only on physical security criteria such as fences, gates, access control, and lighting.

A second challenge in analyzing CIKR is the decision time frame. Short-term vulnerability models can take intelligence about enemy intent as an input and thus have some confidence (lower uncertainty) about the type of threat to defend against. Longer-term resiliency models (similar to those used in Department of Defense strategic planning) plan around capabilities needed to respond to and recover from a variety of possible disruption events, whether terrorist attacks, natural hazards, or industrial accidents. DHS recognizes the need for analysis and planning for both short-term protective measures and longer-term risk-based investments in prevention, protection, response, and resiliency. Site Assistance Visits, conducted by DHS in partnership with other federal, state, and local entities and in collaboration with owners-operators of critical infrastructures, are reasonably well suited to identify vulnerabilities related to specific threats (as identified by the intelligence community), to provide security recommendations at critical sites, and to rapidly address and defend against such threats. For the longer-term infrastructure investment decisions, RRAPs have begun focusing on analyzing vulnerabilities in critical infrastructure clusters. While this methodology is a first step in performance-based facilities protection, the RRAP approach does not fully capture the disruption impact in interdependent networks of critical infrastructure, does not account for vulnerability criteria other than physical security at individual locations, and is not easily extendable to account for socioeconomic vulnerabilities in the local workforce and community.

9 See, for example, some of the state-of-the-art technical literature on site-based risk analysis and infrastructure protection, such as Ayyub et al., 2003; Bier et al., 2008; Bier and Azaiez, 2009; Haimes, 2008, 2009; McGill et al., 2007; and Zhuang and Bier, 2007.

Eighty-five percent or more of the critical infrastructure is owned and operated by private entities, and DHS has established processes to enable it to work collaboratively with those entities. Infrastructure operators have much experience and financial incentive in dealing with and effectively recovering from all types of disruptions, such as accidents, system failures, machine breakdowns, and weather events. Yet by moving beyond the focus on protective security to a resilience modeling paradigm, DHS might find that the private sector owners-operators are even more amenable to collaboration on improving infrastructure, particularly because resilience models promote using common metrics of interest to both DHS and the private sector, namely the ability of a site to operate as intended and at lowest cost to provide critical products or services. Boards of directors of private sector companies tend to think about resilience in terms such as additional capacity and redundant physical operations and suppliers. Security and resilience have a cost, and their impact on revenue generation is less clear because it is difficult to show the benefits of loss avoidance and the value of continuity plans and built-in system resiliency. Decades of work to develop lean, just-in-time manufacturing and service operations have led to systems that may also be brittle and easy to disrupt. Heal and Kunreuther (2007) and Kunreuther (2002) show the disincentives of individual firms to invest in security unless all owners-operators in a sector agree to do so simultaneously. This line of research provides insight for DHS to consider in developing federal policy or industry-sector voluntary action to achieve sector-wide security goals. Recent work by Golany et al. (2009) has shown that optimal resource allocation policies can differ depending on whether a decision maker is interested in dealing with chance events (probabilistic risk) or events caused by willful adversaries.


Conclusion: These network disruption and systems resilience models (which move beyond the limitations of current TVC analyses for CIKR) are ideal for longer-term investment decisions and capabilities planning to enhance infrastructure systems’ resiliency, beyond just site-based protection. Such models have been used in other private sector and military applications to assist decision makers in improving continuity of operations.


Recommendation: DHS should continue to enhance CIKR data collection efforts and processes and should rapidly begin developing and using emerging state-of-the-art network disruption and systems resiliency models to understand and characterize the vulnerability and consequences of infrastructure disruptions.


Such network disruption and systems resiliency models have value for a variety of reasons. These models can be used to

  1. Assess a single event at a single site, multiple simultaneous events at single or multiple sites, or cascading events at single or multiple sites;

  2. Understand event impacts of terrorist attacks, industrial accidents, and natural hazards using a single common model (an integrated all-hazards approach in which the threat is any disruption event that impairs the system’s ability to function as intended);

  3. Assess impacts of the worst case when none of the response and mitigation options work, the best case assuming all of the response and mitigation options work, and any of the middle-ground response and mitigation scenarios in between;

  4. Conduct threat, vulnerability, and consequence analyses in an integrated and straightforward manner;

  5. Support multiscale modeling and analysis results “roll-up” (from site-based to local cluster to state-level, regional, and national impacts);

  6. Analyze cost-benefit trade-offs for various protection and prevention investments, since investments are typically made at a single site but must be assessed at multiple levels to understand system benefits;

  7. Allocate scarce resources optimally within and across the 18 interdependent CIKR sectors to maximize system resilience (the ability of a network of operations or interdependent cluster of sites to function as intended)—in particular, such return-on-investment analyses are intended to help identify and justify budget levels to enhance security and resilience;

  8. Evaluate interdiction and mitigation “what-if” options in advance of events, optimize resource allocation, and improve response and recovery;

  9. Measure and track investments and improvement in overall system resiliency over time;

  10. Provide a common scalable framework for modeling interdependencies within and across CIKR sectors;

  11. Easily and visually demonstrate disruption events and ripple effects for table-top exercises; and

  12. Incorporate multiple measures and performance criteria so that multiple stakeholders and multiple decision makers can understand decisions and compare risks in a single common integrated network-system framework.

This modeling approach has a long history in the operations research academic and practitioner communities, is mathematically well accepted and technically defensible, and permits peer review and documentation of models, whether modelers use optimization, simulation, game-theoretic, defender-attacker-defender, or other approaches to analyze decisions in CIKR networks.10 More broadly, this approach has been applied successfully across industry sectors for supply chain risk analysis,11 military or defense assets planning,12 airline irregular operations recovery (e.g., recovery of flight schedules interrupted due to bad weather, shutdown of a major airport hub),13 and managing enterprise production capacity.14 This type of approach has also been used successfully in conjunction with more traditional site-based facilities management and security or fire protection operations15 and is recommended by the American Society of Civil Engineers’ Critical Infrastructure Guidance Task Committee in its recently released Guiding Principles for the Nation’s Critical Infrastructure (ASCE, 2009).

For example, over the past 10 years, the Department of Operations Research at the Naval Postgraduate School has offered an introductory master’s-level course to focus attention on network disruption and resilience analysis.

10 See Ahuja et al., 1993; Bazaraa and Sherali, 2004; and Golden, 1978.

11 See Chapman et al., 2002; Elkins et al., 2004.

12 See Cormican et al., 1998; Smith et al., 2007.

13 See Barnhart, 2009; Yu and Qi, 2004.

14 See Bakir and Savachkin, 2010; Savachkin et al., 2008.

15 Elkins et al., 2007.


In this course, graduate students identify critical infrastructure networks of various types from their technical areas of expertise (primarily associated with their military duty assignments) and use network analysis tools, publicly available open-source data, and rapid prototyping in Microsoft Excel, Microsoft Access, and Visual Basic for Applications to develop decision support tools. Students have conducted more than 100 studies of infrastructure networks spanning electric power transmission, water supplies, fuel logistics, key road and bridge systems, railways, mass transit, and Internet service providers. A few of these case studies and the bilevel or trilevel optimization approach used (i.e., defender-attacker or defender-attacker-defender sequential decision-making models) are described in recent journal articles, which demonstrate both the technical rigor (including peer-review quality of model documentation) and the real-world applicability of this method (Alderson, 2008; Alderson et al., 2009; Brown et al., 2006). This sort of approach would seem to be of great value to DHS, and the committee was pleased to see some initial efforts by DHS-IP to tap into this knowledge base.
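To make the flavor of such defender-attacker models concrete, the following minimal sketch enumerates single-edge attacks on a small, invented network to find the disruption that most degrades operations. The network, costs, and function names are hypothetical, and a real study would use bilevel optimization rather than the brute-force enumeration used here for clarity.

```python
# Minimal sketch of the attacker's subproblem in a defender-attacker model:
# remove up to k edges of a hypothetical infrastructure network to maximize
# the operator's best remaining delivery cost.
from itertools import combinations
import networkx as nx

def worst_case_attack(graph, source, sink, k):
    """Return (cost, attack) for the k-edge attack maximizing the shortest path."""
    worst_cost, worst_attack = 0.0, None
    for attack in combinations(graph.edges, k):
        g = graph.copy()
        g.remove_edges_from(attack)
        try:
            cost = nx.shortest_path_length(g, source, sink, weight="cost")
        except nx.NetworkXNoPath:
            return float("inf"), attack  # this attack disconnects the network
        if cost > worst_cost:
            worst_cost, worst_attack = cost, attack
    return worst_cost, worst_attack

# Hypothetical fuel-distribution network: "S" is the supply terminal, "D" the
# demand point, and edge costs are transport costs.
G = nx.Graph()
G.add_weighted_edges_from(
    [("S", "A", 3), ("S", "B", 4), ("A", "C", 2), ("B", "C", 3),
     ("A", "D", 6), ("C", "D", 2), ("B", "D", 7)], weight="cost")

cost, attack = worst_case_attack(G, "S", "D", k=1)
print(f"Worst single-edge attack {attack} raises delivery cost to {cost}")
```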


Recommendation: DHS should exploit the long experience of the military operations research community to advance DHS capabilities and expertise in network disruption modeling and resiliency analysis.

Risk-Based Grant Allocations

In 2008, FEMA awarded more than 6,000 homeland security grants totaling over $7 billion. FEMA administers more than 15 grant programs designed to enhance the nation’s capability to reduce risks from manmade and natural disasters, all of which are authorized and appropriated through the normal political process. Some have histories dating to the establishment of FEMA in 1979.

Five of these programs, covering more than half of FEMA’s grant money—the State Homeland Security Program, the Urban Areas Security Initiative, the Port Security Grant Program, the Transit Security Grant Program, and the Interoperable Emergency Communications Grant Program—incorporate some form of risk analysis in support of planning and decision making. Two others inherit some risk-based inputs produced by other DHS entities—the Buffer Zone Protection Program, which allocates grants to jurisdictions near critical infrastructure if they are exposed to risk above a certain level as ascertained by the Office of Infrastructure Protection; and the Operation Stonegarden Grant Program, which provides funding to localities near sections of the U.S. border that have been identified as high risk by Customs and Border Protection. All other FEMA grants are distributed according to formula.

FEMA has limited latitude with respect to tailoring even the risk-based grant programs, and the grants organization does not claim to be expert in risk.


Congress has defined which entities are eligible to apply for grants, and for the program of grants to states it has determined that every state will be awarded at least a minimum amount of funding. Congress stipulated that risk is to be evaluated as a function of T, V, and C, and it also stipulated that consequences should reflect economic effects and population and should take special account of the presence of military facilities and CIKR.

However, FEMA is free to create the formula by which it estimates consequences and how it incorporates T, V, and C into an overall estimate of risk. For example, it has set vulnerability equal to 1.0, effectively removing that factor from the risk equation. That move was driven in part by the difficulty of performing vulnerability analyses for all the entities that might apply to the grants programs. FEMA does not have the staff to do that, and the grant allocation time line set by Congress is too aggressive to allow applicant-by-applicant vulnerability analyses. Also, although Congress has specified which threats are to be included in the analyses, DHS has latitude to define which kinds of actors it will consider. In the past, it defined threat for grant making as consisting solely of the threat from foreign terrorist groups or from groups that are inspired by foreign terrorists. That definition means that the threat from narcoterrorism, domestic terrorism, or other such sources was not considered.

For most grant allocation programs, FEMA weights the threat as contributing 20 percent to overall risk and consequence as contributing 80 percent. For some programs that serve multihazard preparedness, those weights have been adjusted to 10 percent and 90 percent in order to lessen the effect that the threat of terrorism has on the prioritizations. Because threat has a small effect on FEMA’s risk analysis and population is the dominant contributor to the consequence term, the risk analysis formula used for grant making can be construed as one that, to a first approximation, merely uses population as a surrogate for risk. FEMA staff told a committee delegation on a site visit that this coarse approximation is relatively acceptable to the entities supported by the grants programs.
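To illustrate the structure just described (FEMA’s actual formulas were not available to the committee in this form, so the functional form, jurisdictions, and numbers below are all invented), a short sketch shows how fixing vulnerability at 1.0 and weighting consequence at 80 percent makes population the de facto driver of the score:

```python
# Illustrative sketch of the grant-formula structure described above:
# R = V * (w_T * T + w_C * C) with vulnerability fixed at 1.0 and weights of
# 20 percent threat / 80 percent consequence. All data are invented.
def risk_score(threat, consequence, w_threat=0.2, w_consequence=0.8,
               vulnerability=1.0):
    """Vulnerability = 1.0 effectively drops out of the calculation."""
    return vulnerability * (w_threat * threat + w_consequence * consequence)

# Consequence here is normalized and dominated by population, as in the text.
jurisdictions = {"Metro A": (0.9, 0.95), "Metro B": (0.4, 0.90),
                 "Rural C": (0.7, 0.10)}
for name, (t, c) in jurisdictions.items():
    print(f"{name}: risk = {risk_score(t, c):.2f}")
# Metro B outranks Rural C despite a lower threat score, because consequence
# (largely population) carries 80 percent of the weight.
```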

It appears that the weightings in these risk assessments and the parameters in the consequence formulas are chosen in an ad hoc fashion and have not been peer reviewed by technical experts external to DHS. Such a review should be carried out at a more detailed level, and by people with more specific, targeted expertise, than was feasible for the committee.


Recommendation: FEMA should undertake an external peer review by technical experts outside DHS of its risk-informed formulas for grant allocation to identify any logical flaws with the formulas, evaluate the ramifications of the choices of weightings and parameters in the consequence formulas, and improve the transparency of these crude risk models.

Recommendation: FEMA should be explicit about using population density as the primary determinant for grant allocations.


Since a large majority of the grants programs are terrorism related, grant applicants write targeted grant applications to qualify for terrorism-related funding.


At the same time, grantees clearly recognize the potential multiuse benefits of investment in community infrastructure and preparedness (e.g., hospital infrastructure investments to respond to bioterrorism are also useful in dealing with a large food poisoning outbreak). In response to the grassroots understanding in the public planning, public health, and emergency response professional communities of the potential compounding benefits of investments in community preparedness, response, and recovery, FEMA is exploring a Cost-to-Capability (C2C) Initiative, as explained in Chapter 2. C2C is a “model” to “develop, test, and implement a method for strategically managing a portfolio of grant programs at the local, state, and federal levels.” It is intended to “create a culture of measurement, connect grant dollars to homeland security priorities, and demonstrate contributions of preparedness grants to the national preparedness mission.”16

The C2C model replaces “vulnerability” with “capability,” in a sense replacing a measure of gaps with a measure of the ability of a system or community to withstand an attack or disaster or to respond to it. These measures of local hardness can more readily be aggregated to produce regional and national measures of security—a macro measure of national “hardness” against homeland security hazards—whereas vulnerabilities are inherently localized: there is no definition of national vulnerability. A focus on hardening of systems will emphasize those steps that stakeholders wish to address proactively, whereas a focus on vulnerabilities can be a distraction because some vulnerabilities are acceptable and need not be addressed. Conceptually, it makes sense to think of building up capabilities in order to add to our collective homeland security.
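A minimal sketch, with invented sites and scores, of why capability (“hardness”) measures aggregate more naturally than localized vulnerabilities:

```python
# Illustrative sketch of rolling site-level capability ("hardness") scores up
# into regional and national indices; sites, regions, and scores are invented.
regions = {
    "Region 1": {"Site A": 0.8, "Site B": 0.6},
    "Region 2": {"Site C": 0.9, "Site D": 0.7, "Site E": 0.5},
}

def mean(values):
    values = list(values)
    return sum(values) / len(values)

regional = {name: mean(sites.values()) for name, sites in regions.items()}
national = mean(regional.values())
print(regional)
print(f"national hardness index = {national:.2f}")
# Vulnerabilities, by contrast, are tied to specific sites and threats, so no
# analogous "national vulnerability" roll-up exists.
```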

While the C2C initiative is clearly in a conceptual stage, it appears to be a reasonable platform by which the homeland security community can begin charting a better path toward preparedness. A contractor is creating simple, Java-based software, which is now ready for its first pilot test. This software is intended to allow DHS grantees to perform self-assessments of the value of their preparedness projects, create multiple investment portfolios and rank them, and track portfolio performance. The plans are to distribute this software to the field so that grantees themselves can use it. It has not yet been vetted by outsiders, although the upcoming pilot test will begin that process.


Recommendation: FEMA should obtain peer review by experts external to DHS of the C2C concepts and plans, with the goal of clarifying ideas about measures, return on investment, and the overall strategy.

16 Ross Ashley presentation, July 8, 2009, Washington, D.C., subgroup meeting with FEMA to discuss the Homeland Security Grants Program and the Cost-to-Capability Initiative.


TRAM

The general framework for TRAM is fairly standard Risk = f(T,V,C) risk analysis. The tool provides value by enforcing a rigorous, disciplined examination of threats, vulnerabilities, and consequences. Its outputs present a comparative depiction of risk against critical assets that is useful to inform decision makers.

However, the TRAM methodology appears to be unduly complex, given the necessarily speculative nature of the TVC analysis. For example, as part of the threat analysis, SMEs are guided to assess how attractive various assets are to potential terrorists. They do this by assigning a number on a scale of 1 to 5 for factors that are assumed to be important to terrorists (e.g., potential casualties associated with a successful attack, potential economic impact of a successful attack, symbolic importance). The SMEs are also asked to decide the scale of importance to terrorists of each of those factors, and this provides a weighting. Each value assessment is multiplied by the weight of that factor, and the products are summed to develop numbers that represent the “target value” of each asset. A similar process is followed to develop numbers that represent the deterrence associated with each asset. Deterrence factors are aspects such as the apparent security and visibility of each asset, and SMEs are also asked to weight those factors. The weighted sum for deterrence is multiplied by the target value to arrive at a “target attractiveness” number for each asset. It is unlikely that SMEs can reliably estimate so many factors and weightings from their small body of experience, so this complexity is unsupportable. Also, rather than attaching some measure of uncertainty to this concatenation of estimates, the TRAM experts who presented to the committee at its first meeting gave notional target attractiveness numbers to three significant figures, which implies much more certainty and precision than can be justified.
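The weighted-sum-and-multiply mechanics described above can be summarized in a short sketch; the factor names, scores, and weights below are invented for illustration and are not actual TRAM values:

```python
# Illustrative reconstruction of the weighted-sum mechanics described above;
# all factor names, scores, and weights are invented, not actual TRAM values.
def weighted_sum(scores, weights):
    """SME scores (1-5) times SME importance weights, summed."""
    return sum(s * w for s, w in zip(scores, weights))

value_scores  = [4, 3, 5]   # e.g., casualties, economic impact, symbolism
value_weights = [5, 3, 4]   # SME-judged importance of each factor
deter_scores  = [2, 4]      # e.g., apparent security, visibility
deter_weights = [3, 2]

target_value   = weighted_sum(value_scores, value_weights)   # 49
deterrence     = weighted_sum(deter_scores, deter_weights)   # 14
attractiveness = target_value * deterrence                   # 686
# Quoting 686 as if it were precise illustrates the committee's point: every
# input is a rough 1-5 judgment, so the product carries large uncertainty.
print(target_value, deterrence, attractiveness)
```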

Overall, TRAM produces a threat rating for various attack scenarios that is calculated as the product of an SME rating of the attack likelihood and an SME rating of the scenario likelihood. The presentation of notional results at the committee’s first meeting showed scenario likelihoods for a range of assets to three significant figures, which is unrealistic, and overall threat ratings for each of those assets (the product of attack likelihood and scenario likelihoods) to four significant figures.

Another indication of unsupportable complexity is the way that TRAM develops what is called Criticality Assessment. A set of Critical Asset Factors (CAFs) is defined, which represents different ways in which loss or damage of an asset will affect overall operations. CAFs include dimensions such as the potential for casualties, potential for business continuity loss, potential national strategic importance, potential economic impact, potential loss of emergency response function, and so on.17 For each asset considered in the risk analysis, the effect of a given attack on each CAF for that asset is subjectively assessed on a scale of 0 to 10.

17 From TRAM Methodology Description, dated May 13, 2009.


Other numbers between 1 and 5 are assigned to each CAF to reflect that factor’s importance to the jurisdiction. Ultimately, the severity estimates are multiplied by the importance estimates (weights) and summed. This process introduces unnecessary complexity, a loss of transparency, and the possibility of misleading assessments of criticality. In some categories an arbitrary upper cap must be imposed.

TRAM’s vulnerability computation uses the product of three factors, each of which would seem to be a very uncertain estimate. The factors are called Access Control (likelihood that access will be denied), Detection Capabilities (likelihood that the attack would be detected), and Interdiction Capabilities (likelihood that the attack, if detected, will be interdicted). These estimates are different for each type of asset, each type of attack, and each class of security countermeasure. Each of these factors ranges from 0.0 to 1.0, again implying more precision than such speculative estimates can support. The difference between, say, a 0.5 and a 0.6 must be fairly arbitrary, so it is likely that the uncertainty is greater than ±10 percent for each of these SME-generated numbers.

Later in the process, TRAM estimates Response Capability Factors for capabilities such as law enforcement, fire service, and emergency medical services. The assessment for each is the product of rankings (from 0.0 to 1.0) of staffing, training, equipment, planning, exercise, and organizational abilities, all equally weighted.
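Both the vulnerability and the response capability computations follow the same multiplicative pattern; a minimal sketch with invented values shows how quickly the input uncertainties compound:

```python
# Illustrative sketch of the multiplicative structure described above, with
# invented values. Each factor is an SME judgment on [0, 1]; multiplying
# several such judgments compounds their individual uncertainty.
from math import prod

access_control, detection, interdiction = 0.5, 0.6, 0.7
vulnerability = prod([access_control, detection, interdiction])  # 0.21

# Response capability: equally weighted product of six 0-1 rankings
# (staffing, training, equipment, planning, exercise, organization).
response_factors = [0.8, 0.7, 0.9, 0.6, 0.5, 0.8]
response_capability = prod(response_factors)

# To first order, relative errors add under multiplication: if each input is
# uncertain by +/-10%, a three-factor product is uncertain by roughly +/-30%
# and a six-factor product by roughly +/-60%.
print(f"vulnerability ~ {vulnerability:.2f}, response ~ {response_capability:.3f}")
```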

Users of TRAM should be wary of assuming too much about its reliability. For example, the calculations leading to Threat Ratings and Target Attractiveness rely on SME estimates of numerous factors that may or may not be independent, none of which can be known with much certainty. Then weightings and (in the case of Target Attractiveness) a guess at Attack Elasticity are factored into the calculations. Similarly complex calculations are included in the analyses of vulnerabilities and in an assessment of local emergency response capabilities, as noted in Chapter 2.

This complexity also makes it very difficult to evaluate the model. On a site visit to the Port Authority of New York and New Jersey, a committee delegation asked the TRAM contractor team whether the tool had been validated. The response was that the toolkit had undergone “face validation,” meaning that SMEs have not seen it produce counterintuitive results or, when it did, the model was adjusted to preclude that sort of outcome. This sort of feedback is useful but incomplete. Because the model has not been peer reviewed by technical experts external to DHS or the contractor, or truly validated, it is impossible to know the extent of the complexity problem or the effectiveness of TRAM at addressing uncertainty.

Nevertheless, as mentioned at the beginning of this section, TRAM’s structured approach and capability to assist decision makers by providing “informed” relative risk information should be recognized. A tool such as TRAM enforces a disciplined look at risk and provides the organization that uses it with the credibility of applying a recognized tool rather than relying on the unstructured “educated or experienced guesses” of agency personnel or the “gut feelings” of SMEs.


The outputs produced by TRAM appear quite useful as a means of conceptualizing the risk space and the relative rankings of various risks. The tool is apparently also quite helpful in displaying how various risk mitigation options affect those relative rankings, allowing some degree of benefit-cost analysis. However, because of the uncertainties associated with all of these outputs and the fact that the uncertainties are not well characterized, TRAM should be used cautiously. In its current state, TRAM’s outputs lack transparency for users and present a misleading degree of mathematical precision.

Finally, it is claimed that the TRAM software can estimate the risk buy-down from enhancing certain security systems. Revised scatter graphs are produced periodically that show how the risk to critical assets is redistributed following the implementation of various countermeasures or mitigation options, and in this way a benefit-cost analysis is achieved. However, when the tool is used for benefit-cost analysis, it produces only estimates of how the risk rankings change under various risk mitigation options, and these could be misleading. Contractor experts said they can estimate the uncertainty in these risk analyses, although the output becomes complicated, and that information is normally interpreted by contractor staff and not presented to the end users.

The Port Authority of New York and New Jersey (PANYNJ) is the first client for TRAM and has been assisting DHS and its contractors in the toolkit’s development from its inception. The PANYNJ users of TRAM have not questioned whether the model is validated, and they do not seem to be bothered by its tendency toward overquantification. They seem pleased to have a disciplined process that helps their staff identify the full spectrum of risks facing the Port Authority, and leaders of the agency point out the liability advantages to following a recognized system of risk analysis.

TRAM and PANYNJ staff told the committee how DHS assisted in a particular project to mitigate the risk of an explosive attack on a Port Authority asset. DHS assistance began with a computational simulation of a bomb blast, which showed that the damage could be more severe than PANYNJ engineers had judged based on their intuition. Given that vulnerability analysis, the TRAM team developed a range of mitigation steps—various combinations of reinforcement and additions to physical security—and estimated the risk buy-down for each combination. Based on this analysis, Port Authority leadership was able to choose a mitigation strategy that fit within its risk tolerance and budget constraints. This was considered a successful application of TRAM.


Recommendation: DHS should seek expert, external peer review of the TRAM model in order to evaluate its reliability and recommend steps for strengthening it.

Biological Threat Risk Assessment Model

DHS’s Biological Threat Risk Assessment (BTRA) model, which has been used to produce biennial assessments of bioterrorism risks since 2006, was thoroughly reviewed by a 2008 NRC report. The primary recommendation of that report reads as follows (NRC, 2008, p. 5):



The BTRA should not be used as a basis for decision making until the deficiencies noted in this report have been addressed and corrected. DHS should engage an independent, senior technical advisory panel to oversee this task. In its current form, the BTRA should not be used to assess the risk of biological, chemical, or radioactive threats.

The complexity of this model precludes transparency, and the present committee does not know how it could be validated. The lack of transparency potentially obscures large degrees of disagreement about and even ignorance of anticipated events. Even sensitivity analysis is difficult with such a complex model. Finally, the model’s complexity means it can only be run by its developers at Battelle Memorial Institute, not by those responsible for bioterrorism risk management.

While DHS reports that it is responding to most of the recommendations in the 2008 NRC report, the response is incremental, and a much deeper change is necessary. The proposed responses will do little to reduce the great complexity of the BTRA model. That complexity requires many more SME estimates than can be justified by the small pool of relevant experts and their base of existing knowledge. The proposed responses do not move away from the modeling of intelligent adversaries with estimated probabilities. Whether the BTRA is meant as a static assessment of risks or as a tool for evaluating risk management options, these shortcomings undermine its reliability. Finally, the proposed responses do not simplify the model and its instantiation in software in a way that would enable it to be of greater use to decision makers and risk managers (e.g., by allowing near-real-time exploration of what-if scenarios by stakeholders). Therefore, the committee has serious doubts about the usefulness and reliability of BTRA.

The committee’s concerns about BTRA are echoed by the Department of Health and Human Services (DHHS), which declined to rely on the results of the 2006 or 2008 BTRA assessments, a Chemical Terrorism Risk Assessment (CTRA), or the 2008 integrated Chemical, Biological, Radiological, and Nuclear (CBRN) assessment that builds on BTRA and CTRA.18 DHHS is responsible for the nation’s preparedness to withstand and respond to a bioterror attack, and in order to learn more about how DHS coordinates with another federal agency in managing a homeland security risk, a delegation of committee members made a site visit to DHHS’s Biomedical Advanced Research and Development Authority (BARDA), which is within the Office of the Assistant Secretary for Preparedness and Response.

18 Staff from the Biomedical Advanced Research and Development Authority (BARDA), Department of Health and Human Services, at the committee members’ site visit to DHHS, May 6, 2009, Washington, D.C.


According to BARDA’s web site, it “provides an integrated, systematic approach to the development and purchase of the necessary vaccines, drugs, therapies, and diagnostic tools for public health medical emergencies. BARDA manages Project BioShield, which includes the procurement and advanced development of medical countermeasures for chemical, biological, radiological, and nuclear agents, as well as the advanced development and procurement of medical countermeasures for pandemic influenza and other emerging infectious diseases that fall outside the auspices of Project BioShield.” BARDA provides subject matter input to DHS risk analyses and relies on DHS for threat analyses and risk assessments.

One fundamental reason that BARDA declined to rely on these DHS products is that it received only a document, without software or the primary data inputs, and thus could not conduct its own “what-if” analyses that could guide risk mitigation decision making. For example, DHHS would like to group those outcomes that require respiratory care, because then certain steps could provide preparedness against a range of agents. Neither BTRA nor CTRA allows a risk manager to adjust the output. More generally, BARDA staff expressed “frustration” because DHS provides such a limited set of information in comparison to the complex strategic decisions that DHHS must make. Staff gave the following examples:

  • The amount of agent included in an aerosol release was not stated in the CTRA report. This lack of transparency about basic assumptions undermines the credibility of the information coming out of the models that bear on consequence management.

  • BARDA’s SMEs believe that DHS was asking the wrong questions for an upcoming CTRA, but DHS was not willing to change the questions.

  • The consequence modeling in CTRA and BTRA is getting increasingly complicated. When DHHS pointed this out, it was told that simplifications are being slated for upgrades several years in the future.

One of the BARDA staff members said that if BARDA had DHS’s database and intermediate results, DHHS would be able to make “much better decisions” than it can seeing just the end results of an analysis. Under those preferred conditions, BARDA could vary the inputs (i.e., conduct a sensitivity analysis) to make better risk management decisions. For example, it could see where investments would make the most difference—that is, which countermeasures provide the best return on investment. More generally, BARDA staff would like to see a process with more interaction at the strategic level, allowing DHS and DHHS staff to jointly identify risks in a more qualitative manner. This should include a better definition of the threat space, with DHS defining scenarios and representing the threats for which DHHS and other stakeholders must prepare. From BARDA’s perspective, the risk modeling is less important than getting key people together and red-teaming a particular threat.
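The kind of what-if exploration BARDA describes is straightforward once inputs and intermediate results are in hand; the following one-at-a-time sensitivity sketch uses an invented toy consequence model to illustrate the idea (the model, parameters, and numbers are not drawn from BTRA or CTRA):

```python
# Illustrative one-at-a-time sensitivity sketch of the "what-if" analysis
# described above: vary one input, hold the rest, and see which
# countermeasure investment moves the outcome most. All values are invented.
def expected_casualties(agent_amount, detection_prob, treatment_capacity):
    exposed = agent_amount * 1000 * (1 - detection_prob)  # toy exposure model
    return max(exposed - treatment_capacity, 0)

baseline = dict(agent_amount=5.0, detection_prob=0.3, treatment_capacity=2000)
print("baseline:", expected_casualties(**baseline))

for name, delta in [("detection_prob", 0.2), ("treatment_capacity", 1000)]:
    scenario = dict(baseline)
    scenario[name] += delta
    print(f"+{delta} to {name}:", expected_casualties(**scenario))
```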

BARDA clearly has a compelling need for reliable assessments of bioterrorism risk, and it is a primary customer for the BTRA.


At the committee’s site visit to BARDA, a DHS representative noted that the Food and Drug Administration, the Department of Defense, and the White House Homeland Security Council are also customers of BTRA and that they are satisfied with the product.19 However, the committee does not believe that this satisfaction is a reason to disregard the valid concerns of DHHS.

Integrated Risk Management Framework

The committee can develop only a preliminary impression of DHS’s adoption of the Integrated Risk Management Framework because, as a developing process rather than a model, it is not yet in its final state. That is normal: instantiations of Enterprise Risk Management (ERM) in the private sector may take several years of work, and the results might be difficult to judge until even later. ERM is still an evolving field of management science. Companies from regulated sectors (finance, insurance, utilities) are the leaders in ERM sophistication, but those in nonregulated sectors (e.g., General Motors, Reynolds Corp., Delta Airlines, Home Depot, Wal-Mart, Bristol-Myers Squibb) are also practicing elements of ERM.20

Other federal agencies have also begun exploring the applicability of ERM to their own internal management challenges.21 It is an appealing construct because of its potential for breaking down the stovepipes that afflict many agencies. However, absent the profit-making motive of private industry (which gives companies a motivation for judiciously taking on some known risks), ERM does not map directly onto the management of a public entity. Government agencies recognize that they cannot simply adopt best practices in risk management from industry. In addition to the less-obvious “upside” of risk, government agencies might have more heterogeneous missions than a typical private sector corporation, and they might have responsibility to plan for rare events for which few data exist. In addition, societal expectations are vastly different for a government agency compared to a private firm, and many more stakeholder perspectives must be taken into account by government agencies when managing their risks.

Successful implementations of ERM in the private sector employ processes that reveal risks and emerging challenges early and then manage them proactively. These processes are shared across units where possible, both to minimize the resources required and to enable comparison of risks and sharing of mitigation steps. Such ERM programs need not be large, and their resource requirements can be minimal because they leverage existing activities.

19 Statement from Steven Bennett, DHS-RMA, at site visit to DHHS, May 6, 2009, Washington, D.C.

20 For additional background, see, for example, United Kingdom Treasury, 2004; Office of Government Commerce, United Kingdom Cabinet Office, 2002.

21 For example, a Government Accountability Office summit on ERM was held on October 21, 2009.


However, it is often the case that the group implementing ERM must have clear “marching orders” from top management. Many corporations and businesses have identified a senior executive (e.g., the chief financial officer, chief executive officer, or chief risk officer) and provided that person with explicit responsibility for overseeing the management of all risks across the enterprise.

At present, DHS’s ERM efforts within the Office of Risk Management and Analysis appear to be on the right track.22 RMA has established a Risk Steering Committee (RSC) for governance, it has inventoried current practices in risk analysis and risk management, it has begun working on coordination and communication, and it is developing longer-term plans. RMA is a modest-size office, and it “owns” very few of the risks within DHS’s purview. In terms of identifying and managing risks that cut across DHS directorates, formation and management of the RSC is a key enabler. In practice so far, most RSC meetings seem to involve a Tier 3 RSC that consists of lower-level staff who have been delegated responsibilities for their components. Agendas for two recent meetings of the Tier 3 RSC suggest that those two-hour meetings were focused on updates and information sharing.

A NUMBER OF ASPECTS OF DHS RISK ANALYSIS NEED ATTENTION

Based on its examination of these six illustrative risk analysis models and processes, the committee came to the following primary conclusion, which addresses element (a) of the Statement of Task:


Conclusion: DHS has established a conceptual framework for risk analysis (risk is a function of threat (T), vulnerability (V), and consequence (C), or R = f(T,V,C)) that, generally speaking, appears appropriate for decomposing risk and organizing information, and it has built models, data streams, and processes for executing risk analyses for some of its various missions. However, with the exception of risk analysis for natural disaster preparedness, the committee did not find any DHS risk analysis capabilities and methods that are yet adequate for supporting DHS decision making, because their validity and reliability are untested. Moreover, it is not yet clear that DHS is on a trajectory for development of methods and capability that is sufficient to ensure reliable risk analyses other than for natural disasters.

Recommendation: To develop an understanding of the uncertainties in its terrorism-related risk analyses (knowledge that will drive future improvements), DHS should strengthen its scientific practices, such as documentation, validation, and peer review by technical experts external to DHS.

22 The committee was informed at its meeting of November 2008 that the U.S. Coast Guard and Immigration and Customs Enforcement are also developing ERM processes that span just those component agencies, but the committee did not examine those processes.


This strengthening of its practices will also contribute greatly to the transparency of DHS’s risk modeling and analysis. DHS should also bolster its internal capabilities in risk analysis as part of its upgrading of scientific practices.


The steps implied by this primary conclusion are laid out in the next chapter. The focus on characterizing uncertainties is of obvious importance to decision makers and to improving the reliability of risk models and analysis. The treatment of uncertainty is recognized as a critical component of any risk assessment activity (Cullen and Small, 2004; NRC, 1983, 1994, 1996, 2008; an overview is presented in Appendix A). Uncertainty is always present in our ability to predict what might occur in the future, and it is present as well in our ability to reconstruct and understand what has happened in the past. This uncertainty arises from missing or incomplete observations and data; imperfect understanding of the physical and behavioral processes that determine the response of natural and built environments and the people within them; subjectivity embedded within analyses of threat and vulnerability and in the judgments of what to measure among consequences; and our inability to synthesize data and knowledge into working models able to provide predictions where and when we need them.

Proper recognition and characterization of both variability and uncertainty are important in all elements of a risk assessment, including effective interpretation of vulnerability, consequence, intelligence, and event occurrence data as they are collected over time. Some DHS risk work reflects an understanding of uncertainties—for example, the uncertainty in the FEMA floodplain maps is well characterized, and the committee was told that TRAM can produce output with an indication of uncertainty (though this is usually suppressed in accordance with the perceived wishes of the decision makers and was not shown to the committee). However, DHS risk analysts rarely mentioned uncertainty to the committee, and DHS appears to be in a very immature state with respect to characterizing uncertainty and considering its implications for ongoing data collection and prioritization of efforts to improve its methods and models.
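A minimal sketch of what characterizing uncertainty could look like for a Risk = T × V × C style estimate, reporting an interval rather than a point value; the distributions and parameters below are invented for illustration:

```python
# Illustrative Monte Carlo sketch of propagating input uncertainty through a
# simple R = T * V * C calculation and reporting a range instead of a point
# estimate. Distributions and parameters are invented for illustration.
import random

random.seed(1)
samples = []
for _ in range(10_000):
    t = random.betavariate(2, 8)          # uncertain threat likelihood
    v = random.betavariate(5, 5)          # uncertain vulnerability
    c = random.lognormvariate(4, 0.5)     # uncertain consequence (e.g., $M)
    samples.append(t * v * c)

samples.sort()
lo, med, hi = (samples[int(q * len(samples))] for q in (0.05, 0.5, 0.95))
print(f"risk estimate: median {med:.1f}, 90% interval [{lo:.1f}, {hi:.1f}]")
```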

Closely tied with the topic of uncertainty is that of properly reflecting the precision of risk analyses. The committee saw a pattern of DHS personnel and contractors’ putting too much effort into quantification and trusting numbers that are highly uncertain. Similarly, the committee observed a tendency to make risk analyses more complex than needed or justified. Examples were given earlier in this chapter with respect to TRAM and the Regional Resiliency Assessment Project in IP. Another example arose during the committee’s site visit to IP, wherein a briefing provided examples of protective measures assigned by SMEs to different security features. For example, a high metal fence with barbed wire received a Protective Measure Index (PMI) of 71, while a 6-foot wooden fence was given a PMI of 13. None of the DHS personnel could say whether the ratio 71/13 had any meaning with respect to vulnerability, yet the presentation continued with several graphics comparing the PMIs of various types of physical security measures as examples of the analyses provided by DHS.


Another example comes from a presentation at the committee’s first meeting on the National Maritime Strategic Risk Assessment, which included a Risk Index Number with four significant figures. These numbers are apparently used to compare different types of risk. There was no indication, however, that the National Maritime Strategic Risk Assessment relied on that false precision.

Uncertainty characterization cannot be addressed without clearer understanding (such as that obtained through documentation and peer review by technical experts external to DHS), strengthening of the internal skill base, and adherence to good scientific practices. Those topics are taken up in Chapter 5. The remainder of this chapter addresses other cross-cutting needs that are evident from the risk models and processes discussed above.

Comparing Risks Across DHS Missions

DHS is working toward risk analyses that are more and more comprehensive, in an attempt to enable the comparison of diverse risks faced by the department. For example, HSPD-18 directed DHS to develop an integrated risk assessment covering terrorism with chemical, biological, radiological, and nuclear (CBRN) weapons. The next generation of TRAM is being developed to include the ability to represent a range of hazards—human-initiated, technological, and natural—and to measure and compare risks using a common scale.23 More generally, RMA created the Integrated Risk Management Framework in order to support disparate types of risk assessment and management within DHS and eventually across the homeland security enterprise.24 The DHS Risk Steering Committee’s vision for integrated risk management is as follows: “…to enable individual elements, groups of elements, or the entire homeland security enterprise to simultaneously and effectively assess, analyze, and manage risk from multiple perspectives across the homeland security mission space” (DHS-RSC, 2009).

The concept calls for risk to be assessed and managed in a consistent manner from multiple perspectives, specifically: (a) managed across missions within a single DHS component (e.g., within the Immigration and Customs Enforcement agency); (b) assessed by hazard type (e.g., bioterrorism or chemical terrorism); (c) managed by homeland security functions (e.g., physical and information-based screening of travelers and cargoes); and (d) managed by security domain (e.g., aviation security strategies) (DHS-RSC, 2009).

23 Chel Stromgren (SAIC) presentation to the committee, November 24-25, 2008, Washington, D.C.

24 Tina Gabbrielli (RMA) presentation to the committee, November 24-25, 2008, Washington, D.C.


If an approach to integrated risk management can be successfully developed and implemented, the opportunities for improving the quality and utility of risk analyses carried out by many components of DHS and by many partners should be extensive.

In the committee’s view, this emphasis on integrated risk analyses is unwise given DHS’s current capabilities in risk analysis and the state of the science. Integrated risk analysis collects analyses for all potential risks facing an entity, here DHS, and combines those risks into one complete analysis using a common metric. This contrasts with comparative risk analysis, which omits the last step. In comparative risk analysis, potential risks to the entity from many different sources are analyzed and the risks then compared (or contrasted), but no attempt is made to put them into a common metric. As previously noted, there are major differences in carrying out risk analyses for (1) natural disasters, which may rest on the availability of considerable amounts of historical data to help determine the threat (e.g., flood data derived from years of historical records produced from a vast network of stream gages, earthquake-related data concerning locations and frequency of occurrence of seismic disturbances), and (2) terrorist attacks, which may have no precedents and are carried out by intelligent adversaries, resulting in a threat that is difficult to predict or even to conceptualize (e.g., biological attacks). Whereas natural disasters can be modeled with relative effectiveness, terrorist disasters cannot. A recent report by a group of experts concludes that “it is simply not possible to validate (evaluate) predictive models of rare events that have not occurred, and unvalidated models cannot be relied upon” (JASON, 2009, p. 7).

The balancing of priorities between natural hazards and terrorism is far more than an analytic comparison of effects such as health outcomes. Political factors weigh heavily in such balancing, and they are affected by public and government (over)reaction to terrorism.

Even though many DHS components are using quantitative, probabilistic risk assessment models based to some extent on Risk = f(T,V,C), the details of the modeling, and the embedded assumptions, vary significantly from application to application. Aggregation of the models into ones that purport to provide an integrated view is of dubious value. For example, NISAC sometimes manually integrates elements from the output of various models. There is large room for error when the outputs of one model serve as inputs for another. It might be wiser to build and improve CIKR interdependent systems simulation models by designing modules and integrating elements over time, rather than taking the current collection of models and “jamming them together.” Decision support systems that provide risk analysis must be designed with particular decisions in mind, and it is sometimes easier to build new integrated models rather than trying to patch together a collection of risk models developed over time and for various purposes. A decision support system designed from the beginning to be integrated minimizes the chance of conflicting assumptions and even mechanical errors that can accrue when outputs are manually merged.

While corporations that practice ERM do integrate some risk models from across the enterprise and have developed disciplined approaches for managing portfolios of risk, their risks are relatively homogeneous in character.


They are not dealing with risks as disparate as the ones within the purview of DHS.

It is not clear that DHS recognizes the fundamental limitations to true integration. A working paper from DHS-RMA, “Terrorism Risk and Natural Hazard Risk: Cross-Domain Risk Comparisons” (March 25, 2009; updated July 14, 2009), concludes that “it is possible to compare terrorism risk with natural hazard risk” (p. 6). The working paper goes on to say that “the same scale, however, may be an issue” (p. 6). The committee agrees; however, it does not agree with the implication in the RMA paper (which is based on an informal polling of self-selected risk analysts) that such a comparison should be done. It is clear from the inputs of the polled risk analysts that this is a research frontier, not an established capability. There does not exist a general method that will allow DHS to compare risks across domains (e.g., weighing investments to counter terrorism against those to prepare for natural disasters or to reduce the risk of illegal immigration).

Even if one moves away from quantitative risk assessment, the problem does not disappear. There are methods in qualitative risk analysis for formally eliciting advice for decision making, such as Delphi analysis, scoring methods, and expert judgment, that can be used to compare risks of very different types. There is a well-established literature on comparative risk analysis that can be used to apply the Risk = f(T,V,C) approach to different risk types (Davies, 1996; EPA, 1987; Finkel and Golding, 1995). However, the results are likely to involve substantially different metrics that cannot be compared directly. Even so, the scope and diversity of the metrics can themselves be very informative for decision making.

Therefore, in response to element (d) of the Statement of Task, the committee makes the following recommendation:


Recommendation: The risks presented by terrorist attack and natural disasters cannot be combined in one meaningful indicator of risk, so an all-hazards risk assessment is not practical. DHS should not attempt an integrated risk assessment across its entire portfolio of activities at this time because of the heterogeneity and complexity of the risks within its mission.


The risks faced by DHS are too disparate to be amenable to quantitative comparison on a common metric. The uncertainty in the threat is one reason, the difficulty of analyzing the social consequences of terrorism is a second, and the difference in assessment methods is yet another. One key distinguishing characteristic is that in a terrorist event, there is an intelligent adversary intending to do harm or achieve other goals. In comparison, “Mother Nature” does not cause natural hazard events to occur in order to achieve some desired goal. Further, the intelligent adversary can adapt as information becomes available or as goals change; thus the likelihood of a successful terrorism event (T × V) changes over time. Even comparing risks of, say, earthquakes and floods on a single tract of land raises difficult questions. As a general principle, a fully integrated analysis that aggregates widely disparate risks by use of a common metric is not a practical goal and in fact is likely to be inaccurate or misleading given the current state of knowledge of methods used in quantitative risk analysis.


The science of risk analysis does not yet support the kind of reductions in diverse metrics that such a purely quantitative analysis would require.

The committee is more optimistic about using an integrated approach if the subject of the analysis is a set of alternative risk management options, for example, in an analysis of investments to improve resilience. The same risk management option might have positive benefits across different threats—for example, for both a biological attack and an influenza pandemic, or for an attack on a chemical facility and an accidental chemical release. In such cases, a single risk management option can reduce risks from a number of sources, spanning natural hazards and terrorism. Alternative risk management options that could mitigate risks to a set of activities or assets could be analyzed in a single quantitative model, in much the same way that cost-effectiveness analysis can be used to select the least-cost investment in situations in which benefits are generally incommensurate. An example might be an analysis of emergency response requirements to reduce the consequences of several disparate risks.
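A minimal sketch of this kind of cross-hazard screening, with invented options and scores: choose the least-cost option whose benefits clear a threshold against every hazard it addresses.

```python
# Illustrative sketch of the cross-hazard option screening described above:
# pick the least-cost risk management option that meets a minimum benefit
# threshold against every hazard it touches. Options and scores are invented.
options = {
    "surge hospital beds": {"cost_$M": 40,
                            "benefit": {"bioattack": 0.7, "pandemic": 0.8}},
    "chemical detectors":  {"cost_$M": 25,
                            "benefit": {"chem attack": 0.6, "chem accident": 0.6}},
    "regional stockpile":  {"cost_$M": 60,
                            "benefit": {"bioattack": 0.9, "pandemic": 0.9}},
}
threshold = 0.65
feasible = {name: o for name, o in options.items()
            if all(b >= threshold for b in o["benefit"].values())}
best = min(feasible, key=lambda n: feasible[n]["cost_$M"])
print("least-cost option meeting the threshold:", best)
```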

Leading thinkers in public health and medicine have argued that preparedness and response systems for bioterrorism have the dual benefit of managing the consequences of new and emerging infections and food-borne outbreaks. Essential to management of both biological attacks and naturally occurring outbreaks is a robust public health infrastructure (Henderson, 1999; IOM, 2003). Among the shared requirements for outbreak response (whether the event is intentional or natural in origin) are a public health workforce schooled in the detection, surveillance, and management of epidemic disease; a solid network of laboratory professionals and diagnostic equipment; physicians and nurses trained in early detection of novel disease who are poised to communicate with health authorities; and a communication system able to alert the public to any danger and describe self-protective actions (Henderson, 1999; IOM, 2003).

Recent evidence supports this dual-use argument. State and local public health practitioners who have received federal grants for emergency preparedness and response over the past decade exhibit an enhanced ability to handle “no-notice” health events, regardless of cause (CDC, 2008; TFAH, 2008). Arguably, the additional epidemiologists and other public health professionals hired through the preparedness grants—the number of which doubled from 2001 to 2006—have improved the overall functioning of health departments, not simply their emergency capabilities (CDC, 2008). Hospitals, too, that have received federal preparedness grants originally targeted to the low-probability, high-consequence bioterrorist threat report an enhanced state of resilience and increased capacity to respond to “common medical disasters” (Toner et al., 2009). Among the gains catalyzed by the federally funded hospital preparedness program are the elaboration of emergency operation plans, implementation of communication systems, adoption of hospital incident command system concepts, and development of memoranda of understanding between facilities for sharing resources and staff during disasters (Toner et al., 2009).



A parallel example can be found in managing the risks of hazardous chemicals. Improved emergency preparedness, emergency response, and disaster recovery can help contain the consequences of a chemical release, thus lessening the appeal of the chemical infrastructure as a target for terrorists. At the same time, such readiness, response, and recovery capabilities can better position a community to mitigate the effects of a chemical accident (NRC, 2006).

An NRC report (2006) that evaluated the current state of social science research into hazards and disasters noted that there has been no systematic scientific assessment of how natural, technological, and willful hazard agents vary in their threats and characteristics, thus “requiring different pre-impact interventions and post-impact responses by households, businesses, and community hazard management organizations” (p. 75). That report continued (NRC, 2006, pp. 75-76):

In the absence of systematic scientific hazard characterization, it is difficult to determine whether—at one extreme—natural, technological, and willful hazards agents impose essentially identical disaster demands on stricken communities—or at the other extreme—each hazard is unique. Thorough examination of the similarities and differences among hazard agents would have significant implications for guiding the societal management of these hazards.

Recommendation: In light of the critical importance of knowledge about societal responses at various levels to risk management decisions, the committee recommends that DHS include within its research portfolio studies of how population protection and incident management compare across a spectrum of hazards.


It is possible to compare two disparate risks using different metrics, and that might be the direction in which DHS can head, but doing so requires great care in presentation, and there is a high risk that the results will be misunderstood. The metrics appropriate to one risk might be completely different from those appropriate to another, and any attempt to squeeze them onto the same plot is likely to introduce too much distortion. Rather, the committee encourages comparative risk analysis25 (which is distinct from integrated or enterprise risk analysis26) that is structured within a decision framework but without trying to force the risks onto the same scale.

One of the key assumptions in integrated or enterprise risk management (particularly for financial services firms) is that there is a single aggregate risk measure such as economic capital (Bank for International Settlements, 2006).

25 See Davies, 1996; EPA, 1987; Finkel and Golding (eds.), 1995.

26 See, for example, Committee of the Sponsoring Organizations of the Treadway Commission, 2004; Doherty, 2000.


Economic capital is the estimated amount of money that a firm must have available to cover ongoing operations, deal with worst-case outcomes, and survive. While many of the concepts of integrated or enterprise risk management (ERM), particularly governance, process, and culture, can also be applied to DHS, there is currently no single measure of risk analogous to economic capital that is appropriate for DHS use. Thus, DHS must practice comparative risk management, using multiple metrics to understand and evaluate risks. It is worth noting that most nonfinancial services firms implementing ERM adopt its philosophical concepts but maintain several metrics for comparing risk across operations: for example, time to recover operations, service-level impact over time, potential economic loss of product or service, and the number of additional temporary staff (reallocated resources) needed to restore operations to normal levels. Such metrics are much more in line with DHS’s need to focus on response and recovery.
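As a minimal sketch of how such multi-metric profiles might be recorded, the structure below (all field names and figures are illustrative assumptions, not DHS data) keeps each metric in its own units and deliberately provides no step that folds them into a single score:

```python
from dataclasses import dataclass

@dataclass
class ComparativeRiskProfile:
    """A hazard's standing on several separate comparative metrics.

    Hypothetical structure: each field keeps its own units, and there is
    intentionally no method that aggregates them into one number.
    """
    hazard: str
    time_to_recover_days: float      # time to restore operations
    peak_service_impact_pct: float   # service-level impact over time
    potential_loss_usd_m: float      # potential economic loss of product/service
    temp_staff_needed: int           # reallocated staff to restore normal levels

profiles = [
    ComparativeRiskProfile("chemical release", 14, 60.0, 250.0, 300),
    ComparativeRiskProfile("major hurricane",  45, 90.0, 900.0, 1500),
]

# Present the metrics side by side for decision makers; no aggregation step.
for p in profiles:
    print(f"{p.hazard:>16}: recover {p.time_to_recover_days:.0f} d | "
          f"impact {p.peak_service_impact_pct:.0f}% | "
          f"loss ${p.potential_loss_usd_m:.0f}M | staff {p.temp_staff_needed}")
```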

Comparative analysis works because the conceptual breakdown of risk into threat, vulnerability, and consequence can be applied to any risk. Rather than seeking an optimal balance of investments, and certainly rather than trying to find one through a single complex quantitative model, DHS should instead use analytical methods to identify options that are adequate to various stakeholders and then choose among them based on the best judgment of leadership. It seems feasible for DHS to consider a broad collection of hazards, sketch out mitigation options, examine co-benefits, and develop a set of actions to reduce vulnerability and increase resilience. A capable group of risk experts could help execute such a plan.
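A rough sketch of that screening step, with option names, metrics, and thresholds invented for illustration, is a satisficing filter: each mitigation option is checked against stakeholder-set adequacy limits on every metric separately, and all survivors are passed to leadership rather than to an optimizer:

```python
# Hypothetical satisficing filter; names and numbers are invented.
options = {
    "harden facilities":   {"time_to_recover_days": 10, "loss_usd_m": 300},
    "stockpile supplies":  {"time_to_recover_days": 14, "loss_usd_m": 150},
    "mutual-aid compacts": {"time_to_recover_days": 12, "loss_usd_m": 180},
}

# Adequacy thresholds elicited from stakeholders, one per metric.
adequacy_limits = {"time_to_recover_days": 15, "loss_usd_m": 250}

# Keep every option that is adequate on all metrics; do not rank them.
adequate = [
    name for name, metrics in options.items()
    if all(metrics[m] <= limit for m, limit in adequacy_limits.items())
]
print(adequate)  # ['stockpile supplies', 'mutual-aid compacts']
```

The filter deliberately stops short of naming a single “optimum”; the choice among the adequate options, weighing co-benefits and resilience, remains a leadership judgment.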
