
Appendix A

Reprinted Letter Report from the Committee on Deterring Cyberattacks

March 25, 2010

Mr. Brian Overington
Assistant Deputy Director of National Intelligence
Office of the Director of National Intelligence
Washington, DC 20511

Dear Mr. Overington:

This letter report from the National Research Council’s (NRC’s) Committee on Deterring Cyberattacks is the first deliverable for Contract Number HHM-402-05-D-0011, DO#12. This committee (biographies of committee members are provided in Attachment 1) was created to help inform strategies for deterring cyberattacks and to develop options for U.S. policy in this area. The project statement of task is provided below:

An ad hoc committee will oversee an activity to foster a broad, multidisciplinary examination of deterrence strategies and their possible utility to the U.S. government in its policies toward preventing cyberattacks. In the first phase, the committee will prepare a letter report identifying the key issues and questions that merit examination. In the next phase, the committee will engage experts to prepare papers that address key issues and questions, including those posed in the letter report. The papers will be compiled in a National Research Council publication and/or published by appropriate journals. This phase will include a committee meeting and a workshop to discuss draft papers, with authors finalizing the papers following the workshop.

This letter report satisfies the deliverable requirement of the first phase of the project by providing basic information needed to understand the nature of the problem and to articulate important questions that can drive research regarding ways of more effectively preventing, discouraging, and inhibiting hostile activity against important U.S. information systems and networks. (Attachment 2 acknowledges the reviewers of this letter report.)
The second phase of this project will entail selection of appropriate experts to write papers on questions raised in this report.

NOTE: National Research Council, “Letter Report from the Committee on Deterring Cyberattacks: Informing Strategies and Developing Options for U.S. Policy,” The National Academies Press, Washington, D.C., March 25, 2010, available at http://www.nap.edu/catalog/12886.html.

Much of the analytical framework of this letter report draws heavily on reports previously issued by the NRC.1 In particular, it builds in large part on the work of a previous NRC panel (the NRC Committee on Offensive Information Warfare), which issued a report entitled Technology, Policy, Law, and Ethics Regarding Acquisition and Use of U.S. Cyberattack Capabilities in April 2009, and extracts without specific attribution sections from Chapters 2, 9, and 10 of that report. In addition, and as requested by the Office of the Director of National Intelligence (ODNI), the committee reviewed the ODNI-provided compendiums on three summer workshops conducted by the ODNI,2 and incorporated insights and issues from them into this report as appropriate.

This report consists of three main sections. Section 1 describes a broad context for cybersecurity, establishing its importance and characterizing the threat. Section 2 sketches a range of possible approaches for how the nation might respond to cybersecurity threats, emphasizing how little is known about how such approaches might be effective in an operational role. Section 3 describes a research agenda intended to develop more knowledge and insight into these various approaches.

As for the second phase of this project, a workshop will be held in June 2010 to discuss a number of papers that have been commissioned by the committee and possibly additional papers received through the NRC’s call for papers. This call for papers is at the heart of a competition sponsored by the NRC to solicit excellent papers on the subject of cyberdeterrence. The call for papers can be found at http://sites.nationalacademies.org/CSTB/CSTB_056215.

1. THE BROAD CONTEXT FOR CYBERSECURITY3

Today, it is broadly accepted that U.S. military and economic power is ever more dependent on information and information technology.
Accordingly, maintaining the security of important information and information technology systems against hostile action (a topic generally referred to as “cybersecurity”) is a problem of increasing importance to policy makers, and an important policy goal of the United States is to prevent, discourage, and inhibit hostile activity against these systems and networks.

This project was established to address cyberattacks, which refer to the deliberate use of cyber operations—perhaps over an extended period of time—to alter, disrupt, deceive, degrade, usurp, or destroy adversary computer systems or networks or the information and/or programs resident in or transiting these systems or networks.4 Cyberattack is not the same as cyber exploitation, which is an intelligence-gathering activity rather than a destructive activity; it refers to the use of cyber operations—perhaps over an extended period of time—to support the goals and missions of the party conducting the exploitation, usually for the purpose of obtaining information resident on or transiting through an adversary’s computer systems or networks. Cyberattack and cyber exploitation are technically very similar, in that both require a vulnerability, access to that vulnerability, and a payload to be executed. They are technically different only in the nature of the payload to be executed. These technical similarities often mean that a targeted party may not be able to distinguish easily between a cyber exploitation and a cyberattack.

1 National Research Council (NRC), Technology, Policy, Law, and Ethics Regarding U.S. Acquisition and Use of Cyberattack Capabilities (William Owens, Kenneth Dam, and Herbert Lin, editors), The National Academies Press, Washington, D.C., 2009; NRC, Toward a Safer and More Secure Cyberspace (Seymour Goodman and Herbert Lin, editors), The National Academies Press, Washington, D.C., 2007.

2 These workshops addressed the role of the private sector, deterrence, and attribution.

3 The discussion in this section is based on Chapter 1, NRC, Technology, Policy, Law, and Ethics Regarding U.S. Acquisition and Use of Cyberattack Capabilities, 2009; and Chapter 2, NRC, Toward a Safer and More Secure Cyberspace, 2007.

4 This report does not consider the use of electromagnetic pulse (EMP) attacks. EMP attacks typically refer to nonselective attacks using nuclear weapons to generate an intense electromagnetic pulse that can destroy all unprotected electronics and electrical components within a large area, although a tactical EMP weapon intended to selectively target such components on a small scale is possible to imagine. For a comprehensive description of the threat from EMP attacks, see Report of the Commission to Assess the Threat to the United States from Electromagnetic Pulse (EMP) Attack, available at http://www.globalsecurity.org/wmd/library/congress/2004_r/04-07-22emp.pdf.

Because of the ambiguity of cyberattack and cyber exploitation from the standpoint of the targeted party, it is helpful to have a word to refer to a hostile cyber activity where the nature of the activity is not known (that is, an activity that could be either a cyberattack or a cyber exploitation)—in this report, the term cyberintrusion is used to denote such activity.

The range of possibilities for cyberintrusion is quite broad.5 A cyberattack might result in the destruction of relatively unimportant data or the loss of availability of a secondary computer system for a short period of time—or it might alter top-secret military plans or degrade the operation of a system critical to the nation, such as an air traffic control system, a power grid, or a military command and control system. Cyber exploitations might target the personal information of individual consumers or critical trade secrets of a business, military war plans, or design specifications for new weapons. Although all such intrusions are worrisome, some of these are of greater significance to the national well-being than others.

Intrusions are conducted by a range of parties, including disgruntled or curious individuals intent on vandalizing computer systems, criminals (sometimes criminal organizations) intent on stealing money, terrorist groups intent on sowing fear or seeking attention to their causes, and nation-states for a variety of national purposes. Moreover, it must be recognized that nation-states can tolerate, sponsor, or support terrorist groups, criminals, or even individuals as they conduct their intrusions. A state might tolerate individual hackers who wish to vandalize an adversary’s computer systems, perhaps for the purpose of sowing chaos. Or it might sponsor or hire criminal organizations with special cyber expertise to carry out missions that it did not have the expertise to undertake. Or it might provide support to terrorist groups by looking the other way as those groups use the infrastructure of the state to conduct Internet-based operations. In times of crisis or conflict, a state might harbor (or fail to discourage, or encourage, or control) “patriotic hackers” or “cyber patriots” who conduct hostile cyberintrusions against a putative adversary. Note that many such actions would also be plausibly deniable by the government of the host state.

The threats that adversaries pose can be characterized along two dimensions—the sophistication of the intrusion and the damage it causes. Though these two are often related, they are not the same. Sophistication is needed to penetrate good cyberdefenses, and the damage an intrusion can cause depends on what the adversary does after it has penetrated those defenses. As a general rule, a greater availability of resources to the adversary (e.g., more money, time, talent) will tend to increase the sophistication of the intrusion that can be launched against any given target and thus the likelihood that the adversary will be able to penetrate the target’s defenses.

Two important consequences follow from this discussion. First, because nation-state adversaries can bring to bear enormous resources to conduct an intrusion, the nation-state threat (perhaps conducted through intermediaries) is the most difficult to defend against. Second, stronger defenses reduce the likelihood but cannot eliminate the possibility that even less sophisticated adversaries can cause significant damage.

2. A RANGE OF POSSIBILITIES

The discussion below focuses primarily on cyberattacks as the primary policy concern of the United States, and addresses cyber exploitation as necessary.

2.1 The Limitations of Passive Defense and Some Additional Options

The central policy question is how to achieve a reduction in the frequency, intensity, and severity of cyberattacks on U.S. computer systems and networks currently being experienced and how to prevent the far more serious attacks that are in principle possible. To promote and enhance the cybersecurity of important U.S. computer systems and networks (and the information contained in or passing through these systems and networks), much attention has been devoted to passive defense—measures taken unilaterally to increase the resistance of an information technology system or network to attack. These measures include hardening systems against attack, facilitating recovery in the event of a successful attack, making security more usable and ubiquitous, and educating users to behave properly in a threat environment.6

Passive defenses for cybersecurity are deployed to increase the difficulty of conducting the attack and reduce the likelihood that a successful attack will have significant negative consequences. But experience and recent history have shown that they do not by themselves provide an adequate degree of cybersecurity for important information systems and networks.

A number of factors explain the limitations of passive defense. As noted in previous NRC reports,7 today’s decision-making calculus regarding cybersecurity excessively focuses vendor and end-user attention on the short-term costs of improving their individual cybersecurity postures, to the detriment of the national cybersecurity posture as a whole. As a result, much of the critical infrastructure on which the nation depends is inadequately protected against cyberintrusion. A second important factor is that passive defensive measures must succeed every time an adversary conducts a hostile action, whereas the adversary’s action need succeed only once. Put differently, attacks can be infinitely varied, whereas defenses are only as strong as their weakest link. This fact places a heavy and asymmetric burden on a defensive posture that employs only passive defense.

5 Chapter 1, NRC, Technology, Policy, Law, and Ethics Regarding U.S. Acquisition and Use of Cyberattack Capabilities, 2009.
Because passive defenses do not eliminate the possibility that an attack might succeed, it is natural for policy makers to seek other mechanisms to deal with threats that passive defenses fail to address adequately. Policy makers understandably aspire to a goal of preventing cyberattacks (and cyber exploitations as well), but most importantly to a goal of preventing serious cyberattacks—cyberattacks that have a disabling or a crippling effect on critical societal functions on a national scale (e.g., military mission readiness, air traffic control, financial services, provision of electric power). In this context, “deterrence” refers to a tool or a method used to help achieve this goal. The term “deterrence” itself has a variety of connotations, but broadly speaking, deterrence is a tool for dissuading an adversary from taking hostile actions.

Adversaries that might conduct cyberintrusions against the United States span a broad range and may well have different objectives. Possible adversaries include nation-states that would use cyberattacks to collect intelligence, steal technology, or “prepare the battlefield” for use of cyberattacks either by themselves or as part of a broader effort (perhaps involving the use or threat of use of conventional force) to coerce the United States; sophisticated elements within a state that might not be under the full control of the central government (e.g., Iranian Revolutionary Guards); criminal organizations seeking illicit monies; terrorist groups operating without state knowledge; and so on.

In principle, policy makers have a number of approaches at their disposal to further the broad goal of preventing serious cyberattacks on the United States. In contrast to passive defense, all of these approaches depend on the ability to attribute hostile actions to specific responsible parties (although the precise definition of “responsible party” depends to a certain extent on context).
The first approach, and one of the most common, is the use of law enforcement authorities to investigate cyberattacks, and then identify and prosecute the human perpetrators who carry out these attacks. Traditionally, law enforcement actions serve two purposes. First, when successful, they remove such perpetrators from conducting further hostile action, at least for a period of time. Second, the punishment imposed on perpetrators is intended to dissuade other possible perpetrators from conducting similar actions. However, neither of these purposes can be served if the cyberattacks in question cannot be attributed to specific perpetrators.

6 As an example, see NRC, Toward a Safer and More Secure Cyberspace, 2007.

7 National Research Council, Cybersecurity Today and Tomorrow: Pay Now or Pay Later, The National Academies Press, Washington, D.C., 2002; NRC, Toward a Safer and More Secure Cyberspace, 2007.

In a cyber context, law enforcement investigations and prosecutions have had some success, but the time scale on which such activities yield results is typically on the order of months, during which time cyberattacks often continue to plague the victim. As a result, most victims have no way to stop an attack that is causing ongoing damage or loss of information. In addition, the likelihood that any given attack will be successfully investigated and prosecuted is low, thus reducing any potential deterrent effect. Notwithstanding the potential importance of law enforcement activities for the efficacy of possible deterrence strategies, law enforcement activities are beyond the scope of this report and will not be addressed further herein.

A second approach relies on deterrence as it is classically understood. The classical model of deterrence (discussed further in Section 2.2) seeks to prevent hostile actions through the threat of retaliation or responsive action that imposes unacceptable costs on a potential adversary or denies an adversary the benefits that may result from taking those hostile actions. Deterrence thus includes active defense, in which actions can be taken to neutralize an incoming cyberattack.

A third approach takes note of the fact that the material threat of retaliation underlying deterrence is not the only method of inhibiting undesirable behavior. Behavioral restraint (discussed further in Section 2.3) is more often the result of formal law and informal social norms, and the burden of enforcement depends a great deal on the robustness of such rules and the pressures to conform to those rules that can be brought to bear through the social environment that the various actors inhabit.

These approaches—and indeed an approach based on passive defense—are by no means mutually exclusive.
For example, some combination of strengthened passive defenses, deterrence, law enforcement, and negotiated behavioral restraint may be able to reduce the likelihood that highly destructive cyberattacks would be attempted and to minimize the consequences if cyberattacks do occur. But how well any of these approaches can or will work to prevent cyberattacks (or cyberintrusions more broadly) is open to question, and indeed is one primary subject of the papers to be commissioned for this project.

2.2 Classical Deterrence8

Many analysts have been drawn to the notion of deterring hostile activity against important IT systems and networks, rather than just defending against such activity. Deterrence seems like an inevitable choice in an offense-dominant world—that is, a world in which offensive technologies and tactics are generally capable of thwarting defensive efforts. As noted in Section 2.1, a major difficulty of defending against hostile actions in cyberspace arises from the asymmetry of offense versus defense.

Deterrence was and is a central construct in contemplating the use of nuclear weapons and in nuclear strategy. Because effective defenses against nuclear weapons are difficult to construct, using the threat of retaliation to persuade an adversary to refrain from using nuclear weapons is regarded by many as the most plausible and effective alternative to ineffective or useless defenses. Indeed, deterrence of nuclear threats in the Cold War establishes the paradigm in which the conditions for successful deterrence are largely met. Although the threat of retaliation is not the only possible mechanism for practicing deterrence, such a threat is in practice the principal and most problematic method implied by use of the term.9

Extending traditional deterrence principles to cyberattack (that is, cyberdeterrence) would suggest an approach that seeks to persuade adversaries to refrain from launching cyberattacks against U.S. interests, recognizing that cyberdeterrence would be only one of a suite of elements of U.S. national security policy.

8 The discussion in Section 2.2 is based on Chapter 9, NRC, Technology, Policy, Law, and Ethics Regarding U.S. Acquisition and Use of Cyberattack Capabilities, 2009.

9 Analysts also invoke the concept of deterrence by denial, which is based on the prospect of deterring an adversary through the prospect of failure to achieve its goals—facing failure, the adversary chooses to refrain from acting. But denial is—by definition—difficult to practice in an offense-dominant world.

But it is an entirely open question whether cyberdeterrence is a viable strategy. Although nuclear weapons and cyber weapons share one key characteristic (the superiority of offense over defense), they differ in many other key characteristics, and the section below discusses cyberdeterrence and, when appropriate, contrasts it with Cold War nuclear deterrence. What the discussion below will suggest is that nuclear deterrence and cyberdeterrence raise many of the same questions, but that the answers to these questions are quite different in the cyber context than in the nuclear context.

The U.S. Strategic Command formulates deterrence as follows:10

Deterrence [seeks to] convince adversaries not to take actions that threaten U.S. vital interests by means of decisive influence over their decision-making. Decisive influence is achieved by credibly threatening to deny benefits and/or impose costs, while encouraging restraint by convincing the actor that restraint will result in an acceptable outcome.

For purposes of this report, the above formulation will be used to organize the remainder of this section, by discussing at greater length the words in bold above. Nevertheless, the committee does recognize that there are other plausible formulations of the concept of deterrence, and that these formulations might differ in tone and nuance from that provided above.

2.2.1 “Convince”

At its root, convincing an adversary is a psychological process. Classical deterrence theory assumes that actors make rational assessments of costs and benefits and refrain from taking actions where costs outweigh benefits.
But it assumes unitary actors (i.e., a unitary decision maker whose cost-benefit calculus is determinative for all of the forces under his control), and also that the costs and benefits of each actor are clear, well-defined, and indeed known to all other actors involved, and further that these costs and benefits are sufficiently stable over time to formulate and implement a deterrence strategy. Classical deterrence theory bears many similarities to neoclassical economics, especially in its assumptions about the availability of near-perfect information (perfect in the economic sense) about all actors.

Perhaps more importantly, real decisions often take place during periods of crisis, in the midst of uncertainty, doubt, and fear that often lead to unduly pessimistic assessments. Even a cyberattack conducted in peacetime is more likely to be carried out under circumstances of high uncertainty about the effectiveness of technology on both sides, the motivations of an adversary, and the effects of an attack.

In addition, cyber conflict is relatively new, and there is not much known about how cyber conflict would or could evolve in any given situation. History shows that when human beings with little hard information are placed into unfamiliar situations in a general environment of tension, they often substitute supposition for knowledge. In the words of a former senior administration official responsible for protecting U.S. critical infrastructure, “I have seen too many situations where government officials claimed a high degree of confidence as to the source, intent, and scope of a [cyber]attack, and it turned out they were wrong on every aspect of it. That is, they were often wrong, but never in doubt.”11

As an example, cyber operations that would be regarded as unfriendly during normal times may be regarded as overtly hostile during periods of crisis or heightened tension.
Cyber operations X, Y, and Z undertaken by party A (with a history of neutrality) may be regarded entirely differently if undertaken by party B (with a history of acting against U.S. interests). Put differently, reputations and past behavior matter—how we regard or attribute certain actions that happen today will depend on what has happened in the past.

This point has particular relevance as U.S. interest in obtaining offensive capabilities in cyberspace becomes more apparent. The United States is widely regarded as the world leader in information technology, and such leadership can easily be seen by the outside world as enabling the United States to conceal the origin of any offensive cyber operation that it might have conducted. That is, many nations will find it plausible that the United States is involved in any such operation against it, and even if no U.S.-specific “fingerprints” can be found, such a fact can easily be attributed to putative U.S. technological superiority in conducting such operations.

Lastly, a potential adversary will not be convinced to refrain from hostile action if it is not aware of measures the United States may take to retaliate. Thus, some minimum of information about deterrence policy must be known and openly declared. This point is further addressed in Section 1.1.3.

2.2.2 “Adversaries”

In the Cold War paradigm of nuclear deterrence, the world is state-centric and bipolar. It was reasonable to presume that only nation-states could afford to assemble the substantial infrastructure needed to produce the required fissile material and develop nuclear weapons and their delivery vehicles. That infrastructure was sufficiently visible that an intelligence effort directed at potential adversaries could keep track of the nuclear threat that possible adversaries posed to the United States. Today’s concerns about terrorist use of nuclear weapons arise less from a fear that terrorists will develop and build their own nuclear weapons and more from a fear that they will be able to obtain nuclear weapons from a state that already has them.

These characteristics do not apply to the development of weapons for cyberattack. Many kinds of cyberattack can be launched with infrastructure, technology, and background knowledge easily and widely available to nonstate parties and small nations.

10 U.S. Department of Defense, Deterrence Operations: Joint Operating Concept, Version 2.0, December 2006, available at http://www.dtic.mil/futurejointwarfare/concepts/do_joc_v20.doc.

11 See NRC, Technology, Policy, Law, and Ethics Regarding Acquisition and Use of U.S. Cyberattack Capabilities, 2009, page 142.
Although national capabilities may be required for certain kinds of cyberattack (such as those that involve extensive hardware modification or highly detailed intelligence regarding truly closed and isolated systems and networks), substantial damage can be inflicted by cyberattacks based on ubiquitous technology.

A similar analysis holds for identifying the actor responsible for an attack. In the nuclear case, an attack on the United States would have been presumed to be Soviet in origin because the world was bipolar. In addition, surveillance of potential launch areas provided high-confidence information regarding the fact of a launch, and also its geographical origin—a missile launch from the land mass of any given nation could be safely attributed to a decision by that nation’s government to order that launch. Sea-based or submarine-based launches are potentially problematic in this regard, although in a bipolar world, the Soviet Union would have been deemed responsible. In a world with three potential nuclear adversaries (the United States, Soviet Union, and China), intensive intelligence efforts have been able to maintain to a considerable extent the capability for attributing a nuclear attack to a national power, through measures such as tracking adversary ballistic missile submarines at sea. Identification of the distinctive radiological signatures of potential adversaries’ nuclear weapons is also believed to have taken place.

The nuclear deterrence paradigm also presumes unitary actors, nominally governments of nation-states—that is, it presumes that the nuclear forces of a nation are under the control of the relevant government, and that they would be used only in accordance with the decisions of national leaders.
These considerations do not hold for cyberattack, and for many kinds of cyberattack the United States would almost certainly not be able to ascertain the source of such an attack, even if it were a national act, let alone hold a specific nation responsible. For example, the United States is constantly under cyberattack today, and it is widely believed (though without conclusive proof) that most of these cyberattacks are not the result of national decisions by an adversary state, though press reports have claimed that some are.

In general, prompt technical attribution of an attack or exploitation—that is, identification of the responsible party (individual? subnational group? nation-state?) based only on technical indicators associated with the event in question—is quite problematic, and any party accused of launching a given cyberintrusion could deny it with considerable plausibility. Forensic investigation might yield the identity of the responsible party, but the time scale for such investigation is often on the order of weeks or months. (Although it is often quite straightforward to trace an intrusion to the proximate node, in general this will not be the origination point of the intrusion. Tracing an intrusion past intermediate nodes to its actual origination point is what is most difficult.)

Three factors mitigate to some (unknowable) degree this bleak picture regarding attribution. First, for reasons of its own, a cyberattacker may choose to reveal to its target its responsibility for a cyberattack. For example, it may conduct a cyberattack of limited scope to demonstrate its capability for doing so, acknowledge its responsibility, and then threaten to conduct a much larger one if certain demands are not met.12 Second, over time a series of cyberintrusions might be observed to share important technical features that constitute a “signature” of sorts. Thus, the target of a cyberattack may be able to say that it was victimized by a cyberattack of type X on 16 successive occasions over the last 3 months. An inference that the same party was responsible for that series of attacks might under some circumstances have some plausibility. Third, the target of a cyberattack may have nontechnical information that points to a perpetrator, such as information from a well-placed spy in an adversary’s command structure or high-quality signals intelligence. If such a party reports that the adversary’s forces have just launched a cyberattack against the United States, or if a generally reliable communications intercept points to such responsibility, such information might be used to make a plausible inference about the state responsible for that attack.

Political leaders in particular will not rely only on technical indicators to determine the state responsible for an attack—rather, they will use all sources of information available to make the best possible determination.
Nevertheless, it is fair to say that absent unusually good intelligence information, high confidence in the attribution of a cyberattack to a nation-state is almost certain to be unattainable during and immediately after that attack, and may not be achievable for a long time afterward. Thus, any retaliatory response to a cyberattack using either cyber or kinetic weaponry may carry a significant risk of being directed improperly, perhaps with grave unintended consequences.

2.2. "Actions that threaten U.S. vital interests"

What actions is the United States trying to deter, and would the United States know that an action has occurred that threatens its vital interests? A nuclear explosion on U.S. territory is an unambiguously large and significant event, and there is little difficulty in identifying the fact of such an explosion. The United States maintains a global network of satellites that are capable of detecting and locating nuclear explosions in the air and on the ground, and a network of seismic sensors that provide additional information to localize nuclear explosions. Most importantly, a nuclear explosion would occur against the very quiet background of zero nuclear explosions happening over time. But U.S. computer and communications systems and networks are under constant cyberintrusion from many different parties, and against this background noise, the United States would have to notice that critical systems and networks were being attacked and damaged. A cyberattack on the United States launched by an adversary might target multiple sites—but correlating information on attacks at different sites against a very noisy background to determine a common cause is today technically challenging. Target sets may be amorphous and complex, especially when massively complex and globally scaled supply chains are involved. And the nature of a questionable event (an intrusion) is often in doubt—is it an attack or an exploitation?
If an attack, does a destructive cyberattack take place when the responsible software agent is implanted in a critical U.S. system, or when it is activated? Even knowing the effect or impact of an attack or exploitation is difficult, as the consequences of some intrusions will play out only over an extended period of time. (For example, an attack may be designed to have no immediate impact and only later to show destructive consequences.)

Another profound difference between the nuclear and cyber domains is that nuclear weapons are not thought to target individual private sector entities—it would be highly unusual for a major corporation, for example, to be the specific target of a nuclear weapon. By contrast, major corporations are subject to cyberattacks and cyber exploitations on a daily basis. This difference raises the question of whether deterrence of such intrusions on individual private sector entities (especially those that are regarded as a part of U.S. critical infrastructure) is an appropriate goal of U.S. policy—as suggested by recent allegations of Chinese cyberintrusions against human rights activists using Google's gmail.com service and against multiple private sector companies in the United States seeking important intellectual property of these companies.13 The question is important for a number of reasons. First, U.S. military forces have not been used in recent years to support the interests of specific private sector entities, at least not as a matter of declared public policy. Thus, an explicit threat to respond with force, whether cyber or otherwise, to a cyberattack on an individual private sector entity would constitute a major change in U.S. policy.

12 Of course, a forensic investigation might still be necessary to rule out the possibility that the putative attacker was only claiming responsibility for the attack when in fact it had no real ability to conduct the attack on its own. To mitigate the possibility that it might not be believed, the party claiming responsibility could leave a "calling card" in the wake of an attack whose contents only it could know.
Second, targeted private entities might seek to defend themselves by retaliating against attackers or cyber spies, even though such actions are currently illegal under U.S. law, and such retaliation by these entities might well have consequences damaging to U.S. national interests.

2.3. "Credible threat"

A credible threat is one that an adversary believes can and will be executed with a sufficiently high probability to dissuade the adversary from taking action. (The definition of "sufficiently high" is subject to much debate and almost certainly depends on the specific case or issue in question. In some cases, even a low absolute probability of executing the deterrent threat is sufficient to dissuade.) In the nuclear domain, the United States developed strategic forces with the avowed goal of making them survivable regardless of what an adversary might do. Survivability means that these forces will be able to execute the retaliatory threat for which they are responsible under any possible set of circumstances. In addition, the United States conducts many highly visible military training exercises involving both its conventional and nuclear forces, at least in part to demonstrate its capabilities to potential adversaries. On the other hand, U.S. capabilities for offensive cyber operations are highly classified, at least in part because discussing these capabilities in the open may point the way for adversaries to counter them. That is, at least some capabilities for conducting offensive cyber operations depend on a vulnerability that an adversary would be able to fix, if only he knew about it. To the extent that U.S. capabilities for cyber operations are intended to be part of its overall deterrent posture, how should the United States demonstrate those capabilities? Or is such demonstration even necessary given widespread belief in U.S. capabilities?
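The observation that even a low absolute probability of executing a deterrent threat may suffice to dissuade can be read in simple expected-value terms. The following is a toy sketch only; the function and all numbers are hypothetical stand-ins, not estimates of any real scenario:

```python
# A toy expected-value reading of "credible threat": an adversary is
# dissuaded when the expected cost of retaliation outweighs the expected
# gain from attacking. All quantities are hypothetical stand-ins.
def deterred(gain: float, retaliation_cost: float, p_execute: float) -> bool:
    """True if the expected retaliation cost exceeds the attacker's gain."""
    return p_execute * retaliation_cost > gain

# Even a low probability of execution can dissuade if the threatened
# cost is large enough.
print(deterred(gain=10.0, retaliation_cost=1000.0, p_execute=0.05))  # expected cost 50 > gain 10
print(deterred(gain=10.0, retaliation_cost=20.0, p_execute=0.05))    # expected cost 1 < gain 10
```

Note that this toy model sidesteps the communication problem discussed above: what matters for deterrence is the adversary's belief about the probability of execution, not its true value.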
A credible deterrent threat need not be limited to a response in kind—the United States has a wide variety of options for responding to any given cyberattack, depending on its scope and character; these options include a mix of changes in defense postures, law enforcement actions, diplomacy, economic actions, cyberattacks, and kinetic attacks.14

13 See, for example, Ariana Eunjung Cha and Ellen Nakashima, "Google China Cyberattack Part of Vast Espionage Campaign, Experts Say," Washington Post, January 14, 2010.

14 Chapter 1, NRC, Technology, Policy, Law, and Ethics Regarding Acquisition and Use of U.S. Cyberattack Capabilities, 2009. As illustrations, a change in defensive posture might include dropping low-priority services, installing security patches known to cause inconvenient but manageable operational problems, restricting access more tightly, and so on. Law enforcement actions might call for investigation and prosecution of perpetrators. Diplomacy might call for demarches delivered to a perpetrator's government or severing diplomatic relations. Economic actions might involve sanctions.

Another dimension of making a threat credible is to communicate the threat to potential adversaries. A nation's declaratory policy underpins such communication and addresses, in very general terms, why a nation acquires certain kinds of weapons and how those weapons might be used. For example, the declaratory policy of the United States regarding nuclear weapons is stated in the National Military Strategy, last published in 2004:15

Nuclear capabilities [of the United States] continue to play an important role in deterrence by providing military options to deter a range of threats, including the use of WMD/E and large-scale conventional forces. Additionally, the extension of a credible nuclear deterrent to allies has been an important nonproliferation tool that has removed incentives for allies to develop and deploy nuclear forces.

For the use of cyber weapons, the United States has no declaratory policy, although the DOD Information Operations Roadmap of 2003 stated that "the USG should have a declaratory policy on the use of cyberspace for offensive cyber operations."16

Lastly, a "credible threat" may be based on the phenomenon of blowback, which refers to a bad consequence affecting the instigator of a particular action. In the cyberattack context, blowback may entail direct damage caused to one's own computers and networks as the result of a cyberattack that one has launched. For example, if Nation X launched a cyberattack against an adversary using a rapidly multiplying but uncustomized and indiscriminately targeted worm over the Internet, the worm might return to adversely affect Nation X's computers and networks. Blowback might also refer to indirect damage—a large-scale cyberattack by Nation X against one of its major trading partners (call it Nation Y) that affected Nation Y's economic infrastructure might have effects that could harm Nation X's economy as well.
If concerns over such effects are sufficiently great, Nation X may be deterred (more precisely, self-deterred) from conducting such attacks against Nation Y (or any other major trading partner). Blowback may sometimes refer to counterproductive political consequences of an attack—for example, a cyberattack launched by a given government or political group may generate a populist backlash against that government or group if attribution of the attack can be made to the party responsible. For blowback to be the basis of a credible threat, the dependencies that give rise to blowback should be apparent (or at least plausible) to a potential attacker. (As a possible example, it may be that given massive Chinese investment in U.S. securities, the Chinese have a large stake in the stability of U.S. financial markets, and thus might choose to refrain from an attack that might do significant harm to those markets.)

2.4. "Denying benefits"

The ability to deny an adversary the benefits of an attack has two salutary results. First, an attack, if it occurs, will be futile and not confer on the adversary any particular advantage. Second, if the adversary believes (in advance) that he will not gain the hoped-for benefits, he will be much less likely to conduct the attack in the first place. In the nuclear domain, ballistic missile defenses are believed to increase the uncertainty of an attack's success. For this reason, they need not be perfect—only good enough to significantly complicate an adversary's planning to the point at which it becomes impossible to carry out an attack with a high probability of success.

In the cyber domain, a number of approaches can be used to deny an adversary the benefits of an attack. Passive defenses can be strengthened in a number of ways, such as reducing the number of vulnerabilities present in vital systems, reducing the number of ways to access these systems, configuring these systems to minimize their exposed security vulnerabilities, dropping traffic selectively, and so on. Properties such as rapid recoverability or reconstitution from a successful attack can be emphasized.

Active defense may also be an option. Active defense against an incoming cyberattack calls for an operation, usually a cyber operation, that can be used to neutralize that incoming attack. A responsive operation (often described within the U.S. military as a "computer network defense response action") must be conducted while the adversary's cyberattack is in progress, so that there is an access path back to the facilities being used to mount the attack. In practice, active defense is possible only for certain kinds of cyberattack (e.g., denial-of-service attacks) and even then only when the necessary intelligence information on the appropriate targets to hit is available to support a responsive operation.

On the other hand, whether improvements in denying benefits are sufficient to deter a cyber adversary is open to question. Experience to date suggests that strengthening a system's passive defense posture may discourage the casual attacker, but will only suffice to delay a determined one. That is, the only costs to the attacker result from the loss of time and thus an increased uncertainty about its ability to conduct a successful attack on a precise timetable. Such uncertainty arguably contributes to deterrence if (and only if) the action being deterred is a necessary prelude to some other kind of attack that must also be planned and executed along a particular timetable.

2.5. "Imposing costs"

Costs that may be imposed on an adversary typically involve the loss of assets or functionality valued by the adversary.

15 Joint Chiefs of Staff, "The National Military Strategy of the United States of America," 2004, available at http://www.strategicstudiesinstitute.army.mil/pdffiles/nms2004.pdf.

16 Available at http://www.gwu.edu/~nsarchiv/NSAEBB/NSAEBB177/info_ops_roadmap.pdf.
In the nuclear case, the ability to attribute an attack to a national actor, coupled with a knowledge of which specific states are nuclear-capable, enables the United States to identify target sets within each potential nuclear adversary, the destruction of which the United States believes would be particularly costly to those adversaries. In the context of cyberattack, an attacker determined to avoid U.S. retaliation may well leave a false trail for U.S. forensic investigators to follow; such a trail would either peter out inconclusively or, even worse, point to another nation that might well see any U.S. action taken against it as an act of war. (Catalytic conflict, in which a third party instigates mutual hostilities between two nations, is probably much easier in cyberspace than in any other domain of potential conflict.)

That said, the ability to attribute political responsibility for a given cyberattack is the central threshold question. If responsibility cannot be attributed, the only hope of imposing any costs at all lies in identifying an access path to the platforms involved in launching the cyberattack on U.S. interests. For example, if it is possible to identify an access path to the attacking platforms in the midst of an ongoing cyberattack, knowledge of the national (or subnational) actor's identity may not be necessary from a technical perspective to neutralize those platforms. (An analogy would be an unidentified airplane dropping bombs on a U.S. base—such an airplane could be shot down without knowing anything about the airplane or its pilot other than the fact that it was dropping bombs on a U.S. base.) Under these circumstances, a strike-back has some chance of neutralizing an incoming cyberattack even if the identity of the adversary is not known.
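The point that neutralization requires only an access path, and not the attacker's identity, can be sketched as purely behavior-based blocking. The addresses (drawn from reserved documentation ranges) and the threshold below are hypothetical:

```python
from collections import Counter

# Hypothetical connection log: source addresses observed during an
# ongoing flood. Neutralization here needs only the access path (the
# sending address), not the identity of whoever controls it.
observed = ["198.51.100.7"] * 900 + ["203.0.113.9"] * 40 + ["192.0.2.5"] * 3

RATE_LIMIT = 100  # requests per interval before a source is dropped

counts = Counter(observed)
blocklist = {src for src, n in counts.items() if n > RATE_LIMIT}
allowed = [src for src in observed if src not in blocklist]

print(sorted(blocklist))  # sources dropped purely on observed behavior
print(len(allowed))       # remaining traffic after blocking
```

As the passage goes on to note, such a blocklist says nothing about who controls the offending machines, which may well belong to innocent parties.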
By developing capabilities to deny the adversary a successful cyberattack through neutralization, the United States might be able to deter adversaries from launching at least certain kinds of cyberattack against the United States. Yet neutralization is likely to be difficult—destroying or degrading the source of a cyberattack while the attack is in progress may simply lead the adversary to launch the attack from a different source. It is also extremely likely that the attacking platforms will belong to innocent parties. The attacking platforms may also be quite inexpensive—personal computers can be acquired for a few hundred dollars, and any software used to conduct an attack is virtually free to reproduce. Thus, the attacking platforms may not be assets that are particularly valuable to the attacker. Intermediate nodes that participate in an attack, such as the subverted computers of innocent parties used in a botnet,

3. A POSSIBLE RESEARCH AGENDA

Although the preceding section seeks to describe some of the essential elements of cyberdeterrence, it is sobering to realize the enormity of intellectually unexplored territory associated with such a basic concept. Thus, the committee believes that considerable work needs to be done to explore the relevance and applicability of deterrence and prevention/inhibition to cyber conflict. At the highest level of abstraction, the central issue of interest is to identify what combinations of posture, policies, and agreements might help to prevent various actors (including state actors, nonstate actors, and organized criminals) from conducting cyberattacks that have a disabling or a crippling effect on critical societal functions on a national scale (e.g., military mission readiness, air traffic control, financial services, provision of electric power).

The broad themes described below (lettered A-H) are intended to constitute a broad forward-looking research agenda on cyberdeterrence. Within each theme are a number of elaborating questions that are illustrative of those that the committee believes would benefit from greater exploration and analysis. Thoughtful research and analysis in these areas would contribute significantly to understanding the nature of cyberdeterrence.

A. Theoretical Models for Cyberdeterrence

1. Is there a model that might appropriately describe the strategies of state actors acting in an adversarial manner in cyberspace? Is there an equilibrium state that does not result in cyber conflict?

2. How will any such deterrence strategy be affected by mercenary cyber armies for hire and/or patriotic hackers?

3. How does massive reciprocal uncertainty about the offensive cyberattack capabilities of the different actors affect the prospect of effective deterrence?

4. How might adversaries react technologically and doctrinally to actual and anticipated U.S. policy decisions intended to strengthen cyberdeterrence?

5. What are the strengths and limitations of applying traditional deterrence theory to cyber conflict?

6. What lessons and strategic concepts from nuclear deterrence are applicable and relevant to cyberdeterrence?

7. How could mechanisms such as mutual dependencies (e.g., attacks that cause actual harm to the attacker as well as to the attacked) and counterproductivity (e.g., attacks that have negative political consequences against the attacker) be used to strengthen deterrence? How might a comprehensive deterrence strategy balance the use of these mechanisms with the use of traditional mechanisms such as retaliation and passive defense?

B. Cyberdeterrence and Declaratory Policy

8. What should be the content of a declaratory policy regarding cyberintrusions (that is, cyberattacks and cyberexploitations) conducted against the United States? Regarding cyberintrusions conducted by the United States? What are the advantages and disadvantages of having an explicit declaratory policy? What purposes would a declaratory policy serve?

9. What longer-term ramifications accompany the status quo of strategic ambiguity and lack of declaratory policy?

10. What is the appropriate balance between publicizing U.S. efforts to develop cyber capabilities in order to discourage/deter attackers and keeping them secret in order to make it harder for others to foil them?

11. What is the minimum amount and type of knowledge that must be made publicly available regarding U.S. government cyberattack capabilities for any deterrence policy to be effective?

12. To the extent that a declaratory policy states what the United States will not do, what offensive operational capabilities should the United States be willing to give up in order to secure international cooperation? How and to what extent, if at all, does the answer vary by potential target (e.g., large nation-state, small nation-state, subnational group, and so on)?

13. What declaratory policy might help manage perceptions and effectively deter cyberattack?

C. Operational Considerations in Cyberdeterrence

14. On what basis can a government determine whether a given unfriendly cyber action is an attack or an exploitation? What is the significance of mistaking an attack for an exploitation or vice versa?

15. How can uncertainty and limited information about an attacker's identity (i.e., attribution), and about the scope and nature of the attack, be managed to permit policy makers to act appropriately in the event of a national crisis? How can overconfidence or excessive needs for certainty be avoided during a cyber crisis?

16. How and to what extent, if at all, should clear declaratory thresholds be established to delineate the seriousness of a cyberattack? What are the advantages and disadvantages of such clear thresholds?

17. What are the tradeoffs in the efficacy of deterrence if the victim of an attack takes significant time to measure the damage, consult, review options, and most importantly to increase the confidence that attribution of the responsible party is performed correctly?

18. How might international interdependencies affect the willingness of nations to conduct certain kinds of cyberattack on other nations? How can blowback be exploited as an explicit and deliberate component of a cyberdeterrence strategy? How can the relevant feedback loops be made obvious to a potential attacker?

19. What considerations determine the appropriate mode(s) of response (cyber, political, economic, traditional military) to any given cyberattack that calls for a response?

20. How should an ostensibly neutral nation be treated if cyberattacks emanate from its territory and that nation is unable or unwilling to stop those attacks?

21. Numerous cyberattacks on the United States and its allies have already occurred, most at a relatively low level of significance. To what extent has the lack of a public offensive response undermined the credibility of any future U.S. deterrence policy regarding cyberattack? How might credibility be enhanced?

22. How and to what extent, if at all, must the United States be willing to make public its evidence regarding the identity of a cyberattacker if it chooses to respond aggressively?

23. What is the appropriate level of government to make decisions regarding the execution of any particular declaratory or operational policy regarding cyberdeterrence? How, if at all, should this level change depending on the nature of the decision involved?

24. How might cyber operations and capabilities contribute to national military operations at the strategic and tactical levels, particularly in conjunction with other capabilities (e.g., cyberattacks aimed at disabling an opponent's defensive systems might be part of a larger operation), and how might offensive cyber capabilities contribute to the deterrence of conflict more generally?

25. How should operational policy regarding cyberattack be structured to ensure compliance with the laws of armed conflict?

26. How might possible international interdependencies be highlighted and made apparent to potential nation-state attackers?

27. What can be learned from case studies of the operational history of previous cyberintrusions? What are the lessons learned for future conflicts and crises?

28. Technical limitations on attribution are often thought to be the central impediment in holding hostile cyber actors accountable for their actions. How and to what extent would a technology infrastructure designed to support high-confidence attribution contribute to the deterrence of cyberattack and cyber exploitation, make the success of such operations less likely, lower the severity of the impact of an attack or exploitation, and ease reconstitution and recovery after an attack? What are the technical and nontechnical barriers to attributing cyberintrusions? How might these barriers be overcome or addressed in the future?

D. Regimes of Reciprocal/Consensual Limitations

29. What regimes of mutual self-restraint might help to establish cyberdeterrence (where regimes are understood to include bilateral or multilateral hard-law treaties, soft-law mechanisms [agreements short of treaty status that do not require ratification], and international organizations such as the International Telecommunication Union, the United Nations, the Internet Engineering Task Force, the Internet Corporation for Assigned Names and Numbers, and so on)? Given the difficulty of ascertaining the intent of a given cyber action (e.g., attack or exploitation) and the scope and extent of any given actor's cyber capabilities, what is the role of verification in any such regime? What sort of verification measures are possible where agreements regarding cyberattack are concerned?

30. What sort of international norms of behavior might be established among like-minded nations collectively that can help establish cyberdeterrence? What sort of self-restraint might the United States have to commit to in order to elicit self-restraint from others? What might be the impact of such self-restraint on U.S. strategies for cyber conflict? How can a "cyberattack taboo" be developed (perhaps analogous to taboos against the use of biological or nuclear weapons)?

31. How and to what extent, if any, can the potency of passive defense be meaningfully enhanced by establishing supportive agreements and operating norms?

32. How might confidence-building and stability measures (analogous to hotline communications in possible nuclear conflict) contribute to lowering the probability of crises leading to actual conflict?

33.
How might agreements regarding nonmilitary dimensions of cyberintrusion support national security goals?

34. How and to what extent, if at all, should the United States be willing to declare some aspects of cyberintrusion off limits to itself? What are the tradeoffs involved in foreswearing offensive operations, either unilaterally or as part of a multilateral (or bilateral) regime?

35. What is an act of war in cyberspace? Under what circumstances can or should a cyberattack be regarded as an act of war?25 How and to what extent do unique aspects of the cyber realm, such as reversibility of damage done during an attack and the difficulty of attribution, affect this understanding?

36. How and to what extent, if any, does the Convention on Cyber Crime (http://conventions.coe.int/Treaty/EN/Treaties/html/185.htm) provide a model or a foundation for reaching further international agreements that would help to establish cyberdeterrence?

37. How might international and national law best address the issue of patriotic hackers or cyber patriots (or even private sector entities that would like to respond to cyberattacks with cyber exploitations and/or cyberattacks of their own), recognizing that the actions of such parties may greatly complicate the efforts of governments to manage cyber conflict?

E. Cyberdeterrence in a Larger Context

38. How and to what extent, if at all, is an effective international legal regime for dealing with cyber crime a necessary component of a cyberdeterrence strategy?

39. How and to what extent, if at all, is deterrence applicable to cyberattacks on private companies (especially those that manage U.S. critical infrastructure)?

25 The term "act of war" is a colloquial term that does not have a precise international legal definition. The relevant terms from the UN Charter are "use of force," "threat of force," and "armed attack," although it must be recognized that there are no internationally agreed-upon formal definitions for these terms either.

40. How should a U.S. cyberdeterrence strategy relate to broader U.S. national security interests and strategy?

F. The Dynamics of Action/Reaction

41. What is the likely impact of U.S. actions and policy regarding the acquisition and use of its own cyberattack capabilities on the courses of action of potential adversaries?

42. How and to what extent, if at all, do efforts to mobilize the United States to adopt a stronger cyberdefensive posture prompt potential adversaries to believe that cyberattack against the United States is a viable and effective means of causing damage?

G. Escalation Dynamics

43. How might conflict in cyberspace escalate from an initial attack? Once cyber conflict has broken out, how can further escalation be deterred?

44. What is the relationship between the onset of cyber conflict and the onset of kinetic conflict? How and under what circumstances might cyberdeterrence contribute, if at all, to the deterrence of kinetic conflict?

45. What safeguards can be constructed against catalytic cyberattack? Can the United States help others with such safeguards?

H. Collateral Issues

46. How and to what extent do economics and law (and regulation) affect efforts to enhance cybersecurity in the private sector? What are the pros and cons of possible solution elements that may involve (among other things) regulation, liability, and standards-setting that could help to change the existing calculus regarding investment strategies and approaches to improve cybersecurity? Analogies from other "protection of the commons" problem domains (e.g., environmental protection) may be helpful.

47. What are the civil liberties implications (e.g., for privacy and free expression) of policy and technical changes aimed at preventing cyberattacks, such as systems of stronger identity management for critical infrastructure? What are the tradeoffs from a U.S. perspective? How would other countries see these tradeoffs?

48. How can the development and execution of a cyberdeterrence policy be coordinated across every element of the executive branch and with Congress? How should the U.S. government be organized to respond to cyber threats? What organizational or procedural changes should be considered, if any? What roles should the new DOD Cyber Command play? How will the DOD and the intelligence community work together in accordance with existing authorities? What new authorities would be needed for effective cooperation?

49. How and to what extent, if any, do private entities (e.g., organized crime, terrorist groups) with significant cyberintrusion capabilities affect any government policy regarding cyberdeterrence? Private entities acting outside government control and private entities acting with at least tacit government approval or support should both be considered.

50. How and to what extent are current legal authorities to conduct cyber operations (attack and exploitation) confused and uncertain? What standards should govern whether or not a given cyber operation takes place? How does today's uncertainty about authority affect the nation's ability to execute any given policy on cyberdeterrence?

51. Cyberattack can be used as a tool for offensive and defensive purposes. How should cyberattacks intended for defensive purposes (e.g., conducted as part of an active defense to neutralize an incoming attack) differ from those intended for offensive purposes (e.g., a strategic cyberattack against the critical infrastructure of an adversary)? What guidelines should structure the former as opposed to the latter?

Research contributions in these areas will have greater value if they can provide concrete analyses of the offensive actors (states, criminal organizations, patriotic hackers, terrorists, and so on), motivations (national security, financial, terrorism), actor capacities and resources, and which targets require protection beyond that afforded by passive defenses and law enforcement (e.g., military and intelligence assets, critical infrastructure, and so on).

4. CONCLUSION

The research agenda described in the questions above is intellectually challenging and fundamentally interdisciplinary. The committee hopes that a variety of scholarly communities, including those in political science, psychology, and computer science and information technology, are able to find ways of working together to address the very important question of deterring cyberattacks against the societal interests of the United States. Moving forward and in accordance with the requirements of the relevant contract, the committee has commissioned a number of papers that address some of the questions articulated above. Drafts of these papers will be discussed in a workshop to be held in June 2010. Although resource limitations will constrain the number of papers commissioned, the committee is of the belief that all of these questions are important and deserve further significant attention.

Respectfully,

John D. Steinbruner, Chair
Committee on Deterring Cyberattacks
Computer Science and Telecommunications Board
Division on Engineering and Physical Sciences
Division on Policy and Global Affairs

APPENDIX A

ATTACHMENT 1
BIOGRAPHIES OF COMMITTEE MEMBERS AND STAFF

Committee Members

John D. Steinbruner, Chair, is a professor of public policy at the School of Public Policy at the University of Maryland and director of the Center for International and Security Studies at Maryland (CISSM). His work has focused on issues of international security and related problems of international policy. Steinbruner was director of the Foreign Policy Studies Program at the Brookings Institution from 1978 to 1996. Prior to joining Brookings, he was an associate professor in the School of Organization and Management and in the Department of Political Science at Yale University from 1976 to 1978. From 1973 to 1976, he served as an associate professor of public policy at the John F. Kennedy School of Government at Harvard University, where he also was assistant director of the Program for Science and International Affairs. He was assistant professor of government at Harvard from 1969 to 1973 and assistant professor of political science at the Massachusetts Institute of Technology from 1968 to 1969. Steinbruner has authored and edited a number of books and monographs, including The Cybernetic Theory of Decision: New Dimensions of Political Analysis (Princeton University Press, originally published 1974, second paperback edition with new preface, 2002); Principles of Global Security (Brookings Institution Press, 2000); and "A New Concept of Cooperative Security," co-authored with Ashton B. Carter and William J. Perry (Brookings Occasional Papers, 1992). His articles have appeared in Arms Control Today, the Brookings Review, Dædalus, Foreign Affairs, Foreign Policy, International Security, Scientific American, Washington Quarterly, and other journals.
Steinbruner is currently co-chair of the Committee on International Security Studies of the American Academy of Arts and Sciences, chairman of the board of the Arms Control Association, and a board member of the Financial Services Volunteer Corps. He is a fellow of the American Academy of Arts and Sciences and a member of the Council on Foreign Relations. From 1981 to 2004 he was a member of the Committee on International Security and Arms Control of the National Academy of Sciences, serving as vice chair from 1996 to 2004. He was a member of the Defense Policy Board of the Department of Defense from 1993 to 1997. Born in 1941 in Denver, Colorado, Steinbruner received his A.B. from Stanford University in 1963 and his Ph.D. in political science from the Massachusetts Institute of Technology in 1968.

Steven M. Bellovin is a professor of computer science at Columbia University, where he does research on networks, security, and especially why the two don't get along. He joined the faculty in 2005 after many years at Bell Labs and AT&T Labs Research, where he was an AT&T Fellow. He received a B.A. degree from Columbia University, and an M.S. and a Ph.D. in computer science from the University of North Carolina at Chapel Hill. While a graduate student, he helped create Netnews; for this, he and the other perpetrators were given the 1995 Usenix Lifetime Achievement Award (The Flame). He is a member of the National Academy of Engineering and is serving on the Department of Homeland Security's Science and Technology Advisory Committee; he also received the 2007 NIST/NSA National Computer Systems Security Award. Bellovin is the co-author of Firewalls and Internet Security: Repelling the Wily Hacker, and he holds a number of patents on cryptographic and network protocols.
He has served on many National Research Council study committees, including those on information systems trustworthiness, the privacy implications of authentication technologies, and cybersecurity research needs; he was also a member of the information technology subcommittee of an NRC study group on science versus terrorism. He was a member of the Internet Architecture Board from 1996 to 2002; he was co-director of the Security Area of the Internet Engineering Task Force (IETF) from 2002 through 2004.

Stephen Dycus, a professor at Vermont Law School, teaches and writes about national security and the law, water rights, and wills and trusts. The courses he has taught at Vermont Law School include International Public Law, National Security Law, Estates, Property, and Water Law. He was founding chair of the National Security Law Section, Association of American Law Schools. Dycus is the lead author of National Security Law (the field's leading casebook), and was a founding co-editor in chief of

the Journal of National Security Law & Policy. Dycus earned his B.A. degree in 1963 and his LLB degree in 1965 from Southern Methodist University. He earned his LLM degree in 1976 from Harvard University. He has been a faculty member at Vermont Law School since 1976. Dycus was a visiting scholar at the University of California at Berkeley's Boalt Hall School of Law in 1983 and at the Natural Resources Defense Council in Washington, D.C., in 1991. He was a visiting professor at the United States Military Academy in West Point, New York, from 1991 to 1992 and at Petrozavodsk State University in Karelia, Russia, in 1997. Dycus is a member of the American Law Institute. Dycus also served as a reviewer of the recent NRC report Technology, Policy, Law, and Ethics Regarding U.S. Acquisition and Use of Cyberattack Capabilities.

Sue E. Eckert is a senior fellow at the Thomas J. Watson Jr. Institute for International Studies at Brown University, after having served as assistant secretary of commerce in the Clinton administration. Her current research focuses on issues at the intersection of economic and international security: terrorist financing, targeted sanctions, and critical infrastructure. At the Watson Institute, she co-directs the projects on terrorist financing and targeted sanctions. Recent publications include Countering the Financing of Terrorism (2008) and "Addressing Challenges to Targeted Sanctions: An Update of the 'Watson Report'" (2009). She works extensively with United Nations bodies to enhance the instrument of targeted sanctions. From 1993 to 1997, she was appointed by President Clinton and confirmed by the Senate as assistant secretary for export administration, responsible for U.S. dual-use export control and economic sanctions policy. Previously, she served on the professional staff of the U.S.
House of Representatives' Committee on Foreign Affairs, where she oversaw security and nonproliferation issues, technology transfer policies, and economic sanctions.

Jack L. Goldsmith III has been a professor of law at Harvard Law School since 2004. From 2003 to 2004 he was the assistant attorney general in the U.S. Department of Justice's Office of Legal Counsel. He was a professor of law at the University of Virginia Law School from 2003 to 2004. He served on the faculty of the University of Chicago Law School as an associate professor from 1994 to 1997 and as special counsel to the General Counsel in the Department of Defense. Goldsmith received his B.A. in philosophy summa cum laude from Washington and Lee University in 1984, a B.A. in philosophy, politics, and economics with first class honors from Oxford University in 1986, a J.D. from Yale Law School in 1989, and a diploma in private international law from The Hague Academy of International Law in 1992. After law school he clerked for Judge J. Harvie Wilkinson of the United States Court of Appeals for the Fourth Circuit, Justice Anthony M. Kennedy of the Supreme Court of the United States, and Judge George A. Aldrich of the Iran-U.S. Claims Tribunal. He also previously served as an associate at Covington & Burling. Goldsmith's scholarly interests include international law, foreign relations law, national security law, conflict of laws, and civil procedure. Goldsmith served on the NRC Committee on Offensive Information Warfare.

Robert Jervis is the Adlai E. Stevenson Professor of International Affairs at Columbia University. He specializes in international politics in general and security policy, decision making, and theories of conflict and cooperation in particular. His most recent book is American Foreign Policy in a New Era (Routledge, 2005), and he is completing a book on intelligence and intelligence failures.
Among his previous books are System Effects: Complexity in Political and Social Life (Princeton, 1997); The Meaning of the Nuclear Revolution (Cornell, 1989); Perception and Misperception in International Politics (Princeton, 1976); and The Logic of Images in International Relations (Columbia, 1989). Jervis also is a coeditor of the Security Studies Series published by Cornell University Press. He serves on the board of nine scholarly journals and has authored more than 100 publications. He is a fellow of the American Association for the Advancement of Science and of the American Academy of Arts and Sciences. He has also served as president of the American Political Science Association. In 1990 he received the Grawemeyer Award for his book The Meaning of the Nuclear Revolution. Professor Jervis earned his B.A. from Oberlin College in 1962. He received his Ph.D. from the University of California, Berkeley, in 1968. From 1968 to 1974 he was an assistant professor (1968-1972) and associate professor (1972-1974) of government at Harvard University. From 1974 to 1980 he was a professor of political science at the University

of California, Los Angeles. His research interests include international politics, foreign policy, and decision making.

Jan M. Lodal was president of the Atlantic Council of the United States from October 2005 until the end of 2006. Currently, Lodal is chairman of Lodal and Company. Previously, he served as principal deputy under secretary of defense for policy and as a senior staff member of the National Security Council. He was founder, chair, and CEO of Intelus, Inc., and co-founder of American Management Systems, Inc. During the Nixon and Ford administrations, Lodal served on the White House staff as deputy for program analysis to Henry A. Kissinger, and during the Johnson administration as director of the NATO and General Purpose Force Analysis Division in the Office of the Secretary of Defense. Lodal is a member of the Board of Overseers of the Curtis Institute of Music, a trustee of the American Boychoir, and a member of the Council on Foreign Relations and the International Institute of Strategic Studies. He was previously executive director of the Aspen Strategy Group and president of the Group Health Association. He is the author of numerous articles on public policy, arms control, and defense policy, and of The Price of Dominance: The New Weapons of Mass Destruction and Their Challenge to American Leadership. Lodal is the recipient of Rice University's Distinguished Alumnus Award for Public Service and Achievement in Business and was twice awarded the Department of Defense Medal for Distinguished Public Service, the Department's highest civilian honor. Lodal remains an active member of the Atlantic Council's Board and its treasurer.

Phil Venables has graduate and postgraduate qualifications in computer science and cryptography from York University and The Queen's College, Oxford, and is a chartered engineer.
He has worked for more than 20 years in information technology in a number of sectors including petrochemical, defense, and finance. He has held numerous positions in information security and technology risk management at various financial institutions. He is currently managing director and chief information risk officer at Goldman Sachs. Additionally, he is on the board of directors for the Center for Internet Security and is a committee member of the U.S. Financial Sector Security Coordinating Council.

Staff

Herbert S. Lin, study director, is chief scientist for the National Research Council's Computer Science and Telecommunications Board, where he has been a study director for major projects on public policy and information technology. These studies include a 1996 study on national cryptography policy (Cryptography's Role in Securing the Information Society), a 1999 study of Defense Department systems for command, control, communications, computing, and intelligence (Realizing the Potential of C4I: Fundamental Challenges), a 2000 study on workforce issues in high technology (Building a Workforce for the Information Economy), a 2004 study on aspects of the FBI's information technology modernization program (A Review of the FBI's Trilogy IT Modernization Program), a 2005 study on electronic voting (Asking the Right Questions About Electronic Voting), a 2005 study on computational biology (Catalyzing Inquiry at the Interface of Computing and Biology), a 2007 study on privacy and information technology (Engaging Privacy and Information Technology in a Digital Age), a 2007 study on cybersecurity research (Toward a Safer and More Secure Cyberspace), a 2009 study on health care information technology (Computational Technology for Effective Health Care), and a 2009 study on U.S. cyberattack policy (Technology, Policy, Law, and Ethics Regarding U.S. Acquisition and Use of Cyberattack Capabilities).
Before his NRC service, he was a professional staff member and staff scientist for the House Armed Services Committee (1986-1990), where his portfolio included defense policy and arms control issues. He received his doctorate in physics from MIT.

Tom Arrison is a senior staff officer in the Policy and Global Affairs Division of the National Academies. He joined the National Academies in 1990 and has directed a range of studies and other projects in areas such as international science and technology relations, innovation, information technology, higher education, and strengthening the U.S. research enterprise. He holds M.A. degrees in public policy and Asian studies from the University of Michigan.

Gin Bacon Talati is a program associate for the Computer Science and Telecommunications Board of the National Academies. She formerly served as a program associate with the Frontiers of Engineering program at the National Academy of Engineering. Prior to her work at the Academies, she served as a senior project assistant in education technology at the National School Boards Association. She has a B.S. in science, technology, and culture from the Georgia Institute of Technology and an M.P.P. from George Mason University with a focus in science and technology policy.

ATTACHMENT 2
ACKNOWLEDGMENT OF REVIEWERS

This report has been reviewed in draft form by individuals chosen for their diverse perspectives and technical expertise, in accordance with procedures approved by the National Research Council's Report Review Committee. The purpose of this independent review is to provide candid and critical comments that will assist the institution in making its published report as sound as possible and to ensure that the report meets institutional standards for objectivity, evidence, and responsiveness to the study charge. The review comments and draft manuscript remain confidential to protect the integrity of the deliberative process. We wish to thank the following individuals for their review of this report:

Thomas A. Berson, Anagram Laboratories
Catherine Kelleher, Brown University
Dan Schutzer, Financial Services Technology Consortium
Jeffrey Smith, Arnold and Porter, Inc.
William A. Studeman, U.S. Navy (retired)

Although the reviewers listed above have provided many constructive comments and suggestions, they were not asked to endorse the conclusions or recommendations, nor did they see the final draft of the report before its release. The review of this report was overseen by David Clark of the Massachusetts Institute of Technology. Appointed by the National Research Council, he was responsible for making certain that an independent examination of this report was carried out in accordance with institutional procedures and that all review comments were carefully considered. Responsibility for the final content of this report rests entirely with the authoring committee and the institution.
