4
MOVING FORWARD
The law, which is mainly a tool for implementing policy, does not exist in a vacuum. The legal framework for critical information infrastructure protection must be considered in the larger context of the business, social, and technical environment. Phil Reitinger, former deputy chief of the Computer Crime and Intellectual Property Section of the U.S. Department of Justice, argues that critical information infrastructure protection requires a multidisciplinary response. First, he suggests, we need technical solutions. Vendors have to produce more secure products and systems, and customers have to demand and implement better security. Second, we need management solutions. Companies must adopt and share best practices. Third, Mr. Reitinger recommends developing public education1 efforts to help all users better understand computer ethics (just as throwing a stone through a neighbor’s window is wrong, so is breaking into someone else’s computer system). Reducing nuisance attacks will allow government to focus resources on the greater threat. Finally, he proposes that we need knowledge solutions. The private sector and law enforcement must gather and share information about threats, vulnerabilities, and remedies. He argues, “[w]e have got to figure out how we can spread the information and better secure systems while protecting privacy and not increasing the threat.”
MOTIVATING THE PRIVATE SECTOR
The U.S. government has historically relied on appeals to patriotism or threats of impending cyberattacks to encourage private sector entities to increase security initiatives. However, successful corporations focus on activities that contribute to increased profits, increased opportunities for profit, reduced constraints, and/or reduced risk. As Eric Benhamou observed, there is a tendency to “have a positive outlook on how technology will be created . . . and the concern about threats is very, very secondary, certainly no more than afterthoughts.” That situation is compounded, suggests Milo Medin (formerly the chief technology officer for Excite@Home), by rapid Internet growth and competition, which have resulted in an environment where reducing expenses trumps infrastructure protection initiatives. This section looks briefly at the incentive problem that complicates policy making for critical information infrastructure protection.
Market Failure?
Eric Benhamou argues that several factors complicate efforts to improve security. First, he points to an imbalance between the low cost of the tools used to perpetrate an attack and the high cost of the defense mechanisms needed to protect against such attacks. Second, he notes that there are indeed well-known technical vulnerabilities inside many infrastructures, but not enough has been done to fix them because doing so is very hard. Third, implementation of a strong security policy conflicts with efforts to promote open communication environments. Mr. Benhamou observed that another complicating factor is an IT culture that favors speed and performance over lengthy security procedures and practices.2 Finally, most of the technical vulnerabilities can be overcome only through collective, concerted action—something that has proven hard in numerous contexts, such as contending with gray-market resellers.
Externalities are common in computer network security and, as with pollution, they yield societal problems without motivating sufficient private action. There are a number of reasons—not mutually exclusive—why infrastructure owners may choose not to invest more heavily in security measures. In the absence of good data, itself a problem, one can speculate, and each of the potential reasons illuminates a different aspect of the situation. First, security measures may not be very effective—or effective enough to warrant the investment. Government investments in research and development of computer security measures may be a partial answer to this problem, and legislative and administrative activity through 2002 points to increasing support for such R&D.3 Second, losses from security breaches may not be very large, or they may be covered by insurance (see next section) or self-insurance.4 Third, losses from security breaches may be large but can be dealt with only if large numbers of parties coordinate to make the needed investments. Kunreuther and Heal5 argue that computer network security is an example of interdependent security; the incentive that one conscientious network owner has to invest in security measures is reduced if the owner believes that other connected networks are insecure, which would undermine the impact of the conscientious owner’s measures.6 They argue that a set of positive and negative economic incentives (e.g., insurance, taxation, liability, standards, and coordinating mechanisms) needs to be developed. Fourth, losses from security breaches may be large, but each party expects others to make the needed investments.7 When the cost of poor security is not borne by the source, there is no incentive for the problems to be fixed. In the present context, one might ask why each party apparently attempts to shift the investment burden to others. One possible answer is that a user cannot easily identify the source of the underinvestment that led to the security breach (e.g., whether it was due to the user’s software, the ISP, the backbone to which the ISP is connected, or software used by others). In other words, the security breach may be a result of decisions made by parties that are outside the control of the party making the investment. Finally, losses from security breaches may be large, but assigning liability8 for them is difficult. Another complicating factor is that computer network externalities are international in scope.

3. See, for example, the Cyber Security Research and Development Act (PL 107-305).
4. Some have argued, for example, that progress may not happen without catastrophic loss.
5. Howard Kunreuther and Geoffrey Heal, “Interdependent Security: The Case of Identical Agents,” at <http://papers.ssrn.com/sol3/papers.cfm?abstract_id=306405>.
6. A reviewer observed that the premise that a person’s incentive to invest in security is increased if others also invest in security (i.e., that investments in security are complements) may not always hold. Investments by one information infrastructure player may be at least a partial substitute for investments by another. For example, residential users of cable modems have been encouraged to install firewall software to compensate for the vulnerability they incur as a result of cable-Internet system designs; a different approach by cable system operators would diminish the investment needed by residential users, but if more residential users make this investment, it lowers the incentive for the cable operator.
7. From this perspective, software vendors may create bugs, but their customers and distributors bear the cost of dealing with them; Internet service providers sell access to the entire Internet but guarantee only their part of the network; and individual users can create security hazards but bear no consequences of their actions.
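The interdependent-security argument can be made concrete with a toy two-network model in the spirit of Kunreuther and Heal. All probabilities and costs below are invented for illustration only; the point is that a neighbor's insecurity can flip one owner's best response away from investing.

```python
# Illustrative two-network model of interdependent security.  The numbers are
# hypothetical, chosen only to show how one network's insecurity can erase the
# other's incentive to invest.

def expected_cost(invest, other_invests, p, q, c, loss):
    """Expected cost to one network owner.

    p    -- probability of a direct breach if the owner does not invest
    q    -- probability of a breach spreading from an insecure neighbor
    c    -- cost of the security investment (assumed to stop direct breaches)
    loss -- damage if a breach occurs
    """
    direct = 0.0 if invest else p
    indirect = 0.0 if other_invests else q
    # A breach occurs directly, or (failing that) spreads from the neighbor.
    breach_prob = direct + (1.0 - direct) * indirect
    return (c if invest else 0.0) + breach_prob * loss

p, q, c, loss = 0.10, 0.08, 95.0, 1000.0

for other in (True, False):
    best = min((True, False),
               key=lambda inv: expected_cost(inv, other, p, q, c, loss))
    print(f"neighbor invests={other}: best response is invest={best}")
```

With these numbers, investing is worthwhile when the neighbor also invests (95 vs. 100 in expected cost) but not when the neighbor is insecure (175 vs. 172), illustrating the reduced incentive the text describes.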
Dr. Semancik argues that there is no economic theory that can calculate the actual benefit from the deployment of computer security technology (i.e., that a certain investment will increase security by some amount). Part of the difficulty with cost-benefit analysis is that the cost of security breaches is not widely available or known—this is part of the data problem noted above. Companies are hesitant to disclose costs because of the effect it might have on shareholder value and/or confidence and associated risks of litigation. A related problem is that the large numbers associated with the cost of publicized national incidents such as distributed denial-of-service attacks are considered suspect because they depend on simple assumptions about the behavior of large numbers of parties and on a simple aggregation of resulting cost projections.9
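One simple illustration of what such an investment-curve model might look like follows. The functional form and all numbers are assumptions made for this sketch, not a model presented at the symposium: spending reduces breach probability with diminishing returns, so net benefit peaks at an interior "sweet spot."

```python
# A toy cost-benefit curve for security investment (illustrative assumptions
# throughout): spending z lowers the probability of a costly breach, but with
# diminishing returns, so there is an interior optimum.

def breach_probability(z, v0=0.5, a=0.02):
    """Residual breach probability after investing z (diminishing returns)."""
    return v0 / (1.0 + a * z)

def net_benefit(z, loss=10_000.0):
    """Expected loss avoided by investing z, minus the investment itself."""
    avoided = (breach_probability(0.0) - breach_probability(z)) * loss
    return avoided - z

# Scan a grid of investment levels for the optimum.
grid = range(0, 2001, 10)
best_z = max(grid, key=net_benefit)
print(f"best investment = {best_z}, net benefit = {net_benefit(best_z):.0f}")
```

Different industries would plug in different loss figures and effectiveness curves, yielding the families of investment curves the text envisions.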
The economics of computer security is currently a hot research topic; May 2002 saw an important national workshop on this topic co-located with a large technical conference.10 One approach recommended by Dr. Semancik at the symposium is to develop economic models of computer security that would help corporations build a case for investments in security technology. These would vary among industries, yielding sets of investment curves for given levels of security and costs. This kind of research could bring together industry and government. Another option is to use the threat of liability—based on the tort law model that holds companies accountable to a duty of reasonable care—to create an incentive to improve network security (see Chapter 3). Ultimately the development of a risk-management model for computer network security will be driven by insurance, legal liability, and market forces.11

8. It should also be recognized that under some circumstances increasing the legal liability of attackers (tort-based liability of players was discussed in Chapter 3), including increasing the ability to enforce such liability, might actually reduce the amount of security in which participants invest. If perpetrators can be more effectively caught and convicted, the need for security may decrease. At the same time, causation may also run in the other direction: actions that make it easier to prevent attacks may make it more difficult to convict perpetrators.
9. Of course, they are used by vendors and policy makers to encourage more action, because exhortations often sound more compelling when statistics are invoked.
10. Workshop on Economics and Information Security, University of California at Berkeley, May 17-18, 2002. Information is available online at <http://www.sims.berkeley.edu/resources/affiliates/workshops/econsecurity/>.
Insurance: Motivator for Good Behavior
Ty R. Sagalow, executive vice president and chief operating officer of American International Group, Inc. (AIG), eBusiness Risk Solutions, argues that the insurance industry can play a role “in motivating the private sector to protect our national infrastructures and to guard against cyberattacks.” Insurance rewards good behavior by granting or withholding coverage and by raising or lowering a company’s premiums. Insurance companies transfer the risk of a loss from the balance sheet of the company to the insurance carrier.
To qualify for insurance, companies must prove they are an acceptable risk. Sagalow suggests three components to managing risk: people, policies, and technology. First, he suggests that companies must have dedicated technology personnel, commitment from the board, and an active crisis management team. Corporate policies should be ISO 17799-compliant and should include regular ongoing training of all employees (including management). Sagalow noted that although no single standard (including ISO 17799) has emerged, a single standard would have a positive effect. He acknowledged that it might be necessary to develop industry-specific standards rather than relying on one all-encompassing standard, such as ISO 17799. Finally, companies must employ appropriate security measures. Examples include firewalls, antivirus software (updated daily), intrusion detection systems, monitoring/log review, scans, and regular backups. Most insurance carriers require companies to undergo an independent security evaluation of their network defenses before granting a policy. Many policies also require companies to pass ongoing random red-team intrusion detection tests in order to maintain coverage. The insurance premium often depends on the security measures implemented.
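The link between implemented controls and the premium can be sketched as a simple scoring exercise. Everything below is a hypothetical illustration: the control names, base premium, and discount weights are invented for this sketch and are not AIG's actual rating model.

```python
# A purely illustrative premium calculation of the kind a cyberinsurance
# underwriter might use.  All names, weights, and discounts are invented.

BASE_PREMIUM = 50_000.0

# Each implemented control earns a hypothetical discount on the base premium,
# loosely grouped by Sagalow's three components: people, policies, technology.
DISCOUNTS = {
    "dedicated_security_staff": 0.05,   # people
    "board_commitment":         0.03,
    "iso_17799_policies":       0.07,   # policies
    "employee_training":        0.04,
    "firewall":                 0.05,   # technology
    "daily_antivirus_updates":  0.03,
    "intrusion_detection":      0.05,
    "regular_backups":          0.03,
}

def premium(controls):
    """Base premium reduced by the discount for each control in place."""
    discount = sum(DISCOUNTS[c] for c in controls if c in DISCOUNTS)
    return BASE_PREMIUM * (1.0 - discount)

print(premium({"firewall", "regular_backups"}))  # few controls, higher premium
print(premium(set(DISCOUNTS)))                   # all controls, lowest premium
```

A real underwriter would also weight interactions among controls and the results of the independent security evaluation, but the direction of the incentive is the same: more verified controls, lower premium.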
Although policies vary, AIG manages the following risks in its cyberinsurance packages: legal liability to others (arising out of a denial-of-service attack; transmission of a computer virus; libel, slander, or copyright infringement caused by the content of the company’s Web page); damage, destruction, or corruption of data; loss of revenue due to a DDOS attack; loss of or damage to reputation; and loss of market capitalization and resulting shareholder lawsuits.
The paucity of data on cyberrelated losses makes it difficult to accurately price cyberinsurance policies. Mr. Sagalow noted, for example, that it was hard to quantify damage due to a DDOS attack (e.g., potential lost customers and damage to reputation). In spite of the dearth of data, the Insurance Information Institute expects cyberinsurance to be a $2.5 billion market by 2005.12
R&D to Alter the Costs of Security
Wm. A. Wulf, president of the National Academy of Engineering, noted that “very little progress has been made in computer security, and there is no community of scholars, academic researchers doing basic long term research in computer security.” A contributor to this problem is the lack of a funding agency that focuses on creating a cadre of researchers in computer security. Dr. Wulf argues that “without a long term basic research base, we are not going to make a lot of progress in this area.”
Mr. Benhamou reported that PITAC contemplated recommending increased funding in fundamental R&D in the field of computer network security, specifically calling for research focusing on protecting and securing the information infrastructure and creating hacker-proof networks. Meanwhile, the Office of Science and Technology Policy has moved to coordinate and plan for research relating to CIP and homeland security, while the major funders of computer science R&D have been exploring ways to increase their attention to these issues.13 Industry and the intelligence community, suggests Mr. Benhamou, must engage in focused information sharing to develop an understanding of the sophistication of the existing infrastructure, to create scenarios, and to formulate a corresponding defense. Such interaction has been encouraged by Richard Clarke, the former cybersecurity czar. Harriet Pearson, chief privacy officer of IBM, referred to research into autonomic computing networks (also called “self-healing networks”), in which the network detects intrusions and takes actions to shield itself. Such research may lead to lower-cost computer network security solutions, benefiting industry and improving the protection of critical infrastructures. But it raises challenging technical and legal issues in a world featuring interconnections among networks administered by a growing number and variety of parties in differing jurisdictions.

12. “Internet Companies Seek Insurance Against ‘Denial of Service,’” E-Commerce Times, July 30, 2002, at <http://www.newsfactor.com/perl/printer/18804/>.
13. The National Strategy to Secure Cyberspace recommends that (1) the director of the OSTP coordinate the development of a federal R&D agenda; (2) DHS ensure that coordination mechanisms exist among academic, industry, and government R&D efforts; and (3) the private sector focus near-term R&D efforts on highly secure and trustworthy operating systems. The National Strategy is available online at <http://www.whitehouse.gov/pcipb>. The National Science Foundation has engaged a computer scientist to coordinate its homeland security-relevant R&D. Also, there is federal support for the new Institute for Information Infrastructure Protection, involving individuals previously associated with federal critical information infrastructure protection programs, which is working to develop a national R&D agenda.
Awareness
Mr. Benhamou reported on the rise in recognition of critical infrastructure concerns in Silicon Valley. That region, a leader in the production and use of information infrastructure, has dealt recently with acute energy infrastructure problems and with a long-term rise in computer crime; it also features many companies with international operations, which raise additional concerns about vulnerabilities and their exploitation. Mr. Benhamou noted that many Silicon Valley firms have been the target of attacks on information infrastructure by Nigerian organized crime groups, for example, and individual executives have been targeted by terrorist groups. These incidents make clear that the stereotypical teenaged hacker is not the main concern. The highly publicized distributed denial-of-service attacks and worm incidents of 2000-2001 were seen as costly to victims, whose attention to Y2K had already underscored dependence on the information infrastructure. Thefts of or damage to intellectual property have also been growing for corporations.14 Against this backdrop, the events of September 11 heightened awareness and concern, and they spurred consideration of enhanced communication and coordination at three levels—within enterprises, within and among industries, and between industry and government—to respond to threats to infrastructure. A lingering challenge is how to achieve a greater understanding of the problem and possible solutions in smaller companies, particularly those that cannot afford an information technology support staff. Small businesses often are not aware that they need better computer security than what they have—if they have any at all. Frederick R. Chang, president and CEO of SBC Technology Resources, Inc., argues that the convergence of the voice and data networks compounds the problem and suggests possible solutions (see Box 4.1). The new awareness extends to an understanding that the practices that have helped companies to thrive, such as support for accessing corporate information networks and resources from afar, contribute risk as well as benefit. These practices provide a new, and more challenging, baseline for critical infrastructure protection than that of a few years ago.

BOX 4.1 Network Convergence and CIIP

The convergence of the public switched telephone network (PSTN) with new communications technologies has created a complex telecommunications environment. With deregulation of the telecommunications industry, there are new entrants supplying broadband connectivity to the local and long-distance markets. Traditional telecommunications companies have a heritage of five nines, 99.999 percent availability, which equates to roughly five minutes of downtime per year. The expectation is that a customer will always get a dial tone when he picks up the telephone. As a result of many factors (competition, speed to market, profitability, and so forth), the Internet, wireless networks, and the next-generation network are not being built to the same survivable standards as the PSTN.

On the other hand, the rise of packet-switching technology—including to support telephone service—introduces a different approach that, through alternative data paths, can often provide robustness and reliability as good as or better than what the PSTN provides through failure-resistant equipment. Some of the consequences are very important for crisis situations. For example, the nature of the PSTN is such that, if one can get a dial tone at all, some fairly good service quality guarantees come with it; however, “poor service is better than nothing” is not a choice the user has. By contrast, the packet-based, best-efforts character of the Internet may be able to get some signal through under very adverse conditions. Whether availability of a poor signal that is still usable for some purposes should be taken into account when measuring network availability is a fairly subtle question.1

Mr. Chang suggested that more effort be made to leverage experience in telephony to enhance telecommunications robustness. For example, some of the lessons and experiences from the NSTAC and NCC (including structure, procedures, coordination of planning, approaches to interconnection, and so on) could be applied to the data network. In addition, it is important that the security and robustness lessons learned by the data network (e.g., virtual private networks, firewalls, and authentication technologies) be employed in the PSTN’s digital initiatives such as voice over IP. These groups, along with the Network Reliability and Interoperability Council, are already reaching out to nontelephony providers, but there may be limits to how broad their reach can be.
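The five-nines figure cited in Box 4.1 is easy to verify with plain arithmetic (the only assumption is a 365.25-day year):

```python
# Checking the five-nines claim: 99.999 percent availability permits roughly
# five minutes of downtime per year.

MINUTES_PER_YEAR = 365.25 * 24 * 60  # 525,960 minutes in an average year

def downtime_minutes(availability):
    """Minutes of downtime per year allowed by a given availability level."""
    return (1.0 - availability) * MINUTES_PER_YEAR

print(f"five nines:  {downtime_minutes(0.99999):.2f} minutes/year")  # about 5.26
print(f"three nines: {downtime_minutes(0.999):.0f} minutes/year")    # about 526
```

The contrast between five nines (about five minutes a year) and three nines (nearly nine hours a year) makes concrete how far the survivability standards of newer networks can diverge from the PSTN heritage.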
Although awareness is a prerequisite to action, it does not guarantee it. As Mr. Benhamou describes, many experiences show that solo action can be costly—by, for example, attracting lawsuits from parties concerned about harm to their equity interests—and collective action is hard to achieve. Progress may begin within the corporation, for example by protecting whistle-blowers and by bringing in and supporting the work of professional risk managers. Ms. Pearson argued for thinking through how to simplify systems to facilitate control over information. Mr. Benhamou pointed to the financial-reporting disclosures associated with Y2K as an example of a positive incentive—a combination of disclosure and accountability—that could be replicated for motivating protection of critical information infrastructure.15 He noted that companies may understand the risks associated with, say, problems in the Domain Name System that may interfere with their use of the Internet and associated e-commerce, but that these companies are seldom called to account for how they prepare for possible problems. Of course, in some instances bad press leads to calls for accounting for preparedness (or responses), and avoidance of bad press can itself be a motivator.
SECURITY AND PRIVACY TENSIONS
Historically, the debate about security and privacy in the United States has been characterized as a zero-sum game—more security implies less privacy. Prior to September 11, the debate had begun to shift toward a realization that the security of information systems and the protection of personal data and privacy are mutually reinforcing and compatible goals. The OECD’s Guidelines for Security of Information Systems16 states:
[S]ecurity of information systems may assist in the protection of personal data and privacy. . . . Similarly, protection of personal data and privacy . . . may serve to enhance the security of information systems. The use of information systems to collect, store and cross-reference personal data has increased the need to protect such systems from unauthorized access and use. . . . It is possible that certain measures adopted for the security of information systems might be misused so as to violate the privacy of individuals. For example, an individual using the system might be monitored for a non-security-related purpose or information about the user made available through the user verification process might permit computerised linking of the user’s financial, employment, medical and other personal data.

15. James Dempsey, deputy director of the Center for Democracy and Technology, noted that there are disagreements about what should be disclosed and to whom. For example, Senator Bennett has called for more public disclosure by companies about their information system vulnerabilities while at the same time promoting less public disclosure through support of a FOIA exemption for critical infrastructure information. Dempsey called for more debate to set the rules.
16. Organization for Economic Cooperation and Development. 1992. Guidelines for Security of Information Systems. Available at <http://www.oecd.org/EN/document/0,,EN-document-43-nodirectorate-no-24-10249-13,00.html>.
In the wake of September 11, the increasing number of measures aimed at protecting homeland security has fostered an increase in surveillance and intelligence-gathering activities, arousing concerns among privacy advocates. The discussion at the symposium anticipated changes that came into effect in 2002.
Whitfield Diffie noted that anonymity is a very powerful technique for protecting privacy. The decentralized and stateless design of the Internet is particularly suitable for anonymous behavior. Although anonymous action can ensure privacy, it should not be the sole means of doing so, because it also permits harmful activities, such as spamming, slander, and outright attacks, to be conducted without fear of reprisal. Security dictates that one should be able to detect and catch individuals conducting illegal behavior, such as hacking, conspiring to commit terrorist acts, and committing fraud. For example, without trustworthy source information and/or trustworthy data regarding the route that a packet has taken from source to destination, it is difficult to defend against denial-of-service attacks. Today, it is easy to insert a phony source address into an IP packet and, unless the originating ISP takes some action to reject packets originated by its users that do not match the IP addresses assigned to those users, the source IP address cannot be used to push back attacks. Likewise, routers do not add any metadata to IP packets that pass through them to indicate the route taken. Legitimate needs for privacy (such as the posting of anonymous bulletin board items) should be accommodated, but the ability to conduct harmful anonymous behavior without responsibility or repercussion—in the name of privacy—should not.
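The ISP-side check described above, rejecting packets whose source addresses fall outside the prefixes assigned to the customer link they arrived on, is the practice standardized as network ingress filtering (BCP 38). A minimal sketch using Python's standard ipaddress module follows; the prefixes and addresses are invented for illustration.

```python
# Sketch of ingress filtering (BCP 38): drop packets whose source address does
# not belong to the prefixes assigned to the customer port they arrived on.
# The prefixes below are hypothetical, drawn from documentation address space.

import ipaddress

# Hypothetical prefixes assigned to one customer link.
ASSIGNED_PREFIXES = [
    ipaddress.ip_network("203.0.113.0/24"),
    ipaddress.ip_network("198.51.100.0/25"),
]

def accept_source(src):
    """True if the packet's source address matches the customer's prefixes."""
    addr = ipaddress.ip_address(src)
    return any(addr in net for net in ASSIGNED_PREFIXES)

print(accept_source("203.0.113.7"))   # legitimate source: accepted
print(accept_source("192.0.2.99"))    # spoofed source: dropped
```

If ISPs applied such a check at the network edge, a spoofed source address would never leave the originating network, and source addresses would become usable for pushing back attacks.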
James Dempsey observed that better system security might reduce the need for surveillance and other potential intrusions into privacy.17 However, surveillance can be a valuable tool in combating terrorists and hackers. The value of the ability to track and monitor suspected terrorists and hackers and their supporters cannot be overstated—it can lead to valuable clues and trails, and it can lead to the evidence needed to catch and convict guilty parties. On the other hand, it is important to ensure that adequate protections are in place so that surveillance can be conducted without loss of privacy. Collected information must be secured, protected, and prevented from being used against people except for the intended purpose of catching and incriminating hackers and terrorists. While better system security may not reduce the need for surveillance, properly conducted surveillance for legitimate purposes should not result in a loss of privacy.
However, the crisis-management mentality in the aftermath of September 11 once again pushed aside issues of privacy and civil liberties. Although the July 2002 version of the OECD computer security guidelines mentions privacy—“Efforts to enhance the security of information systems and networks should be consistent with the values of democratic society, particularly the need for an open and free flow of information and basic concerns for personal privacy”18—it no longer emphasizes the mutual and compatible nature of privacy and security. Within the United States, a number of legislative and procedural initiatives, beginning with the USA PATRIOT Act, appear to have elevated attention to security. Congressional hearings, editorials, Web sites, and so on have sustained discussions about the support for different objectives. Technical mechanisms have been proposed to aid government efforts to promote security, and the law is seen more and more as the lever for balancing interests. It is increasingly difficult to separate law (or technology or business practice) relating to CIIP from that pertaining to homeland security. As a result, given popular concern about homeland security, many experts fear that privacy may suffer.
At the symposium, speakers and participants argued that the seriousness and urgency of the problem make it even more important to consider the value of privacy in crafting a solution. For example, Harriet Pearson noted that a researcher at the IBM Privacy Research Institute has developed a technology called “privacy preserving data mining,” which allows information to be mined for patterns while protecting personally identifiable information. James Dempsey’s presentation (see Box 4.2 for excerpts) eloquently captures the tension that continues to impinge on CIIP policy making.
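One early technique in the privacy-preserving data mining literature is value perturbation: noise is added to each individual record before it is analyzed, yet aggregate patterns survive. The sketch below uses invented data and noise levels and is not a description of IBM's actual method.

```python
# Sketch of value perturbation, an early privacy-preserving data mining idea:
# each record is distorted with random noise, but the aggregate pattern (here,
# the mean) is nearly unchanged.  The data and noise scale are invented.

import random

random.seed(0)

# Hypothetical sensitive values (e.g., salaries) for 10,000 individuals.
true_values = [random.gauss(50_000, 8_000) for _ in range(10_000)]

# Publish each value only after adding heavy per-record noise.
noisy_values = [v + random.gauss(0, 20_000) for v in true_values]

true_mean = sum(true_values) / len(true_values)
noisy_mean = sum(noisy_values) / len(noisy_values)

# Individual records are badly distorted, but the mean survives.
print(f"true mean:  {true_mean:,.0f}")
print(f"noisy mean: {noisy_mean:,.0f}")
```

The design point is the trade-off the text describes: the heavier the per-record noise, the better an individual's value is hidden, while large-sample aggregates remain accurate enough for mining patterns.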
18. OECD. 2002. Guidelines for the Security of Information Systems and Networks: Towards a Culture of Security. Available at <http://www.oecd.org/pdf/M00033000/M00033182.pdf>.
BOX 4.2 Privacy and Security

[A]s a privacy advocate, I find I often have to overcome certain barriers to communication. . . . [I]n coming forward and criticizing some of the things that are being done, and saying that the privacy issues are not properly being taken account of, I sometimes find myself accused of not appreciating the nature of the threat, or not appreciating the urgency of the situation. I want to start out by putting that aside: . . . I care about the privacy issues precisely because I believe that this threat is so serious. . . . I think there is a fairly high likelihood that some of us in this room in the coming months and years will be victims of terrorist attacks, or will have family members who are. I see this as a very long term problem and a risk and a threat. But the seriousness of the risk, the urgency of the problem, only makes it more important that we get the solution right. It doesn’t tell us what the solution will be.

All too often—particularly I have seen this recently in the legislative debate—the urgency of the threat is taken as an excuse for not engaging in the kind of dialogue and faith and examination that is necessary. I will commend the National Academies, the National Academy of Engineering, and the Computer Science and Telecommunications Board for taking on this project and trying to engage in some practical, rational discourse.

Privacy in this debate is a value . . . that we share as part of our society, along with the other values that we have, including the value of security. The privacy advocates come to this debate and help us ask the questions that we need to ask: Are we doing the right things or not? Now, at this point it is clear that the systems that we are dependent upon are not secure. They are vulnerable to attack. They are possibly a point to be attacked, in combination . . . with physical attacks. Obviously we are facing people who are very clever, very thoughtful, very patient. We obviously need to build greater security into our systems. The difficult questions [are] what do we do, and second, how do we create the incentives to achieve the goals that we have? . . . [W]hat are the incentives?
A TRUST NETWORK
A common theme at the symposium was the importance of trust. Trust provides the foundation for approaches based on procedure or business practice, as opposed to law or technical mechanisms.
- Frederick R. Chang, “[O]ne of the successes you saw in the NSTAC and the NCC was that the people who were responsible for essentially the nation’s infrastructure were together. There was information sharing. They established trust.”
- Lieutenant General Kelley, “. . . trust is absolutely key to establishing this two-way dialogue.”
- Ronald L. Dick, “the government protects the nation’s most critical infrastructures by building and promoting a coalition of trust . . . amongst all government agencies, between the government and the private sector, amongst different interests within the private sector itself and in concert with the greater international community. . . . InfraGard expands direct contact with the private sector and infrastructure owners and operators to build one thing: trust. . . .”
- James Dempsey, “. . . in some of the legislation that is being proposed, the question of what should be kept secret and what should be shared and how you define this trust network is completely missing, completely left discretionary.”
- Philip R. Reitinger, “I think information sharing really only works effectively when it is voluntary. When people say, I want to share information, that means that information sharing has to be based on trust. You have to build trust.”
- Glenn Schlarman, “Trust is important and the entity providing information has to get something of value in return or else that will be the first and last time they share.”
Trust is not a new concept; it has been a central component of the government’s CIP efforts over the past several years. John G. Grimes, chair of the Industry Executive Subcommittee of NSTAC and vice president of Raytheon Co., argued that trust is a simple concept that is very difficult to implement in reality. He reported that at NSTAC—often cited as an example of a successful public-private partnership for CIP information sharing—it took time and energy to break down the walls and build a trust network.
So, why have past efforts failed to build trust between partners? One argument is that the government’s message to the private sector has varied, ranging from national security to the economic delivery of vital services to mixed messages in between. The transition from a focus on CIP to the larger concept of homeland security compounds the challenge of communicating what is wanted and why; it presents a bigger picture, which can be a good thing, but it may also make the objective so big and unfocused as to cause confusion.
The second problem is that the government interface with the private sector on CIP issues is quite confusing and not necessarily user friendly. The private sector, for example, often does not know which government entity it should be dealing with on CIP matters, whom it should be sharing information with, or whom it can depend on within the government for up-to-date information. So far, one can argue that possible focal points include the Department of Homeland Security, FBI/NIPC, CIAO, the new Cybersecurity Board, and a mix of other government agencies: the FTC, FCC, SEC, DOE, DOD, and more. The new Department of Homeland Security is further altering the government landscape.19 It may centralize some federal responsibilities for CIP, although it seems clear that others will remain distributed among many agencies. After this major organizational change is set in motion, the government should clearly and consistently explain to the private sector what its objectives are for CIP, how it has organized itself to accomplish those objectives, who is responsible for what (e.g., what the information flows are), what kind of information should be shared and in what form, and why all of this is important (i.e., what the threat is and how the proposed actions will address the threat). This message should clearly and consistently articulate what protections already exist for information sharing and what safe harbors exist (or will be established) to encourage information sharing in light of FOIA and antitrust concerns in the private sector. A clear and consistent message from the government to the private sector will go a long way toward building the trust that is necessary to protect the nation’s critical information infrastructures.