
3—
Technical Challenges

Ongoing efforts to develop and deploy improved networking technologies promise to greatly enhance the capabilities of the Internet. Protocols for quality of service (QOS) could enable vendors to offer guarantees on available bandwidth and latencies across the network. Advanced security protocols may better protect the confidentiality of messages sent across the network and ensure that data are not corrupted during transmission or storage. Broadband technologies, such as cable modems and digital subscriber line (DSL) services, have the potential to make high-speed Internet connectivity more affordable to residential users and small businesses. In combination, these capabilities will enable the Internet to support an ever-increasing range of applications in domains as disparate as national security, entertainment, electronic commerce, and health care.

The health community stands to benefit directly from improvements in QOS, security, and broadband technologies, even though it will not necessarily drive many of these advances. Health applications—whether supporting consumer health, clinical care, financial and administrative transactions, public health, professional education, or biomedical research—are not unique in terms of the technical demands they place on the Internet: nearly all sectors have some applications that demand enhanced QOS, security, and broadband technologies. Nevertheless, particular health applications require specific capabilities that might not otherwise receive much attention. Use of the Internet for video-based consultations with patients in their homes, for example, would call for two-way, high-bandwidth connections into and out of individual residences, whereas video-on-demand applications require high bandwidth in only one direction.



Human life may be at risk if control signals sent to medical monitoring or dosage equipment are corrupted or degraded, or if electronic medical records cannot be accessed in a timely fashion. Even when no lives are at stake, the extreme sensitivity of personal health information could complicate security considerations, and the provision of health care at the point of need—whether in the hospital, home, or hotel room—could increase demand for provider and consumer access to Internet resources via a variety of media.

This chapter reviews current efforts to improve the capabilities of the Internet and evaluates them on the basis of the needs of the health sector outlined in Chapter 2. Particular attention is paid to the need for QOS, for security (including confidentiality of communications, system access controls, and network availability), and for broadband technologies to provide end users with high-speed connectivity to the Internet. Also discussed are privacy-enhancing technologies, which are seen by many as a prerequisite for more extensive use of the Internet by consumers. The chapter identifies ways in which the Internet's likely evolution will support health applications and ways in which it may not. It gives examples of challenges that real-world health applications can pose for networking research and information technology research more generally. In this way, it attempts to inform the networking research community about the challenges posed by health applications and to educate the health community about the ways in which ongoing efforts to develop and deploy Internet technologies may not satisfy all their needs.

Quality of Service

Quality of service is a requirement of many health-related applications of the Internet. Health organizations cannot rely on the Internet for critical functions unless they receive assurances that information will be delivered to its destination quickly and accurately. For example, care providers must be able to retrieve medical records easily and reliably when needed for patient care; providers and patients must be able to obtain sustained access to high-bandwidth services for remote consultations if video-based telemedicine is to become viable. In emergency care situations, both bandwidth and latency may be critical factors because providers may need rapid access to large medical records and images from disparate sources connected to the Internet. Other applications, such as Internet-based telephony and business teleconferencing, demand similar technical capabilities, but the failure to obtain needed QOS in a health application might put human life at risk. Compounding the QOS challenge in health care is the variability of a health care organization's needs over the course of a single day.

The information objects that support health care vary substantially in size and complexity. While simple text effectively represents the content of a care provider's notes, consultation reports, and the name-value pairs of common laboratory test results, many health problems require the acquisition and communication of clinical images such as X rays, computed tomography (CT), and magnetic resonance imaging (MRI). The electronic forms of these images, which often must be compared with one another in multiple image sets, comprise tens to hundreds of megabytes of information that may need to be communicated to the end user within several seconds or less. Medical information demands on digital networks are thus notable for their irregularity and the tremendous variation in the size of transmitted files. When such files need to be transmitted in short times, very high bandwidths may be required and the traffic load may be extremely bursty.

No capabilities have yet been deployed across the Internet to ensure QOS. Virtually all Internet service providers (ISPs) offer only best-effort service, in which they make every effort to deliver packets to their correct destination in a timely way but with no guarantees on latency or rates of packet loss. Round-trip times (or latencies) for sending messages across the Internet between the East and West Coasts of the United States are generally about 100 milliseconds, but latencies of about 1 second do occur—and variations in latency between 100 milliseconds and 1 second can be observed even during a single connection.1 Such variability is not detrimental to asynchronous applications such as e-mail, but it can render interactive applications such as videoconferencing unusable. Similarly, the rates of packet loss across the Internet range from less than 1 percent to more than 10 percent; high loss rates degrade transmission quality and increase latencies as lost packets are retransmitted. Furthermore, because many applications attempt to reduce congestion by slowing their transmission rates, packet loss directly affects the time taken to complete a transaction, such as an image transfer, over the network.

Several approaches can be taken to improve QOS across the Internet, with varied levels of effectiveness. For example, Internet users can upgrade their access lines to overcome bottlenecks in their links to ISPs, but such efforts affect bandwidth and latency into and out of their own site only. They provide no means for assuring a given level of QOS over any distance. Similarly, ISPs can attempt to improve service by expanding the capacity of their backbone links. However, as described below, such efforts provide no guarantees that bandwidth will be available when needed and contain no mechanisms for prioritizing message traffic in the face of congestion. To overcome these limitations, efforts are under way to develop specific protocols for providing QOS guarantees across the Internet. These protocols promise to greatly expand the availability of guaranteed services across the Internet, but their utility in particular applications may be limited, as described below.
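
To make the burstiness concrete, the short sketch below computes the sustained bandwidth needed to move an image set within a delivery deadline. The sizes and deadlines are illustrative choices within the ranges quoted above, not figures taken from the report.

    # Back-of-envelope check: what sustained throughput does a clinical image
    # transfer need?  The sizes and deadlines are illustrative values within
    # the ranges quoted in the text.

    def required_bandwidth_mbps(file_megabytes: float, deadline_seconds: float) -> float:
        """Sustained throughput, in megabits per second, needed to move a file
        of the given size within the deadline (1 MB treated as 10^6 bytes)."""
        bits = file_megabytes * 8 * 1e6
        return bits / deadline_seconds / 1e6

    if __name__ == "__main__":
        for size_mb, deadline_s in [(10, 5), (200, 5), (500, 2)]:
            print(f"{size_mb:>4} MB in {deadline_s} s -> "
                  f"{required_bandwidth_mbps(size_mb, deadline_s):,.0f} Mbps sustained")

A 200 MB comparison set delivered in 5 seconds already implies a sustained 320 Mbps, far above what the same link needs to carry on average, which is exactly the bursty profile described above.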

Increasing Bandwidth

One approach taken by ISPs to improve their data-carrying capacity and relieve congestion across the Internet has been to dramatically increase the bandwidth of the backbones connecting points of presence (POPs).2 Today's backbone speeds are typically on the order of 600 megabits per second (Mbps) to 2.5 gigabits per second (Gbps), but some ISPs have considerably more bandwidth in place. A number of ISPs today have tens of strands of fiber-optic cable between their major POPs, with each strand capable of carrying 100 wavelengths using current wavelength division multiplexing (WDM) technology. Each wavelength can support 2.5 to 10 Gbps using current opto-electronics and termination equipment. Thus, an ISP with 30 strands of fiber between two POPs theoretically could support 30 terabits per second (Tbps) on a single inter-POP trunk line.3 This is enough capacity to support approximately 450 million simultaneous phone calls or to transmit the 40 gigabyte (GB) contents of the complete MEDLARS collection of databases in one-hundredth of a second.

Even with this fiber capacity in the ground, most ISPs currently interconnect their POPs at speeds significantly lower than 1 Tbps—a situation that is likely to persist for the next few years. The limiting factors are the cost and availability of the equipment that needs to be connected to the fiber inside the POP. This equipment includes Synchronous Optical NETwork (SONET)4 termination equipment and the routers or switches that are required to forward packets between POPs. The SONET equipment is expensive—as are the routers and switches that connect to the SONET equipment—so ISPs have an incentive to deploy only enough to carry the expected traffic load. More importantly, routers are limited in terms of the amount of traffic they can support. As of late 1999, the leading commercial routers available for deployment could support 16 OC-48 (2.5 Gbps) interfaces, with a fourfold increase (e.g., to 16 × OC-192) expected to be deployable in the next 1 to 2 years. Terabit and multiterabit routers with a capacity at least six times greater than a 16 × OC-192 router are under development. Despite these increases in capability, routers most likely will continue to limit the bandwidth available between POPs for the foreseeable future. The commercial sector understands the need for faster routers and is addressing it, at least to meet near-term demands for higher link speeds. Additional research on very high speed routers may be justified to provide longer-term improvements in data-carrying capacity.
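
The arithmetic behind the 30 Tbps figure, and the comparisons that follow it, can be reproduced directly. The sketch below uses the same inputs as note 3 (30 strands, 100 wavelengths per strand, 10 Gbps per wavelength) and assumes 64 kbps per telephone call, a standard digital-voice rate that the report does not state explicitly.

    # Reproduce the inter-POP trunk capacity estimate from the text.
    # Assumption not stated in the report: one phone call occupies 64 kbps.

    strands = 30                    # fiber strands between two POPs
    wavelengths_per_strand = 100    # via wavelength division multiplexing
    gbps_per_wavelength = 10        # upper end of current opto-electronics

    trunk_bps = strands * wavelengths_per_strand * gbps_per_wavelength * 1e9
    print(f"Trunk capacity: {trunk_bps / 1e12:.0f} Tbps")              # 30 Tbps

    calls = trunk_bps / 64e3        # ~469 million; the text rounds this to ~450 million
    print(f"Simultaneous 64 kbps calls: {calls / 1e6:.0f} million")

    medlars_bits = 40 * 8 * 1e9     # the 40 GB MEDLARS collection
    print(f"Time to send 40 GB: {medlars_bits / trunk_bps * 1000:.0f} ms")  # ~11 ms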

Increases in the bandwidth of the Internet backbone alleviate some of the concerns about QOS but may not completely eliminate congestion. Demand for bandwidth is growing quickly, and it appears that ISPs are deploying additional bandwidth just fast enough to keep up. Current traffic measurements indicate that some Internet backbone links are at or near capacity. Factors driving the growth in demand for bandwidth include the increasing number of Internet users, the increasing amount of time the average user spends connected to the Internet, and new applications that are inherently bandwidth-intensive (and that demand other capabilities, such as low latency and enhanced security). Nielsen ratings for June 1999 put the number of active Web users at 65 million for the month and average monthly online time per user at 7.5 hours, up from 57 million users and 7 hours per user just 3 months earlier.5 As an example of increasing bandwidth demands, medical image files that now contain about 250 megabytes (MB) of data are expected to top several gigabytes in the near future as the resolution of digital imaging technology improves.

Internet protocols further limit the capability of ISPs to provide QOS by simply increasing bandwidth. The Transmission Control Protocol (TCP), which underlies most popular Internet applications today, is designed to determine the bandwidth of the slowest or most congested link in the path traversed by a particular message and to attempt to use a fair share of that bottleneck bandwidth. This trait is important to the success of the Internet because it allows many connections to share a congested link in a reasonably fair way. However, it also means that TCP connections always attempt to use as much bandwidth as is available in the network. Thus, if one bottleneck is alleviated by the addition of more bandwidth, TCP will attempt to use more bandwidth, possibly causing congestion on another link. As a result, some congested links are almost always found in a network carrying a large amount of TCP traffic. Adding more capacity in one place causes the congestion to move somewhere else. In many cases, top-tier service ISPs attempt to make sure they have enough capacity so that the congestion occurs in other backbone providers' networks. The only way out of this quandary, apparently, is to provide so much bandwidth throughout the network that applications are unable to use it fully.

Applications that do not use TCP are not the solution, either, because they also tend to consume considerable bandwidth. Such nonadaptive applications are typically those involving real-time interaction, for which TCP is not well suited. Internet telephony is a good example of such an application. Although an individual call might use only a few kilobits per second, many current Internet telephony applications transmit data at a constant rate, regardless of any congestion along their path. Because these applications do not respond to congestion, large-scale deployment can lead to a situation called congestive collapse, in which links are so overloaded that they become effectively useless.
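
The dynamic described above, in which adaptive senders expand to fill whatever capacity exists while a nonadaptive sender simply keeps transmitting, can be illustrated with a toy simulation. All of the numbers are invented, and the halve-on-congestion rule is only a caricature of TCP's behavior, not a model of any real implementation.

    # Toy illustration: two adaptive (TCP-like) flows probe for bandwidth and
    # halve their rates on congestion; a nonadaptive flow sends at a constant
    # rate regardless.  Invented numbers, illustrative only.

    LINK_CAPACITY = 10.0    # Mbps at the shared bottleneck
    CONSTANT_RATE = 4.0     # Mbps, nonadaptive "Internet telephony"-style flow

    adaptive = [1.0, 1.0]   # starting rates of the two adaptive flows, Mbps

    for step in range(20):
        offered = CONSTANT_RATE + sum(adaptive)
        if offered > LINK_CAPACITY:
            adaptive = [r / 2 for r in adaptive]     # only adaptive flows back off
        else:
            adaptive = [r + 0.5 for r in adaptive]   # probe for more bandwidth

    print(f"Nonadaptive flow never backs off: {CONSTANT_RATE} Mbps throughout")
    print(f"Adaptive flows end the run near {adaptive[0]:.2f} Mbps each and keep oscillating")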

Furthermore, when these applications share links with TCP-based applications, the latter will respond to congestion to the point where they may become unusable. Short of deploying additional bandwidth in the Internet and replacing nonadaptive applications with adaptive ones, the primary approach to addressing this problem is either to equip routers with new mechanisms to prevent congestive collapse or to provide suitable incentives to encourage the development of adaptive applications.

More fundamental factors also limit the utility of increased bandwidth as a means of solving the QOS problem. Adequate bandwidth is a necessary but not sufficient condition for providing QOS. No user can expect to obtain guaranteed bandwidth of 100 Mbps across a 50-Mbps link; similarly, it is not possible to guarantee 10 Mbps each to 1,000 applications that share a common link unless that link has a capacity of at least 10 Gbps.6 The simple fact that Internet backbones are shared resources that carry traffic from a large number of users means that no single user can be guaranteed a particular amount of bandwidth unless dedicated allocation mechanisms are in place. In the absence of QOS mechanisms, it is impossible to ensure that delay-sensitive applications are protected from excessive time lags.7 In theory, ISPs could attempt to provide so much extra bandwidth to the Internet that peak demand could almost always be met and service quality would improve (a technique referred to as overprovisioning, used with some success in local area networks, or LANs). However, research indicates that overprovisioning is an inefficient solution to the QOS problem, especially when bandwidth demands vary widely among different applications, as is the case in health care. Overprovisioning tends not to be cost-effective for leading-edge, high-bandwidth applications—even those that can adapt to delays in the network. If the objective is to make efficient use of networking resources and provide superior overall service, then mechanisms that enable the network to handle heterogeneous data types appear preferable to the separation of different types of data streams (e.g., real-time video, text, and images) into discrete networks (Shenker, 1995).

A number of efforts are under way in the networking community to develop mechanisms for providing QOS across the Internet. The two main approaches are differentiated services (diff-serv) and integrated services (int-serv). Although they are very different, both attempt to manage available bandwidth to meet customer-specific needs for QOS. Both diff-serv and int-serv will enable greater use of the Internet in some health applications, but it is not clear that these programs will meet all the needs posed by the most challenging health applications.

Differentiated Services

Recent efforts in the Internet Engineering Task Force (IETF) have resulted in a set of proposed standards for diff-serv across the Internet (Blake et al., 1998). As the name implies, diff-serv allows ISPs to offer users a range of qualities of service beyond the typical best effort. The ISPs were active in the definition of these standards, and several are expected to deploy some variant of diff-serv in 2000. Differentiated services do not currently define any mechanisms by which QOS levels could be determined for different communications sessions on demand; rather, initial deployment is likely to be for provisioned QOS that is agreed upon a priori. As a simple example, a customer of an ISP might sign up for premium service at a certain rate, say 128 kilobits per second (kbps). Such a service would allow the customer to send packets into the network at a rate of up to 128 kbps and expect them to receive better service than a best-effort packet would receive. Exactly how much better would be determined by the ISP. If the service were priced appropriately, then the provider might provision enough bandwidth for premium traffic to ensure that loss of a premium packet occurred very rarely, say once per 1 million packets sent. This would provide customers with high assurance that they could send at 128 kbps at any time to any destination within the ISP's network.

Many variations of this basic service are possible. The service description above applies to traffic sent by the customer; it is also possible to provide high assurance of delivery for a customer's inbound traffic. Similarly, an ISP could offer a service that provides low latency. It is likely that providers would offer several grades of service, ranging from the basic best-effort through premium to superpremium, analogous to coach, business, and first class in airline travel. A customer might sign up for several of these services and then choose which packets need which service. For example, e-mail might be marked for best-effort delivery, whereas a video stream might be marked for premium. Customers then would need to develop their own policies to determine which types of traffic flows would be transmitted at different QOS levels.

Although diff-serv is an improvement over best-effort services, it has several limitations that might preclude its use for some health-related applications. First, research has shown that simple diff-serv mechanisms (e.g., those that classify QOS levels at the edge of the network and provide differential loss probabilities in the core) can be used to provide a high probability of meeting users' QOS preferences for point-to-point communications (Clark and Wroclawski, 1997). However, in the absence of significant overprovisioning and explicit signaling to reserve resources, such guarantees are probabilistic, which virtually precludes absolute, quantifiable service guarantees.
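
A common way to enforce a provisioned premium rate such as the 128 kbps in the example above is a token-bucket meter at the edge of the network: packets that conform to the contracted rate are marked premium, and the excess is sent as best effort. The sketch below is a minimal illustration of that idea; the class names and the 128 kbps rate with an 8 KB burst allowance are illustrative, and a real diff-serv deployment marks a field in the packet's IP header rather than returning a string.

    import time

    class TokenBucketMarker:
        """Mark packets 'premium' while they conform to a contracted rate;
        mark the excess 'best-effort'.  Illustrative sketch only."""

        def __init__(self, rate_bps: float, burst_bytes: float):
            self.rate = rate_bps / 8.0      # token refill rate, bytes per second
            self.capacity = burst_bytes     # maximum burst allowance, bytes
            self.tokens = burst_bytes
            self.last = time.monotonic()

        def mark(self, packet_bytes: int) -> str:
            now = time.monotonic()
            self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
            self.last = now
            if packet_bytes <= self.tokens:
                self.tokens -= packet_bytes
                return "premium"
            return "best-effort"

    # Contract from the example in the text: up to 128 kbps of premium traffic.
    marker = TokenBucketMarker(rate_bps=128_000, burst_bytes=8_000)
    for i in range(12):
        print(i, marker.mark(packet_bytes=1_500))   # a quick burst of 1,500-byte packets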

The QOS provided by diff-serv depends largely on provisioning of the network to ensure that the resources available for premium services are sufficient to meet the offered load. The provision of sufficient resources to make hard guarantees may be economically feasible only if premium services are significantly more expensive than today's best-effort service. Indeed, ISPs need to have in place some incentive mechanism (such as increased charges for higher-quality service) to ensure that customers attempt to distinguish between their more important and less important traffic.

A second limitation is that diff-serv can be offered most easily across a single ISP's network. Current standards do not define end-to-end services, focusing instead on individual hops between routers in the network. There are no defined mechanisms for providing service guarantees for packets that must traverse the networks of several ISPs. For many service providers, offering diff-serv across their own networks is likely to be a valuable first step—especially for providers with a national presence that will be able to provide end-to-end service to large customers with sites in major metropolitan areas. Services like these are also valuable over especially congested links, such as the transoceanic links; again, these types of services could be offered by a single provider. Nevertheless, there obviously would be great value in obtaining end-to-end QOS assurance even when the two ends are not connected to the same provider. To some extent, the diff-serv standards have laid the groundwork for interprovider QOS, because packets can be marked in standard ways before crossing a provider boundary. A provider connecting to another provider is in some sense just a customer of that provider. Provider A can buy premium service from provider B and resell that service to the customers of provider A. However, the services that providers offer are not likely to be identical, so the prospect of obtaining predictable end-to-end service from many providers seems considerably less certain than does single-provider QOS.

Third, diff-serv does not currently allow users to signal a request for a particular level of QOS on an as-needed basis (as is possible with the integrated services model, described below). Health care organizations have widely varying needs for bandwidth over time. For example, a small medical center occasionally might need to transmit a mammography study of 100 MB in a short time interval—creating a need for high bandwidth over that interval—but it is unlikely to need even close to that amount of bandwidth on average. Thus, a dynamic model of QOS would be preferable. Diff-serv does not preclude such a model; it simply provides a number of QOS building blocks, which could be used to build a dynamic model in the future. A variety of means for dynamically signaling diff-serv QOS are under investigation by networking researchers.
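
The mammography example above is worth quantifying, because it shows how far apart peak and average demand can be and why a dynamic model of QOS is attractive. The sketch below assumes five 100 MB studies per day and a 30-second delivery target; both assumptions are illustrative, not taken from the report.

    # Peak versus average demand for the small-center example in the text.
    STUDY_MB = 100            # size of one mammography study, from the text
    STUDIES_PER_DAY = 5       # assumed frequency (illustrative)
    DELIVERY_SECONDS = 30     # assumed delivery target (illustrative)

    bits_per_study = STUDY_MB * 8 * 1e6
    average_bps = bits_per_study * STUDIES_PER_DAY / 86_400   # spread over a day
    peak_bps = bits_per_study / DELIVERY_SECONDS               # while one study moves

    print(f"Average demand: {average_bps / 1e3:.0f} kbps")     # ~46 kbps
    print(f"Peak demand:    {peak_bps / 1e6:.1f} Mbps")        # ~26.7 Mbps
    print(f"Peak-to-average ratio: about {peak_bps / average_bps:.0f} to 1")

A statically provisioned premium rate sized for the peak would sit idle almost all day, which is the inefficiency that dynamic signaling aims to avoid.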

Finally, the diff-serv approach may not provide a means of differentiating among service levels with sufficient granularity to meet the QOS needs of critical applications, such as remote control of medical monitoring or drug delivery devices. In the interests of scalability, diff-serv sorts traffic into a small number of classes; as a result, the packets from many applications and sites share the same class and can interfere with each other. For example, a physician downloading a medical image could inadvertently disrupt data flows from in-house monitoring equipment if they are on the same network and share a diff-serv class. Although policing of traffic at the edges of the network helps to ensure that applications of the same class do not interfere with each other, it does not completely isolate applications. Stronger isolation, and thus a larger number of classes, may be required for some demanding applications.

Integrated Services

In contrast to the diff-serv model, int-serv (Braden et al., 1994) provides quantifiable, end-to-end QOS guarantees for particular data flows (e.g., individual applications) in networks that use the IP.8 The guarantees take the form of "this videoconference from organization A to organization B will receive a minimum of 128 kbps throughput and a maximum of 100 milliseconds end-to-end latency." To accommodate such requests, int-serv includes a signaling mechanism called resource reservation protocol (RSVP) that allows applications to request QOS guarantees (Braden et al., 1997).9 Int-serv provides a service model that in some ways resembles that of the telephone network, in that service is requested as needed. If resources are available to provide the requested service, then the service will be provided; if not, then a negative acknowledgment (equivalent to a busy signal) is returned. For this reason, int-serv already is being used in some smaller networks to reserve bandwidth for voice communications.

Several obstacles stand in the way of the deployment of int-serv across the Internet. The major concern is scalability. As currently defined, every application flow (e.g., a single video call) needs its own reservation, and each reservation requires that a moderate amount of information be stored at every router along the path that will carry the application data. As the network itself grows and the number of reservations increases, so does the amount of information that must be stored throughout the network.10 The prospect of having to store such information in backbone routers is not attractive to ISPs, for which scalability is a major concern.11 Additional impediments arise from difficulties in administering int-serv reservations that cross the networks of multiple ISPs. Methods are needed for allocating the costs of calls that are transmitted by multiple ISPs; new ways of billing users and making settlements among ISPs may be required.
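
The per-flow bookkeeping that concerns ISPs can be pictured with a toy admission-control table: each admitted reservation consumes link capacity and one table entry at the router, and a request that exceeds the remaining capacity is refused, the analog of a busy signal. This is a sketch of the idea only, not the RSVP protocol; the flow names and capacities are invented.

    class ToyReservationTable:
        """Per-flow state kept by one router in an int-serv-style model (sketch only)."""

        def __init__(self, link_capacity_kbps: int):
            self.capacity = link_capacity_kbps
            self.reservations = {}              # flow id -> reserved kbps

        def request(self, flow_id: str, kbps: int) -> bool:
            """Admit the flow if capacity remains; otherwise refuse (busy signal)."""
            if sum(self.reservations.values()) + kbps > self.capacity:
                return False
            self.reservations[flow_id] = kbps
            return True

        def release(self, flow_id: str) -> None:
            self.reservations.pop(flow_id, None)

    router = ToyReservationTable(link_capacity_kbps=1_000)
    print(router.request("videoconference A->B", 128))    # True
    print(router.request("telemetry feed", 512))           # True
    print(router.request("second videoconference", 512))   # False: only 360 kbps left
    print(len(router.reservations), "flows of state held at this one router")

The last line is the scalability worry in miniature: the stored state grows with the number of flows, and a backbone router would see vastly more of them than this toy table.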

These are QOS policy issues, discussed below. It is clear that the management of reservations that traverse multiple ISPs, each with its own administrative and provisioning policies, will be quite complex. Solutions to these problems must address the possibility that one or more ISPs might experience a failure during the life of a reservation, resulting in the need to reroute the traffic significantly (Birman, 1999). Such concerns, if not successfully addressed, could slow or thwart the deployment of int-serv capabilities throughout the Internet.

Alternative Quality of Service Options

Given the difficulties of existing approaches, a promising avenue of research focuses on QOS options that lie somewhere between diff-serv and int-serv. The goal of such approaches is to provide finer granularity and stronger guarantees than are provided by diff-serv while avoiding the scaling and administrative problems associated with int-serv's per-application reservations. One such approach, which is being pursued in the Integrated Services over Specific Link Layers working group of the IETF, combines the end-to-end service definitions and signaling of int-serv with the scalable queuing and classification techniques of diff-serv.12 Another approach, referred to as virtual overlay networks (VONs), uses the Internet to support the creation of isolated networks that would link multiple participants and offer desired levels of QOS, including some security and availability features (Birman, 1999). This approach would require routers to partition packet flows according to tags on the packets called flow identifiers. This process, in effect, allows the router to allocate a predetermined portion of its capabilities to particular tagged flows. Traffic within a tagged flow would compete with other packets on the same VON but not with traffic from other flows. An individual user (e.g., a hospital) could attempt to create multiple VONs to serve different applications so that each network could connect to different end points and offer different levels of service. Substantial research would be required before a VON could be implemented. Among the open questions are how to specify properties of an overlay network, how to dynamically administer resources on routers associated with an overlay network, how to avoid scaling issues as the number of overlays becomes large, and how to rapidly classify large numbers of flows.
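
A router that honors flow identifiers can be pictured as pinning a share of its outbound capacity to each overlay, so that traffic in one tagged flow competes only with other traffic in the same VON. The sketch below is a conceptual illustration under invented overlay names and shares; it is not an implementation of any proposed VON mechanism.

    # Conceptual sketch: a router allocating fixed shares of a link to tagged
    # overlay networks.  Overlay names and shares are invented.

    LINK_MBPS = 100.0

    overlay_shares = {                # fraction of the link pinned to each overlay
        "monitoring-von": 0.30,
        "radiology-von": 0.50,
        "best-effort": 0.20,
    }

    def capacity_for(flow_tag: str) -> float:
        """Bandwidth ceiling (Mbps) that packets carrying this flow identifier
        compete within on this router."""
        return LINK_MBPS * overlay_shares.get(flow_tag, overlay_shares["best-effort"])

    for tag in ["monitoring-von", "radiology-von", "untagged"]:
        print(f"{tag}: competes within {capacity_for(tag):.0f} Mbps")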

Quality of Service Policy

Because QOS typically involves providing improved service to some sets of packets at the expense of others, the deployment of QOS technologies requires a supporting policy infrastructure. In some applications, it is acceptable for QOS assurance to be lost for some short period of time, provided that such lapses occur infrequently. In other applications, however, the QOS guarantee must be met at all times, unless the network has become completely partitioned by failures. Work is needed to develop means of providing solid guarantees of QOS for critical information. Such mechanisms must scale well enough to be deployable in the Internet and will involve matters of policy (whose traffic deserves higher priority) as well as of technology.

In the int-serv environment, QOS policy is required to answer questions such as whether a particular request for resources should be admitted to the network, or whether the request should preempt an existing guarantee. In the former case, a decision to admit a reservation request might be based on some credentials provided by the requesting organization. For example, if a particular health care organization has paid for a certain level of service from its ISP and the request carries a certificate proving that it originates from that organization, then the request is admitted. In the latter case, a request might contain information identifying it as critically important (e.g., urgent patient monitoring information) so that it could preempt, say, a standard telephone call that previously had reserved resources. It is difficult to predict all the possible scenarios in which policy information might play a role in the allocation of QOS. The RSVP provides a flexible mechanism by which policy-related data (e.g., a certificate identifying a user, institution, or application) can be carried with a request. The Common Open Policy Service protocol has been defined to enable routers processing RSVP requests to exchange policy data with policy servers, which are devices that store policy information, such as the types of request that are allowed from a certain institution and the preemption priority of certain applications. Policy decisions are likely to be complex, because of the nature of health care and the number of stakeholders involved in decision making. Accordingly, the design of policy servers, which are responsible for storing policy data and making policy decisions, would benefit from the input of the health care community.

Policy also has a role in a diff-serv environment. For example, if an institution has an agreement with an ISP that it may transmit packets at a rate of up to 10 Mbps and will receive some sort of premium service, then the question of exactly which packets get treated as premium and which as standard is one of policy. The institution may wish to treat e-mail traffic as standard and use its allocation of premium traffic for more time-critical applications. There may be some cases in which data from the same application will be marked as either premium or standard, depending on other criteria. For example, a mammogram that will not be read by . . .

. . . plus $500 to $1,000 for the antenna (Skoro, 1999). Geosynchronous systems are limited by significant propagation delays (i.e., latency), which may preclude their use in some interactive applications; furthermore, their data-carrying capacity is distributed among a large number of users. LEO systems overcome some of the problems with delay, but the satellites move fast and have smaller coverage areas, meaning that large numbers of satellites are needed to provide global coverage and techniques are needed to manage the handoff of connections between satellites. High-power transmitters are needed to achieve high data rates, which implies large antennas and/or high-frequency operation. At higher frequencies, signals degrade more quickly in rain and other adverse weather conditions.

Overall, the deployment of broadband Internet services has been slow, albeit increasing, in the United States. Only about 1 percent of all U.S. households with Internet access had broadband connections in 1999 (Clark, 1999). A number of factors are at play, including technology, economics (both the cost of building broadband networks and the costs of service), and policy. Most U.S. households have yet to subscribe to broadband services because these connections are not offered in their geographic area, or because they are too expensive or not viewed as useful. The spotty coverage and high cost of high-bandwidth access technologies mean, unfortunately, that those who could benefit most from the health care applications of the Internet—such as people in rural areas with limited access to medical specialists—are the least likely to have high-speed Internet access.

Work in many areas, both technical and policy-related, will be required to enhance network access for health applications. In some cases, technical work will be pursued by the computing and communications industries without the participation of the health community. Even so, by voicing its needs, the health community will help ensure that they are met. In other cases, the requirements of the health care community may motivate research. Again, the articulation of specific needs will be necessary, and participation in research may be needed as well. The following section identifies several needs that are of particular interest to the health care community.

Privacy-Enhancing Technologies

A popular cartoon depicts a dog sitting in front of a computer monitor and is captioned, "On the Internet, nobody knows you're a dog." At one level, this statement is true—an ordinary Internet user can choose a cryptic pseudonym or screen name so that a typical e-mail recipient or chat room participant cannot easily identify the individual behind the name.

But, unless users encrypt e-mail before sending it, every router that forwards the message will be able to read it. Even if the message is encrypted, each router in its path knows the network address from which it was sent and its destination. The user's ISP knows the name and address of the individual who is paying for the service. If the user sends the message from a workplace, then the employer has the right to read it; even a free, public access system is not entirely safe because others may be looking over the user's shoulder. If the user browses the Web, then the Web server reached will very likely be able to learn a lot about the user's computer system, including the make, operating system, and browser. Accordingly, a user querying a database for information on a sensitive disease or condition might wish to take precautions.

There are powerful incentives for Web servers to monitor their visitors, because the data extracted have commercial value—they allow businesses to know which parts of their Web site are interesting to which visitors, thus supporting targeted advertising. Consumers may benefit from such advertising because they learn of new products in a manner that coincides with their tastes, but the implied lack of privacy can be a deterrent to the use of the Internet in certain health care applications.

Patients express considerable concern about health information. To protect their privacy, some patients withhold information from their care providers, pay their own health expenses (rather than submit claims to an insurance company), visit multiple care providers to prevent the development of a complete health record, or sometimes even avoid seeking care (Health Privacy Working Group, 1999). The Internet may ease some such concerns because it enables consumers to find health information without visiting their care providers, and it may eventually allow them to seek consultation from, or be examined by, multiple providers in different parts of the country. But without additional privacy protections, a host of new companies could collect information about personal health interests from consumers who browse the Web, exchange e-mail with providers, or purchase health products online. Profiles of patients' online activities can divulge considerable information about personal health concerns. Patients have little control over how that information might be used—or to whom it may be sold.

Concerns about anonymity extend beyond consumer uses of the Internet. Care providers and pharmaceutical researchers, too, express concerns about the privacy of their Internet use. Some care providers wonder if their use of the Internet to research diagnostic information might be construed as a lack of knowledge in certain areas. If such information were tracked by—or made available to—employers or consumer groups, then it could hurt providers' practices. Pharmaceutical companies are concerned that the use of online databases by their researchers may divulge secrets about the company's proprietary research.

These concerns can be addressed in a number of ways, both technical and policy-oriented, but they need to be put to rest if the Internet is to be used more pervasively for health applications. Some mechanisms are available that users can exploit to reduce their exposure to prying eyes on the Internet. Most attempt to protect the anonymity of users, so that the sender of a message or a visitor to a Web site cannot be identified by the recipient or the Web server. Encryption is the basic engine that underlies all of these mechanisms. Until now, most research on anonymous communication has been carried out informally and without specific attention to health care applications. Most of the existing mechanisms were designed and built in the context of the Internet, and the future development of Internet infrastructure may be intertwined with their use.

The benefits and dangers of supporting anonymous communication mechanisms have been the subjects of recurrent (and appropriate) discussions. Health care offers one of the most compelling cases for the benefits. Such mechanisms could make people feel safe in seeking out information about their own health problems, thereby leading to earlier diagnosis and better treatment. They could also be used to solicit reports about the spread of, for example, sexually transmitted diseases and other health problems that individuals may prefer to report anonymously. In addition, anonymous communication mechanisms can help users limit the capabilities of others to build databases of their behavior or can reduce the extent to which they are the targets of undesired commercial solicitations. For all of these reasons, it would be appropriate for the funders of health care research to support investigations into anonymous communication technologies for future Internet architectures.

Anonymous E-mail

Encryption of e-mail can prevent intermediaries in the network from reading the messages but cannot prevent them from knowing that the sender and receiver are communicating; likewise, it cannot necessarily hide the identity of the sender from the receiver.28 The first mechanisms developed to support anonymous e-mail messages were called re-mailers. These mechanisms would permit a user to register a pseudonym with the server. Mail coming from the user would then be re-mailed by the server, which would strip out identifying material in headers and make it appear that the mail originated at the re-mailer. The re-mailer could forward replies sent to "pseudonym@remailer" to the registered user. For additional protection, the user could encrypt traffic sent to the re-mailer, so that a wiretapper with connections to the re-mailer's inputs and outputs could not easily defeat the mechanism.
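
The core re-mailer behavior described above (register a pseudonym, strip identifying headers on the way out, map the pseudonym back to the real address for replies) can be sketched in a few lines. This is a toy illustration with invented names and addresses, not the interface of any deployed re-mailer, and it omits the encryption, delivery, and abuse handling a real service needs.

    class ToyRemailer:
        """Toy pseudonymous re-mailer: strip identity on outgoing mail,
        map the pseudonym back to the real address for replies."""

        def __init__(self, domain: str = "remailer.example"):
            self.domain = domain
            self.pseudonyms = {}                # pseudonym -> real address

        def register(self, real_address: str, pseudonym: str) -> str:
            self.pseudonyms[pseudonym] = real_address
            return f"{pseudonym}@{self.domain}"

        def remail(self, message: dict, pseudonym: str) -> dict:
            """Re-send an outgoing message so it appears to come from the re-mailer."""
            outgoing = {k: v for k, v in message.items()
                        if k not in ("From", "Reply-To", "Received")}   # strip identity
            outgoing["From"] = f"{pseudonym}@{self.domain}"
            return outgoing

        def forward_reply(self, reply: dict) -> str:
            """Look up who should actually receive a reply sent to the pseudonym."""
            pseudonym = reply["To"].split("@")[0]
            return self.pseudonyms[pseudonym]

    r = ToyRemailer()
    alias = r.register("pat@clinic.example", "bluejay17")
    out = r.remail({"From": "pat@clinic.example", "To": "list@example.org",
                    "Body": "question about a diagnosis"}, "bluejay17")
    print(out["From"])                                      # bluejay17@remailer.example
    print(r.forward_reply({"To": alias, "Body": "reply"}))  # pat@clinic.example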

The wiretapper still could probably identify which user was sending mail to which destination by looking at the timing and lengths of messages sent and forwarded by the re-mailer. A single re-mailer remains a point of trust and vulnerability, because it knows the mapping between identities and pseudonyms. This vulnerability has been exploited in legal attacks: for instance, the operator of a widely used Finnish re-mailer discontinued operations when he found that, under Finnish law, he could be forced to reveal the identities of his subscribers. Some U.S. companies have revealed pseudonym-identity mappings when subpoenaed in civil cases.

To provide stronger protection, David Chaum (1981) proposed a network of re-mailers, called mixes. In this scheme, each e-mail message traverses a sequence of mixes and is reencrypted for transit across each link. In addition, each mix collects a set of messages over a period of time and reorders the set before forwarding them, so that even an observer who could trace the sequence of arrivals and departures from all mixes would be unable to trace a message through the network. Ad hoc networks of re-mailers that incorporate some of these approaches are now operating on the Internet. A commercial service, Anonymizer.com, provides an anonymous re-mailing facility that permits a sender, free of charge, to specify a chain of re-mailers.

Protected Web Browsing

Because forwarding of e-mail does not require a real-time connection from sender to receiver, it is reasonably easy to protect sender anonymity, at least partially. Web browsing, because it depends on a reasonably prompt interaction between client and server, is more difficult to protect. The timing of message arrival and departure may make it obvious to an observer that two parties are communicating, even if the message contents and addresses are obscured. The problem of how to hide the identity of a user browsing the Web from a server that it accesses can be broken into two parts: first, how to prevent an eavesdropper from being able to trace the path of the traffic and, second, how to prevent the server from sending traffic over the path that causes the client (against the user's wishes) to reveal information that could identify the user.29 Most of the techniques developed for protecting Web browsing have been, or could be, adapted to support anonymous e-mail and other functions (e.g., file transfer, news, VPNs) as well.

A straightforward approach to providing anonymous Web browsing uses a trusted intermediary, analogous to a simple re-mailer. The user forwards the universal resource locator (URL) of interest to the intermediary, which strips any identifying information from the requests, perhaps even providing an alias, and forwards the request to the intended server.30
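
The batching and reordering step that distinguishes a mix from a simple re-mailer, described above, is easy to sketch: hold messages until a batch has accumulated, then emit the batch in random order so that arrival order cannot be matched to departure order. The sketch below is illustrative only and omits the per-hop encryption that Chaum's design also requires.

    import random

    class ToyMix:
        """One mix node: batch incoming messages, then flush them in random
        order so timing cannot link inputs to outputs.  The re-encryption at
        each hop, essential in Chaum's scheme, is omitted here."""

        def __init__(self, batch_size: int = 4):
            self.batch_size = batch_size
            self.pool = []

        def accept(self, message: str) -> list:
            """Queue a message; return a flushed batch once the pool is full."""
            self.pool.append(message)
            if len(self.pool) < self.batch_size:
                return []
            batch, self.pool = self.pool, []
            random.shuffle(batch)               # output order != arrival order
            return batch

    mix = ToyMix(batch_size=4)
    for msg in ["m1", "m2", "m3", "m4"]:
        flushed = mix.accept(msg)
        if flushed:
            print("flushed in shuffled order:", flushed)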

The intermediary also forwards any data returned to the appropriate requester. Anonymizer.com, the Rewebber (formerly Janus), and Proxymate (formerly Lucent Personal Web Assistant) all provide services of this sort. The communication between the client and the trusted intermediary can be protected from simple eavesdropping by using SSL over this link. The Rewebber also supports anonymous publishing by providing encrypted URLs. A user wishing to retrieve data from an anonymous server obtains an encrypted URL for that server (this encrypted version may be freely distributed). The Rewebber then decrypts the URL, forwards the request to the hidden server, collects the reply, and returns it to the user.

Technology to hide the communication path, based on an enhancement of Chaum's Mix networks, has been developed and prototyped by the Naval Research Laboratory in its Onion Routing Project (Reed et al., 1998). This scheme creates a bidirectional, real-time connection from client to server by initiating a sequence of application-layer connections within a set of nodes acting as mixes. The path through the network is defined by an "onion" (a layered, multiply-encrypted data structure) that is created by the user initiating the connection and transmitted to the network. Only the onion's creator knows the complete path; each node in the path can determine only its predecessor and successor, so an attack on the node operators will be difficult to execute. This strategy also limits the damage that a compromised onion routing node can do; as long as either the first or last node in the path is trustworthy, then it is difficult for an attacker to reconstruct the path. All the packets in the network have a fixed length and are mixed and re-encrypted on each hop. In the event that the submitted traffic rate is too low to assure adequate protection, padding (dummy packets) is introduced. These defenses can be expected to make it extremely difficult to use traffic analysis to deduce who is talking to whom, even if an eavesdropper can see all links.

Onion routing needs a separate screening mechanism to anonymize the data flowing between client and server, so that the server is blocked from sending messages to the client that will cause client software to reveal its identity. Although the Onion Routing Project has implemented an anonymizing proxy to perform this type of blocking, a server can play any of an increasing number of tricks to determine the client's identity. Other projects, such as Proxymate, have specifically concentrated on hiding the identity of the client from the server and have devised more robust techniques for doing so than those developed under the Onion Routing project (Bleichenbacher et al., 1998). These techniques can be combined with onion routing to provide strong protection against both traffic analysis and servers that might try to identify their clients. A system for protecting personal identity on the Web that appears to be closely based on onion routing is being offered commercially by Zero Knowledge Systems (1998) of Montreal.
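
The layered "onion" is the heart of the scheme just described: the sender wraps the request once per node in the chosen path, and each node peels a single layer, learning only its predecessor and successor. The sketch below models the layers as nested records and stands in for public-key encryption with a simple ownership check, so it illustrates the structure only and provides no actual confidentiality; the node names are invented.

    # Structural sketch of an onion.  Real onion routing encrypts each layer
    # with that node's key; the "locked_for" field here only stands in for that.

    def build_onion(path, payload):
        """Wrap the payload in one layer per node, innermost layer for the exit node."""
        onion = {"deliver": payload}
        for node in reversed(path):
            onion = {"locked_for": node, "inner": onion}
        return onion

    def peel(node, onion):
        """A node removes its own layer and learns only what to forward next."""
        assert onion["locked_for"] == node, "layer not addressed to this node"
        return onion["inner"]

    path = ["nodeA", "nodeB", "nodeC"]          # chosen by the sender
    onion = build_onion(path, "request for https://server.example/")

    layer = onion
    for node in path:
        layer = peel(node, layer)
        nxt = layer.get("locked_for", "the final destination")
        print(f"{node} learns only the next hop: {nxt}")
    print("Exit node delivers:", layer["deliver"])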

A different approach has been prototyped by AT&T researchers in their Crowds system (Reiter and Rubin, 1998). Instead of creating a separate network of mixes to forward traffic, each participant in a Crowd runs a piece of software (called a jondo) that forwards traffic either to other nodes in the Crowd or to its final destination. In effect, when a member of a Crowd receives a packet, it flips a weighted coin. If the coin comes up heads, then the participant decrypts the packet and sends it directly to its Internet destination address. Otherwise, it forwards the packet to another randomly chosen jondo. The Web server receiving the packet can only identify the jondo that last forwarded the packet; it cannot deduce the packet's true origin. Return traffic follows the same randomly generated path in the reverse direction.

Anonymous Payment

In the physical world, individuals who do not want stores to track their purchases can pay cash. The standard approach for buying items on the Internet, however, is to use a credit card, which is guaranteed to reveal the purchaser's identity. Several schemes based on cryptographic mechanisms can enable anonymous payment over the Internet. Chaum (1989) pioneered research in this field, but the rise of e-commerce has triggered much additional work in recent years. The basic idea is to create the electronic analog of a coin—a special number. The merchant must be able to determine that the coin is valid (not counterfeit) without requiring the identity of the individual presenting it. Because computers can copy numbers so easily, a basic problem is to prevent a coin from being spent twice. Although Chaum and others have invented schemes that solve this problem and yet provide anonymity (at least for users who do not try to commit fraud), it has proven difficult to transfer these solutions into the world of commerce. Law enforcement authorities express concern over such technologies because of the potential for their use (or misuse) in money laundering and tax evasion.

Anonymous Data Released from Sensitive Databases

For many years, the U.S. Bureau of the Census has been charged with releasing statistically valid data drawn from census forms without permitting individual identities to be inferred. The problem of constructing a statistical database that can protect individual identities has long been known to researchers (Denning et al., 1979; Schlörer, 1981; Denning and Schlörer, 1983).
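
One of the protections described just below, restructuring released tables so that no cell contains a very small number of individuals, can be sketched directly: counts that fall under a threshold are withheld before release. The records, threshold, and field names here are invented for illustration.

    from collections import Counter

    # Toy small-cell suppression before releasing a tabulation.  Invented data.
    records = [
        {"zip": "53715", "diagnosis": "asthma"},
        {"zip": "53715", "diagnosis": "asthma"},
        {"zip": "53715", "diagnosis": "asthma"},
        {"zip": "53715", "diagnosis": "rare disorder"},   # a cell of size 1
        {"zip": "53711", "diagnosis": "asthma"},
        {"zip": "53711", "diagnosis": "asthma"},
    ]

    MINIMUM_CELL_SIZE = 3          # cells smaller than this are withheld

    counts = Counter((r["zip"], r["diagnosis"]) for r in records)
    released = {cell: (n if n >= MINIMUM_CELL_SIZE else "suppressed")
                for cell, n in counts.items()}

    for cell, value in sorted(released.items()):
        print(cell, value)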

To limit the possibility of identification, statisticians have developed several techniques, such as restructuring tables so that no cells contain very small numbers of individuals and perturbing individual data records so that statistical properties are preserved but individual records no longer reflect specific individuals (Cox, 1988). Medical records have often been disclosed to researchers under the constraint that the results of the research not violate patient confidentiality, and, in general, researchers have lived up to this requirement. Recently, researchers have shown how easily even data stripped of obvious identifying information (name, address, social security number, telephone number) may still disclose individual identity, and they have proposed both technical approaches to reduce the chance of confidentiality compromises and guidelines for future release policies (Sweeney, 1998). The benefits of having full access to relevant data for research purposes and the difficulty of rendering data anonymous without distorting it are likely to require a continuing trust between researcher and subject.

An earlier report by the Computer Science and Telecommunications Board (1997a) discussed systemic flows of information in the health care industry and proposed specific criteria for universal patient identifiers. In particular, the report called for technical mechanisms that would help control linkages among health care databases held by different organizations, reveal when unauthorized linkages were made, and support appropriate linking. Because Internet connectivity greatly facilitates such linkages, it is appropriate to renew the call for research into such mechanisms in the present report.

Conclusion

As the discussion in this chapter demonstrates, ongoing efforts to enhance the capabilities of the Internet will produce many benefits for the health community. They will provide mechanisms for offering QOS guarantees, better securing health information, expanding broadband access options for consumers, and protecting consumer privacy. At the same time, the technologies expected to be deployed across the Internet in the near future will not fully meet the needs of critical health care applications. In particular, QOS offerings may not meet the need for dynamically variable service between communicating entities. Security technologies may not provide for the widespread issuance of certificates to health care consumers. And the Internet will not necessarily provide the degree of reliability needed for mission-critical health applications. Although much can be done with the technologies currently planned, additional effort will be needed to make the Internet even more useful to the health community.

One way to ensure that health-related needs are reflected in networking research and development is to increase the interaction between the health and technical communities. As researchers attest, most networking research is conducted with some potential applications in mind. Those applications are shaped by interactions with system users who can envision new applications. To date, interaction between health informatics professionals and networking researchers has been limited. By contrast, the interests of industries such as automobile manufacturing and banking are well represented within the networking community, in part because of their participation in the IETF and other networking groups. The health community may need to better engage these groups to ensure that health interests are considered.

Bibliography

Birman, K.P. 1999. The Next Generation Internet: Unsafe at Any Speed? Department of Computer Science Technical Report, Draft of October 21. Cornell University, Ithaca, N.Y.
Blake, S., et al. 1998. An Architecture for Differentiated Services. IETF Request for Comment (RFC) 2475, December.
Bleichenbacher, D., E. Gabber, P. Gibbons, Y. Matias, and A. Mayer. 1998. "On Secure and Pseudonymous Client-Relationships with Multiple Servers," pp. 99-108 in Proceedings of the Third USENIX Electronic Commerce Workshop, Boston, September.
Braden, R., S. Shenker, and D. Clark. 1994. Integrated Services in the Internet Architecture: An Overview. IETF Request for Comment (RFC) 1633, June.
Braden, R., L. Zhang, S. Berson, S. Herzog, and S. Jamin. 1997. Resource ReSerVation Protocol (RSVP): Version 1 Functional Specification. IETF Request for Comment (RFC) 2205, September.
Chaum, D. 1981. "Untraceable Electronic Mail, Return Addresses, and Digital Pseudonyms," Communications of the ACM 24(2):84-88.
Chaum, D. 1989. "Privacy Protected Payments: Unconditional Payer and/or Payee Untraceability," pp. 69-93 in Proceedings of SMARTCARD 2000, D. Chaum and I. Schaumuller-Bichl, eds. North-Holland, Amsterdam.
Clark, D. 1999. "The Internet of Tomorrow," Science 285(July 16):353.
Clark, D., and J. Wroclawski. 1997. An Approach to Service Allocation in the Internet. IETF Draft Report, July. Massachusetts Institute of Technology, Cambridge, Mass. Available online at <http://diffserv.lcs.mit.edu/Drafts/draft-clark-diff-svc-alloc-00.txt>.
Computer Science and Telecommunications Board (CSTB), National Research Council. 1994. Realizing the Information Future: The Internet and Beyond. National Academy Press, Washington, D.C.
Computer Science and Telecommunications Board (CSTB), National Research Council. 1996. The Unpredictable Certainty: Information Infrastructure Through 2000. National Academy Press, Washington, D.C.
Computer Science and Telecommunications Board (CSTB), National Research Council. 1997a. For the Record: Protecting Electronic Health Information. National Academy Press, Washington, D.C.

Computer Science and Telecommunications Board (CSTB), National Research Council. 1997b. Modeling and Simulation: Linking Entertainment and Defense. National Academy Press, Washington, D.C.
Computer Science and Telecommunications Board (CSTB), National Research Council. 1999. Trust in Cyberspace. National Academy Press, Washington, D.C.
Cox, L.H. 1988. "Modeling and Controlling User Inference," pp. 167-171 in Database Security: Status and Prospects, C. Landwehr, ed. North-Holland, Amsterdam.
Denning, D.E., and J. Schlörer. 1983. "Inference Controls for Statistical Databases," IEEE Computer 16(7):69-82.
Denning, D.E., P.J. Denning, and M.D. Schwartz. 1979. "The Tracker: A Threat to Statistical Database Security," ACM Transactions on Database Systems 4(1):76-96.
Dierks, T., and C. Allen. 1999. The TLS Protocol Version 1.0. IETF Request for Comment (RFC) 2246, January.
Ellison, Carl, and Bruce Schneier. 2000. "Ten Risks of PKI: What You're Not Being Told About Public Key Infrastructure," Computer Security Journal 16(1):1-7.
Goldberg, I., and D. Wagner. 1998. "TAZ Servers and the Rewebber Network: Enabling Anonymous Publishing on the World Wide Web," First Monday 3(4). Available online at <http://www.rewebber.com>.
Halabi, B. 1997. Internet Routing Architectures. Cisco Press, Indianapolis, Ind.
Hawley, G.T. 1999. "Broadband by Phone," Scientific American 281(4):102-103.
Health Privacy Working Group. 1999. Best Principles for Health Privacy. Institute for Health Care Research and Policy, Georgetown University, Washington, D.C.
Huitema, C. 1995. Routing in the Internet. Prentice-Hall, Englewood Cliffs, N.J.
Institute of Medicine (IOM). 1997. The Computer-Based Patient Record: An Essential Technology for Health Care, rev. ed. Dick, R.S., E.B. Steen, and D.E. Detmer, eds. National Academy Press, Washington, D.C.
Jacobson, V. 1988. "Congestion Avoidance and Control," Computer Communication Review 18(4):314-329.
Kent, S., and R. Atkinson. 1998a. Security Architecture for the Internet Protocol. IETF Request for Comment (RFC) 2401, November.
Kent, S., and R. Atkinson. 1998b. IP Authentication Header. IETF Request for Comment (RFC) 2402, November.
Kent, S., and R. Atkinson. 1998c. IP Encapsulating Security Payload (ESP). IETF Request for Comment (RFC) 2406, November.
Marbach, W.D. 1983. "Beware: Hackers at Play," Newsweek 102(September 5):42-46.
Paxson, V. 1997. "End-to-End Routing Behavior in the Internet," IEEE/ACM Transactions on Networking 5(October):601-615.
Perlman, R. 1992. Interconnections: Bridges and Routers. Addison-Wesley, Reading, Mass.
Peterson, L., and B. Davie. 2000. Computer Networks: A Systems Approach. Morgan Kaufmann, San Francisco.
Reed, M.G., P.F. Syverson, and D.M. Goldschlag. 1998. "Anonymous Connections and Onion Routing," IEEE Journal of Selected Areas in Communication 16(4):482-494.
Reiter, M.K., and A.D. Rubin. 1998. "Crowds: Anonymity of Web Transactions," ACM Transactions on Information Systems Security 1(1):66-92.
Reuters. 1999. "AMA, Intel to Boost Online Health Security," October 13.
Schlörer, J. 1981. "Security of Statistical Databases: Multidimensional Transformation," ACM Transactions on Database Systems 6(1):95-112.
Shenker, S. 1995. "Fundamental Design Issues for the Future Internet," IEEE Journal of Selected Areas in Communication 13(7):1176-1188. Available online at <www.lcs.mit.edu/anaweb/pdf-papers/shenker.pdf>.
Skoro, J. 1999. "LMDS: Broadband Wireless Access," Scientific American 281(4):108-109.

Page 176

Sweeney, L. 1998. "Datafly: A System for Providing Anonymity in Medical Data," in Database Security XI: Status and Prospects, T.Y. Lin and S. Qian, eds. Chapman & Hall, New York.

Zero Knowledge Systems, Inc. 1998. The Freedom Network Architecture, Version 1.0. Available from ZKS, 3981 St. Laurent Blvd., Montreal, Quebec, Canada. December.

Zimmermann, P. 1994. The Official PGP User's Guide. Technical report. MIT Press, Cambridge, Mass.

Notes

1. Evidence of such latencies can be seen in data collected by the National Laboratory for Applied Network Research, which are available at <http://www.nlanr.net>.

2. ISPs typically have POPs in major urban areas; a large provider might have 30 or more POPs in the United States.

3. The 30 Tbps figure was calculated by multiplying the number of fiber strands in a cable (30) by the number of wavelengths that can be transmitted over each strand (100) and the capacity of each wavelength (10 Gbps); a short arithmetic check appears after these notes. A terabit per second (Tbps) is 10^12 (one thousand billion) bits per second.

4. SONET is a standard developed by telephone companies for transmitting digitized voice and data over optical fiber.

5. See <http://209.249.142.16/nnpm/owa/NRpublicreports.usagemonthly>.

6. The 10 Gbps figure results from multiplying 10 Mbps by 1,000 applications (10 Mbps × 1,000 = 10 Gbps).

7. For example, even if available bandwidth were 10 times greater than the average required, the load on certain links over short time periods could be large enough to impose long delays on those links.

8. IP is a connectionless, packet-switching protocol that serves as the internetwork layer of the TCP/IP protocol suite. It provides packet routing, fragmentation of messages, and reassembly.

9. Because of its reliance on RSVP, the int-serv model is sometimes referred to as the RSVP model.

10. With RSVP, the load on a router can be expected to increase at least linearly as the number of end points increases. Growth may even be quadratic, rising with the square of the number of end points (Birman, 1999).

11. An example of a scaling issue for today's ISPs is the size of routing tables, which currently hold about 60,000 routes (address prefixes) each. Entries in the routing table consume memory, and the processing power needed to update the tables increases with their size. Such tables must grow much more slowly than the numbers of users or individual applications; storing RSVP state that grows in direct proportion to the number of application flows would therefore be infeasible.

12. The charter of the Integrated Services Over Specific Link Layers working group of the IETF is available online at <http://www.ietf.org/html.charters/issll-charter.htm>.

13. The Department of Defense has a long-standing interest in using multicast technology to support distributed simulations. See CSTB (1997b).

14. One of the more notorious cases occurred when the "414" group broke into a machine at the National Cancer Institute in 1982, although no damage from the intrusion was detected. See Marbach (1983).

15. Unix's Network File System (NFS) protocol, commonly used to access file systems across an Internet connection, has weaknesses that allow a "mount point" to be passed to unauthorized systems. Surreptitious programs called Trojan horses can be used to perform actions that are neither desired by nor known to the user.
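The bandwidth figures in notes 3 and 6 can be cross-checked with a few lines of arithmetic. The following minimal Python sketch is illustrative only; the strand, wavelength, and per-application values are simply the assumptions stated in the notes, not measured data.

    # Cross-check of the figures cited in notes 3 and 6.
    # All input values are the assumptions stated in the notes above.
    strands_per_cable = 30        # fiber strands in one cable (note 3)
    wavelengths_per_strand = 100  # wavelengths carried on each strand (note 3)
    gbps_per_wavelength = 10      # capacity of each wavelength, in Gbps (note 3)

    cable_capacity_gbps = strands_per_cable * wavelengths_per_strand * gbps_per_wavelength
    print(cable_capacity_gbps / 1000, "Tbps")   # prints 30.0 Tbps

    applications = 1000           # simultaneous applications (note 6)
    mbps_per_application = 10     # bandwidth per application, in Mbps (note 6)
    print(applications * mbps_per_application / 1000, "Gbps")  # prints 10.0 Gbps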

Page 177

16. Most U.S. health care providers continue to maintain patient records on paper, but current trends in clinical care, consumer health, public health, and health finance all indicate a shift to electronic records. Without such a shift, the health community's ability to take full advantage of improved networking capabilities would be severely limited. With such a shift, the need for convenient, effective, and flexible means of ensuring security will be paramount.

17. Tools such as Back Orifice can enable a hacker using the Internet to remotely control computers running Windows 95, Windows 98, or Windows NT. Using Back Orifice, hackers can open and close programs, reboot computers, and so on. The Back Orifice server has to be willingly accepted and run by its host before it can be used, but it is usually distributed disguised as something else. Other such clandestine packages also exist, most notably Loki.

18. For a discussion of key distribution centers, see CSTB (1999), pp. 127-128.

19. For a discussion of some of the limitations of PKI systems, see Ellison and Schneier (2000).

20. It should be noted that when SSL is used, data are decrypted the moment they reach their destination and are likely to be stored on a server in unencrypted form, making them vulnerable to subsequent compromise. A number of approaches can be taken to protect this information, including reencryption (a brief sketch appears after these notes), which presents its own challenges, not the least of which is ensuring that the key to an encrypted database is not lost or compromised.

21. Whereas one organization may issue a certificate to anyone who requests one and fills out an application, another may require stronger proof of identity, such as a birth certificate and passport. These differences affect the degree of trust that communicating parties may place in the certificates when they are presented for online transactions.

22. Additional information on the Intel initiative is available online at <http://www.intel.com/intel/e-health/>.

23. Participating organizations in the HealthKey initiative are the Massachusetts Health Data Consortium, the Minnesota Health Data Institute, the North Carolina Healthcare Information and Communications Alliance, the Utah Health Information Network, and the Community Health Information Technology Alliance, based in the Pacific Northwest. Additional information on the program is available online at <http://www.healthkey.org>.

24. Users can do this, for example, by registering their public keys with a public facility, such as the PGP key server at the Massachusetts Institute of Technology.

25. Computer scientists generally consider system (or network) availability to be an element of security, along with confidentiality and integrity. As such, availability is discussed within the security section of this chapter. Other chapters of this report discuss availability as a separate consideration to highlight the different requirements that health applications have for confidentiality, integrity, and availability.

26. Cable modems and DSL services are typically not attractive to businesses, either because the number of connected hosts (IP addresses) is limited or because the guaranteed minimum delivered bandwidth is low. In the San Francisco Bay Area, asymmetric DSL delivers anywhere from 384 kbps to 1.5 Mbps, depending on many factors. In other areas, DSL with 256 kbps/64 kbps down/up link speeds costs approximately $50 per month, but the cost rises quickly to roughly $700 per month for 1.5 Mbps/768 kbps down/up link speeds.
27. Quality of service mechanisms, such as integrated services, might help ameliorate contention for cable bandwidth, but only if the technology is widely deployed.

28. There is at least one way to hide the identity of the sender: e-mail sender addresses can be spoofed in virtually any e-mail application.

29. Intel Corporation introduced an identifying number into its Pentium microprocessors to help servers identify client machines, in the hope of facilitating electronic commerce. Public concern over the privacy implications of this capability led the company to take the additional step of providing a means to prevent the number from being revealed.

30. This is essentially what a filtering firewall does: it hides the identities (IP addresses) of the machines behind it.
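Note 20 observes that data protected by SSL in transit are decrypted on arrival and may then sit on a server in unencrypted form. The following minimal Python sketch illustrates the reencryption approach mentioned there under stated assumptions: it uses the third-party cryptography package, and store_record/load_record are hypothetical placeholders for a persistence layer, not anything described in this report.

    # Minimal sketch: reencrypt data after TLS termination, before storage.
    # Assumes the third-party "cryptography" package (Fernet provides
    # authenticated symmetric encryption). store_record()/load_record()
    # are hypothetical placeholders for a real persistence layer.
    from cryptography.fernet import Fernet

    # In practice this key would be held in a key-management system; losing
    # or exposing it is exactly the risk note 20 warns about.
    storage_key = Fernet.generate_key()
    cipher = Fernet(storage_key)

    def protect_and_store(plaintext: bytes) -> bytes:
        """Encrypt a record received over a TLS connection before storing it."""
        token = cipher.encrypt(plaintext)
        # store_record(token)  # hypothetical persistence call
        return token

    def retrieve_and_decrypt(token: bytes) -> bytes:
        """Decrypt a previously stored record for an authorized reader."""
        # token = load_record(record_id)  # hypothetical persistence call
        return cipher.decrypt(token)

    # Example use:
    stored = protect_and_store(b"example record, not real patient data")
    assert retrieve_and_decrypt(stored) == b"example record, not real patient data"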