THE INTERNET'S COMING OF AGE

Overview and Recommendations

OVERVIEW

The rhetoric of the Internet revolution surrounds us. The transformation of a research network used by a few tens of thousands of researchers into a global communications infrastructure vital to many aspects of life is celebrated as folk history and pointed to as the basis for a new economic order. Electronic commerce has transformed the way in which many individual consumers, companies, and governments buy and sell products and services. E-mail, chat rooms, and other forms of communication have become common in the workplace and many homes. The Internet provides near-instant access to a wide range of multimedia content and has become an important channel for software distribution.

Where is the Internet going, and how is it getting there? All indications are that the Internet revolution (given its impact, "revolution" seems the appropriate label) is not nearly over. Just during the course of the authoring committee's work, there were a number of developments that are likely to have long-lasting impact; salient among them are the widening deployment of broadband residential Internet service and the beginnings of commercial deployment of mobile wireless devices that have Internet connectivity. Other recent developments include the advent of new interconnection models and businesses and the widespread use of new content delivery mechanisms designed as overlays to the Internet. Meanwhile, innovation continues in the applications and services that run over the Internet, exemplified by the rise of interactive chat and games and various forms of Internet-based telephony. Napster and
its kin, which enable decentralized, peer-to-peer distribution of information, are challenging conventional business models and stimulating yet more applications and new businesses. The unprecedented speed at which software can be distributed over the Internet means that dissemination of an innovation is not limited by the production and distribution of a physical artifact.

Further complicating the picture is uncertainty about which developments will prove to be transient and which will have a lasting impact. While the World Wide Web has indeed had a great impact, mid-1990s hype about "push" technologies proved unfounded, given their comparatively limited impact on either Internet users or businesses. Just a few years ago, experts and pundits predicted that congestion of the Internet backbone was an imminent peril, a forecast that proved incorrect, thanks to improved backbone speeds. Such uncertainty means that the planning process for businesses, policy makers, and others focused on the Internet and its uses can easily be overtaken by events and that the importance of specific events is hard to appraise, especially in the short term.

This uncertainty was a confounding factor in the project that culminated in this report: technical issues can be resolved in multiple ways in a dynamic environment, and the consequent diversity of opinion sometimes makes it hard to reach consensus. The middle of a revolution is a difficult point from which to gauge long-term outcomes. Inherent uncertainty clashes with growing political pressures on policy makers to respond to apparent trends and to the side effects of Internet activities. The actions of the businesses that provide Internet services, content, and applications fill the daily news.
Increasingly, these businesses are the subject of public scrutiny and governmental inquiry into the implications of their actions, which range from mergers involving Internet service providers to practices surrounding personal information gathered from people visiting Web sites. The public debate about the Internet often reveals significant gaps in understanding of the Internet, and those gaps can compromise the decisions and investments that should be made in order to gain the most from what the Internet has to offer.

To shed light on appropriate actions and responses to the Internet revolution, this report, written by a committee with an in-depth understanding of the Internet's technologies and its core businesses, undertakes an assessment of the Internet along several lines:

· Reviewing the fundamental technical design principles that have helped shape the Internet's success;
· Considering the state of the art as Internet technology continues to
evolve, with an eye toward identifying technical issues that merit attention;
· Exploring operational and management issues that require attention by those who develop, operate, and use the Internet; and
· Developing guiding principles for governments to use as they confront the Internet-related issues that arise in different spheres.

With these tasks in mind, the committee's assessment focuses on five themes: (1) the Internet's basic design features; (2) its scalability, reliability, and robustness; (3) interconnection and openness; (4) collisions between the Internet and other communications-based industries, particularly those that long predate the Internet; and (5) broader social policy issues. This chapter covers the key points made in the main text and goes on to recommend where investment will be required to head off future problems and to maximize the economic and social benefits that can flow from the use of the Internet. It concludes by articulating some guiding principles for those who formulate Internet policy and regulation.

Success by Design

The Internet is a composite of tens of thousands of individually owned and operated networks that are interconnected, providing the user with the illusion of a single network. A customer who purchases Internet service is actually purchasing service from a particular Internet service provider (ISP) connected to this network of networks. The ISP in turn enters into business arrangements for connectivity with other service providers to ensure that the customer's data can move smoothly among the various parts of the Internet. The networks that make up the Internet are composed of communications links, which carry data from one point to another, and routers, which direct the communications flow between links and thus, ultimately, from senders to receivers.
Communications links to users may employ different communications media, from telephone lines to cables originally deployed for use in cable television systems to satellite and other wireless circuits. Internal to networks, especially larger networks, are links, typically optical fiber cables, that can carry relatively large amounts of traffic. The largest of these links are commonly said to make up the Internet's "backbone," although this definition is not precise and even the backbone is not monolithic.

The networks that compose the Internet share a common architecture (how the components of the networks interrelate) and protocols (standards governing the interchange of data) that enable communication within and among them. The architecture and protocols are shaped by
fundamental design principles adopted by the early builders of the Internet, including the following:

· "Hourglass" architecture. The Internet is designed to operate over different underlying communications technologies, including those yet to be introduced, and to support multiple and evolving applications and services. It does not impede or restrict particular applications (although users and ISPs may make optimizations reflecting the requirements of particular applications or classes of applications). Such an architecture enables people to write applications that run over it without knowing details about the configuration of the networks over which they run and without involving the network operators. This critical separation between the network technology and the higher-level services through which users actually interact with the Internet can be visualized as an hourglass, in which the narrow waist represents the basic network service provided by the Internet and the wider regions above and below represent the applications and underlying communications technologies, respectively.
· End-to-end architecture. Edge-based innovation derives from an early fundamental design decision that the Internet should have an end-to-end architecture. The network, which provides a communications fabric connecting the many computers at its ends, offers a very basic level of service, data transport, while the intelligence, the information processing needed to provide applications, is located in or close to the devices attached to the edge of the network.
· Scalability. The Internet's design enables it to support a growing amount of communications: growth in the number of users and attached devices and growth in the volume of communications per device and in total, properties referred to as "scale."
Nonetheless, as is discussed below, the Internet currently faces and will continue to face scaling challenges that will require significant effort by those who design and operate it.
· Distributed design and decentralized control. Control of the network (from the standpoint of, for instance, how data packets are routed through the Internet) is distributed, except for a few key functions, namely, the allocation of address blocks and the management of top-level domain names in the Domain Name System. No single entity (organization, corporation, or government body) controls the Internet in its entirety.

These design principles mean that the Internet is open from the standpoint of users, service providers, and network providers, and as a result it has been open to change in the associated industry base as well as in the technologies they supply and use. A wide range of applications and
services, some leveraging the commonality of the Internet protocol (IP) and others also leveraging standards layered on top of IP, most notably e-mail and the Web interface, have flourished. Observations about these design principles have already begun to be introduced into regulatory proceedings, and the merit of sustaining them is recognized by principals in the Internet technical community, including the members of this committee.

Sustaining the Growth of the Internet

The power of the Internet's basic design is reflected in its ability to sustain vigorous growth in three dimensions: the number of users (and devices) connected, the amount of data that each user or device typically transmits, and the number of ways in which people use the network. While its rapid growth rate makes it difficult to determine the extent to which the Internet has become entwined in daily life, all indications are that the Internet plays a vital role that will only continue to expand. Widely understood to be a place to "live," work, and play, the Internet has reached mission-critical status for many individuals, businesses, organizations, and applications.

To meet the expected demands, the Internet will have to continue to scale up into the foreseeable future. While the fundamental design principles have so far proven durable in the face of growth, sustained growth, including support for faster communications and the ability for more devices to connect to the network, will pose challenges. But with growth come needs beyond simple support for more or faster connectivity. Making the Internet and its constituent components more reliable and robust and less vulnerable to system or component failures and attacks is also of increasing importance.
A comprehensive, detailed compilation of all the challenges posed by the growth of the Internet would easily fill an entire report; in this report, the committee describes several of the challenges in some detail, aiming to provide sufficient information to allow experts and nonexperts alike to understand their essential features.

Scaling Challenges

Scaling challenges at all levels, from the Internet's core to the applications that run over the Internet, will require continuing, persistent attention by infrastructure operators, equipment vendors, application developers, and researchers. The research and development that underlie the Internet core's growth and the processes by which new protocols are developed, deployed, and modified in response to shortcomings have
generally been satisfactory. The challenges described below are ones that especially need continued or heightened attention by researchers and Internet operators.

Past experience with application protocols that scale poorly, in combination with an appreciation of the ease and rapidity with which new application protocols can come into widespread use as a consequence of the Internet's open architecture, gives rise to expectations of future scaling surprises. Like the earlier versions of the Web protocol HTTP, new Internet applications are not necessarily well designed for widespread use, and some of them will encounter performance challenges as the Internet continues to grow. Reengineering popular applications so they continue to work well as the scale of their use expands is likely to be an ongoing challenge.

The Internet's Domain Name System (DNS) also faces scaling challenges. Two sources of pressure, the flat structure of much of the name system and the registration of millions of names, reflect market demands. They are generating a growing load and concentrating it on a small number of servers. Possible solutions include alternative server architectures that can cope better with the load or new naming architectures (in place of or on top of the DNS) that spread the load over a larger number of servers.

There are also scaling challenges that are less immediate, such as those associated with routing, the mechanisms by which the Internet passes around information about system addresses and locations. In fact, some of the addressing issues discussed in this report stem from routing issues. The Internet routing infrastructure threatens to become overwhelmed by the volume and complexity of information being distributed and perhaps by the volume of information that each router is required to maintain.
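The per-packet work this information supports can be sketched as longest-prefix matching, the lookup a router performs to choose where to forward each packet. The three-entry table below is a toy whose prefixes and next-hop names are invented for illustration; a real backbone router holds a vastly larger table learned through routing protocols, which is exactly the volume problem described above.

```python
import ipaddress

# A toy forwarding table mapping prefixes to next hops. The entries
# here are hypothetical; real routers carry enormous tables.
forwarding_table = {
    ipaddress.ip_network("10.0.0.0/8"): "hop-A",
    ipaddress.ip_network("10.1.0.0/16"): "hop-B",
    ipaddress.ip_network("10.1.2.0/24"): "hop-C",
}

def next_hop(dst: str) -> str:
    """Choose the most specific (longest) prefix containing dst."""
    addr = ipaddress.ip_address(dst)
    matches = [net for net in forwarding_table if addr in net]
    best = max(matches, key=lambda net: net.prefixlen)
    return forwarding_table[best]

print(next_hop("10.1.2.3"))  # the /24 is most specific: hop-C
print(next_hop("10.9.9.9"))  # only the /8 covers this: hop-A
```

More prefixes mean more memory per router and more matching work per packet, which is why table growth is a scaling concern rather than a mere bookkeeping detail.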
Indeed, some believe that the current system that enables routers to decide where to send data packets as they move through the network will require a fundamental rethinking.

Scaling up the Address Space

The Internet's basic protocol, IP, was designed to provide only roughly 4.3 billion unique identifiers, a limitation that is becoming increasingly problematic as the number of computers attached to the Internet continues to grow. The seriousness and urgency attached to a potential or actual address shortage depend largely on one's vantage point. Overall, only roughly one-fourth of the total pool of Internet addresses is observed to be in use today, but about half of this pool has been delegated by the regional registries, the handful of organizations that assign addresses according to global region, to ISPs and other organizations. Large blocks of addresses are held by organizations, including ISPs, government, research and educational institutions, and businesses, that claimed them in the early days of the Internet. The balance of the delegated addresses is allocated in smaller blocks by the regional address registrars to ISPs or other organizations.

Unlike many Internet scaling problems, where the challenge is to find a new solution, concerns about address scarcity have led to simultaneous moves down two different paths. One response has been the creation of a replacement to the current protocol, IPv4. Called IPv6, this new protocol provides billions of billions of unique addresses. Support for IPv6 has been included in a number of hardware and software products and tools, and strategies supporting a transition to IPv6 have been developed. But the costs of moving to IPv6, reflecting the large number of components that would have to be modified, have dampened enthusiasm for it, and it has seen only limited deployment to date. The low deployment rate, in turn, diminishes the incentives for switching.

The other response has been the installation in many networks, including those of both customers and ISPs, of a work-around technology known as network address translation (NAT), which allows individual computers in a group to be assigned private addresses even as they share a single Internet address. This response offers some advantages, such as easier management of addresses on local area networks, but has significant architectural shortcomings. Where true end-to-end connectivity is less important, such as for ISPs supporting users who engage mainly in basic Web browsing, NAT may prove to be an adequate work-around, at least in the short term. On the other hand, if support is desired for peer-to-peer applications or users that run servers, then NAT, with its tricky work-arounds, is a much less attractive solution.
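The magnitudes behind these two responses can be checked directly: the figures below are arithmetic consequences of IPv4's 32-bit and IPv6's 128-bit address formats, and the private ranges shown are the standard blocks (from RFC 1918) that NAT deployments commonly draw on. This sketch uses Python's standard ipaddress module.

```python
import ipaddress

# IPv4 addresses are 32 bits; IPv6 addresses are 128 bits.
ipv4_total = 2 ** 32
ipv6_total = 2 ** 128
print(f"IPv4: {ipv4_total:,}")    # 4,294,967,296 -- the "roughly 4.3 billion"
print(f"IPv6: {ipv6_total:.2e}")  # about 3.4e38 -- "billions of billions"

# The private ranges commonly assigned behind a NAT (RFC 1918).
private = [
    ipaddress.ip_network("10.0.0.0/8"),
    ipaddress.ip_network("172.16.0.0/12"),
    ipaddress.ip_network("192.168.0.0/16"),
]
for net in private:
    print(net, "->", net.num_addresses, "addresses")
```

Because these private addresses are reusable behind every NAT, a single public IPv4 address can front a large internal network, which is precisely why NAT relieves scarcity while breaking end-to-end addressing.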
Widespread use of NATs also brings new complications: when NATs are connected to NATs, basic connectivity and the proper operation of some protocols can be inhibited. NAT is also unattractive where it is desired to deploy large numbers of Internet-connected devices with globally unique identifiers. In light of recent activity and in anticipation of continued growth in the mobile Internet device market, where it is projected that the number of devices will exceed the available address space, there has been renewed interest in IPv6. Indeed, the developers of so-called third-generation (3G) wireless services have, at this stage, committed to using IPv6.

At present, many concerns stem less from a shortage of addresses than from the cost or hassle associated with obtaining an allocation in a climate where regional address registrars and ISPs are motivated to be frugal as they hand out addresses. Address assignments reflect needs that the requesting organization has been able to substantiate on the basis of current use or credible projections that it can make of future needs;
they also reflect the overall availability of addresses at the time that the assignment is made. Thus, organizations and regions that have already been allocated greater numbers of Internet addresses and thus do not face a looming shortage are less likely to find IPv6 attractive, particularly in the short run. In contrast, organizations that are building new networks and seeking to greatly expand the number of users (and thus IP addresses) face costs (fees and effort expended in justifying their requests) to obtain an allocation from a regional address registry or their ISP, and they are more likely to advocate IPv6 deployment. Disparities among organizations and geographical regions in address allocation, which tend to favor those who made earliest use of the Internet, also mean that address scarcity may be perceived as an equity issue associated with perceived disparities in control over the Internet.

While there has been no crisis thus far, there is still considerable risk associated with exhaustion of the IPv4 address space. In the short term, the costs and problems associated with address scarcity will not be imposed uniformly. If there is no migration to IPv6, address scarcity will be a serious problem for a subset of Internet users in the short term and a more pervasive problem in the long term. The number of computers attached to the Internet can be expected to continue to grow, reflecting both more users and more devices per user.
This growth will be most pronounced and will come soonest in regions and countries where the Internet has made the fewest inroads today, where the number of potential users is large and penetration is expected to be great, and where providers are seeking to deploy very large numbers of devices with full Internet connectivity, such as would be the case if there were an explosion in the development and sales of Internet-capable appliances for the home and/or the 3G mobile phones discussed above. A key question is just how far off the "long term" is, when the impacts of scarcity will be widely and deeply felt. The answer depends on many factors that are difficult to project. And even with a substantial commitment to an eventual switchover to IPv6, the use of NAT and NAT-like IPv4-to-IPv6 translators will adversely affect the end-to-end transparency of the Internet in the meantime.

Robustness and Reliability

There is widespread acknowledgment that it is important to make the Internet as a whole, as well as its constituent networks and individual components, more robust and reliable. Reactions to a series of distributed denial-of-service attacks in 2000 illustrate the extent to which problems are viewed with concern by government officials and the public. Some challenges, including the need to fix known problems, are well
understood today, but more information is needed to comprehend the full spectrum of risks and vulnerabilities. Because the Internet is composed of thousands of distinct networks run by different ISPs and because ISPs typically do not publicly report outages, much less their cause, little is known about the primary causes of Internet failures. Indeed, little is known about how often there are major failures that affect a large number of customers. In the absence of this sort of information, it is very difficult to start a program to improve the Internet's robustness and reliability. Even with better information on risks and vulnerabilities, a better understanding of the underlying technologies for reliable and robust networks is needed to design and implement fixes, especially in the face of less predictable applications and traffic running over the Internet. A bright spot is that those who develop security technologies and practices have learned much about how the Internet's components can be attacked and have been working with vigor on techniques to make the Internet less vulnerable to attackers, efforts that can also suggest ways of improving the Internet's robustness to inadvertent failures. A number of technologies have been developed to improve robustness: to secure Internet systems, detect and prevent intrusion, and authenticate transactions. Implementation of these measures, however, has tended to lag behind the state of the art, and an array of management actions will be needed to better align practices with the technology.

Quality of Service

The Internet's best-effort quality of service (QOS) has been successful in supporting a wide range of applications running over the Internet. The debate over whether mechanisms supporting other forms of QOS are needed is a long-standing one within the Internet community.
It has shifted from an original focus on mechanisms that would support multimedia applications over the Internet to mechanisms that would support a broader spectrum of potential uses, from enhancing the performance of particular classes of applications over constrained network links to providing ISPs with mechanisms for value-stratifying their customers. There is significant disagreement among experts (including the experts on this committee) on how effective QOS mechanisms would be and on the relative priorities that should be attached to, on the one hand, investing in additional bandwidth and, on the other, deploying QOS mechanisms. A key feature of this debate is differing opinion on the extent to which a rising tide of capacity in the Internet will alleviate most performance problems. Contributing to the debate is incomplete knowledge of the causes of performance problems within the best-effort network and the
actual benefits that would be obtained by deploying various QOS mechanisms within operational networks. Another open issue is whether there is a role for Internet QOS on links that are inherently constrained (e.g., wireless) or on links where adding capacity may be much more expensive than adding capacity within the Internet backbone (e.g., the links between local area networks or residences and ISPs).

Service quality is a weak-link phenomenon. Providing end-to-end QOS requires ISPs to agree as a group on multiple technical and economic parameters, including technical standards for signaling, the semantics of how to classify traffic and what priorities the categories should be assigned, and the addition of QOS considerations to their interconnection business agreements. The reality of today's Internet is that end-to-end enhancement of QOS is a dim prospect. It may be that localized deployment of QOS, such as on the links between a customer's local area network and its ISP, is a useful alternative to end-to-end QOS, but the effectiveness of this approach and the circumstances under which it would prove useful are both poorly understood, as is whether such piecemeal deployment could contribute to a balkanization of the Internet.

QOS deployment has also been the subject of interest and speculation by outside observers. One view is that QOS would be an enabler of new applications and business models, while another is that the introduction of QOS capabilities into the Internet would undermine the equal treatment of all communications across the network, irrespective of source or destination. Mechanisms that enable disparate treatment of customer Internet traffic have led to concerns that they could be used to provide preferential support for particular customers or content providers (e.g., those having business relationships with the ISP).
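The kind of traffic classification at issue here can be made concrete with a toy strict-priority scheduler, one of several possible QOS disciplines; the class names and priority numbers below are illustrative assumptions, not a standard, and real routers implement such queueing in hardware with additional safeguards against starving low-priority traffic.

```python
import heapq

# Illustrative traffic classes: lower number = higher priority.
# The names and orderings are invented for this sketch.
PRIORITY = {"voice": 0, "video": 1, "best-effort": 2}

class Scheduler:
    """A toy strict-priority packet scheduler."""

    def __init__(self):
        self._queue = []
        self._seq = 0  # tie-breaker keeps arrival order within a class

    def enqueue(self, traffic_class: str, packet: str) -> None:
        heapq.heappush(self._queue, (PRIORITY[traffic_class], self._seq, packet))
        self._seq += 1

    def dequeue(self) -> str:
        return heapq.heappop(self._queue)[2]

s = Scheduler()
s.enqueue("best-effort", "web page")
s.enqueue("voice", "call sample")
s.enqueue("best-effort", "file chunk")
print(s.dequeue())  # "call sample" -- the voice packet jumps the queue
```

The sketch makes the policy concern visible: whichever party assigns the class labels decides whose packets wait, which is why disparate treatment of traffic raises questions beyond engineering.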
What users actually experience will depend on multiple factors: what the technology makes possible, the design of marketing plans, preferences that customers express, and what capabilities ISPs opt to implement in their networks, which will depend in part on their determination of how effective particular QOS mechanisms would be.

Additional insights into the role of QOS mechanisms in the Internet will come through several avenues: better understanding of the factors that contribute to network performance, including the limits to performance that can be obtained using best-effort service; better understanding of the effectiveness of QOS approaches in particular circumstances; and greater experience with QOS in operational settings.

Keeping the Internet Interconnected and Open

One of the Internet's hallmarks has been its openness. This openness appears in a variety of distinct although related ways, including openness
to new entrants and openness to innovation. Keeping the Internet open has a number of goals, including continuing innovation in Internet service, preserving access to the full set of content and services that are made available over the Internet, and fostering competition as a means of ensuring innovation, access, and affordability.

Access to the Local Loop

The first key openness issue is access to facilities in the local loop (the final communications hop into the premises), especially perceived advantages for those who already own links today, the incumbent local telecommunications carriers and cable operators. In the local loop, openness issues are frequently linked to the term "open access," which refers to the ability of residential or small-office customers to have a choice of alternative ISPs and to have access to content and services that are made available over the Internet even when they are not supported directly by the customer's ISP (i.e., when there is no business arrangement between the ISP and the provider of the content or service). Because the local loop is the point of entry for many Internet users, outcomes here can have significant consequences for the shape of the Internet as a whole. It is unclear whether issues of open access will be resolved in the near term through regulatory action (e.g., new unbundling requirements), legal decisions, actions by industry itself (perhaps in response to consumer pressure), or consumer choice as a result of facilities-based competition, or whether they will become persistent features of the Internet policy debate. Another Computer Science and Telecommunications Board body, the Committee on Broadband Last-Mile Technologies, is currently investigating these and other issues related to broadband services for homes and small offices, so they are not considered in detail here.
However, a number of points in this report are likely to help inform thinking about the issue, including the discussions of what constitutes transparent, open Internet service; related trends in the ISP business; and the likely roles for QOS technologies on the Internet.

Interconnection

The second key openness issue is the nature of the interconnection agreements whereby many independently operated networks are interlinked to create the Internet. To become an ISP, a new provider must have one or more agreements with other ISPs to ensure that its customers can communicate with the customers of all the other ISPs. Interconnection has three dimensions: physical (point-to-point or connection at a public exchange), logical (transit or peering routing), and financial (generally
OCR for page 18
and its associated policy framework come into collision with competing new Internet-based industries. The committee also explored several other places where the Internet is challenging social policies. Many of the concerns existed in other contexts before the emergence of the Internet, but the Internet, by virtue of its support for comparatively easy information access and distribution and the relative speed with which new applications of it can be developed and deployed, amplifies these concerns. The subset of issues explored by the committee (privacy, anonymity, and identity; authentication; taxation of commerce transacted over the Internet; and universal service) is not comprehensive (nor could it be so in a study of this size). Rather, these issues were chosen as significant points of interaction between the Internet and the broader society.

RECOMMENDATIONS

The Internet's coming of age has been marked by increased attention across the board. The businesses and organizations that design, build, and operate the Internet's constituent networks are working to shape that network to meet the demands of users. The work being done on the inside has been changing in character, reflecting the Internet's increasing importance to customers, growth in the number and kinds of applications, and changes in the nature of the business itself (e.g., while they are interdependent, ISPs are also competitors). Accompanying these efforts are increased attention and heightened expectations from the outside (individuals, organizations, corporations, and government bodies) that reflect the importance of the Internet as an infrastructure for society. The assimilation of the Internet into society and the economy involves a growing role for a second business community in addition to the businesses that design, build, and operate networks: these are the businesses that provide content, applications, and services that run over the Internet.
In some cases, of course, the same entity may be involved in both kinds of business, but overall, this second community tends to be distinct, larger, and more differentiated, and it raises a wider range of concerns. However, although much of the attention now being paid to the Internet relates to the behavior (as regards, for example, online privacy and other consumer protection issues) of these businesses that leverage the Internet, the committee concentrates its recommendations on the businesses that design, build, and operate the Internet. The committee's overview of social policy concerns completes the picture by illuminating actions that leverage the Internet and that, directly or through policy responses, may influence future decisions about how the Internet develops. Sound recommendations that respond to the particulars of these social policy concerns, unlike the general principles
articulated below, would require further examination of the contexts and behaviors associated with each concern. The principal conclusion of the committee, which underlies the discussion below, is that the Internet is fundamentally healthy and that most of the problems and issues discussed in this report can be addressed and solved by evolutionary changes within the Internet's current architectural framework and associated processes. Multiple actors (the research community, industry, government, and the users themselves) have important roles to play in ensuring the Internet's continued well-being and progress. The recommendations provided here cover both the general and the specific, reflecting overall principles as well as more targeted opportunities.

The Technology Base

Exhortations about the importance of research and development on scaling, reliability, and the like are not new, but the committee makes recommendations in these areas to underscore their importance at this point in time. Research and development have enabled the Internet to become a mainstream infrastructure, but the job is far from done: use of the Internet and dependence on it can be expected to grow. Staying on the Internet growth curve, so frequently projected by pundits and analysts and expected by the Internet's users, will require continued, sustained effort in many places. Some of the challenges are shorter term; these, the research community and industry infrastructure seem well placed to solve, as they have in the past, through sustained effort and incremental enhancements. Others are longer-term, enduring challenges and will need more fundamental breakthroughs. Many research advances would provide benefits to all who operate and use the Internet, not just a single player.
This outcome argues for using public funds to support such work even in the face of considerable private investment in the Internet, particularly where self-interest or near-term gains are insufficient motivators for industry investment.

Research and development that address scaling challenges and enhance reliability and robustness should continue to receive support from both industry and federal research funding agencies. Priority scaling issues include the continuing need to improve the scalability of applications deployed over the Internet, scaling issues associated with the DNS infrastructure, and long-term scaling issues related to addressing and routing in the Internet. Key research and development areas related to reliability and robustness include (1) the development of improved trust models that better describe the business relationships of organizations and what sessions and relationships they authorize; (2) research on
technologies to cope with attacks, such as technologies for intrusion detection and isolation, including capabilities that would provide faster and more focused isolation of attacks in a manner that scales to the Internet's increasing speeds and complexity; (3) the design of mechanisms and protocols that better protect one part of the Internet from attacks and operational errors in other parts and from damage to components, without disrupting the basic requirement for global connectivity; and (4) fast link and node failure detection and healing mechanisms, as well as interdomain routing protocols that provide greater recovery speed.

Researchers, research funders, and network operators should work together to find opportunities that would allow more network research to be done in realistic operational settings. A common theme across the technical challenges discussed in this report is that they have to do with properties of the Internet as a system, including how it scales or how it handles failures or deliberate attack. These challenges are hard to study in small-scale systems, which are what researchers generally have to work with, and hard to study through simulation, because both theory and models pertaining to the operation of very large networks such as the Internet are weak. The need for researchers to have better access to real-world artifacts has been noted in earlier studies.1 The payoff from better access to Internet networks would be an improved understanding of network behavior, particularly behavior related to large scale and high congestion, that could lead to insights that would enable improvements in operational networks.
For example, research aimed at better understanding where and how quality-of-service mechanisms would best benefit a particular class of applications needs to be done on a network with realistic congestion and cannot be done through simulation unless one has good models of how a congested network behaves. Implementing this recommendation will require overcoming the reluctance of ISPs to make their networks available because they fear that researchers may induce malfunctions or disclose proprietary information when they "play around" in ISP backbones. It will also require attention to the lag in capabilities between research instrumentation and the equipment found in high-capacity ISP networks. These inhibiting factors are not confined to commercial networks operated by ISPs; they also arise in research networks that are used in operational modes, such as for applications research.

1. Computer Science and Telecommunications Board (CSTB), National Research Council (NRC). 2000. Making IT Better. Washington, D.C.: National Academy Press; Computer Science and Telecommunications Board (CSTB), National Research Council (NRC). 1994. Academic Careers for Experimental Computer Scientists and Engineers. Washington, D.C.: National Academy Press.

Industry and researchers should continue to investigate the economics of interconnection and technologies to support interconnection. Improved understanding of the economics that underlie interconnection in the Internet may be useful for better understanding how the Internet's interconnection arrangements are evolving and may lead to new models that improve the overall interconnection of the Internet or that help address concerns such as barriers to entry. Key topics include how best to approach the value relationships that exist across the Internet; identifying economic alternatives beyond simple peering and transit; and exploring the organizational dimension of interconnection and openness issues, including the implications for industry structure and performance. At the same time, industry should continue to explore new business models for interconnection and for fostering a commercial environment that encourages competition and innovation. There are also challenges on the technical side: research on routing could provide better control and protection of interconnecting providers, thus increasing the range of possible interconnection alternatives available to ISPs.

Government, industry, and other stakeholders should continue to foster the development of open standards for the Internet. Each Internet player will be tempted to diverge from the common standard if it looks like it might be able to capture the entire market (or a large portion of it) for itself. However, a common, open standard maximizes overall social welfare as a result of the network externalities obtained from the larger market. When competent open standards are made available, they can be attractive in the marketplace and may win out over proprietary ones.
The government's role in supporting open standards for the Internet has not been, and should not be, to directly set or influence standards. Rather, its role should be to provide funding for the networking research community, which has led to both innovative networking ideas and specific technologies that can be translated into new open standards.

Where there are societal expectations associated with particular existing industries, such as expectations for 911 emergency service as part of telephony, analogous capabilities for the Internet should be developed and demonstrated through research and experimentation in the marketplace rather than by mandating particular technical solutions. Whether, when, and how regulation is introduced can affect innovation in this area because the underlying telephony technologies and service offerings are themselves evolving rapidly and have yet to prove themselves in the marketplace. For example, the interoperation of PSTN and Internet telephony systems raises longer-term questions: How should number portability be implemented? How can customers be provided with numbering and naming? Both research and market-based experimentation will be important in developing the best and most efficient means of implementing telephony services.

Designers and Operators

The Internet's developers and operators have devised technologies and processes that will do much to keep the Internet healthy and growing. However, improvements in scalability, reliability, and robustness will involve more than technical advances per se; questions of implementation are also important and in many cases require collective action by many thousands of entities. Business imperatives generally motivate individual organizations and companies to act in ways that promote their individual business success, but such actions do not necessarily provide broad-based, global benefits for the Internet as a whole. Indeed, there are many places where long-term, overall benefits for the Internet as a whole are traded off for shorter-term, local benefits to particular subsets of Internet users and operators. For example, while the Internet industry and its customers stand to gain in the long term from a shift to IPv6, the costs for individual organizations will, at least in the short term, probably outweigh the benefits they themselves obtain. Another example is the scalability of applications: applications whose deployment adversely affects the performance experienced by all Internet users may, nonetheless, provide local benefits (because a short time to market can yield more immediate returns) and result in the capture of a greater market share (because the Internet is what economists call a "tippy" market). One possible driver of collective action is the prospect of governmental regulatory intervention.
But the extent to which enlightened self-interest can be a motivator will depend on the specific issues and circumstances.

Several private, nonprofit organizations play critical roles with respect to the Internet, including the principal standards bodies (the Internet Engineering Task Force and the Internet Architecture Board); organizations that deal with operational issues (e.g., the North American Network Operators Group); and the Internet Corporation for Assigned Names and Numbers (ICANN), which has assumed overall responsibility for managing the Internet's addresses and names. The absence here of recommendations for these organizations should not be taken as an indication that the actions and evolution of these organizations are not important. The committee's lack of commentary on them should not be read as either critical of or supportive of either side in debates such as that surrounding
the role of ICANN. While the committee has not examined these issues in depth, it believes that these institutions make important contributions to the operation and development of the Internet, notwithstanding the unstable circumstances.2 Because the committee's membership includes several individuals who work closely with these organizations, the committee decided not to issue conclusions related to the specifics of the organizations' work but urges continued, close attention by Internet operators, users, and policy makers alike.

As a first step to improving robustness, the ISP industry should develop an approach for reporting outages and make the information available for studying the root cause of failures and identifying actions and technologies that would improve the Internet's robustness. While anecdotal reports of failures are available from both the popular press and various Internet community forums, these sources generally lack sufficient detail and are not systematically collected, making it hard to assess Internet reliability and robustness trends or conduct root-cause analysis. The availability of these data will make it possible to properly analyze the robustness of the Internet, identify key related issues, and provide the information needed for research into how to make the Internet more robust. The committee recognizes that there is currently no consensus on what data ought to be reported and that there would be strong resistance to mandated reporting of irrelevant information. It also anticipates that some form of reporting of outages is likely to become a requirement, at least in the United States, which suggests that the industry should work to devise a program that represents a balance of interests as an alternative to the imposition of government-developed reporting standards; the voluntary program initiated by the Network Reliability and Interoperability Council is a first step.
Cooperative consideration of an approach for reporting outages and failures should determine what information ought to be collected as well as to whom it should be reported. Since the primary purpose of collecting this information is to inform industry activities as well as research aimed at improving reliability and robustness, it will not be necessary that all of the information be reported publicly; the operators themselves and the research community would be the main beneficiaries of some of the detailed information. A process for gathering systematic data on failures should be understood to be distinct from independent monitoring of ISP performance, which is best performed by independent organizations that gather data on behalf of consumers.

2. Another CSTB committee is expected shortly to begin an examination of issues surrounding the assignment of domain names in the Domain Name System, such as conflicts between DNS names and trademarks.
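The question of what an outage report ought to contain can be made concrete with a sketch. The record below is purely illustrative and is not drawn from the report; every field name is a hypothetical choice of the kind of information such a reporting program might collect.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class OutageReport:
    """One hypothetical outage record; all fields are illustrative."""
    operator: str             # reporting ISP (could be anonymized before publication)
    start: datetime           # when the outage began (UTC)
    end: datetime             # when service was restored (UTC)
    scope: str                # e.g. "single POP", "regional", "backbone-wide"
    suspected_cause: str      # e.g. "fiber cut", "routing misconfiguration", "attack"
    customers_affected: int   # rough estimate, useful for severity weighting

    def duration_minutes(self) -> float:
        # Duration is one obvious derived metric for robustness trend analysis.
        return (self.end - self.start).total_seconds() / 60.0

# Example record for a hypothetical 90-minute regional outage
r = OutageReport(
    operator="example-isp",
    start=datetime(2001, 3, 1, 14, 0, tzinfo=timezone.utc),
    end=datetime(2001, 3, 1, 15, 30, tzinfo=timezone.utc),
    scope="regional",
    suspected_cause="routing misconfiguration",
    customers_affected=50_000,
)
print(r.duration_minutes())  # 90.0
```

Even a minimal structure like this illustrates the committee's point that what to collect, and who sees which fields, are separable decisions: aggregate durations might be published, while cause details could be shared only with operators and researchers.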
Internet service providers, content and service providers, and users should continue to adopt technologies and practices that improve the reliability and robustness of the Internet as a whole. The Internet's trust model distributes responsibility for robustness across many actors, including ISPs, network operators, and end users, placing responsibility on each to adopt the best practices and technologies. Also, the Internet's composite nature and international scope mean that no one can impose overall requirements for such things as reporting problems, minimum operational standards, or controls on malicious actions. This limitation makes it even more important to develop industry agreements addressing robustness that are international in scope, and it underscores the importance of developing technical mechanisms that permit one piece of the Internet to protect itself from another.

NATs (and the somewhat NAT-like IPv4-to-IPv6 translators) are a necessary short-term measure but should not substitute for a long-term transition to IPv6. Investment in the development and deployment of IPv6 technology, along with promotion of the long-term benefits of IPv6 for customers and ISPs alike, should be continued. In addition, there should be a concerted effort to address other pressing issues that IPv6 does not now completely address. IPv6 alone does not resolve other, related issues faced by the Internet. For example, while it does provide some aids for automatic configuration, it does not adequately simplify the management of internal networks interfaced to the Internet. Nor does it solve the scaling problems mentioned above with respect to the computational complexity of updating routing tables as the number of addresses increases. Also, while it includes stronger authentication and confidentiality safeguards than IPv4, it does not respond to other security considerations that may be critical to minimizing vulnerability to attack.
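The scale difference behind the IPv6 transition can be made concrete with a short sketch using Python's standard ipaddress module (illustrative only; the address shown is a documentation example, and the report prescribes no particular tooling):

```python
import ipaddress

# Address-space sizes follow directly from the address widths:
# IPv4 addresses are 32 bits wide, IPv6 addresses 128 bits.
ipv4_space = 2 ** 32    # 4,294,967,296 addresses
ipv6_space = 2 ** 128   # roughly 3.4e38 addresses

# The IPv6 space is larger by a factor of 2**96.
print(ipv6_space // ipv4_space == 2 ** 96)  # True

# IPv6 can embed an IPv4 address ("IPv4-mapped" form), one of the
# mechanisms behind the IPv4-to-IPv6 translators mentioned above.
mapped = ipaddress.IPv6Address("::ffff:192.0.2.1")
print(mapped.ipv4_mapped)  # 192.0.2.1
```

The arithmetic shows why NAT, which stretches the 32-bit space by multiplexing many hosts behind one address, is only a stopgap relative to the vastly larger native IPv6 space.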
Decisions made by industry, government, and consumers should all take into account the significant long-term benefits of open, transparent IP service. The preservation of open IP service would have a number of benefits for both ISPs and customers. Because of its critical role in the continued dynamism and growth of the Internet, government should include considerations of openness in its inquiries relating to the Internet and should favor policy decisions that are consistent with maintaining open IP service. Government also has a role to play in convening dialog and supporting research about openness issues. By the same token, concerns about the vertical integration of the data transport and content businesses and about content control, as seen in recent debates about access to cable broadband Internet systems, could be eased if ISPs
committed to providing their customers with open IP service. From this standpoint, the continued delivery of open IP service would be an enlightened move in the long-term interest of the industry.

ISPs should make public their policies for filtering or prioritizing customer IP traffic. Many filtering and traffic prioritization policies work to the mutual benefit of both the provider and the customer. But given their subjectivity, all would benefit from an environment in which such policies are publicly disclosed, allowing customers to understand the nature of service offerings and reducing the likelihood that ISPs will be perceived as manipulating the nature of their services behind the scenes (such as favoring their own content) against the interests of consumers. Further, such disclosure might foster a market in which ISPs compete on the terms of their policies and in which a particular ISP offers different service options so as to better meet the needs of its customers. Also, those who monitor the industry or rate the quality of ISPs could use such information to inform consumers about the advantages and disadvantages of the various ISP service offerings.

Government Policy Responses

By lowering the cost of communications and increasing the functionality and utility of the communications infrastructure, the Internet has enabled significant changes. Experiencing a revolution on Internet time is extraordinarily challenging. Changes come quickly and unpredictably. Fads appear suddenly and fade away just as rapidly. Nor is the speed of events the only challenge. The distributed nature of the Internet, with its thousands of ISPs and software vendors and its millions of individual users all contributing to the overall shape of the network, makes it very difficult to understand what is happening. The technology is changing swiftly, and in many cases the perceived problem may fix itself or evolve into an entirely different problem. In such a dynamic environment, flexibility is essential and regulatory caution is a virtue. This should be a period of watchful waiting.

The present policy of nonregulation of the Internet should be accompanied by close monitoring of the Internet's structures and operation by government, the Internet industry, and Internet users to ascertain enduring trends and identify what problems, if any, are due to persistent as opposed to transient phenomena. While this recommendation is intended to apply across the structure and operation of the Internet as a whole, the committee sees several important places where it should be applied:
Principle 1. Focus laws and regulations on the activities and behaviors of concern rather than on the network architecture or its constituent networks. Use existing laws and regulations first, provided they are consistent with the capabilities and design of the relevant technologies. In many cases, existing laws are adequate to address Internet-related issues, and they should be the default approach. One risk posed by Internet-specific legislation or regulation is that of measures whose implementation would force modifications to the Internet's architecture. The adverse effects of new laws and regulations on that architecture should be weighed against their usefulness for addressing a particular problem. Indeed, requiring enforcement of a particular policy within the network could entail breaking the hourglass transparency of the Internet. Existing laws and regulations will not prove adequate in all circumstances, however; the salient instance at present is Internet telephony.

Principle 2. Where Internet-specific government intervention is required, laws and regulations should establish the framework and overall parameters, while industry and other nongovernment stakeholders should devise appropriate implementations. The rapid evolution of the Internet and its interactions with societal interests argue for caution in setting rules and crafting legislation. The extent to which specific actions are required today is unclear, in part because it is unclear which circumstances will endure or to what extent voluntary actions in response to public and government pressures are at least in part addressing some concerns.
However, today's heated national and international debate in areas such as privacy and anonymity illustrates that not all stakeholders believe status quo approaches will prove satisfactory, so governmental institutions will surely be monitoring progress and may, at some stage, intervene through new regulation or laws. The committee does not recommend where government intervention should or should not be undertaken. As noted above, it finds too many of the elements of the situation to be too dynamic, and in any case it did not conduct a complete assessment of social policy issues. But if it is determined that voluntary action alone is not sufficient, a legislative or regulatory approach should be adopted that reflects the dynamic, evolving nature of Internet applications and services and the Internet marketplace. Legislative and regulatory actions should establish a framework for desired outcomes and define the principles and parameters that bound online conduct. A flexible approach also helps create an environment that fosters alternative solutions, in terms of both new practices and new technologies, that can satisfy the established principles and provide additional benefits such as easier implementation, decreased costs, and greater investment in innovation.
Principle 3. Keep a broad geographic perspective when thinking about Internet issues. Over the Internet, it can be as easy to interact with a person, organization, or company thousands of miles away as with someone in the next town. Issues surrounding sales tax collection have shown how the Internet weakens geographical boundaries and how local and national social and economic interests and concerns come into play as political institutions attempt to address the geographical challenge. Commerce is but one of many instances where the Internet's global nature raises issues and stresses existing regimes; another instance is cultural as well as community identity. The global nature of the Internet also means that many issues will have to be addressed in international forums, in the interest of harmonizing approaches to transborder problems and establishing reciprocity and other arrangements in the event of transborder responses to problems. In accordance with principle 1, Internet-related issues are best resolved, wherever possible, by the established law of the relevant domain or established rules for handling cross-border activities. Pursuant to principle 2, solutions that seek to establish performance objectives rather than specify implementation details are preferable. In some areas, existing national and multilateral frameworks (and adaptive processes) will be sufficient to address concerns. Harmonization will, however, present an ongoing challenge, and resolution may necessitate countries making compromises on the specific approaches; global scope implies, among other things, a need to frame U.S. policy in the context of policy in other parts of the world, which can affect the design and enforceability of measures taken in the United States.